Hyper Transport Technology
Post: #1

This describes AMD's Hyper Transport™ technology, a new I/O architecture for personal computers, workstations, servers, high-performance networking and communications systems, and embedded applications. This scalable architecture can provide significantly increased bandwidth over existing bus architectures and can simplify in-the-box connectivity by replacing legacy buses and bridges. The programming model used in Hyper Transport technology is compatible with existing models and requires little or no change to existing operating system and driver software.

It provides a universal connection designed to reduce the number of buses within the system. It is designed to enable the chips inside PCs and networking and communications devices to communicate with each other up to 48 times faster than with existing technologies. Hyper Transport technology is truly the universal solution for in-the-box connectivity.
>> It is a new I/O architecture for personal computers, workstations, servers, embedded applications, etc.
>> It is a scalable architecture that can provide significantly increased bandwidth over existing bus architectures.
>> It simplifies in-the-box connectivity by replacing legacy buses and bridges.
>> The programming model used in Hyper Transport technology is compatible with existing models and requires little or no change to existing operating system and driver software.

Hyper Transport technology provides high speeds while maintaining full software and operating system compatibility with the Peripheral Component Interconnect (PCI) interface that is used in most systems today. In older multi-drop bus architectures like PCI, the addition of hardware devices affects the overall electrical characteristics and bandwidth of the entire bus. Even with PCI-X 1.0, the maximum supported clock speed of 133 MHz must be reduced when more than one PCI-X device is attached. Hyper Transport technology uses a point-to-point link connected between two devices, enabling the link to transfer data much faster overall.
Post: #2
The demand for faster processors, memory, and I/O is a familiar refrain in market applications ranging from personal computers and servers to networking systems, and from video games to office automation equipment. Once information is digitized, the speed at which it is processed becomes the foremost determinant of product success. Faster system speed leads to faster processing. Faster processing leads to faster system performance. Faster system performance results in greater success in the marketplace. This obvious logic has led a generation of processor and memory designers to focus on one overriding objective: squeezing more speed from processors and memory devices. Processor designers have responded with faster clock rates and super-pipelined architectures that use level 1 and level 2 caches to feed faster execution units even faster. Memory designers have responded with dual data rate memories that allow data access on both the leading and trailing clock edges, doubling data access. I/O developers have responded by designing faster and wider I/O channels and introducing new protocols to meet anticipated I/O needs. Today, processors hit the market with 2+ GHz clock rates, memory devices provide sub-5 ns access times, and standard I/O buses are 32 and 64 bits wide, with new higher-speed protocols on the horizon.

Increased processor speeds, faster memories, and wider I/O channels are not always practical answers to the need for speed. The main problem is the integration of more and faster system elements, faster execution units, and more signal lines onto the physical printed circuit board. One aspect of the integration problem is the set of physical problems posed by speed. Faster signal speeds lead to manufacturing problems due to loss of signal integrity and greater susceptibility to noise. Very high-speed digital signals tend to become high-frequency radio waves, exhibiting the same problematic characteristics as high-frequency analog signals.
This wreaks havoc on printed circuit boards manufactured using standard, low-cost materials and technologies. Signal integrity problems caused by signal crosstalk, signal and clock skew, and signal reflections increase dramatically as clock speed increases. The other aspect of the integration problem is the I/O bottleneck that develops when multiple high-speed execution units are combined for greater performance. While faster execution units relieve processor performance bottlenecks, the bottleneck moves to the I/O links. Now more data sits idling, waiting for the processor and I/O buses to clear, and movement of large amounts of data from one subsystem to another slows down overall system performance.

Three problems drive the need for a new interconnect: the I/O bandwidth problem, high pin counts, and high power consumption. While microprocessor performance continues to double every eighteen months, the performance of the I/O bus architecture has lagged, doubling approximately every three years, as illustrated in the figure. This I/O bottleneck constrains system performance, resulting in diminished actual performance gains as the processor and memory subsystems evolve. Over the past 20 years, a number of legacy buses, such as ISA, VL-Bus, AGP, LPC, PCI-32/33, and PCI-X, have emerged that must be bridged together to support a varying array of devices. Servers and workstations require multiple high-speed buses, including PCI-64/66, AGP Pro, and SNA buses like InfiniBand. This hodge-podge of buses increases system complexity and adds many transistors devoted to bus arbitration and bridge logic, while delivering less than optimal performance.

A number of new technologies are responsible for the increasing demand for additional bandwidth. High-resolution, texture-mapped 3D graphics and high-definition streaming video are escalating bandwidth needs between CPUs and graphics processors. Technologies like high-speed networking (Gigabit Ethernet, InfiniBand, etc.) and wireless communications (Bluetooth) are allowing more devices to exchange growing amounts of data at rapidly increasing speeds. Software technologies are evolving, resulting in breakthrough methods of utilizing multiple system processors. As processor speeds rise, so will the need for very fast, high-volume inter-processor data traffic. While these new technologies quickly exceed the capabilities of today's PCI bus, existing interface functions like MP3 audio, V.90 modems, USB, 1394, and 10/100 Ethernet are left to compete for the remaining bandwidth. These functions are now commonly integrated into core logic products. Higher integration is increasing the number of pins needed to bring these multiple buses into and out of the chip packages.
Nearly all of these existing buses are single-ended, requiring additional power and ground pins to provide sufficient current return paths. High pin counts increase RF radiation, which makes it difficult for system designers to meet FCC and VDE requirements. Reducing pin count helps system designers to reduce power consumption and meet thermal requirements. In response to these problems, AMD began developing the Hyper Transport™ I/O link architecture in 1997. Hyper Transport technology has been designed to provide system architects with significantly more bandwidth, low-latency responses, lower pin counts, compatibility with legacy PC buses, extensibility to new SNA buses, and transparency to operating system software, with little impact on peripheral drivers.

As CPUs advanced in terms of clock speed and processing power, the I/O subsystem that supports the processor could not keep up. In fact, different links developed at different rates within the subsystem. The basic elements found on a motherboard include the CPU, Northbridge, Southbridge, PCI bus, and system memory. Other components are found on a motherboard, such as network controllers, USB ports, etc., but most generally communicate with the rest of the system through the Southbridge. Many of these links have advanced over the years. They each began with standard PCI-like performance (33 MHz, 32 bits wide, for just over 1 Gbps throughput), but each has developed differently over time. The link between the CPU and Northbridge has progressed to a 133 MHz (effectively 266 MHz, as it is sampled twice per clock cycle), 64-bit wide bus. This provides a throughput of close to 17 Gbps. The Northbridge to system memory link has advanced to support PC2100 memory: it is a 64-bit wide, 133 MHz (also sampled twice per clock cycle) bus. This link also has a bandwidth of almost 17 Gbps.
The Northbridge to graphics controller connection has stayed at 32 bits wide and grown to a 66 MHz bus, but with 4x AGP it is sampled four times per clock. 8x AGP (sampling the data eight times per clock) will pull the throughput of this link even with the other two, at nearly 17 Gbps. Until recently, however, the Northbridge-Southbridge link has remained the same standard PCI bus. Although most devices connected to the Southbridge do not demand high bandwidth, their demands are growing as they evolve, and the aggregate bandwidth they could require easily exceeds the bandwidth of the Northbridge-Southbridge link. Many server applications, such as database functions and data mining, require access to a large amount of data. This requires as much throughput from the disk and network as possible, which is gated by the Northbridge-Southbridge link.
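The link throughputs quoted above all follow from simple clock-rate times width arithmetic. A minimal sketch in Python (the helper function name is ours, not from any specification):

```python
# throughput (bits/s) = clock rate * transfers per clock * bus width
def link_throughput_gbps(clock_mhz, transfers_per_clock, width_bits):
    """Approximate link throughput in Gbps (10^9 bits per second)."""
    return clock_mhz * 1e6 * transfers_per_clock * width_bits / 1e9

pci   = link_throughput_gbps(33, 1, 32)   # classic PCI: just over 1 Gbps
fsb   = link_throughput_gbps(133, 2, 64)  # CPU-Northbridge, double-pumped: ~17 Gbps
agp8x = link_throughput_gbps(66, 8, 32)   # 8x AGP: ~17 Gbps

print(f"PCI:    {pci:.3f} Gbps")
print(f"FSB:    {fsb:.3f} Gbps")
print(f"8x AGP: {agp8x:.3f} Gbps")
```

This reproduces the "just over 1 Gbps" and "close to 17 Gbps" figures cited in the text.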

Hyper Transport technology, formerly codenamed Lightning Data Transfer (LDT), was developed at AMD with the help of industry partners to provide a high-speed, high-performance, point-to-point link for interconnecting integrated circuits on a board. With a top signaling rate of 1.6 GHz on each wire pair, a Hyper Transport technology link can support a peak aggregate bandwidth of 12.8 Gbytes/s. The Hyper Transport I/O link is a complementary technology for InfiniBand and 1Gb/10Gb Ethernet solutions. Both InfiniBand and high-speed Ethernet interfaces are high-performance networking protocols and box-to-box solutions, while Hyper Transport is intended to support "in-the-box" connectivity. The Hyper Transport specification provides both link- and system-level power management capabilities optimized for processors and other system devices. The ACPI-compliant power management scheme is primarily message-based, reducing pin-count requirements. Hyper Transport technology is targeted at networking, telecommunications, computer, and high-performance embedded applications, and any other application in which high speed, low latency, and scalability are necessary.

Hyper Transport technology addresses this bottleneck by providing a point-to-point architecture that can support bandwidths of up to 51.2 Gbps in each direction. Not all devices will require this much bandwidth, which is why Hyper Transport technology operates at many different frequencies and widths. Currently, the specification supports a frequency of up to 800 MHz (sampled twice per period) and a width of up to 32 bits in each direction. Hyper Transport technology also implements fast switching mechanisms, so it provides low latency as well as high bandwidth. By providing up to 102.4 Gbps aggregate bandwidth, Hyper Transport technology enables I/O-intensive applications to use the throughput they demand.
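The 51.2 Gbps, 102.4 Gbps, and 12.8 Gbytes/s figures above are mutually consistent, as a quick check shows (a sketch using only the numbers quoted in the text):

```python
# Peak HyperTransport link: 800 MHz clock, data sampled on both clock
# edges (the 1.6 GHz signaling rate), up to 32 bits wide per direction.
clock_hz = 800e6
transfers_per_clock = 2      # double data rate
width_bits = 32              # widest link defined by the specification

per_direction_gbps = clock_hz * transfers_per_clock * width_bits / 1e9
aggregate_gbps = per_direction_gbps * 2      # two unidirectional paths
aggregate_gbytes = aggregate_gbps / 8        # bits -> bytes

print(per_direction_gbps)   # 51.2 Gbps in each direction
print(aggregate_gbps)       # 102.4 Gbps aggregate
print(aggregate_gbytes)     # 12.8 Gbytes/s peak aggregate
```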
In order to ease the implementation of Hyper Transport technology and provide stability, it was designed to be transparent to existing software and operating systems. Hyper Transport technology supports plug-and-play features and PCI-like enumeration, so existing software can interface with a Hyper Transport technology link the same way it does with current PCI buses. This interaction is designed to be reliable, because the same software will be used as before. In fact, it may become more reliable, as data transfers will benefit from the error detection features Hyper Transport technology provides. Applications will benefit from Hyper Transport technology without needing extra support or updates from the developer. The physical implementation of Hyper Transport technology is straightforward, as it requires no glue logic or additional hardware. Hyper Transport technology specifications also stress a low pin count. This helps to minimize cost, as fewer parts are required to implement Hyper Transport technology, and reduces Electro-Magnetic Interference (EMI), a common problem in board layout design. Because Hyper Transport technology is designed to require no additional hardware, is transparent to existing software, and simplifies EMI issues, it is a relatively inexpensive, easy-to-implement technology.

In developing Hyper Transport technology, its architects considered the design goals presented in this section. They wanted to develop a new I/O protocol for "in-the-box" I/O connectivity that would:
1. Improve system performance
- Provide increased I/O bandwidth
- Reduce data bottlenecks by moving slower devices out of critical information paths
- Ensure low-latency responses
- Reduce power consumption
2. Simplify system design
- Reduce the number of buses within the system
- Use as few pins as possible to allow smaller packages and to reduce cost
3. Increase I/O flexibility
- Provide a modular bridge architecture
- Allow for differing upstream and downstream bandwidth requirements
4. Maintain compatibility with legacy systems
- Complement standard external buses
- Have little or no impact on existing operating systems and drivers
5. Ensure extensibility to new system network architecture (SNA) buses
6. Provide highly scalable multiprocessing systems

Flexible I/O Architecture
The resulting protocol defines a high-performance and scalable interconnect between CPU, memory, and I/O devices. Conceptually, the architecture of the Hyper Transport I/O link can be mapped into five different layers, whose structure is similar to the Open Systems Interconnection (OSI) reference model.
In Hyper Transport technology:
1. The physical layer defines the physical and electrical characteristics of the protocol. This layer interfaces to the physical world and includes data, control, and clock lines.
2. The data link layer includes the initialization and configuration sequence, periodic cyclic redundancy check (CRC), disconnect/reconnect sequence, information packets for flow control and error management, and double-word framing for other packets.
3. The protocol layer includes the commands, the virtual channels in which they run, and the ordering rules that govern their flow.
4. The transaction layer uses the elements provided by the protocol layer to perform actions, such as reads and writes.
5. The session layer includes rules for negotiating power management state changes, as well as interrupt and system management activities.

Each Hyper Transport link consists of two point-to-point unidirectional data paths, as illustrated in the figure. Data path widths of 2, 4, 8, and 16 bits can be implemented either upstream or downstream, depending on the device-specific bandwidth requirements. Commands, addresses, and data (CAD) all use the same set of wires for signaling, dramatically reducing pin requirements. All Hyper Transport technology commands, addresses, and data travel in packets. All packets are multiples of four bytes (32 bits) in length. If the link uses data paths narrower than 32 bits, successive bit-times are used to complete the packet transfers. The Hyper Transport link was specifically designed to deliver a high-performance and scalable interconnect between CPU, memory, and I/O devices, while using as few pins as possible. To achieve very high data rates, the Hyper Transport link uses low-swing differential signaling with on-die differential termination. To achieve scalable bandwidth, the Hyper Transport link permits seamless scalability of both frequency and data width.
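Because every packet is a multiple of four bytes and narrower links use successive bit-times, the number of bit-times per packet scales inversely with link width. A small illustrative calculation (the helper name is ours):

```python
def bit_times(packet_bytes, link_width_bits):
    """Bit-times needed to transfer one packet over a link of the given width."""
    assert packet_bytes % 4 == 0, "HT packets are multiples of 4 bytes"
    return packet_bytes * 8 // link_width_bits

# An 8-byte control packet on links of different widths (up to the
# 32-bit maximum the specification allows):
for width in (2, 4, 8, 16, 32):
    print(f"{width:2}-bit link: {bit_times(8, width)} bit-times")
```

On a full 32-bit link the 8-byte packet needs only 2 bit-times; a 2-bit link needs 32.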
The data link layer includes the initialization and configuration sequence, periodic cyclic redundancy check (CRC), disconnect/reconnect sequence, information packets for flow control and error management, and double-word framing for other packets.
Hyper Transport technology-enabled devices with transmitter and receiver links of equal width can be easily and directly connected. Devices with asymmetric data paths can also be linked together easily. Extra receiver pins are tied to logic 0, while extra transmitter pins are left open. During power-up, when RESET# is asserted and the Control signal is at logic 0, each device transmits a bit pattern indicating the width of its receiver. Logic within each device determines the maximum safe width for its transmitter. While this may be narrower than the optimal width, it provides reliable communications between devices until configuration software can optimize the link to the widest common width. For applications that typically send the bulk of the data in one direction, component vendors can save costs by implementing a wide path for the majority of the traffic and a narrow path in the lesser-used direction. Devices are not required to implement equal-width upstream and downstream links.
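The width negotiation described above reduces to each side clamping its transmitter to the narrower of its own width and the peer's reported receiver width. A simplified sketch (this models only the clamping decision, not the actual RESET# bit-pattern exchange):

```python
def negotiated_tx_width(own_tx_width_bits, peer_rx_width_bits):
    """Widest safe transmitter width for one side of an asymmetric link."""
    return min(own_tx_width_bits, peer_rx_width_bits)

# Example: a 16-bit transmitter facing an 8-bit receiver falls back to
# 8 bits until configuration software re-optimizes the link.
print(negotiated_tx_width(16, 8))   # 8
print(negotiated_tx_width(8, 16))   # 8
```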
The protocol layer includes the commands, the virtual channels in which they run, and the ordering rules that govern their flow. The transaction layer uses the elements provided by the protocol layer to perform actions, such as read requests and responses.

Commands
All Hyper Transport technology commands are either four or eight bytes long and begin with a 6-bit command type field. The most commonly used commands are Read Request, Read Response, and Write. A virtual channel contains requests or responses with the same ordering priority. When the command requires an address, the last byte of the command is concatenated with an additional four bytes to create a 40-bit address. Hyper Transport commands and data are separated into one of three types of virtual channels: non-posted requests, posted requests, and responses. Non-posted requests require a response from the receiver. All read requests and some write requests are non-posted requests. Posted requests do not require a response from the receiver. Write requests are posted requests. Responses are replies to non-posted requests. Read responses or target-done responses to non-posted writes are types of response messages. Command packets are 4 or 8 bytes and include all of the information needed for inter-device or system-wide communications, except in the case of reads and writes, when a data packet is required for the data payload. Hyper Transport writes require an 8-byte Write Request control packet, followed by the data packet. Hyper Transport reads require an 8-byte Read Request control packet (issued from the host or other device), followed by a 4-byte Read Response control packet (issued by the peripheral or responding device), followed by the data packet.
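To make the 6-bit command type and 40-bit address fields concrete, the sketch below packs them into an 8-byte packet. Only those two field sizes come from the text above; the bit positions chosen here are hypothetical, not the HyperTransport specification's actual packet layout:

```python
def pack_request(command, address):
    """Pack a 6-bit command and 40-bit address into 8 bytes (illustrative
    layout: command in the top bits, address in the low 40 bits)."""
    assert 0 <= command < (1 << 6), "command type field is 6 bits"
    assert 0 <= address < (1 << 40), "address field is 40 bits"
    word = (command << 58) | address   # hypothetical bit positions
    return word.to_bytes(8, "big")

pkt = pack_request(0b000010, 0xDEADBEEF00)
print(len(pkt))   # 8 -- an 8-byte control packet
```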
ENHANCED LOW VOLTAGE DIFFERENTIAL SIGNALING
The signaling technology used in Hyper Transport technology is a type of low voltage differential signaling (LVDS). However, it is not the conventional IEEE LVDS standard; it is an enhanced LVDS technique developed to evolve with the performance of future process technologies. This is designed to help ensure that the Hyper Transport technology standard has a long lifespan. LVDS has been widely used in these types of applications because it requires fewer pins and wires. This is also designed to reduce cost and power requirements, because the transceivers are built into the controller chips. Hyper Transport technology uses low-voltage differential signaling with a differential impedance (ZOD) of 100 ohms for CAD, Clock, and Control signals, as illustrated in the figure. Characteristic line impedance is 60 ohms. The driver supply voltage is 1.2 volts, instead of the conventional 2.5 volts for standard LVDS. Differential signaling and the chosen impedance provide a robust signaling system for use on low-cost printed circuit boards. Common four-layer PCB materials with specified dielectric, trace, and space dimensions and tolerances, or controlled-impedance boards, are sufficient to implement a Hyper Transport I/O link. The differential signaling permits trace lengths up to 24 inches for 800 Mbit/s operation.
