
With the world becoming smaller day by day, communications has become one of the most sought-after technical sectors. At the core of this revolution is Digital Signal Processing (DSP). Its explosive growth is fuelled by the proliferation of DSP applications such as high-speed networking/switching/routing, voice/fax modems, broadband, wireless, digital imaging, and video. New killer applications are constantly evolving in the DSP space.

The term DSP applies broadly to continuous mathematical processes attempted in real time. These include functions such as digital filtering (FIR and IIR), Viterbi decoding, convolution, correlation, and fast Fourier transforms. Most of these functions require the incoming data to be multiplied and added, with various internal feedback mechanisms, to perform the desired mathematical function. This operation is generically called multiply/accumulate. To increase performance, most general-purpose DSP processors perform a multiply/accumulate in a single clock cycle (or less). The hardware that performs this operation is called a multiply/accumulator (MAC). Most DSP processors have a fixed-point MAC, while some have a more expensive floating-point MAC.

Digital signal processing is an area of science and engineering that has developed rapidly over the past 30 years. Not only do digital circuits yield cheaper and more reliable systems for signal processing; they have other advantages as well. In particular, digital signal processing hardware allows programmable operations: through software, one can easily modify the signal processing functions to be performed by the hardware. Thus digital hardware and its associated software provide a greater degree of flexibility in system design. There is also often a higher order of precision achievable with digital hardware and software than with analog circuits and analog signal processing systems.
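To make the multiply/accumulate idea concrete, here is a minimal sketch in plain Python of the MAC loop at the heart of an FIR filter. The function name and the moving-average coefficients are illustrative, not taken from any particular processor or filter design; a real MAC unit would perform each multiply-and-add step in hardware in a single cycle.

```python
def fir_filter(samples, coeffs):
    """Filter `samples` with an FIR filter defined by `coeffs`.

    Each output is y[n] = sum over k of coeffs[k] * samples[n - k]:
    one multiply/accumulate per coefficient, per output sample.
    """
    out = []
    for n in range(len(samples)):
        acc = 0.0  # the running sum a hardware MAC unit would accumulate
        for k, c in enumerate(coeffs):
            if n - k >= 0:
                acc += c * samples[n - k]  # a single multiply/accumulate step
        out.append(acc)
    return out

# A 3-tap moving-average filter smoothing a step input
y = fir_filter([0, 0, 3, 3, 3], [1/3, 1/3, 1/3])
```

The inner loop is exactly the workload a DSP processor's MAC hardware is built for: for a filter with N coefficients, each output sample costs N multiply/accumulate operations.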
Most of the signals encountered in science and engineering are analog in nature. Such signals are processed by appropriate analog systems (such as filters or frequency multipliers) for the purpose of changing their characteristics or extracting some desired information. Digital signal processing provides an alternative method for processing the analog signal. To perform the processing digitally, there must be an interface between the analog signal and the digital processor, called the analog-to-digital converter. The digital signal processor may be a large programmable digital computer, a small microprocessor programmed to perform the desired operations on the input signal, or a hardwired digital processor configured to perform a specified set of operations on the input signal. Programmable machines provide the flexibility to change the signal processing operations through a change in the software, whereas hardwired machines are difficult to reconfigure. The result of the processing may be delivered in digital form or in the analog domain. In the latter case, the digital signal must be converted back into the analog domain by means of an interface called the digital-to-analog converter.
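The analog-to-digital step described above can be sketched in a few lines: uniform sampling followed by quantization to a fixed number of bits, and a digital-to-analog step that maps the integer codes back to amplitudes. The sample rate, bit depth, and function names here are illustrative assumptions, not the behaviour of any real converter.

```python
import math

def adc(analog, fs, duration, bits):
    """Sample `analog(t)` at `fs` Hz and quantize each sample to `bits` bits.

    Assumes the signal stays within [-1.0, 1.0]; returns integer codes
    in [-2**(bits-1), 2**(bits-1) - 1].
    """
    levels = 2 ** (bits - 1)
    codes = []
    for n in range(int(fs * duration)):
        x = analog(n / fs)  # sampling: continuous time t becomes discrete index n
        q = max(-levels, min(levels - 1, round(x * levels)))  # quantization
        codes.append(q)
    return codes

def dac(codes, bits):
    """Map integer codes back to approximate analog amplitudes."""
    levels = 2 ** (bits - 1)
    return [c / levels for c in codes]

# A 1 kHz sine sampled at 8 kHz for 1 ms through a model 8-bit converter
codes = adc(lambda t: math.sin(2 * math.pi * 1000 * t), fs=8000, duration=0.001, bits=8)
approx = dac(codes, bits=8)
```

The reconstructed amplitudes differ from the original by at most one quantization step, which is the precision/cost trade-off a converter's bit depth controls.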

Digital Signal Processing is carried out by mathematical operations. In comparison, word processing and similar programs merely rearrange stored data. This means that computers designed for business and other general applications are not optimized for algorithms such as digital filtering and Fourier analysis. Digital Signal Processors are microprocessors specifically designed to handle Digital Signal Processing tasks. In fact, hardware engineers use "DSP" to mean Digital Signal Processor, just as algorithm developers use "DSP" to mean Digital Signal Processing. The number and variety of products that include some form of digital signal processing has grown dramatically over the last decade, with these devices finding use in everything from cellular telephones to advanced scientific instruments. DSP has become a key component in many consumer, communications, medical, and industrial products. These products use a variety of hardware approaches to implement DSP, ranging from off-the-shelf microprocessors to field-programmable gate arrays (FPGAs) to custom integrated circuits (ICs).
Programmable DSP processors, a class of microprocessors optimized for DSP, are a popular solution for several reasons. They have the advantage of potentially being reprogrammed in the field, allowing product upgrades or fixes. They are often more cost-effective and less risky than custom hardware, particularly for low-volume applications, where the development cost of custom ICs may be prohibitive. And in comparison to other types of microprocessors, DSP processors often have an advantage in terms of speed, cost, and energy efficiency. From the outset, DSP processor architectures have been molded by DSP algorithms. For nearly every feature found in a DSP processor, there are associated DSP algorithms whose computation is in some way eased by inclusion of this feature. Therefore, perhaps the best way to understand the evolution of DSP architectures is to examine typical DSP algorithms and identify how their computational requirements have influenced the architectures of DSP processors.

In the 1960s it was predicted that artificial intelligence would revolutionize the way humans interact with computers and other machines. It was believed that by the end of the century we would have robots cleaning our houses, computers driving our cars, and voice interfaces controlling the storage and retrieval of information. This hasn't happened; these abstract tasks are far more complicated than expected, and very difficult to carry out with the step-by-step logic provided by digital computers. However, the last forty years have shown that computers are extremely capable in two broad areas: data manipulation, such as word processing and database management, and mathematical calculation, used in science, engineering, and Digital Signal Processing.
All microprocessors can perform both tasks; however, it is difficult and expensive to make a device that is optimized for both. There are technical tradeoffs in the hardware design, such as the size of the instruction set and how interrupts are handled. Even more important, there are marketing issues involved: development and manufacturing cost, competitive position, product lifetime, and so on. As a broad generalization, these factors have made traditional microprocessors, such as the Pentium, primarily directed at data manipulation. Similarly, Digital Signal Processors are designed to perform the mathematical calculations needed in Digital Signal Processing.

Enhanced-conventional DSP processors provide improved performance by allowing more operations to be encoded in every instruction, but because they follow the trend of using specialized hardware and complex, compound instructions, they suffer from some of the same problems as conventional DSPs: they are difficult to program in assembly language and they are unfriendly compiler targets.
With the goals of achieving high performance and creating an architecture that lends itself to the use of compilers, some newer DSP processors use a Multi-Issue approach. In contrast to conventional and enhanced-conventional processors, multi-issue processors use very simple instructions that typically encode a single operation. These processors achieve a high level of parallelism by issuing and executing instructions in parallel groups rather than one at a time. Using simple instructions simplifies instruction decoding and execution, allowing multi-issue processors to execute at higher clock rates than conventional or enhanced conventional DSP processors.

The two classes of architectures that execute multiple instructions in parallel are referred to as VLIW (very long instruction word) and superscalar. These architectures are quite similar, differing mainly in how instructions are grouped for parallel execution. With one exception, all current multi-issue DSP processors use the VLIW approach. VLIW and superscalar architectures provide many execution units (many more than are found on conventional or even enhanced-conventional DSPs), each of which executes its own instruction. VLIW DSP processors typically issue a maximum of between four and eight instructions per clock cycle, which are fetched and issued as part of one long "super-instruction", hence the name very long instruction word. Superscalar processors typically issue and execute fewer instructions per cycle, usually between two and four.

Many high-end CPUs, such as Pentiums and PowerPCs, have been enhanced to increase the speed of computations associated with signal processing tasks. This approach is a good one for CPUs, which typically have wide resources (buses, registers, ALUs) that can be treated as multiple smaller resources to increase performance, a single-instruction, multiple-data (SIMD) approach.
Using this approach, general-purpose processors are often able to achieve performance on DSP algorithms that is better than that of even the fastest DSP processors. This surprising result is partly due to the effectiveness of SIMD, but also because many CPUs operate at extremely high clock speeds in comparison to DSP processors; high-performance CPUs typically operate at upwards of 500 MHz, while the fastest DSP processors are in the 200-250 MHz range. Given this speed advantage, the question naturally arises: why use a DSP processor at all? There are a number of reasons why DSP processors are still the solution of choice for many applications. Although other types of processors may provide similar (or better) speed, DSP processors often provide the best mixture of performance, power consumption, and price. Another key advantage is the availability of DSP-specific development tools and off-the-shelf DSP software components. And for real-time applications, the superscalar architectures and dynamic features common among high-performance CPUs can be problematic.
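The SIMD idea of treating one wide resource as several narrow ones can be shown with a toy model in plain Python: two unsigned 16-bit values are packed into a single 32-bit word, and one wide addition updates both lanes at once. This is a simplified illustration only; real SIMD instructions also handle per-lane overflow (for example by saturating), which this sketch ignores by assuming neither lane carries into its neighbour.

```python
def pack2x16(hi, lo):
    """Pack two unsigned 16-bit values into one 32-bit word."""
    return ((hi & 0xFFFF) << 16) | (lo & 0xFFFF)

def unpack2x16(word):
    """Split a 32-bit word back into its two 16-bit lanes."""
    return (word >> 16) & 0xFFFF, word & 0xFFFF

a = pack2x16(1000, 2000)
b = pack2x16(30, 40)
s = a + b  # one wide add updates both 16-bit lanes simultaneously
lanes = unpack2x16(s)  # (1030, 2040)
```

A 64-bit ALU used this way can process four 16-bit samples per instruction, which is how general-purpose CPUs claw back DSP performance without dedicated MAC hardware.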
Microcontrollers are often used in control applications where the computational requirements are modest but where factors that influence product cost and time-to-market, such as low program memory use and the availability of efficient compilers, are important. Many applications require a mixture of control-oriented software and DSP software. An example is the digital cellular phone, which must implement both supervisory tasks and voice-processing tasks. In general, microcontrollers provide good performance in controller tasks and poor performance in DSP tasks; DSP processors have the opposite characteristics. Hence, until recently, combination control/signal processing applications were typically implemented using two separate processors: a microcontroller and a DSP processor. In recent years, however, a number of microcontroller vendors have begun to offer DSP-enhanced versions of their microcontrollers as an alternative to the dual-processor solution. Using a single processor to implement both types of software is attractive, because it can potentially:

1. Simplify the design task
2. Save circuit board space
3. Reduce total power consumption
4. Reduce overall system cost
