Embryonics Approach Towards Integrated Circuits
Embryonics is embryonic electronics. The workings of multicellular organization in living beings suggest that concepts from biology can be applied to the development of new "embryonic" integrated circuits. The final objective is the development of VLSI circuits that can partially reconstruct themselves in case of a minor fault (self-repair) or completely reconstruct the original device in case of a major fault (self-replication). These features are advantageous for applications that depend on high reliability, such as avionics and medical electronics. The basic primitive of the system is the molecule, the element of a new FPGA: essentially a multiplexer associated with a programmable connection network. A finite set of molecules makes up a cell, i.e., a very simple processor associated with some memory resources. A finite set of cells makes up an organism, i.e., an application-specific multiprocessor system. The organism itself can self-replicate, giving rise to a population of identical organisms. Self-repair and self-replication are achieved by providing spare cells. This seminar report tries to bring out the basic concepts in the embryonics approach to realizing VLSI circuits.
The growth and operation of all living beings are directed by the interpretation, in each of their cells, of a chemical program, the DNA string or genome. This process is the source of inspiration for Embryonics (embryonic electronics), whose final objective is the design of highly robust integrated circuits, endowed with properties usually associated with the living world: self-repair (cicatrisation) and self-replication. The Embryonics architecture is based on four hierarchical levels of organization.
1. The basic primitive of our system is the molecule, a multiplexer-based element of a novel programmable circuit.
2. A finite set of molecules makes up a cell, essentially a small processor with an associated memory.
3. A finite set of cells makes up an organism, an application-specific multiprocessor system.
4. The organism can itself replicate, giving rise to a population of identical organisms, capable of self-replication and repair.
Each artificial cell is characterized by a fixed architecture. Multicellular arrays can realize a variety of different organisms, all capable of self-replication and self-repair. In order to allow for a wide range of applications, we then introduce a flexible architecture, realized using a new type of fine-grained field-programmable gate array whose basic element, our molecule, is essentially a programmable multiplexer.
A human being consists of approximately 60 trillion cells. At each instant, in each of these 60 trillion cells, the genome, a ribbon of 2 billion characters, is decoded to produce the proteins needed for the survival of the organism. The genome contains the ensemble of the genetic inheritance of the individual and, at the same time, the instructions for both the construction and the operation of the organism. The parallel execution of 60 trillion genomes in as many cells occurs ceaselessly from the conception to the death of the individual. Faults are rare and, in the majority of cases, successfully detected and repaired. This process is remarkable for its complexity and its precision. Moreover, it relies on completely discrete information: the structure of DNA (the chemical substrate of the genome) is a sequence of four bases, usually designated by the letters A (adenine), C (cytosine), G (guanine) and T (thymine).
Embedded Systems and Information Appliances
An embedded system is a combination of computer hardware, software and, perhaps, additional mechanical parts, designed to perform a specific function. Embedded systems are usually programmed in a high-level language that is compiled (and/or assembled) into executable ("machine") code. This code is loaded into Read-Only Memory (ROM) and called "firmware", "microcode" or a "microkernel". The microprocessor is typically 8-bit or 16-bit; the bit size refers to the width of the data the processor operates on at a time. There is usually no operating system and perhaps 0.5 KB of RAM. The functions implemented normally have no priorities. As the need for features increases and/or as the need to establish priorities arises, it becomes more important to have some sort of decision-making mechanism as part of the embedded system. The most advanced systems actually have a tiny, streamlined operating system running the show, executing on a 32-bit or 64-bit processor. Such an operating system is called a real-time operating system (RTOS).
Every embedded system has a microprocessor or microcontroller for processing information and executing programs, memory in the form of ROM/RAM for storing embedded software and data, and I/O interfaces for external interfacing. Any additional requirement in an embedded system depends on the equipment it is controlling. Very often these systems have a standard serial port, a network interface, an I/O interface, or hardware to interact with sensors and actuators on the equipment.
C has become the language of choice for embedded programmers because it has the benefit of processor independence, which allows the programmer to concentrate on algorithms and applications rather than on the details of a particular processor architecture. However, many of its advantages apply equally to other high-level languages. Perhaps the greatest strength of C is that it gives embedded programmers an extraordinary degree of direct hardware control without sacrificing the benefits of a high-level language. C compilers and cross-compilers are also available for almost every processor.
Any source code written in C, C++ or assembly language must be converted into an executable image that can be loaded onto a ROM chip. The process of converting the source code representation of your embedded software into an executable image involves three distinct steps, and the system or computer on which these processes are executed is called the host computer. First, each of the source files that make up an embedded application must be compiled or assembled into a distinct object file. Second, all of the object files that result from the first step must be linked into a single object file called the relocatable program. Third, the relocatable program must be located, i.e., assigned the physical memory addresses at which its code and data will reside on the target, producing the final executable image.
Electronic Data Interchange (EDI)
Prosperity, and even survival, for small businesses depends as never before on the ability to respond with speed and certainty to the challenges and opportunities presented by competitors and customers. Electronic commerce provides an opportunity to increase competitive edge and to consolidate and enhance both business-to-business and business-to-consumer trading relationships.
Into the current competitive and fast-moving world of e-commerce and electronic data transfer comes a highly relevant yet under-utilised system of data exchange: Electronic Data Interchange, or EDI.
EDI has no single consensus definition. Two generally accepted definitions are:
1. A standardized format for the communication of business information between computer applications.
2. The computer-to-computer exchange of information between companies, using an industry-standard format.
In short, Electronic Data Interchange (EDI) is the computer-to-computer exchange of business information using a public standard. EDI is a central part of Electronic Commerce (EC), because it enables businesses to exchange business information electronically much faster, more cheaply and more accurately than is possible using paper-based systems. EDI consists of data that has been put into a standard format and is electronically transferred between trading partners. Often, an acknowledgement is returned to the sender confirming that the data was received. The term EDI is often used synonymously with the term EDT; the two terms are in fact different and should not be used interchangeably.
EDI vs EDT
The terms EDI and EDT are often misused.
1. EDT, Electronic Data Transfer, is simply sending a file electronically to a trading partner.
2. Although EDI documents are sent electronically, they are sent in a standard format.
This standard format is what makes EDI different from EDT.
EDI vs E-Commerce
EDI is also often confused with e-commerce itself, though usually by those who are relative novices to the technology. They can hardly be faulted, however, as even now this method has not found use in many areas where it could work wonders.
Digital Signal Processors (DSP)

The best way to understand the requirements of a DSP processor is to examine typical DSP algorithms and identify how their computational requirements have influenced the architecture of DSP processors. Let us consider one of the most common processing tasks: the finite impulse response (FIR) filter.
For each tap of the filter, a data sample is multiplied by a filter coefficient and the result is added to a running sum over all of the taps. Hence the main component of the FIR filter is the dot product: multiply and add. These operations are not unique to the FIR filter algorithm; in fact, multiplication is one of the most common operations performed in signal processing, and convolution, IIR filtering and the Fourier transform also make heavy use of the multiply-accumulate operation. Early microprocessors implemented multiplication as a series of shift and add operations, each of which consumes one or more clock cycles. A DSP processor therefore first requires hardware that can multiply in a single cycle. Most DSP algorithms require a multiply-accumulate (MAC) unit.
In comparison to other types of computing tasks, DSP applications typically have very high computational requirements, since they must often execute DSP algorithms in real time on lengthy segments of data. Parallel operation of several independent execution units is therefore a must: for example, in addition to the MAC unit, an ALU and a shifter are also required.
Executing a MAC in every clock cycle requires more than just a single-cycle MAC unit. It also requires the ability to fetch the MAC instruction, a data sample, and a filter coefficient from memory in a single cycle. Hence good DSP performance requires high memory bandwidth, higher than that of general-purpose microprocessors, which had a single bus connection to memory and could make only one access per cycle. The most common approach was to use two or more separate banks of memory, each accessed by its own bus and each able to be read or written in a single cycle; programs are stored in one memory and data in another. With this arrangement, the processor can fetch an instruction and a data operand in parallel in every cycle. Since many DSP algorithms consume two data operands per instruction, a further optimization commonly used is to include a small bank of RAM near the processor core that serves as an instruction cache. When a small group of instructions is executed repeatedly, the cache is loaded with those instructions, freeing the instruction bus to be used for data fetches instead of instruction fetches, and thus enabling the processor to execute a MAC in a single cycle. High memory bandwidth requirements are often further supported by dedicated hardware for calculating memory addresses. These address calculation units operate in parallel with the DSP processor's main execution units, enabling it to access data at a new location in memory without pausing to calculate the new address.
Memory accesses in DSP algorithms tend to exhibit very predictable patterns: for example, for each sample in an FIR filter, the filter coefficients are accessed sequentially from start to finish, and then the access starts over from the beginning of the coefficient vector when processing the next input sample. This is in contrast to other computing tasks, such as database processing, where accesses to memory are far less predictable. DSP processor address generation units take advantage of this predictability by supporting specialized addressing modes that enable the processor to efficiently access data in the patterns commonly found in DSP algorithms. The most common of these modes is register-indirect addressing with post-increment, which automatically increments the address pointer in algorithms where repetitive computations are performed on a series of data stored sequentially in memory. Without this feature, the programmer would need to spend instructions explicitly incrementing the address pointer.
Direct to home television (DTH)
Direct to home (DTH) television is a wireless system for delivering television programs directly to the viewer's house. In DTH television, the broadcast signals are transmitted from satellites orbiting the Earth to the viewer's house. Each satellite is located approximately 35,700 km above the Earth in geosynchronous orbit. These satellites receive the signals from the broadcast stations located on Earth and rebroadcast them to the Earth.
The viewer's dish picks up the signal from the satellite and passes it on to the receiver located inside the viewer's house. The receiver processes the signal and passes it on to the television. DTH provides more than 200 television channels with excellent quality of reception, along with teleshopping, fax and internet facilities, and is used in millions of homes across the United States, Europe and South East Asia. Conventional broadcasting stations use a powerful antenna to transmit radio waves to the surrounding area, and viewers can pick up the signal with a much smaller antenna. The main limitation of broadcast television is range. The radio signals used to broadcast television shoot out from the broadcast antenna in a straight line. In order to receive these signals, you have to be in the direct "line of sight" of the antenna. Small obstacles like trees or small buildings aren't a problem, but a big obstacle, such as the Earth itself, will reflect these waves. If the Earth were perfectly flat, you could pick up broadcast television thousands of miles from the source; but because the planet is curved, it eventually breaks the signal's line of sight. The other problem with broadcast television is that the signal is often distorted, even in the viewing area. To get a perfectly clear signal like the one you find on cable, you have to be pretty close to the broadcast antenna, without too many obstacles in the way.
DTH television solves both these problems by transmitting broadcast signals from satellites orbiting the Earth. Since the satellites are high in the sky, many more customers are in the line of sight. Satellite television systems transmit and receive radio signals using specialized antennas called satellite dishes. The television satellites are all in geosynchronous orbit, approximately 35,700 km above the Earth. This way, you have to aim the dish at the satellite only once; from then on it picks up the signal without adjustment. More than 200 channels with excellent audio and video are made available, and the dish required is quite small (30 to 95 cm in diameter).
The Overall System
Early satellite TV viewers were explorers of sorts. They used their expensive dishes to discover unique programming that wasn't necessarily intended for mass audiences. The dish and receiving equipment gave viewers the tools to pick up foreign stations, live feeds between different broadcast stations, NASA activities and a lot of other material transmitted using satellites. Some satellite owners still seek out this sort of programming on their own, but today most direct-to-home TV customers get their programming through a direct broadcast satellite (DBS) provider, such as DirecTV or the Dish Network. The provider selects programs and broadcasts them to subscribers as a set package. Basically, the provider's goal is to bring dozens or even hundreds of channels to your television in a form that approximates the competition: cable TV. Unlike earlier programming, the provider's broadcast is completely digital, which means it has much better picture and sound quality. Early satellite television was broadcast in C-band radio, i.e., radio in the 3.4-gigahertz (GHz) to 7-GHz frequency range. Digital broadcast satellites transmit programming in the Ku frequency range (12 GHz to 14 GHz).
Augmented reality (AR)
Augmented reality (AR) refers to computer displays that add virtual information to a user's sensory perceptions. Most AR research focuses on see-through devices, usually worn on the head, that overlay graphics and text on the user's view of his or her surroundings. In general, AR superimposes graphics over a real-world environment in real time. Getting the right information at the right time and in the right place is key in all these applications. Personal digital assistants such as the Palm and the Pocket PC can provide timely information using wireless networking and Global Positioning System (GPS) receivers that constantly track the handheld devices. But what makes augmented reality different is how the information is presented: not on a separate display but integrated with the user's perceptions. This kind of interface minimizes the extra mental effort that a user has to expend when switching his or her attention back and forth between real-world tasks and a computer screen. In augmented reality, the user's view of the world and the computer interface literally become one.
Between the extremes of real life and Virtual Reality lies the spectrum of Mixed Reality, in which views of the real world are combined in some proportion with views of a virtual environment. Combining direct view, stereoscopic video, and stereoscopic graphics, Augmented Reality describes the class of displays that consist primarily of a real environment with graphic enhancements or augmentations. In Augmented Virtuality, real objects are added to a virtual environment; in Augmented Reality, virtual objects are added to the real world. An AR system supplements the real world with virtual (computer-generated) objects that appear to co-exist in the same space as the real world, whereas Virtual Reality is a fully synthetic environment. The overall requirements of AR can be summarized by comparing them against the requirements for Virtual Environments (VE), for the three basic subsystems that they require.
1) Scene generator: Rendering is not currently one of the major problems in AR. VE systems have much higher requirements for realistic images because they completely replace the real world with the virtual environment. In AR, the virtual images only supplement the real world. Therefore, fewer virtual objects need to be drawn, and they do not necessarily have to be realistically rendered in order to serve the purposes of the application.
2) Display device: The display devices used in AR may have less stringent requirements than VE systems demand, again because AR does not replace the real world. For example, monochrome displays may be adequate for some AR applications, while virtually all VE systems today use full color. Optical see-through HMDs with a small field-of-view may be satisfactory because the user can still see the real world with his peripheral vision; the see-through HMD does not shut off the user's normal field-of-view. Furthermore, the resolution of the monitor in an optical see-through HMD might be lower than what a user would tolerate in a VE application, since the optical see-through HMD does not reduce the resolution of the real environment.
3) Tracking and sensing: While in the previous two cases AR had lower requirements than VE, that is not the case for tracking and sensing. In this area, the requirements for AR are much stricter than those for VE systems. A major reason for this is the registration problem.
Asynchronous Transfer Mode (ATM)
Computers today span the entire spectrum from PCs, through professional workstations, up to supercomputers. As the performance of computers has increased, so too has the demand for communication between all systems, whether for exchanging data or between central servers and their associated host computer systems. The replacement of copper with fiber and the advances in digital communication and encoding are at the heart of several developments that will change the communication infrastructure. The former development has provided us with a huge amount of transmission bandwidth, while the latter has made the transmission of all information, including voice and video, through a packet-switched network possible. With work continuously being shared over large distances, including international communication, the systems must be interconnected via wide area networks with ever-increasing demands for higher bit rates.
For the first time, a single communications technology meets LAN and WAN requirements and handles a wide variety of current and emerging applications. ATM is the first technology to provide a common format for bursts of high-speed data and for the ebb and flow of the typical voice phone call. Seamless ATM networks provide desktop-to-desktop multimedia networking over a single-technology, high-bandwidth, low-latency network, removing the boundary between LAN and WAN.
ATM is, in essence, a Data Link Layer protocol. It is asynchronous in the sense that the recurrence of the cells containing information from an individual user is not necessarily periodic. It is the technology of choice for the evolving B-ISDN (Broadband Integrated Services Digital Network) and for next-generation LANs and WANs. ATM supports transmission speeds of 155 Mbit/s. Photonic approaches have made ATM switches feasible, and an evolution towards an all-packetized, unified, broadband telecommunications and data communications world based on ATM is taking place.
Retinal Implants

The retina is a thin layer of neural tissue that lines the back wall inside the eye. Some of its cells act to receive light, while others interpret the information and send messages to the brain through the optic nerve; this is part of the process that enables us to see. In a damaged or dysfunctional retina, the photoreceptors stop working, causing blindness. By some estimates, more than 10 million people worldwide are affected by retinal diseases that lead to loss of vision. The absence of effective therapeutic remedies for retinitis pigmentosa (RP) and age-related macular degeneration (AMD) has motivated the development of experimental strategies to restore some degree of visual function to affected patients. Because the remaining retinal layers are anatomically spared, several approaches have been designed to artificially activate this residual retina and thereby the visual system.
At present, two general strategies have been pursued. The "epiretinal" approach involves a semiconductor-based device placed above the retina, close to or in contact with the nerve fiber layer and the retinal ganglion cells. In this approach, the image must be captured by a camera system before data and energy are transmitted to the implant. The "subretinal" approach involves the electrical stimulation of the inner retina from the subretinal space, by implantation of a semiconductor-based microphotodiode array (MPA) into this location. The concept of the subretinal approach is that the electrical charge generated by the MPA in response to a light stimulus may be used to artificially alter the membrane potential of neurons in the remaining retinal layers in a manner that produces formed images.
Some researchers have developed an implant system in which a video camera captures images, a chip processes them, and an electrode array transmits them directly to the brain; such systems are called cortical implants.