The Visual System
The human visual system is a remarkable instrument. It features two mobile acquisition units, each with formidable preprocessing circuitry, located remotely from the central processing system (the brain). Its primary tasks include transmitting images with a viewing angle of at least 140 degrees and a resolution of 1 arc minute over a limited-capacity carrier: the million or so fibers in each optic nerve. Through these fibers the signals are passed to the so-called higher visual cortex of the brain. The nervous system achieves this high-volume data transfer by confining full capability to just part of the retinal surface: whereas the center of the retina has a 1:1 ratio between photoreceptors and transmitting elements, the far periphery has a ratio of 300:1. This results in a gradual shift in resolution and other system parameters.
At the brain's highest level, the visual cortex, an impressive array of feature-extraction mechanisms can rapidly adjust the eye's position in response to sudden movements in the peripheral field by objects too small to see when stationary. The visual system can resolve spatial depth differences by combining signals from both eyes, with a precision better than one tenth the size of a single photoreceptor.
AI for Speech Recognition
AI is the study of how to make computers perform tasks that are currently done better by humans. It is an interdisciplinary field where computer science intersects with philosophy, psychology, engineering and other fields. Humans make decisions based upon experience and intention; the essence of AI is building computer systems that mimic this learning and decision process. When you dial the telephone number of a big company, you are likely to hear the sonorous voice of a cultured lady who responds to your call with great courtesy, saying "Welcome to company X. Please give me the extension number you want." You pronounce the extension number, your name, and the name of the person you want to contact. If the called person accepts the call, the connection is made quickly. This is artificial intelligence at work: an automatic call-handling system operating without any telephone operator.
Artificial intelligence (AI) involves two basic ideas. First, it involves studying the thought processes of human beings. Second, it deals with representing those processes via machines (computers, robots, etc.). AI is behaviour of a machine which, if performed by a human being, would be called intelligent. It makes machines smarter and more useful, and is less expensive than natural intelligence.
Natural language processing (NLP) refers to artificial intelligence methods of communicating with a computer in a natural language like English. The main objective of an NLP program is to understand input and initiate action. The input words are scanned and matched against internally stored known words; identification of a keyword causes some action to be taken. In this way, one can communicate with the computer in one's own language. No special commands or computer language are required, and there is no need to enter programs in a special language for creating software.

VoiceXML takes speech recognition even further. Instead of talking to your computer, you are essentially talking to a web site, and you are doing this over the phone. What exactly is speech recognition? Simply put, it is the process of converting spoken input to text; speech recognition is thus sometimes referred to as speech-to-text. Speech recognition allows you to provide input to an application with your voice. Just as clicking with your mouse, typing on your keyboard, or pressing a key on the phone keypad provides input to an application, speech recognition allows you to provide input by talking. In the desktop world, you need a microphone to be able to do this; in the VoiceXML world, all you need is a telephone.

The speech recognition process is performed by a software component known as the speech recognition engine. The primary function of the speech recognition engine is to process spoken input and translate it into text that an application understands. The application can then do one of two things: it can interpret the result of the recognition as a command, in which case it is a command-and-control application; or, if it handles the recognized text simply as text, it is considered a dictation application.
The user speaks to the computer through a microphone; the system identifies the words and sends their meaning to the NLP component for further processing. Once recognized, the words can be used in a variety of applications such as display, robotics, commands to computers, and dictation.
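The keyword-matching behaviour described above can be sketched as a tiny command-and-control dispatcher. The command table and phrases below are hypothetical, purely for illustration:

```python
# Minimal sketch of keyword spotting: scanned input words are matched
# against a table of known keywords, and a recognized keyword triggers
# an action (command-and-control style). Commands here are invented.

KNOWN_COMMANDS = {
    "call": lambda args: f"dialing extension {' '.join(args)}",
    "hangup": lambda args: "call ended",
}

def handle_utterance(text):
    """Scan recognized words and dispatch on the first known keyword."""
    words = text.lower().split()
    for i, word in enumerate(words):
        action = KNOWN_COMMANDS.get(word)
        if action is not None:
            return action(words[i + 1:])
    return "sorry, I did not understand"

print(handle_utterance("please call 4 2 1"))
```

A dictation application would instead return the recognized words unchanged rather than dispatching on them.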
Treating Cardiac Disease With Catheter-Based Tissue Heating
In microwave ablation, electromagnetic energy would be delivered via a catheter to a precise location in a coronary artery for selective heating of a targeted atherosclerotic lesion. Advantageous temperature profiles would be obtained by controlling the power delivered, the pulse duration, and the frequency. The major components of a microwave ablation apparatus would include a microwave source, a catheter/transmission line, and an antenna at the distal end of the catheter. The antenna would focus the radiated beam so that most of the microwave energy would be deposited within the targeted atherosclerotic lesion. Because of the rapid decay of the electromagnetic wave, little energy would pass into, or beyond, the adventitia. By suitable choice of the power delivered, pulse duration, frequency, and antenna design (which affects the width of the radiated beam), the temperature profile could be customized to the size, shape, and type of lesion being treated.
For decades, scientists have been using electromagnetic and sonic energy to serve medicine. But, aside from electrosurgery, their efforts have focused on diagnostic imaging of internal body structures, particularly with x-ray, MRI, and ultrasound systems. Lately, however, researchers have begun to see acoustic and electromagnetic waves in a whole new light, turning their attention to therapeutic rather than diagnostic applications. Current research exploits the ability of radio-frequency (RF) energy and microwaves to generate heat, essentially by exciting molecules; this heat is used predominantly to ablate cells. Of the two technologies, RF was the first to be used in a marketable device, and now microwave devices are entering the commercialization stage. These technologies have distinct strengths and weaknesses that will define their use and determine their market niches. The depth to which microwaves can penetrate tissues is primarily a function of the dielectric properties of the tissues and of the frequency of the microwaves.
The tissue of the human body is enormously varied and complex, with innumerable types of structures, components, and cells. These tissues vary not only within an individual, but also among people of different gender, age, physical condition and health, and even as a function of external inputs such as food eaten, air breathed, ambient temperature, or even state of mind. From the point of view of RF and microwaves in the frequency range 10 MHz to 10 GHz, however, biological tissue can be viewed macroscopically in terms of its bulk shape and two electromagnetic characteristics: dielectric constant and electrical conductivity. These are dependent on frequency and highly dependent on the particular tissue type.
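The penetration depth mentioned above follows directly from these two bulk parameters. As a rough sketch, the 1/e field-decay depth of a plane wave in a lossy dielectric can be computed from the standard attenuation-constant formula; the tissue values used below are assumed, representative figures, not numbers from this article:

```python
import math

MU0 = 4e-7 * math.pi    # vacuum permeability (H/m)
EPS0 = 8.854e-12        # vacuum permittivity (F/m)

def penetration_depth(freq_hz, eps_r, sigma):
    """1/e field-decay depth (m) of a plane wave in a lossy dielectric
    with relative permittivity eps_r and conductivity sigma (S/m)."""
    w = 2 * math.pi * freq_hz
    loss_tangent = sigma / (w * eps_r * EPS0)
    alpha = w * math.sqrt(MU0 * eps_r * EPS0 / 2
                          * (math.sqrt(1 + loss_tangent**2) - 1))
    return 1 / alpha

# Assumed muscle-like tissue values (eps_r ~ 47, sigma ~ 2.2 S/m) at 2.45 GHz:
print(f"{penetration_depth(2.45e9, 47.0, 2.2) * 100:.2f} cm")
```

This yields a depth on the order of a couple of centimeters at 2.45 GHz, and, consistent with the text, lower frequencies penetrate deeper.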
All biological tissue is somewhat electrically conductive, absorbing microwave power and converting it to heat as the wave penetrates the tissue. Delivering heat at depth is not only valuable for cooking dinner; it can be quite useful for many therapeutic medical applications as well. These include diathermy for mild orthopedic heating, hyperthermia cell killing for cancer therapy, microwave ablation, and microwave-assisted balloon angioplasty. The last two are the subject of this article. It should also be mentioned that, based on the long history of high-power microwave exposure in humans, it is reasonably certain that, barring overheating effects, microwave radiation is medically safe. There have been no credible reports of carcinogenic, mutagenic or poisonous effects of microwave exposure.
Surround Sound System
There are many surround systems available in the market, using different technologies to produce the surround effect. Some surround sound systems are based on audio compression technology (for example Dolby Pro Logic® or Dolby Digital AC-3®) to encode and deliver a multi-channel soundtrack, and audio decompression technology to decode the soundtrack for delivery on a surround-sound five-speaker setup. Additionally, virtual surround sound systems use 3D audio technology to create the illusion of five speakers from a regular set of stereo speakers, enabling a surround-sound listening experience without the need for a five-speaker setup.
We are now entering the Third Age of reproduced sound. The monophonic era was the First Age, lasting from Edison's invention of the phonograph in 1877 until the 1950s. During those times, the goal was simply to reproduce the timbre of the original sound; no attempts were made to reproduce directional properties or spatial realism. The stereo era was the Second Age. Based on inventions from the 1930s, it reached the public in the mid-'50s and has provided great listening pleasure for four decades. Stereo improved the reproduction of timbre and added two dimensions of space: the left-right spread of performers across a stage, and a set of acoustic cues that allow listeners to perceive a front-to-back dimension.
In two-channel stereo, this realism is based on fragile sonic cues. In most ordinary two-speaker stereo systems, these subtle cues can easily be lost, causing the playback to sound flat and uninvolving. Multichannel surround systems, on the other hand, can provide this involving presence in a way that is robust, reliable and consistent. The purpose of this seminar is to explore the advances and technologies of surround sound in the consumer market.
Human hearing is binaural (based on two ears), yet we have the ability to locate sound spatially. That is, we can determine where a sound is coming from, and in most cases, from how far away. In addition, humans can distinguish multiple sound sources in relation to the surrounding environment. This is possible because our brains can determine the location of each sound in the three-dimensional environment we live in by processing the information received by our two ears.
The principal localization cues used in binaural human hearing are the Interaural Intensity Difference (IID) and the Interaural Time Difference (ITD). IID refers to the fact that if a sound source is closer to one ear than the other, its intensity at that ear is greater than at the other ear, which is not only farther away but also receives the sound shadowed by the listener's head. ITD is related to the fact that unless the sound source is located at exactly the same distance from both ears (i.e., directly in front of or behind the listener), the sound arrives at one ear sooner than the other. If it reaches the right ear first, the source is somewhere to the right, and vice versa. By combining these two cues with others related to reflections of the sound as it travels to our eardrums, our brains are able to determine the position of an individual sound source.
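As an illustrative sketch of the ITD cue, Woodworth's classical spherical-head approximation estimates the arrival-time difference from the source azimuth. The head radius and speed of sound below are assumed nominal values, not figures from this article:

```python
import math

HEAD_RADIUS = 0.0875    # assumed average head radius (m)
SPEED_OF_SOUND = 343.0  # m/s at room temperature

def itd_seconds(azimuth_deg):
    """Woodworth's spherical-head approximation of the interaural time
    difference for a distant source at the given azimuth
    (0 deg = straight ahead, 90 deg = directly to one side)."""
    theta = math.radians(azimuth_deg)
    return (HEAD_RADIUS / SPEED_OF_SOUND) * (theta + math.sin(theta))

print(f"{itd_seconds(90) * 1e6:.0f} microseconds")
```

A source directly to one side gives an ITD of roughly 650 microseconds, the largest delay the head geometry allows; a centered source gives zero, matching the front/back ambiguity described above.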
The principal format for digital discrete surround is the "5.1 channel" system. The 5.1 name stands for five channels (see figure 1 below) (in front: left, right and centre; behind: left surround and right surround) of full-bandwidth audio (20 Hz to 20 kHz), plus a sixth channel that will, at times, contain additional bass information to maximize the impact of scenes such as explosions.
Space Time Adaptive Processing
Space-Time Adaptive Processing (STAP) refers to a class of signal processing techniques used to process returns of an antenna array radar system. It enhances the ability of radars to detect targets that might otherwise be obscured by clutter or jamming.
The output of STAP is a linear combination, or weighted sum, of the input signal samples. The "adaptive" in STAP refers to the fact that the STAP weights are computed to reflect the actual noise, clutter and jamming environment in which the radar finds itself. The "space" in STAP refers to the fact that the STAP weights (applied to the signal samples at each of the elements of the antenna array) at one instant of time define an antenna pattern in space. If there are jammers in the field of view, STAP will adapt the radar antenna pattern by placing nulls in the directions of those jammers, thus rejecting jammer power. The "time" in STAP refers to the fact that the STAP weights applied to the signal samples at one antenna element over the entire dwell define a system impulse response, and hence a system frequency response.
STAP is a multi-dimensional adaptive signal processing technique over spatial and temporal samples. In this approach, the input data collected from several antenna sensors has a cubic form. Depending on how this input data cube is processed, STAP is classified into Higher Order Post-Doppler (HOPD), Element Space Pre-Doppler, Element Space Post-Doppler, Beam Space Pre-Doppler, and Beam Space Post-Doppler. STAP consists of three major computation steps. First, a set of rules called the training strategy is used to select data which will be processed in the subsequent computation. The second step is weight computation.
It requires solving a set of linear equations and is the most computationally intensive step. Finally, a thresholding operation is performed after applying the computed weights. In HOPD processing, Doppler processing (FFT computations) is followed by solving least-squares problems (QR decompositions).
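The weight-computation step can be sketched with the common sample-matrix-inversion approach (an assumption here; the article does not name a specific solver), in which an interference covariance matrix estimated from the training data defines the set of linear equations to solve:

```python
import numpy as np

rng = np.random.default_rng(0)

def stap_weights(training_snapshots, steering_vector, loading=1e-3):
    """Sample-matrix-inversion sketch of STAP weight computation:
    estimate the interference covariance R from training snapshots,
    then solve R w = s (the set of linear equations in the text).
    Diagonal loading keeps the estimate well conditioned."""
    R = training_snapshots @ training_snapshots.conj().T
    R /= training_snapshots.shape[1]
    R += loading * np.eye(R.shape[0])
    return np.linalg.solve(R, steering_vector)

# Toy example: 8 space-time channels, 100 training snapshots (assumed sizes).
n_dof, n_train = 8, 100
X = (rng.standard_normal((n_dof, n_train))
     + 1j * rng.standard_normal((n_dof, n_train)))
s = np.ones(n_dof, dtype=complex)   # hypothetical target steering vector
w = stap_weights(X, s)
output = w.conj() @ X[:, 0]         # weighted sum of one input snapshot
print(w.shape)
```

The weighted sum `output` is what is then compared against a threshold in the final detection step.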
Narrow Band & Broad Band ISDN
The most important development in the computer communications industry in the 1990s has been the evolution of the integrated services digital network (ISDN) and broadband ISDN (B-ISDN). ISDN and B-ISDN have had a dramatic impact on the planning and deployment of intelligent digital networks providing integrated services for voice, data and video. Further, the work on the ISDN and B-ISDN standards has led to the development of two major new networking technologies: frame relay and asynchronous transfer mode (ATM). Frame relay and ATM have become essential ingredients in developing high-speed networks for local, metropolitan and wide-area applications.
The ISDN is intended to be a worldwide public telecommunications network that replaces existing public telecommunication networks and delivers a wide variety of services. The ISDN is defined by the standardization of user interfaces and implemented as a set of digital switches and paths supporting a broad range of traffic types and providing value-added processing services. In practice, there are multiple networks implemented within national boundaries, but from the user's point of view, the eventual widespread deployment of ISDN will lead to a single, uniformly accessible, worldwide network.
The narrowband ISDN is based on the use of a 64 kbps channel as the basic unit of switching and has a circuit switching orientation. The major technical contribution of the narrowband ISDN effort has been frame relay. The B-ISDN supports very high data rates (100s of Mbps) and has a packet switching orientation. The major technical contribution of the B-ISDN effort has been asynchronous transfer mode, also known as cell relay.
Circuit switching is the dominant technology for both voice and data communications. Communication via circuit switching implies a dedicated communication path between two stations. That path is a connected sequence of links between network nodes, and on each physical link a channel is dedicated to the connection. The three phases involved in communication via circuit switching are circuit establishment, information transfer and circuit disconnect.
In a typical data connection, the line is idle much of the time, so the circuit-switched approach is inefficient. In packet switching, data are transmitted in short packets. Each packet contains a portion of the user's data plus some control information. The control information, at a minimum, includes what the network requires to route the packet through the network and deliver it to the intended destination. At each node en route, the packet is received, stored briefly, and passed on to the next node. The advantages of packet switching are greater line efficiency, the possibility of data-rate conversion, and the ability to use priorities.
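The packetization described above can be sketched as follows; the header fields and the eight-byte payload size are illustrative choices, not taken from any particular protocol:

```python
def packetize(data, payload_size, dest):
    """Split user data into packets, each carrying minimal control
    information (destination and sequence number) plus a payload chunk."""
    packets = []
    for seq, start in enumerate(range(0, len(data), payload_size)):
        packets.append({
            "dest": dest,
            "seq": seq,
            "payload": data[start:start + payload_size],
        })
    return packets

def reassemble(packets):
    """A receiving node reorders packets by sequence number and
    rejoins the payloads into the original user data."""
    return b"".join(p["payload"]
                    for p in sorted(packets, key=lambda p: p["seq"]))

msg = b"packet switching sends data in short packets"
pkts = packetize(msg, 8, dest="node-B")
assert reassemble(pkts) == msg
print(len(pkts), "packets")
```

Because each packet carries its own routing information, intermediate nodes can store it briefly and forward it independently, which is exactly what enables the line-efficiency and data-rate-conversion advantages listed above.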
With modern, high-speed telecommunication systems, this error-control overhead is unnecessary and counterproductive. To take advantage of the high data rates and low error rates of contemporary networking facilities, frame relay was developed. Whereas the original packet-switching networks were designed with a data rate to the end user of about 64 kbps, frame relay networks are designed to operate at user data rates of up to 2 Mbps. The key to achieving these high data rates is to strip out most of the overhead involved with error control.
Nanotechnology is defined as the fabrication of devices with atomic- or molecular-scale precision. Devices with minimum feature sizes of less than 100 nanometers (nm) are considered products of nanotechnology. A nanometer is one billionth of a meter (10⁻⁹ m) and is the unit of length generally most appropriate for describing the size of single molecules. The nanoscale marks the nebulous boundary between the classical and quantum mechanical worlds; thus, the realization of nanotechnology promises revolutionary capabilities. Fabrication of nanomachines, nanoelectronics and other nanodevices will undoubtedly help solve an enormous number of the problems faced by mankind today. Nanotechnology is currently in a very early stage. However, we now have the ability to organize matter on the atomic scale, and there are already numerous products available as a direct result of our rapidly increasing ability to fabricate and characterize feature sizes of less than 100 nm. Mirrors that don't fog, biomimetic paint with a contact angle near 180°, gene chips and fat-soluble vitamins in aqueous beverages are some of the first manifestations of nanotechnology. However, imminent breakthroughs in computer science and medicine are where the real potential of nanotechnology will first be achieved.
Nanoscience is an interdisciplinary field that seeks to bring about mature nanotechnology. Focusing on the nanoscale intersection of fields such as physics, biology, engineering, chemistry and computer science, nanoscience is rapidly expanding. Nanotechnology centers are appearing around the world as more funding is provided and nanotechnology market share increases. The rapid progress is apparent from the increasing appearance of the prefix "nano" in scientific journals and the news. As we increase our ability to fabricate computer chips with smaller features and improve our ability to cure disease at the molecular level, nanotechnology is already arriving.
History of Nanotechnology
The amount of space available to us for information storage (or other uses) is enormous. As first described in the 1959 lecture 'There's Plenty of Room at the Bottom' by Richard P. Feynman, there is nothing besides our clumsy size that keeps us from using this space. In his time, it was not possible to manipulate single atoms or molecules because they were far too small for our tools; thus, his speech was entirely theoretical and seemingly fantastic. He described how the laws of physics do not limit our ability to manipulate single atoms and molecules; rather, it was our lack of appropriate methods for doing so. He correctly predicted that atomically precise manipulation of matter would inevitably arrive.
Prof. Feynman described such atomic scale fabrication as a bottom-up approach, as opposed to the top-down approach that we are accustomed to. The current top-down method for manufacturing involves the construction of parts through methods such as cutting, carving and molding.
Billions of visible LEDs are produced each year, and the emergence of high-brightness AlGaAs and AlInGaP devices has given rise to many new markets. The surprising growth of activity in the relatively old LED technology has been spurred by the introduction of AlInGaP devices. Recently developed AlGaInN materials have led to improvements in the performance of bluish-green LEDs, whose luminous efficacy peaks are much higher than those of incandescent lamps. This advancement has led to the production of large-area full-color outdoor LED displays with diverse industrial applications.
The novel idea of this article is to modulate light waves from visible LEDs for communication purposes. This concurrent use of visible LEDs for simultaneous signaling and communication, called iLight, leads to many new and interesting applications and is based on the idea of fast switching of LEDs and the modulation of visible-light waves for free-space communications. The feasibility of such an approach has been examined, and hardware has been implemented with experimental results. An optical link has been implemented using an LED traffic-signal head as a transmitter. The LED traffic light (fig. 1 below) can be used for either audio or data transmission.
Audio messages can be sent using the LED transmitter, and a receiver located around 20 m away can play back the messages through a speaker. Another prototype, resembling a circular speed-limit sign with a 2-ft diameter, was also built; its audio signal can be received in open air over a distance of 59.3 m (194.5 ft). For data transmission, digital data can be sent using the same LED transmitter, and experiments were set up to send speed-limit or location-ID information.
The work reported in this article differs from the use of infrared (IR) radiation as a medium for short-range wireless communications. Currently, IR links and local-area networks are available, and IR transceivers for use as data links are widely available in the market. Some systems comprise IR transmitters that convey speech messages to small receivers carried by persons with severe visual impairments. The Talking Signs system is one such IR remote signage system, developed at the Smith-Kettlewell Rehabilitation Engineering Research Center; it can provide a repeating, directionally selective voice message that originates at a sign. However, there has been very little work on the use of visible light as a communication medium.
The availability of high-brightness LEDs makes the visible-light medium even more feasible for communications. Any product with visible-LED components (like an LED traffic-signal head) can be turned into an information beacon. This iLight technology has many characteristics that differ from IR. The iLight transceivers make use of the direct line-of-sight (LOS) property of visible light, which is ideal in applications providing directional guidance to persons with visual impairments; IR, on the other hand, has the property of bouncing back and forth in a confined environment. Another advantage of iLight is that the transmitter provides an easy target for LOS reception: because the LEDs are on at all times, they also indicate the location of the transmitter, and a user searching for information has only to look for light from an iLight device. Very often, the device is concurrently used for illumination, display, or visual signage, so there is no need to implement an additional transmitter for information broadcasting. Compared with an IR transmitter, an iLight transmitter has to be concerned with evenness of brightness: there should be no apparent difference to a user in the visible light emitted from an iLight device.

It has long been realized that visible light has the potential to be modulated and used as an information-carrying communication channel. The application has to make use of the directional nature of the communication medium, because the receiver requires a LOS to the audio system or transmitter. The locations of the broadcasting system and the receiver are relatively stationary; since the relative speed between receiver and source is much less than the speed of light, the Doppler frequency shift observed by the receiver can be safely neglected. The transmitter can broadcast with a viewing angle close to 180°.
Information is transmitted as a sequence of ON and OFF periods whose switching is fast enough to be imperceptible to humans, so it does not affect traffic control. This article aims to present an application of high-brightness visible LEDs for establishing optical free-space links.
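The ON/OFF signaling described here amounts to on-off keying. The following is a minimal sketch, assuming an idealized threshold receiver and ignoring noise and synchronization; the message and sample rate are invented:

```python
def ook_modulate(data, samples_per_bit=4):
    """On-off-keying sketch: each bit of the message maps to an ON (1)
    or OFF (0) period of the LED, most significant bit first. At a high
    enough bit rate the flicker is imperceptible, so the light still
    works as an ordinary visual signal."""
    samples = []
    for byte in data:
        for bit in range(7, -1, -1):
            level = (byte >> bit) & 1
            samples.extend([level] * samples_per_bit)
    return samples

def ook_demodulate(samples, samples_per_bit=4):
    """Idealized receiver: average each bit period of the photodetector
    output, threshold it, and pack the recovered bits back into bytes."""
    bits = [int(sum(samples[i:i + samples_per_bit]) / samples_per_bit > 0.5)
            for i in range(0, len(samples), samples_per_bit)]
    out = bytearray()
    for i in range(0, len(bits), 8):
        byte = 0
        for b in bits[i:i + 8]:
            byte = (byte << 1) | b
        out.append(byte)
    return bytes(out)

msg = b"SPEED 50"
assert ook_demodulate(ook_modulate(msg)) == msg
```

A real link would add synchronization, error control, and a modulation scheme chosen to keep the average LED brightness constant, as the brightness-evenness requirement above demands.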