The high-tech industry has spent decades creating computer systems of ever-mounting complexity to solve a wide variety of business problems. Ironically, complexity itself has become part of the problem. As networks and distributed systems grow and change, they become increasingly hampered by system deployment failures, hardware and software issues, and human error. Such scenarios in turn require further human intervention to maintain the performance and capacity of IT components. This drives up overall IT costs, even though the cost of individual technology components continues to decline. As a result, many IT professionals seek to improve the return on investment in their IT infrastructure by reducing the total cost of ownership of their environments while improving the quality of service for users.
Self managing computing helps address these complexity issues by using technology to manage technology. The idea is not new: many of the major players in the industry have developed and delivered products based on this concept. Self managing computing is also known as autonomic computing.
The term autonomic is derived from human biology. The autonomic nervous system monitors your heartbeat, checks your blood sugar level, and keeps your body temperature close to 98.6°F, without any conscious effort on your part. In much the same way, self managing computing components regulate and repair themselves with minimal conscious human intervention.

Ovonic Unified Memory (OUM)
Ovonyx is developing a microelectronics memory technology called Ovonic Unified Memory (OUM). The technology was originally developed by Stanford Ovshinsky and is exclusively licensed from Energy Conversion Devices (ECD) Inc. The name "Ovonic" is derived from "Ovshinsky" and "electronic". OUM is also known as phase change memory because it uses a unique thin-film phase change material to store information economically and with excellent solid-state memory properties. It is a candidate replacement for conventional memories such as Magnetic Random Access Memory (MRAM), Ferroelectric Random Access Memory (FeRAM or FRAM), Dynamic Random Access Memory (DRAM), and Static Random Access Memory (SRAM).
The same family of materials allows the rewriting of CDs and DVDs. CD and DVD drives read and write ovonic material with a laser, but OUM uses electric current to change the phase of the material. The thin-film material is a phase-change chalcogenide alloy similar to the film used to store information on commercial CD-RW and DVD-RAM optical disks, based on proprietary technology originally developed by and exclusively licensed from Energy Conversion Devices.
Evolution Of OUM
Magnetic Random Access Memory (MRAM), a technology first developed in the 1970s but rarely commercialized, has attracted the backing of IBM, Motorola, and others. MRAM stores information by flipping two layers of magnetic material in and out of alignment with an electric current. For reading and writing data, MRAM can be as fast as a few nanoseconds, or billionths of a second, the best among the three next-generation memory candidates. And it promises to integrate easily with the industry's existing chip manufacturing process, since MRAM is built on top of silicon circuitry. The biggest problem with MRAM is the relatively small, difficult-to-detect difference between its ON and OFF states.
The second potential successor to flash, Ferroelectric Random Access Memory (FeRAM / FRAM), has actually been commercially available for nearly 15 years and has attracted the backing of Fujitsu, Matsushita, IBM, and Ramtron. FRAM relies on the polarization of what amount to tiny magnets inside certain materials, such as perovskite. FRAM memory cells do not wear out until they have been read or written to billions of times. And while MRAM and OUM would require the addition of six to eight "masking" layers in the chip manufacturing process, just like flash, FRAM might require as few as two extra layers.
OUM is based on the information storage technology developed by Ovshinsky that allows rewriting of CDs and DVDs. While CD and DVD drives read and write ovonic material with lasers, OUM uses electric current to change the phase of its memory cells. These cells are either in a crystalline state, where electrical resistance is low, or in an amorphous state, where resistance is high. OUM cells can be read and written trillions of times, making their use essentially nondestructive, unlike MRAM or FRAM. OUM's dynamic range - the difference between the electrical resistance in the crystalline state and in the amorphous state - is wide enough to allow more than one set of ON and OFF values in a cell, dividing it into several bits and multiplying memory density by two, four, or potentially even 16 times. OUM is not as fast as MRAM. The OUM solid-state memory has cost advantages over conventional solid-state memories such as DRAM or flash due to its thin-film nature, very small active storage media, and simple device structure. OUM requires fewer steps in an IC manufacturing process, resulting in reduced cycle times, fewer defects, and greater manufacturing flexibility.
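As a rough illustration of how this wide dynamic range enables multi-bit cells, the sketch below divides a cell's resistance range into four bands to store two bits per cell. All resistance values and band edges are invented for illustration only; they are not real OUM device parameters.

```python
# Illustrative multi-level cell (MLC) readout: the resistance range between
# the crystalline (low-R) and amorphous (high-R) states is divided into four
# bands, so each cell stores 2 bits instead of 1. The band edges below are
# made-up numbers chosen only to show the idea.
BAND_EDGES = [10_000, 100_000, 1_000_000]   # hypothetical edges in ohms
SYMBOLS = ["11", "10", "01", "00"]          # low resistance reads as "11"

def read_cell(resistance_ohms: float) -> str:
    """Map a measured cell resistance to a 2-bit symbol."""
    for edge, symbol in zip(BAND_EDGES, SYMBOLS):
        if resistance_ohms < edge:
            return symbol
    return SYMBOLS[-1]   # above all edges: fully amorphous

# A near-crystalline cell reads "11"; a fully amorphous cell reads "00".
print(read_cell(2_000), read_cell(5_000_000))
```

With three band edges instead of one threshold, the same physical cell carries twice the data, which is exactly the density multiplication the text describes.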
Spintronics

Spintronics may be a fairly new term, but the concept is not so exotic. This technological discipline aims to exploit the subtle, mind-bending quantum properties of the electron to develop a new generation of electronic devices. The ability to exploit spin in semiconductors promises new logic devices, such as the spin transistor, with enhanced functionality, higher speed, and reduced power consumption, and might spark a revolution in the semiconductor industry. So far, the problem of injecting electrons with a controlled spin direction has held up the realization of such spintronic devices.
Spintronics is an emergent technology that exploits the quantum propensity of the electrons to spin as well as making use of their charge state. The spin itself is manifested as a detectable weak magnetic energy state characterised as "spin up" or "spin down".
Conventional electronic devices rely on the transport of electrical charge carriers - electrons - in a semiconductor such as silicon. Device engineers and physicists are now trying to exploit the spin of the electron rather than its charge.
Spintronic devices combine the advantages of magnetic materials and semiconductors. They are expected to be non-volatile, versatile, fast, and capable of simultaneous data storage and processing, while at the same time consuming less energy. Spintronic devices are playing an increasingly significant role in high-density data storage, microelectronics, sensors, quantum computing, and bio-medical applications.
E-Commerce and Electronic Signatures

In an effort to further the development of e-commerce, the federal Electronic Signatures Act (2000) established uniform national standards for determining the circumstances under which contracts and notifications in electronic form are legally valid. Legal standards were also specified regarding the use of an electronic signature ("an electronic sound, symbol, or process, attached to or logically associated with a contract or other record and executed or adopted by a person with the intent to sign the record"), but the law did not specify technological standards for implementing the act. The act gave electronic signatures a legal standing similar to that of paper signatures, allowing contracts and other agreements, such as those establishing a loan or brokerage account, to be signed on-line.
Once consumers' worries about on-line credit card purchases eased, e-commerce grew rapidly in the late 1990s. In 1998, on-line retail ("e-tail") sales were $7.2 billion, double the amount in 1997. On-line retail ordering represented 15% of nonstore sales (which included catalogs, television sales, and direct sales) in 1998, but this constituted only 1% of total retail revenues that year. Books were the most popular on-line product order, with over half of Web shoppers ordering books (one on-line bookseller, Amazon.com, which started in 1995, had revenues of $610 million in 1998), followed by software, audio compact discs, and personal computers. Other on-line commerce includes trading of stocks, purchases of airline tickets and groceries, and participation in auctions.
Molecular Computing

Molecular computing is an emerging field to which chemistry, biophysics, molecular biology, electronic engineering, solid state physics, and computer science all contribute. It involves the encoding, manipulation, and retrieval of information at a macromolecular level, in contrast to current techniques, which accomplish these functions via the miniaturization of bulk IC devices. Biological systems have unique abilities such as pattern recognition, learning, self-assembly, and self-reproduction, as well as high-speed, parallel information processing. The aim of this article is to exploit these characteristics to build computing systems that have many advantages over their inorganic (Si, Ge) counterparts.
DNA computing began in 1994, when Leonard Adleman proved that DNA computing was possible by finding a solution to a real problem, the Hamiltonian Path Problem (a close relative of the Traveling Salesman Problem), with a molecular computer. In theoretical terms, some scientists say the actual beginnings of DNA computation should be attributed to Charles Bennett's work. Adleman, now considered the father of DNA computing, is a professor at the University of Southern California and spawned the field with his paper, "Molecular Computation of Solutions to Combinatorial Problems." Since then, Adleman has demonstrated how the massive parallelism of a trillion DNA strands can simultaneously attack different aspects of a computation to crack even the toughest combinatorial problems.
Adleman's Traveling Salesman Problem:
The objective is to find a path from start to end that goes through all the points exactly once. This problem is difficult for conventional computers to solve because it is a "non-deterministic polynomial time" (NP) problem. Such problems, when they involve large numbers of cities, are intractable for conventional computers, but can be attacked with massively parallel computers like DNA computers. Adleman chose the Hamiltonian Path problem because it is a well-known problem.
The following algorithm solves the Hamiltonian Path problem:
1. Generate random paths through the graph.
2. Keep only those paths that begin with the start city (A) and conclude with the end city (G).
3. If the graph has n cities, keep only those paths with n cities. (n = 7)
4. Keep only those paths that enter all cities at least once.
5. Any remaining paths are solutions.
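The five steps above can be mimicked in software as a generate-and-filter sketch. The directed graph used here is a hypothetical seven-city example, not Adleman's actual instance, and the random walk stands in for the massively parallel self-assembly of DNA strands:

```python
import random

# Hypothetical seven-city directed graph with start A and end G.
EDGES = [("A","B"), ("B","C"), ("C","D"), ("A","C"), ("B","D"),
         ("C","E"), ("D","E"), ("E","F"), ("D","F"), ("F","G"), ("E","G")]
CITIES = ["A", "B", "C", "D", "E", "F", "G"]
START, END = "A", "G"

ADJ = {}
for u, v in EDGES:
    ADJ.setdefault(u, []).append(v)

def random_path(length):
    """Step 1: a random walk that only follows existing edges."""
    path = [random.choice(CITIES)]
    while len(path) < length and ADJ.get(path[-1]):
        path.append(random.choice(ADJ[path[-1]]))
    return path

random.seed(0)
candidates = [random_path(len(CITIES)) for _ in range(50_000)]
# Step 2: keep paths that begin at the start city and end at the end city.
survivors = [p for p in candidates if p[0] == START and p[-1] == END]
# Step 3: keep paths with exactly n cities (n = 7).
survivors = [p for p in survivors if len(p) == len(CITIES)]
# Step 4: keep paths that enter every city at least once.
survivors = [p for p in survivors if set(p) == set(CITIES)]
# Step 5: any remaining path is a Hamiltonian path.
solutions = {tuple(p) for p in survivors}
print(solutions)
```

For this graph the filters leave exactly one survivor, A-B-C-D-E-F-G; the brute-force generation step is cheap here but grows explosively with city count, which is precisely why the parallelism of DNA is attractive.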
The key was using DNA to perform the five steps of the above algorithm. Adleman's first step was to synthesize DNA strands of known sequences, each strand 20 nucleotides long. He represented each vertex of the graph by a separate strand, and further represented each edge between two consecutive vertices, such as 1 to 2, by a DNA strand consisting of the last ten nucleotides of the strand representing vertex 1 plus the first ten nucleotides of the vertex 2 strand. Then, through the sheer number of DNA molecules (3×10^13 copies of each edge in this experiment!) joining together in all possible combinations, many random paths were generated. Adleman used well-established techniques of molecular biology to weed out the Hamiltonian path, the one that entered every vertex, starting at the start vertex and ending at the end vertex. After generating the numerous random paths in the first step, he used the polymerase chain reaction (PCR) to amplify and keep only the paths that began on the start vertex and ended at the end vertex. The next two steps kept only those strands that passed through all the vertices, entering each vertex at least once. At this point, any paths that remained coded for a Hamiltonian path, thus solving the problem.
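The overlap encoding described above can be sketched in a few lines. The 20-base vertex sequences here are randomly generated placeholders, not Adleman's actual oligonucleotides; the point is only that consecutive edge strands splice together to spell out a path:

```python
import random

# Hypothetical encoding: each vertex is a random 20-base strand; the edge
# u -> v is the last 10 bases of u's strand followed by the first 10 bases
# of v's strand, so consecutive edge strands overlap-join into a path.
random.seed(1)
BASES = "ACGT"
vertices = ["1", "2", "3", "4"]
strand = {v: "".join(random.choice(BASES) for _ in range(20))
          for v in vertices}

def edge_strand(u, v):
    return strand[u][10:] + strand[v][:10]

# Ligating the edge strands for the path 1 -> 2 -> 3 -> 4 reproduces the
# concatenated vertex strands, minus the first and last 10-base halves.
path = ["1", "2", "3", "4"]
ligated = "".join(edge_strand(u, v) for u, v in zip(path, path[1:]))
full = "".join(strand[v] for v in path)
print(ligated == full[10:-10], len(ligated))
```

In the wet lab the joining is driven by hybridization with complementary "splint" strands rather than string concatenation, but the bookkeeping is the same: a chain of overlapping edge strands is a record of one candidate path.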
4G Wireless Systems
A fourth generation wireless system is a packet-switched wireless system with wide area coverage and high throughput. It is designed to be cost effective and to provide high spectral efficiency.
4G wireless uses Orthogonal Frequency Division Multiplexing (OFDM), Ultra Wide Band (UWB) radio, and millimeter wireless. A data rate of 20 Mbps is employed, and mobile speeds of up to 200 km/h are supported. The high performance is achieved by the use of long-term channel prediction in both time and frequency, scheduling among users, and smart antennas combined with adaptive modulation and power control. The frequency band is 2-8 GHz, and the system gives the ability for worldwide roaming, with access to a cell anywhere.
Wireless mobile communications systems are uniquely identified by "generation" designations. Introduced in the early 1980s, first generation (1G) systems were marked by analog frequency modulation and used primarily for voice communications. Second generation (2G) wireless communications systems, which made their appearance in the late 1980s, were also used mainly for voice transmission and reception. The wireless system in widespread use today goes by the name of 2.5G, an "in between" service that serves as a stepping stone to 3G. Whereas 2G communications is generally associated with Global System for Mobile (GSM) service, 2.5G is usually identified as being "fueled" by General Packet Radio Service (GPRS) along with GSM. 3G systems, which made their appearance in late 2002 and 2003, are designed for voice and paging services as well as interactive media such as teleconferencing, Internet access, and other services.
The problem with 3G wireless systems is bandwidth: these systems provide only WAN coverage ranging from 144 kbps (for vehicle mobility applications) to 2 Mbps (for indoor static applications). Segue to 4G, the "next dimension" of wireless communication, which combines OFDM, UWB, and millimeter wireless with smart antennas in the 2-8 GHz band to deliver 20 Mbps at mobile speeds of up to 200 km/h, with worldwide roaming. Key features of 4G systems include:
o Support for interactive multimedia, voice, streaming video, Internet, and other broadband services
o IP based mobile system
o High speed, high capacity, and low cost per bit
o Global access, service portability, and scalable mobile services
o Seamless switching, and a variety of Quality of Service driven services
o Better scheduling and call admission control techniques
o Ad hoc and multi hop networks (the strict delay requirements of voice make multi hop network service a difficult problem)
o Better spectral efficiency
o Seamless network of multiple protocols and air interfaces (since 4G will be all-IP, look for 4G systems to be compatible with all common network technologies, including 802.11, WCDMA, Bluetooth, and HiperLAN).
o An infrastructure to handle pre-existing 3G systems along with other wireless technologies, some of which are currently under development.
Code Division Duplexing
Orthogonal Frequency Division Multiplexing
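A minimal, pure-Python sketch of the OFDM principle used by 4G: data symbols are placed on orthogonal subcarriers with an inverse DFT, a cyclic prefix is prepended, and the receiver strips the prefix and applies a forward DFT to recover the symbols. Real systems add channel coding, equalization, and much more; this shows only the core transform pair.

```python
import cmath

def dft(x, inverse=False):
    """Naive DFT/IDFT pair; fine for an 8-point illustration."""
    n = len(x)
    sign = 1 if inverse else -1
    out = [sum(x[k] * cmath.exp(sign * 2j * cmath.pi * i * k / n)
               for k in range(n)) for i in range(n)]
    return [v / n for v in out] if inverse else out

# QPSK data symbols, one per subcarrier (8 subcarriers).
symbols = [1+1j, 1-1j, -1+1j, -1-1j, 1+1j, -1-1j, 1-1j, -1+1j]

# Transmitter: inverse DFT, then prepend a cyclic prefix (last 2 samples).
time_signal = dft(symbols, inverse=True)
cp_len = 2
tx = time_signal[-cp_len:] + time_signal

# Receiver (ideal channel): strip the prefix, apply the forward DFT.
rx = dft(tx[cp_len:])
recovered = [complex(round(v.real), round(v.imag)) for v in rx]
print(recovered == symbols)
```

The cyclic prefix looks redundant over this ideal channel, but in a real multipath channel it absorbs inter-symbol interference and turns the channel into a simple per-subcarrier multiplication, which is what makes OFDM attractive for high-rate mobile links.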
Need For Parallel Processing
Tunable Lasers

Tunable lasers, as the name suggests, are lasers whose wavelengths can be tuned or varied. They play an important part in optical communication networks. Recent improvements in tunable laser technologies are enabling highly flexible and effective utilization of the massive increases in optical network capacity brought by large-scale application of dense wavelength division multiplexing. Several tunable laser technologies have emerged, each with its own set of tradeoffs with respect to the needs of particular optical networking applications. Tunable lasers are produced mainly in four ways: the distributed feedback (DFB) laser, the external cavity diode laser, the vertical cavity diode laser, and micro electro mechanical system (MEMS) technology. Tunable lasers help network administrators save a great deal of cost by allowing them to manage the network efficiently with fewer spares. They also enable reliable functioning of the optical network. Changing traffic patterns, customer requirements, and new revenue opportunities require greater flexibility than static OADMs can provide, complicating network operations and planning. Incorporating tunable lasers removes this constraint altogether by allowing any channel to be added by the OADM at any time.
In a wavelength-division multiplexed (WDM) network carrying 128 wavelengths of information, we have 128 different lasers giving out these wavelengths of light. Each laser is designed differently in order to give the exact wavelength needed. Even though the lasers are expensive, in case of a breakdown, we should be able to replace it at a moment's notice so that we don't lose any of the capacity that we have invested so much money in. So we keep in stock 128 spare lasers or maybe even 256, just to be prepared for double failures.
What if we had a multifunctional laser for the optical network that could be adapted to replace any one of a number of lasers out of the total 128 wavelengths? Think of the money that could be saved, as well as the storage space for the spares. What is needed for this is a "tunable laser." Tunable lasers are still a relatively young technology, but as the number of wavelengths in networks increases, so will their importance. Each wavelength in an optical network is separated from the next by a multiple of 0.8 nanometers (sometimes referred to as 100 GHz spacing). Current commercial products can cover maybe four of these wavelengths at a time. While not the ideal solution, this still cuts the required number of spare lasers down. More advanced solutions hope to cover larger numbers of wavelengths, and should cut the cost of spares even further.
The devices themselves are still semiconductor-based lasers that operate on principles similar to those of the basic non-tunable versions. Most designs incorporate some form of grating, like those in a distributed feedback laser. These gratings can be altered to change the wavelengths they reflect in the laser cavity, usually by running electric current through them, thereby altering their refractive index. The tuning range of such devices can be as high as 40 nm, which would cover any of 50 different wavelengths in a 0.8 nm wavelength-spaced system. Technologies based on vertical cavity surface emitting lasers (VCSELs) incorporate movable cavity ends that change the length of the cavity and hence the wavelength emitted. Current designs of tunable VCSELs have similar tuning ranges.
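The spares arithmetic implied above can be made explicit. Assuming 100 GHz (0.8 nm) channel spacing and a 128-channel system, the sketch below computes how many spare units are needed for a given tuning range; the function name and the simple block-coverage model are illustrative assumptions, not a real inventory-planning method.

```python
import math

CHANNEL_SPACING_GHZ = 100   # 0.8 nm spacing near 1550 nm is about 100 GHz
NUM_CHANNELS = 128

def spares_needed(tuning_range_ghz):
    """Spares required if each unit covers one contiguous channel block."""
    channels_covered = tuning_range_ghz // CHANNEL_SPACING_GHZ
    return math.ceil(NUM_CHANNELS / channels_covered)

print(spares_needed(100))    # fixed laser: one spare per channel
print(spares_needed(400))    # covers 4 channels, as in current products
print(spares_needed(5000))   # 40 nm range covers 50 channels
```

Moving from fixed lasers (128 spares) to four-channel tunables (32 spares) to 40 nm tunables (3 spares) is the cost argument the text makes in prose.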
Lasers are devices giving out intense light at one specific color. The kinds of lasers used in optical networks are tiny devices - usually about the size of a grain of salt. They are little pieces of semiconductor material, specially engineered to give out very precise and intense light. Within the semiconductor material are lots of electrons - negatively charged particles.
High Altitude Aeronautical Platforms
Affordable bandwidth will be as essential to the Information Revolution in the 21st century as inexpensive power was to the Industrial Revolution in the 18th and 19th centuries. Today's global communications infrastructure of landlines, cellular towers, and satellites is inadequately equipped to support the increasing worldwide demand for faster, better, and less expensive service. At a time when conventional ground and satellite systems face increasing obstacles and spiraling costs, a low cost solution is being advocated.
This paper focuses on airborne platforms - airships, planes, helicopters, or hybrid solutions - which could operate at stratospheric altitudes for significant periods of time, be low cost, and be capable of carrying sizable multipurpose communications payloads. This report briefly presents an overview of the internal architecture of a High Altitude Aeronautical Platform and the various HAAPS projects.
High Altitude Aeronautical Platform Stations (HAAPS) is the name of a technology for providing wireless narrowband and broadband telecommunication services, as well as broadcasting services, with either airships or aircraft. The platforms operate at altitudes between 3 and 22 km. A HAPS can cover a service area of up to 1,000 km in diameter, depending on the minimum elevation angle accepted from the user's location. The platforms may be airplanes or airships (essentially balloons) and may be manned or unmanned, with autonomous operation coupled with remote control from the ground. While the term HAP may not have a rigid definition, we take it to mean a solar-powered and unmanned airplane or airship capable of long endurance on-station, possibly several years.
Various types of platform options exist: SkyStation, the Japanese Stratospheric Platform Project, the European Space Agency (ESA), and others suggest the use of airships/blimps/dirigibles. These will be stationed at 21 km and are expected to remain aloft for about 5 years. Angel Technologies (HALO), AeroVironment/NASA (Helios), and the European Union (Heliplat) propose the use of high altitude long endurance aircraft. The aircraft are either engine or solar powered and are stationed at 16 km (HALO) or 21 km (Helios). Helios is expected to stay aloft for a minimum of 6 months, whereas HALO will have 3 aircraft flying in 8-hour shifts. Platforms Wireless International is implementing a tethered aerostat situated at about 6 km.
A high altitude telecommunication system comprises an airborne platform - typically at high atmospheric or stratospheric altitudes - with a telecommunications payload, and associated ground station telecommunications equipment. The combination of altitude, payload capability, and power supply capability makes it ideal to serve new and metropolitan areas with advanced telecommunications services such as broadband access and regional broadcasting. The opportunities for applications are virtually unlimited. The possibilities range from narrowband services such as paging and mobile voice to interactive broadband services such as multimedia and video conferencing. For future telecommunications operators such a platform could provide blanket coverage from day one with the added advantage of not being limited to a single service. Where little or unreliable infrastructure exists, traffic could be switched through air via the HAPS platform. Technically, the concept offers a solution to the propagation and rollout problems of terrestrial infrastructure and capacity and cost problems of satellite networks.
Daknet

Nowadays it is easy to establish communication from one part of the world to another. Despite this, even now villagers in remote areas must travel to talk to family members or to obtain forms that citizens in developed countries can call up on a computer in a matter of seconds. Governments try to provide a telephone connection in every village in the mistaken belief that the ordinary telephone is the cheapest way to provide connectivity. But recent advancements in wireless technology make running a copper wire to an analog telephone much more expensive than broadband wireless Internet connectivity. Daknet, an ad hoc network, uses wireless technology to provide digital connectivity, taking advantage of the existing transportation and communication infrastructure. Daknet, whose name derives from the Hindi word "dak" for postal, combines a physical means of transportation with wireless data transfer to extend the Internet connectivity that an uplink point, such as a cybercafé or post office, provides.
Real-time communications need large capital investments and hence high levels of user adoption to recover costs. The average villager cannot afford even a personal communications device such as a telephone or computer. To recover costs, users must share the communication infrastructure. The real-time aspect of telephony can also be a disadvantage. Studies show that the current market for successful rural Information and Communication Technology (ICT) services does not appear to rely on real-time connectivity, but rather on affordability and basic interactivity. The poor not only need digital services, but they are willing and able to pay for them to offset the much higher costs of poor transportation, unfair pricing, and corruption. It is therefore useful to consider non-real-time infrastructures and applications such as voice mail, e-mail, and electronic bulletin boards. Technologies like store-and-forward, or asynchronous modes of communication, can be significantly lower in cost and do not necessarily sacrifice the functionality required to deliver valuable user services. In addition to non-real-time applications such as e-mail and voice messaging, providers can use asynchronous modes of communication to create local information repositories that community members can add to and query.
Advances in the IEEE 802 standards have led to huge commercial success and low pricing for broadband networks. These techniques can provide broadband access to even the most remote areas at low cost. Important considerations in a WLAN are:
Security: In a WLAN, access is not limited to the wired PCs; it is also open to all wireless network devices, making it easy for a hacker to breach the security of the network.
Reach: A WLAN should have optimum coverage and performance so that mobile users can seamlessly roam in the wireless network.
Interference: Minimize interference and obstruction by designing the wireless network with proper placement of wireless devices.
Interoperability: Choose a wireless technology standard that makes the WLAN a truly interoperable network, with devices from different vendors integrated into the same network.
Reliability: A WLAN should provide a reliable network connection in the enterprise network.
Manageability: A manageable WLAN allows network administrators to manage, make changes, and troubleshoot problems with fewer hassles.

Wireless data networks based on the IEEE 802.11 or Wi-Fi standard are perhaps the most promising of the wireless technologies. Features of Wi-Fi include ease of setup, use, and maintenance; relatively high bandwidth; and relatively low cost for both users and providers.
Daknet combines a physical means of transportation with wireless data transfer to extend Internet connectivity. In this innovative scheme, vehicle-mounted access points using 802.11b-based technology provide broadband, asynchronous, store-and-forward connectivity in rural areas.
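The store-and-forward mechanics can be sketched as a toy simulation. The class names and message format below are illustrative inventions, not part of the actual Daknet implementation: a vehicle-mounted access point picks up queued messages when it passes a village kiosk and delivers them when it later passes the Internet hub.

```python
from collections import deque

class Node:
    """A fixed kiosk or hub with an outbox awaiting the next vehicle pass."""
    def __init__(self, name):
        self.name = name
        self.outbox = deque()
        self.inbox = []

class MobileAP:
    """A vehicle-mounted access point that ferries messages between nodes."""
    def __init__(self):
        self.storage = []

    def visit(self, node):
        # During the brief wireless contact window: drop off anything
        # addressed to this node, then pick up everything in its outbox.
        deliver = [m for m in self.storage if m["to"] == node.name]
        self.storage = [m for m in self.storage if m["to"] != node.name]
        node.inbox.extend(deliver)
        while node.outbox:
            self.storage.append(node.outbox.popleft())

village = Node("village-kiosk")
hub = Node("internet-hub")
village.outbox.append({"to": "internet-hub", "body": "email batch"})
hub.outbox.append({"to": "village-kiosk", "body": "web pages"})

bus = MobileAP()
bus.visit(village)   # picks up the village's outgoing mail
bus.visit(hub)       # delivers it, picks up the hub's reply
bus.visit(village)   # delivers the reply on the return trip
print([m["body"] for m in hub.inbox], [m["body"] for m in village.inbox])
```

No end-to-end link ever exists; connectivity emerges from the vehicle's round trips, which is why the approach trades latency for dramatically lower infrastructure cost.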