Electrical Seminar Abstracts And Report
Post: #1

Smart Cameras in Embedded Systems

A smart camera performs real-time analysis to recognize elements of the scene it views. Smart cameras are useful in a variety of scenarios: surveillance, medicine, and more. We have built a real-time system for recognizing gestures. Our smart camera uses novel algorithms to recognize gestures based on low-level analysis of body parts as well as hidden Markov models for the moves that comprise the gestures. These algorithms run on a Trimedia processor. Our system can recognize gestures at the rate of 20 frames/second, and the camera can also fuse the results of multiple cameras.


Recent technological advances are enabling a new generation of smart cameras that represent a quantum leap in sophistication. While today's digital cameras capture images, smart cameras capture high-level descriptions of the scene and analyze what they see. These devices could support a wide variety of applications including human and animal detection, surveillance, motion analysis, and facial identification.

Video processing has an insatiable demand for real-time performance. Fortunately, Moore's law provides an increasing pool of available computing power to apply to real-time analysis. Smart cameras leverage very large-scale integration (VLSI) to provide such analysis in a low-cost, low-power system with substantial memory. Moving well beyond pixel processing and compression, these systems run a wide range of algorithms to extract meaning from streaming video.

Because they push the design space in so many dimensions, smart cameras are a leading-edge application for embedded system research.

Detection and Recognition Algorithms

Although there are many approaches to real-time video analysis, we chose to focus initially on human gesture recognition: identifying whether a subject is walking, standing, waving his arms, and so on. Because much work remains to be done on this problem, we sought to design an embedded system that can incorporate future algorithms as well as those we created exclusively for this application. Our algorithms use both low-level and high-level processing. The low-level component identifies different body parts and categorizes their movement in simple terms. The high-level component, which is application-dependent, uses this information to recognize each body part's action and the person's overall activity based on scenario parameters.

Low-level processing

The system captures images from the video input, which can be either uncompressed or compressed (MPEG and motion JPEG), and applies four different algorithms to detect and identify human body parts.

Region extraction: The first algorithm transforms the pixels of an image into an M × N bitmap and eliminates the background. It then detects the body part's skin area using a YUV color model with downsampled chrominance values. Next, the algorithm hierarchically segments the frame into skin-tone and non-skin-tone regions by extracting foreground regions adjacent to detected skin areas and combining these segments in a meaningful way.
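As a rough illustration of the skin-detection step (the chrominance thresholds and function name below are assumptions for the sketch, not the authors' implementation), a YUV-based mask could look like:

```python
import numpy as np

def skin_mask(yuv, u_range=(77, 127), v_range=(133, 173)):
    """Return a boolean M x N bitmap marking skin-tone pixels.

    yuv: M x N x 3 array of Y, U, V values in 0..255.
    The U/V thresholds are illustrative, not the paper's values.
    """
    u, v = yuv[..., 1], yuv[..., 2]
    return ((u >= u_range[0]) & (u <= u_range[1]) &
            (v >= v_range[0]) & (v <= v_range[1]))

# A 2x2 toy frame: one skin-tone pixel, three background pixels
frame = np.array([[[120, 100, 150], [120, 20, 20]],
                  [[120, 200, 200], [120, 30, 240]]], dtype=np.uint8)
mask = skin_mask(frame)
```

The mask would then feed the hierarchical segmentation described above.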

Contour following: The next step in the process involves linking the separate groups of pixels into contours that geometrically define the regions. This algorithm uses a 3 × 3 filter to follow the edge of the component in any of eight different directions.
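An eight-direction edge follower of this kind can be sketched with Moore-neighbour tracing; the code below is a simplified illustration (the stopping rule and names are assumptions, not the authors' algorithm):

```python
# 8 neighbour offsets (dx, dy) in clockwise order starting from west
DIRS = [(-1, 0), (-1, -1), (0, -1), (1, -1),
        (1, 0), (1, 1), (0, 1), (-1, 1)]

def trace_contour(mask):
    """Follow the outer boundary of the first blob found in `mask`
    (Moore-neighbour tracing, naive stop-at-start rule).
    mask: list of rows of 0/1. Returns ordered (x, y) boundary pixels."""
    h, w = len(mask), len(mask[0])
    inside = lambda x, y: 0 <= x < w and 0 <= y < h and mask[y][x]
    # raster-scan for a starting pixel
    start = next((x, y) for y in range(h) for x in range(w) if mask[y][x])
    contour = [start]
    prev_dir = 0  # we "entered" the start pixel from the west
    cur = start
    while True:
        # sweep neighbours clockwise, starting just past the backtrack
        for i in range(8):
            d = (prev_dir + 6 + i) % 8
            nx, ny = cur[0] + DIRS[d][0], cur[1] + DIRS[d][1]
            if inside(nx, ny):
                cur = (nx, ny)
                prev_dir = d
                break
        else:
            break  # isolated single pixel
        if cur == start:
            break
        contour.append(cur)
    return contour

# A 2x2 blob: its contour is exactly its four pixels
blob = [[0, 0, 0, 0],
        [0, 1, 1, 0],
        [0, 1, 1, 0],
        [0, 0, 0, 0]]
contour = trace_contour(blob)
```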

Ellipse fitting: To correct for deformations in image processing caused by clothing, objects in the frame, or some body parts blocking others, an algorithm fits ellipses to the pixel regions to provide simplified part attributes. The algorithm uses these parametric surface approximations to compute geometric descriptors for segments such as area, compactness (circularity), weak perspective invariants, and spatial relationships.

Graph matching: Each extracted region modeled with ellipses corresponds to a node in a graphical representation of the human body. A piecewise quadratic Bayesian classifier uses the ellipse parameters to compute feature vectors consisting of binary and unary attributes. It then matches these attributes to feature vectors of body parts, or meaningful combinations of parts, that are computed offline. To expedite the branching process, the algorithm begins with the face, which is generally easiest to detect.
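The abstract's hidden Markov models score a gesture as a sequence of moves reported by the low-level stage. A minimal forward-algorithm sketch (the two-state "waving" model and all probabilities below are toy values, not the authors' model) looks like:

```python
def forward(obs, states, start_p, trans_p, emit_p):
    """HMM forward algorithm: probability of an observation sequence
    under the model. alpha[t][s] = P(obs[:t+1], state_t = s)."""
    alpha = [{s: start_p[s] * emit_p[s][obs[0]] for s in states}]
    for o in obs[1:]:
        alpha.append({s: emit_p[s][o] *
                      sum(alpha[-1][p] * trans_p[p][s] for p in states)
                      for s in states})
    return sum(alpha[-1].values())

# Toy model: the arm is 'up' or 'down'; observations are the coarse
# moves ('raise'/'lower') that the low-level analysis might emit.
states = ("up", "down")
start = {"up": 0.5, "down": 0.5}
trans = {"up": {"up": 0.3, "down": 0.7},
         "down": {"up": 0.7, "down": 0.3}}
emit = {"up": {"raise": 0.8, "lower": 0.2},
        "down": {"raise": 0.2, "lower": 0.8}}

p = forward(("raise", "lower", "raise"), states, start, trans, emit)
```

A gesture recognizer would run one such model per gesture and pick the model with the highest sequence probability.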
Spin Valve Transistor

In a world where the presence of electrons is ubiquitous, can you imagine any other field displacing electronics? It may seem peculiar, even absurd, but with the advent of spintronics it is turning into reality.

In our conventional electronic devices we use semiconducting materials for logic operations and magnetic materials for storage, but spintronics uses magnetic materials for both purposes. Spintronic devices are more versatile and faster than present ones. One such device is the spin valve transistor, which differs from a conventional transistor in that conduction relies on the spin polarization of electrons: only electrons with the correct spin polarization can travel successfully through the device. These transistors can be used in data storage, signal processing, automation and robotics, with lower power consumption and less heat. Spintronics also finds application in quantum computing, in which qubits are used instead of bits.


Two experiments in the 1920s suggested spin as an additional property of the electron. One was the closely spaced splitting of hydrogen spectral lines, called fine structure. The other was the Stern-Gerlach experiment, which showed in 1922 that a beam of silver atoms directed through an inhomogeneous magnetic field is split into two beams. Both pointed toward magnetism associated with the electron. Spin is the root cause of magnetism, making each electron a tiny magnet. Magnetism has long been exploited in recording devices, where data is recorded and stored as tiny areas of magnetized iron or chromium oxide. To access that information, a read head detects minute changes in magnetic field, which induce corresponding changes in the head's electrical resistance - a phenomenon called magnetoresistance.


Spintronics came into the light with the discovery of giant magnetoresistance (GMR) in 1988. GMR is 200 times stronger than ordinary magnetoresistance. It results from subtle electron-spin effects in ultrathin multilayers of magnetic materials that cause a huge change in electrical resistance. The discovery of GMR in magnetic multilayers has led to a large number of studies on GMR systems and to devices such as the spin valve transistor. Usually the resistance of a multilayer is measured with the current in the plane (CIP); read-back magnetic heads, for instance, use this property. But CIP suffers from several drawbacks: shunting and channeling, particularly for uncoupled multilayers and thick spacer layers, diminish the CIP magnetoresistance, and diffusive surface scattering reduces the magnetoresistance for sandwiches and thin multilayers. In the spin valve transistor (SVT), electrons are injected into a metallic base across a Schottky barrier (the emitter side), pass through the spin valve, and reach the opposite (collector) side of the transistor. While the injected electrons traverse the metallic base they are above the Fermi level, so hot-electron magnetotransport must be considered in the SVT. The transport properties of hot electrons differ from those of Fermi electrons: for example, the spin polarisation of Fermi electrons mainly depends on the density of states (DOS) at the Fermi level, while the spin polarisation of hot electrons is related to the density of unoccupied states above the Fermi level.

To prepare the transistor we apply direct bonding, both to obtain device-quality semiconductor material for the emitter and to allow room-temperature processing. The starting material for both emitter and collector is a 380 µm, 5-10 Ωcm, n-Si(100) wafer. After backside n++ implantation, the wafer is dry-oxidised to anneal the implant and to form an SiO2 layer. After a Pt ohmic contact is deposited on the back side, the wafer is sawn into 10 × 10 mm collectors and 1.6 × 1.6 mm emitters. The collector is subsequently dipped in HNO3 and 2% HF to remove the native oxide on the silicon fragments, in 5% tetramethyl ammonium hydroxide at 90°C, and in buffered HF to remove the thermal oxide; following each step the collector is rinsed in demineralised water.
Moletronics - an invisible technology

As a scientific pursuit, the search for a viable successor to silicon computer technology has garnered considerable curiosity in the last decade. The latest idea, and one of the most intriguing, is known as molecular computers, or moletronics, in which single molecules serve as switches, "quantum wires" a few atoms thick serve as wiring, and the hardware is synthesized chemically from the bottom up.

The central thesis of moletronics is that almost any chemically stable structure that is not specifically disallowed by the laws of physics can in fact be built. The possibility of building things atom by atom was first introduced by Richard Feynman in 1959. An "assembler", which is little more than a submicroscopic robotic arm, can be built and controlled. We can use it to secure and position compounds in order to direct the precise location at which chemical reactions occur. This general approach allows the construction of large, atomically precise objects by initiating a sequence of controlled chemical reactions. In order for this to function as we wish, each assembler requires a process for receiving and executing the instruction set that will dictate its actions. In time, molecular machines might even have onboard high-speed RAM and slower but more permanent storage. They would have communications capability and a power supply.

Moletronics is expected to touch almost every aspect of our lives, right down to the water we drink and the air we breathe. Experimental work has already resulted in the production of molecular tweezers, a carbon nanotube transistor, and logic gates. Theoretical work is progressing as well. James M. Tour of Rice University is working on the construction of a molecular computer. Researchers at Zyvex have proposed an Exponential Assembly Process that might improve the creation of assemblers and products, before they are even simulated in the lab. We have even seen researchers create an artificial muscle using nanotubes, which may have medical applications in the nearer term.

The Teramac computer can perform 10^12 operations per second; although it has 220,000 hardware defects, it has still performed some tasks 100 times faster than a single-processor machine. Its defect-tolerant computer architecture, and that architecture's implications for moletronics, is the latest development in this technology. The very fact that this machine worked suggested that we ought to take some time and learn about it.

Such a 'defect-tolerant' architecture through moletronics could bridge the gap between the current generation of microchips and the next generation of molecular-scale computers.

Moletronic circuit--QCA basics

The interaction between cells is Coulombic and provides the necessary computing power. No current flows between cells, and no power or information is delivered to individual internal cells; local interconnections between cells are provided by the physics of cell-cell interaction. The discussion below describes the QCA cell and the process of building up useful computational elements from it. It is mostly qualitative, based on the intuitively clear behavior of electrons in the cell.

Fundamental Aspects of QCA

A QCA cell consists of four quantum dots positioned at the vertices of a square and contains two extra electrons. The configuration of these electrons encodes binary information: the two electrons sitting on one diagonal of the square represent the binary "1" state, and on the other diagonal the "0" state. For an isolated cell these two states have the same energy; for an array of cells, however, the state of each cell is determined by its interaction with neighboring cells through the Coulomb interaction.
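The preference for matching neighbouring states can be illustrated with a toy electrostatic calculation on two adjacent cells (unit charges on a unit square, arbitrary units; a qualitative sketch, not a quantitative QCA model):

```python
# Quantum-dot positions at the corners of a unit square
DOTS = {0: (0.0, 0.0), 1: (1.0, 0.0), 2: (0.0, 1.0), 3: (1.0, 1.0)}
STATE = {"1": (1, 2), "0": (0, 3)}  # which diagonal pair holds the electrons

def energy(driver_state, cell_state, spacing=2.0):
    """Coulomb energy (kq^2 = 1) between the two electrons of a driver
    cell and those of a neighbouring cell shifted right by `spacing`."""
    e = 0.0
    for a in STATE[driver_state]:
        ax, ay = DOTS[a]
        for b in STATE[cell_state]:
            bx, by = DOTS[b][0] + spacing, DOTS[b][1]
            e += 1.0 / ((bx - ax) ** 2 + (by - ay) ** 2) ** 0.5
    return e

# The neighbour minimises its energy by copying the driver's state
aligned = energy("1", "1")
flipped = energy("1", "0")
```

Because `aligned < flipped`, an array of cells relaxes into matching states, which is what propagates a bit down a QCA wire.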
Laser Communications

Laser communications offer a viable alternative to RF communications for intersatellite links and other applications where high-performance links are a necessity. High data rate, small antenna size, narrow beam divergence, and a narrow field of view are characteristics of laser communications that offer a number of potential advantages for system design. Lasers have been considered for space communications since their realization in 1960. Specific advancements were needed in component performance and system engineering, particularly for space-qualified hardware. Advances in system architecture, data formatting and component technology over the past three decades have made laser communications in space not only viable but also an attractive approach for intersatellite link applications.

Information transfer needs are driving requirements toward higher data rates, spurring laser cross-link technology, global development activity, and increased hardware and design maturity. Most important in space laser communications has been the development of a reliable, high-power, single-mode laser diode usable as a directly modulated source. This advance gives the space laser communication system designer the flexibility to design very lightweight, high-bandwidth, low-cost communication payloads for satellites whose launch costs are a very strong function of launch weight. The small terminals substantially reduce blockage of the most desirable fields of view on satellites. The smaller antennas, typically less than 30 centimeters in diameter, create less momentum disturbance to any sensitive satellite sensors, and fewer on-board consumables are required over a long lifetime because there are fewer disturbances to the satellite compared with heavier and larger RF systems. The narrow beam divergence affords interference-free and secure operation.

Laser communication systems offer many advantages over radio frequency (RF) systems. Most of the differences between laser communication and RF arise from the very large difference in wavelengths: RF wavelengths are thousands of times longer than those at optical frequencies. This high ratio of wavelengths leads to some interesting differences between the two systems. First, the beam width attainable with the laser communication system is narrower than that of the RF system by the same ratio at the same antenna diameter (the telescope of a laser communication system is frequently referred to as an antenna). For a given transmitter power level, the laser beam is brighter at the receiver by the square of this ratio, due to the very narrow beam that exits the transmit telescope. Taking advantage of this brighter beam, or higher gain, permits the laser communication designer to build a system with a much smaller antenna than the RF system that, further, need transmit much less power for the same received power. However, since the beam is much harder to point, acquisition of the other satellite terminal is more difficult. Advantages of laser communications over RF thus include smaller antenna size, lower weight, lower power, and minimal integration impact on the satellite; laser communication is also capable of much higher data rates than RF.

The laser beam width can be made as narrow as the diffraction limit of the optics allows: beam width = 1.22 times the wavelength of light divided by the diameter of the output beam aperture. The antenna gain is proportional to the reciprocal of the beam width squared. To achieve this diffraction-limited beam width, a single-mode, high-beam-quality laser source is required, together with very high quality optical components throughout the transmitting subsystem; the achievable antenna gain is restricted not only by the laser source but also by any of the optical elements. In order to communicate, adequate power must be received by the detector to distinguish the signal from the noise. Laser power, transmitter optical system losses, pointing system imperfections, transmitter and receiver antenna gains, receiver losses, and receiver tracking losses are all factors in establishing the received power. The required optical power is determined by the data rate, detector sensitivity, modulation format, noise, and detection method.
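Plugging illustrative numbers into the diffraction-limit formula shows why laser terminals can be so small (the 1550 nm wavelength, 30 cm aperture and 40,000 km range below are assumed example values, not figures from the text):

```python
def beam_divergence(wavelength_m, aperture_diameter_m):
    """Diffraction-limited full-angle beam width in radians:
    theta = 1.22 * lambda / D."""
    return 1.22 * wavelength_m / aperture_diameter_m

# Example: 1550 nm laser through a 30 cm telescope
theta = beam_divergence(1550e-9, 0.30)   # ~6.3 microradians
# Beam footprint at a 40,000 km crosslink range
footprint_m = theta * 40_000e3           # ~250 m across
```

A microradian-class beam concentrates all transmitted power into a footprint a few hundred meters wide at crosslink ranges, which is the "brighter beam" advantage described above.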
Solar Power Satellites

The new millennium has brought increased pressure to find new renewable energy sources. The exponential increase in population has led to global crises such as global warming, environmental pollution, and the rapid depletion of fossil-fuel reserves. The demand for electric power also increases at a much higher pace than other energy demands as the world is industrialized and computerized. Under these circumstances, research has been carried out into the possibility of building a power station in space that transmits electricity to Earth by way of radio waves: the Solar Power Satellite. A Solar Power Satellite (SPS) converts solar energy into microwaves and sends them in a focused beam to a receiving antenna on Earth for conversion to ordinary electricity.

The SPS is a clean, large-scale, stable electric power source. Solar Power Satellites are known by a variety of other names, such as Satellite Power System, Space Power Station, Space Power System, Solar Power Station, and Space Solar Power Station. One of the key technologies needed to enable the future feasibility of SPS is microwave wireless power transmission (WPT). WPT is based on the energy-transfer capacity of a microwave beam, i.e., energy can be transmitted by a well-focused microwave beam. Advances in phased-array antennas and rectennas have provided the building blocks for a realizable WPT system.

Increasing global energy demand is likely to continue for many decades. Renewable energy is a compelling approach, both philosophically and in engineering terms. However, many renewable energy sources are limited in their ability to affordably provide the base-load power required for global industrial development and prosperity, because of inherent land and water requirements. The burning of fossil fuels has resulted in a rapid decrease in their reserves; it has also led to the greenhouse effect and many other environmental problems. Nuclear power seems to be an answer to global warming, but concerns about terrorist attacks on Earth-bound nuclear power plants have intensified environmentalist opposition to nuclear power.

Moreover, switching to the natural fusion reactor, the Sun, yields energy with no waste products. Earth-based solar panels receive only a part of the available solar energy: they are affected by the day-night cycle and by factors such as clouds. It is therefore desirable to place the solar panels in space itself, where solar energy is collected and converted into electricity, which is then converted to a highly directed microwave beam for transmission. This microwave beam, which can be directed to any desired location on the Earth's surface, is collected and converted back to electricity. This concept is more advantageous than conventional methods; the microwave frequency chosen for transmission can also pass unimpeded through clouds and precipitation.


The concept of a large SPS placed in geostationary orbit was invented by Peter Glaser in 1968. The SPS concept was examined extensively during the late 1970s by the U.S. Department of Energy (DOE) and the National Aeronautics and Space Administration (NASA), which put forward the SPS Reference System Concept in 1979. Its central feature was the creation of a large-scale power infrastructure in space, consisting of about 60 SPS delivering a total of about 300 GW. But as a result of the huge price tag, the lack of an evolutionary concept, and the subsiding energy crisis, all U.S. SPS efforts were terminated in 1980-1981 with a view to reassessing the concept after about ten years. During this time international interest in SPS emerged, which led to WPT experiments in Japan.
MIMO Wireless Channels: Capacity and Performance Prediction

Fractal Robots

Stereoscopic Imaging


Home Networking

Digital Cinema

Digital cinema encompasses every aspect of the movie making process, from production and post-production to distribution and projection. A digitally produced or digitally converted movie can be distributed to theaters via satellite, physical media, or fiber optic networks. The digitized movie is stored by a computer/server which "serves" it to a digital projector for each screening of the movie. Projectors based on DLP Cinema® technology are currently installed in over 1,195 theaters in 30 countries worldwide - and remain the first and only commercially available digital cinema projectors.

When you see a movie digitally, you see that movie the way its creators intended you to see it: with incredible clarity and detail, in a range of up to 35 trillion colors. And whether you're catching that movie on opening night or months later, it will always look its best, because digital movies are immune to the scratches, fading, pops and jitter that film is prone to with repeated screenings. The main advantage of digital movies is that expensive film rolls and post-processing expenses can be done away with. The movie can be transmitted to computers in movie theatres, so it can be released in a larger number of theatres.

Digital technology has already taken over much of the home entertainment market. It seems strange, then, that the vast majority of theatrical motion pictures are shot and distributed on celluloid film, just like they were more than a century ago. Of course, the technology has improved over the years, but it's still based on the same basic principles. The reason is simple: up until recently, nothing could come close to the image quality of projected film. Digital cinema is simply a new approach to making and showing movies. The basic idea is to use bits and bytes (strings of 1s and 0s) to record, transmit and replay images, rather than using chemicals on film.

The main advantage of digital technology (such as a CD) is that it can store, transmit and retrieve a huge amount of information exactly as it was originally recorded. Analog technology (such as an audio tape) loses information in transmission, and generally degrades with each viewing. Digital information is also a lot more flexible than analog information. A computer can manipulate bytes of data very easily, but it can't do much with a streaming analog signal. It's a completely different language.

Digital cinema affects three major areas of movie-making:

" Production - how the movie is actually made
" Distribution - how the movie gets from the production company
" to movie theaters
" Projection - how the theater presents the movie
. Production With an $800 consumer digital camcorder, a stack of tapes, a computer and some video-editing software, you could make a digital movie. But there are a couple of problems with this approach. First, your image resolution won't be that great on a big movie screen. Second, your movie will look like news footage, not a normal theatrical film. onventional video has a completely different look from film, and just about anybody can tell the difference in a second. Film and video differ a lot in image clarity, depth of focus and color range, but the biggest contrast is frame rate. Film cameras normally shoot at 24 frames per second, while most U.S. television video cameras shoot at 30 frames per second (29.97 per second, to be exact).
Face Recognition Technology

Humans are very good at recognizing faces and complex patterns, and even the passage of time doesn't affect this capability; it would therefore help if computers could become as robust as humans in face recognition. Machine recognition of human faces from still or video images has attracted a great deal of attention in the psychology, image processing, pattern recognition, neural science, computer security, and computer vision communities. Face recognition is probably one of the most non-intrusive and user-friendly biometric authentication methods currently available; a screensaver equipped with face recognition technology can automatically unlock the screen whenever the authorized user approaches the computer.

Face is an important part of who we are and how people identify us. It is arguably a person's most unique physical characteristic. While humans have had the innate ability to recognize and distinguish different faces for millions of years, computers are just now catching up.

Visionics, a company based in New Jersey, is one of many developers of facial recognition technology. The twist to its particular software, FaceIt, is that it can pick someone's face out of a crowd, extract that face from the rest of the scene and compare it to a database full of stored images. In order for this software to work, it has to know what a basic face looks like. Facial recognition software is designed to pinpoint a face and measure its features. Each face has certain distinguishable landmarks, which make up the different facial features. These landmarks are referred to as nodal points. There are about 80 nodal points on a human face. Here are a few of the nodal points that are measured by the software:

" Distance between eyes
" Width of nose
" Depth of eye sockets
" Cheekbones
" Jaw line
" Chin

These nodal points are measured to create a numerical code, a string of numbers that represents the face in a database. This code is called a faceprint. Only 14 to 22 nodal points are needed for the FaceIt software to complete the recognition process.
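A faceprint of this sort can be sketched as an ordered vector of nodal-point measurements compared by distance. The field names and numbers below are hypothetical placeholders; the real FaceIt encoding is proprietary and far more sophisticated:

```python
import math

def faceprint(measurements):
    """Build a toy 'faceprint': an ordered vector of nodal-point
    measurements (all names and units here are illustrative)."""
    keys = ["eye_distance", "nose_width", "socket_depth",
            "cheekbone", "jaw_line", "chin"]
    return [float(measurements[k]) for k in keys]

def match_score(fp_a, fp_b):
    """Smaller is more similar: Euclidean distance between prints."""
    return math.dist(fp_a, fp_b)

stored = faceprint({"eye_distance": 6.2, "nose_width": 3.1,
                    "socket_depth": 1.9, "cheekbone": 11.4,
                    "jaw_line": 12.8, "chin": 4.0})
probe = faceprint({"eye_distance": 6.3, "nose_width": 3.0,
                   "socket_depth": 1.9, "cheekbone": 11.5,
                   "jaw_line": 12.7, "chin": 4.1})
score = match_score(stored, probe)
```

A matcher would accept the probe when the score falls below a tuned threshold.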


Facial recognition software falls into a larger group of technologies known as biometrics. Biometrics uses biological information to verify identity. The basic idea behind biometrics is that our bodies contain unique properties that can be used to distinguish us from others. Besides facial recognition, biometric authentication methods also include:
" Fingerprint scan
" Retina scan
" Voice identification

Facial recognition methods generally involve a series of steps that serve to capture, analyze and compare a face to a database of stored images. The basic processes used by the FaceIt system to capture and compare images are:

1. Detection - When the system is attached to a video surveillance system, the recognition software searches the field of view of a video camera for faces. If there is a face in the view, it is detected within a fraction of a second. A multi-scale algorithm is used to search for faces in low resolution. The system switches to a high-resolution search only after a head-like shape is detected.

2. Alignment - Once a face is detected, the system determines the head's position, size and pose. A face needs to be turned at least 35 degrees toward the camera for the system to register it.
3. Normalization -The image of the head is scaled and rotated so that it can be registered and mapped into an appropriate size and pose. Normalization is performed regardless of the head's location and distance from the camera. Light does not impact the normalization process.
4. Representation - The system translates the facial data into a unique code. This coding process allows for easier comparison of the newly acquired facial data to stored facial data.
5. Matching - The newly acquired facial data is compared to the stored data and (ideally) linked to at least one stored facial representation.
Universal Asynchronous Receiver Transmitter

The Universal Asynchronous Receiver Transmitter (UART) is the most widely used serial data communication circuit ever. UARTs allow full duplex communication over serial communication links such as RS-232. UARTs are available as inexpensive standard products from many semiconductor suppliers, making it unlikely that this specific design is useful by itself. The basic functions of a UART are: microprocessor interface, double buffering of transmitter data, frame generation, parity generation, parallel-to-serial conversion, double buffering of receiver data, parity checking, and serial-to-parallel conversion. Data is transmitted asynchronously, one bit at a time, and there is no clock line.

The frame format used by UARTs is a low start bit, 5-8 data bits, an optional parity bit, and 1 or 2 stop bits. A UART consists of a baud rate generator, a transmitter and a receiver. The number of bits transmitted per second is called the baud rate, and the baud rate generator generates the transmitter and receiver clocks separately. The UART synchronizes the incoming bit stream with the local clock.
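The frame format above can be sketched as a simple software serializer (an illustration of the bit ordering, not a hardware UART design):

```python
def uart_frame(byte, data_bits=8, parity="even", stop_bits=1):
    """Serialize one byte into UART line levels:
    low start bit, data bits LSB first, optional parity, high stop bit(s)."""
    bits = [0]  # start bit: line pulled low
    data = [(byte >> i) & 1 for i in range(data_bits)]  # LSB first
    bits += data
    if parity == "even":
        bits.append(sum(data) % 2)       # make the total count of 1s even
    elif parity == "odd":
        bits.append(1 - sum(data) % 2)   # make the total count of 1s odd
    bits += [1] * stop_bits              # stop bit(s): line idles high
    return bits

frame = uart_frame(0x55)  # 0b01010101 with even parity: 11 bits on the line
```

The receiver reverses the process: it detects the low start bit, samples each data bit at the baud interval, checks parity, and validates the high stop bit.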

The transmitter interfaces to the data bus with the transmitter data register empty (TDRE) and write signals. When transmitting, the UART takes eight bits of parallel data, converts them into a serial bit stream, and transmits them serially.
The receiver interfaces to the data bus with the receiver-ready and read signals. When the UART detects the start bit, it receives the data serially and converts it into parallel form; when the stop bit (logic high) is detected, the data is recognized as valid.

UART Transmitter

The UART transmitter mainly consists of two eight-bit registers, the Transmit Data Register (TDR) and the Transmit Shift Register (TSR), along with the transmitter control. The transmitter control generates the TDRE and TSRE signals, which control data transmission through the UART transmitter. The write operation into the TDR is based on signals generated by the microprocessor.
Post: #2
I am a seminars beginner. Can you please send the full seminar report, abstract and if possible ppt on the topic compressed air energy storage system at the earliest, to resa89[at]
Thanking you
Post: #3
visit this thread for compressed air energy storage system report:
Post: #4
Need report about "Micro Batteries". please send
Post: #5
surge current protection using super conductor
Post: #6
for surge current protection using superconductors, visit this thread:

and for micro battery, visit:
