Mapping & Positioning seminar report
Post: #1

This topic describes the task of obtaining a set of points which, when aggregated together, form a "map". These points are essentially distances to the target object, and they are obtained using common perception sensors such as laser range finders and sonars.
By implementing such a map, a robot can pinpoint every location in the map and move to any particular location, making it autonomous, or in other words self-governed. As stated before, the map is a set of points that define the distance between the autonomous vehicle and the target. The advantage is that such a robot need not be programmed each and every time to move to a particular location; instead, it can be programmed simply to close the distance between itself and a target pinpointed by the user or programmer in the map (that is, if the function of the vehicle is to move to that target), again using the perception sensors installed on the vehicle.
This topic also throws light on the disadvantages of the conventional cameras installed on robots, such as the loss of depth information in the captured images and the camera's dependence on ambient light.
Finally, this topic discusses the two main types of vision systems used for mapping and positioning: their basic working, their applications, and their advantages and disadvantages relative to each other.


A capable remote sensing system is an essential component in the development of an autonomous vehicle. Without some form of remote sensing, the autonomous vehicle will not be able to navigate around stationary and moving objects, except perhaps in well defined environments where a map of the area is available. The tasks that must be carried out by an autonomous vehicle, for which information is required from the remote sensing system, can typically be classified into navigation, obstacle avoidance, bottom contour following (if required), mine hunting, classification and neutralization, and surveillance.
The requirements of the remote sensing system for each of the foregoing tasks are as follows. Navigation requires the generation of images which can be transformed into landmarks and local and global area maps which contain an explicit description of the environment. These maps are then used together with pattern recognition and matching processes, to determine the location of the vehicle, and to navigate the vehicle through the optimum path that will satisfy some predefined cost function. For this purpose, the remote sensing system must have high resolution and, if possible, be able to generate a 3-D representation of the ocean bottom.
For obstacle avoidance, the important requirement is that obstacles in the path of the vehicle are detected well in advance so that the vehicle can maneuver around the objects. The required performance of the obstacle avoidance system depends on the control, response and motion characteristics of the vehicle. Obstacle avoidance thus requires both long range and a wide field of view, as well as real time images and processing over the full area of view. In the surveillance mode, the remote sensing system observes the environment and detects activities that can potentially be of threat to the vehicle.
The most important requirements for surveillance are extended range and directionality such that the location and range to threat objects can be determined. Bottom contour following has the same requirements as obstacle avoidance, except in those instances when the mode of operation of the vehicle is to move very close to the ocean bottom. In this case the bottom following system (altitude information) may be a completely separate system. Additionally, under certain circumstances the depth information can be used for navigation if an area map with depth information is available. The location of the vehicle is determined from the present and past depth readings as these match with the onboard depth map. Each of these requirements can be translated into constraints on the choice of the sensors and the data conditioning algorithms.
Typically, machine readable images can be produced by video, active or passive sonar, or laser systems. Each of these systems has its advantages and disadvantages when used in an underwater vehicle, and a comparison between these systems is given in Table 1. Video cameras usually have a limited range, not more than perhaps 20 ft under ideal conditions. However, high resolution, as compared to images from sonar systems, can be produced, especially with the use of stereo cameras. Also, video images can give information about the physical properties of the object, such as color and reflectance, which can be useful for identification. Active sonar systems can be designed to have a much longer range than video cameras. However, although sonar may be the only type of remote sensing (vision) device that can be implemented because of water turbidity, sonar systems usually have poor resolution as compared to video. The resolution (azimuth and range) can be increased by increasing the operating frequency of the sonar, but this is accompanied by a reduction in range because of the increased absorption at higher frequencies. Another disadvantage of active sonar systems is that under certain circumstances it may not be possible to operate an active sonar at all. Passive sonar systems are in general ideal for surveillance operations. They offer long range compared to other forms of sensing, and can be used to determine the bearing as well as the range of incoming objects. The main advantage apart from range is that these, by definition, are quiet systems. However, the main disadvantage is that passive sonar is sensitive not only to threat objects that generate a self-noise level higher than the ambient noise in the vicinity of the sonar, but also to any other sound source within the range of the device. Underwater laser range finders are still to some extent in the development stages.
Laser systems have the advantages of high resolution, and if high energy short wavelength laser (such as blue-green laser) is used, the range can be up to 60 m (200 ft), which may be acceptable for a medium speed vehicle.

Types of Sonars
Acoustic sonar systems are presently the most extensively used systems for underwater image generation. For most of the tasks described in the foregoing, the ideal remote sensing system would provide a 3-D description of the world around it. In general it may be difficult and possibly more expensive to recover 3-D information from 2-D data, and therefore the employed sensors must directly provide 3-D data and the processing must be able to handle 3-D imagery. For underwater applications, stereo-acoustical techniques would in general be impossible to implement: unless the separation distance between the two components of the system is several wavelengths, interference can result between the two transducers due to reverberation, which would reduce the performance of the system. Therefore, although ideally 3-D images are required, it may not be possible to generate 3-D images with good resolution with present systems. A number of systems that generate 2-D images are commercially available. These systems use different methods of operation, and there are trade-offs between resolution and speed of image refresh rates. In what follows, a description of some typical systems and their methods of operation is given, together with their major advantages and disadvantages. In the next section is a description of a post-processing technique that can be used with side scan sonar-type data to estimate a 3-D representation from essentially 2-D images.
Active sonar systems have in general the same basic principle of operation: the area or object to be identified is insonified by acoustic energy, and the range to the object is determined by measuring the time delay between the transmitted and returned (back scattered) signal. Of interest in underwater moving vehicles is forward-looking sonar (FLS), since this gives the information required to plan the motion of the vehicle. The main differences between the various FLS that are available lie mainly in the way the information is retrieved once the forward direction is insonified. Generally available FLS can be categorized into four types: pulsed mechanically scanned; continuous transmission frequency modulated (CTFM), which is also mechanically scanned; electronically scanned sonar; and multiple beam sonar. The latter type comes in either of two forms, that is, either with multiple projected and receiver beams, or alternatively with a projected broad beam and multiple preformed receiver beams. Pulsed mechanically scanned sonar is very similar to side scan sonar, and information is obtained one sector at a time. The angular and range resolution are controlled by the beam width of the projector transducer and the duration of the pulse, respectively. To scan a wide angle of view, this system would be very slow unless the azimuth resolution is compromised.
The time for a complete forward scan depends on the range and azimuthal beam width. A 90-deg forward scan for a range of 400 m and an azimuth beam width of 2 deg would typically take about 48 s. To decrease the scanning time, either the range is reduced or the beam width is increased, resulting in a reduction in azimuth resolution. The relatively slow coverage rate is the main disadvantage of this type of FLS; its main advantage is its simplicity. The slow coverage rate can cause significant distortion in the generated images because of the vehicle motion, which would have to be compensated. This type of FLS can, however, be coupled to an intelligent system which controls the sectors to be scanned, and thus, to some extent, improves the coverage rate. For a relatively slow-moving vehicle, changes in the environment are also going to be relatively slow, except perhaps for other moving objects. Furthermore, the system can be instructed to limit its scan to those sectors where moving objects have been detected. If motion compensation is included for a high-speed vehicle, then instead of the sonar element being rotated to scan over a particular sector, the sonar element can be operated in a rocker mode [2]. In this case, the transducer is rotated about a horizontal axis parallel to the direction of motion instead of about a vertical axis, as in the sector scan mode. The main disadvantage of this operation is the limited scan angle. However, objects stay in the same line as they come closer to the vehicle, which simplifies motion compensation. CTFM mechanically scanned sonar systems address the problem of slow coverage inherent in pulsed mechanically scanned sonar by transmitting a continuous saw tooth frequency slide signal. With this method of operation, the CTFM process transforms the range (time) information into the frequency domain in the form of frequency shifts. This improves the scanning rate as compared to the pulsed sonar.
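The 48 s figure above can be reproduced with simple arithmetic if we assume the sonar steps in half-beamwidth increments and waits out the full two-way travel time at each step (sound speed taken as a nominal 1500 m/s in seawater). The stepping scheme is an assumption, so treat this as an illustrative sketch rather than a description of any specific system:

```python
SPEED_OF_SOUND_WATER = 1500.0  # m/s, nominal value for seawater

def scan_time(sector_deg, beamwidth_deg, range_m,
              v=SPEED_OF_SOUND_WATER, step_fraction=0.5):
    """Estimate the full-sector scan time of a pulsed mechanically
    scanned sonar.

    Assumes one ping per step of step_fraction * beamwidth, with each
    ping waiting the full two-way travel time to maximum range.
    """
    ping_time = 2.0 * range_m / v                       # round-trip delay
    n_pings = sector_deg / (beamwidth_deg * step_fraction)
    return n_pings * ping_time

# 90-deg sector, 2-deg beam, 400 m range, as in the text
print(round(scan_time(90.0, 2.0, 400.0)))  # ~48 s
```

Halving the range or doubling the beam width halves the scan time, which is exactly the trade-off the text describes.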
Typical scanning rates for CTFM sonar systems are 30 deg/s. That is, a +90 to -90 deg sector can be scanned in 6 s, which is also the interval between image updates. Although this type of sonar has a much faster scan rate, it can still potentially distort the image output for a fast-moving vehicle if no compensation is applied. The main advantage of CTFM sonar systems is the improved scan rate, which allows a high azimuth resolution, typically about 1 to 2 deg. The main disadvantage of CTFM sonar systems is the range resolution. This is determined by the number of processor filters employed to detect shifts in frequency, and for a fixed number of filters, the resolution degrades with increasing range. In general, the number of filters employed gives about one-tenth of the resolution of a pulsed mechanically scanned sonar. If the number of processor filters is increased to improve resolution, the scanning rate drops and the CTFM sonar loses its coverage rate advantage. The within-pulse electronic scanning sonar, generally referred to as phased modulation scanning, has a very fast scan rate, typically about 15 Hz, which depends on the pulse length and the sector angle of scan. For this system the sonar beam is scanned over the whole sector of interest for every range resolution cell, which is equivalent to the pulse length, hence "within pulse" scanning. The azimuth resolution is similar to that of the pulsed and CTFM mechanically scanned sonar systems. This type of FLS combines fast sweep rates with high range resolution. Its main disadvantage is its complexity, especially the electronic steering; because of the fast sweep rates, electronic scanning must be used. Also, this type of sonar is usually limited to high-frequency operation because of the otherwise large size of the required transducer array.
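The CTFM range-to-frequency mapping described above can be sketched numerically: an echo from range R returns after 2R/v seconds, during which the transmitted saw-tooth sweep has advanced by sweep_rate x 2R/v Hz, so each receiver filter bin covers a band of ranges. The sweep rate and filter bandwidth below are assumed example values, not figures from any particular system:

```python
SPEED_OF_SOUND_WATER = 1500.0  # m/s, nominal

def ctfm_shift_hz(range_m, sweep_rate_hz_per_s, v=SPEED_OF_SOUND_WATER):
    """Frequency shift produced by an echo from `range_m` meters:
    the sweep advances for the two-way travel time 2R/v."""
    return sweep_rate_hz_per_s * 2.0 * range_m / v

def range_resolution_m(filter_bw_hz, sweep_rate_hz_per_s,
                       v=SPEED_OF_SOUND_WATER):
    """Range band covered by one receiver filter of the given bandwidth.
    With a fixed number of fixed-width filters, finer resolution means
    a slower sweep, i.e. a lower coverage rate."""
    return filter_bw_hz * v / (2.0 * sweep_rate_hz_per_s)

# Assumed example: 10 kHz/s sweep, 100 Hz wide processor filters
print(ctfm_shift_hz(100.0, 10_000.0))       # shift for an echo at 100 m
print(range_resolution_m(100.0, 10_000.0))  # range band per filter, m
```

This makes the stated trade-off explicit: narrowing the filters (or adding more of them) improves range resolution only at the cost of sweep speed.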
The FLS systems described in the foregoing operate very similarly to side scan sonar. That is, the beam pattern is narrow in the horizontal direction (approximately 1 to 2 deg), and wide in the vertical direction (approximately 80 deg). The images generated by these systems are 2-D images, similar to an aerial photograph when the object is illuminated from the side.
The resulting images can be used by the vehicle for navigation, provided the resolution is adequate. The estimated 3-D information can be used for optimum path planning, given some desired set of operational modes, and for bottom contour following. Another type of FLS system for obstacle avoidance uses the concept of acoustic whiskers. The whiskers are multiple acoustic beams that project in different directions to watch for obstacles. A typical arrangement presently used on an underwater vehicle is 3 horizontal layers of 5 acoustic beams per layer, pointing to the front of the vehicle [3]. Systems with a larger number of beams in the horizontal direction are now becoming available. The main advantages of multi-beam sonar systems are the higher rates of data gathering, which make these systems less prone to platform motion distortion (the required compensation is much less than for other, slower systems); a range resolution related to the pulse length; and the absence of moving parts. However, multi-beam FLS have rather poor azimuth resolution (typically 10 deg), although on some new and projected systems with 40 or more beams, higher resolution is possible. The main disadvantages of multi-beam systems are the complexity of the hardware and the fact that a number of transducers ping at the same time, making the sound level generated by the system high compared to other systems. If detection during the operation of the sonar is to be avoided, this higher generated sound level is an additional disadvantage.
The multiple beam sonar system can be considered an amalgamation of a number of individual sonar systems, all pointing in different directions simultaneously. Therefore, to process the data, either multiple identical channels are used, making the system bulky, or alternatively some form of multiplexing is used. The use of such an FLS for contour generation would require an inordinate number of beams, making this type inappropriate for that task.

Sonar and laser systems, like radar, use the principle of echo location. For echo location, a short pulse is sent in a specific direction (see figure 1). When the pulse hits an object that does not absorb it, it bounces back, and the echo can be picked up by a detector circuit.
In the case of sonars, high frequency sound is used for range detection; in the case of lasers, high frequency light is transmitted. The basic working principle of sonar is described below.
By measuring the time between sending the pulse and detecting the echo, the distance to the object can be determined. Sound travels at a speed of 343 meters per second through air at room temperature. Multiplying the time between pulse and echo (in seconds) by 343 gives twice the distance to the object in meters (since the sound travels the distance twice, out to the object and back):
2d = Vsound × (Techo − Tpulse)
Vsound = speed of sound (343 meters/second in air at room temperature)
Tpulse = time in seconds at which the pulse is transmitted
Techo = time in seconds at which the echo is detected
d = distance to the object off which the pulse bounces back
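The ranging formula above can be sketched directly in code; the timestamps in the example are made-up values for illustration:

```python
SPEED_OF_SOUND_AIR = 343.0  # m/s at room temperature

def echo_distance(t_pulse, t_echo, v_sound=SPEED_OF_SOUND_AIR):
    """Distance to an object from pulse/echo timestamps (seconds).

    The wave covers the distance twice (out and back), so
    2 * d = v_sound * (t_echo - t_pulse).
    """
    return v_sound * (t_echo - t_pulse) / 2.0

# Example: echo detected 10 ms after the pulse was transmitted
print(echo_distance(0.0, 0.010))  # 1.715 m
```

The same function works underwater or for a laser by substituting the appropriate propagation speed for `v_sound`.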
Sonar Transducers

Autonomous Vehicle Implementing
Sonars As Perception Sensors

Using the principle of the perception sensors discussed above, we can create a map of a particular location. The following discusses how this is achieved.
A map can generally be said to consist of a set of points on the plane on which the map rests. We also know that by using the above perception sensors we can obtain the distance of the target from the position of the sensor; this distance is actually the distance between the sensor and a single point on the target.
So if by some means we are able to get all, or at least some, of the points of the target, we have obtained a set of points, and these points can then be integrated into a map by appropriate software.
There are two types of maps that can be developed depending on how the autonomous vehicle is designed.
The two types of maps are:
Two Dimensional Map
Three Dimensional Map
A two dimensional map takes only some of the points of its surroundings (the target): the generated map consists only of the points that lie in the plane of the perception sensor's axis, and the sensor is limited to moving in that one plane. These maps are used in small systems where 3D mapping is not necessary. Note that although the map is not 3D, it can still estimate the distance of objects, since the sensor axis here is PARALLEL to (lies in) the scan plane, NOT PERPENDICULAR to it.

To explain further: suppose the base or ground is the x-y plane; then scanning occurs only in an x-y plane at one particular value of z. That is, only points in the x-y plane at that value of z are scanned, and from this data the two dimensional map is produced.
The resulting figure can be explained as follows: the perception sensor on the autonomous vehicle is given the freedom to move in one plane (the x-y plane), preferably chosen parallel to the base (ground). The sensor gathers all the data present in that plane and is hence able to construct a map, which looks to us like the top view of the path. Here the z-axis is parallel to the wall, as shown in the side view: only a single point along the z-axis (part of the height of the wall) is scanned, whereas in the top view the wall is fully scanned (across its width) by rotating the sensor in the x-y plane.
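The construction of such a 2D map can be sketched as follows: each reading from the rotating sensor is an (angle, range) pair, which converts to an (x, y) map point. The scan values below are hypothetical:

```python
import math

def scan_to_points(scan, sensor_x=0.0, sensor_y=0.0):
    """Convert a 2-D range scan into map points in the x-y plane.

    `scan` is a list of (angle_rad, range_m) pairs taken as the sensor
    rotates within the plane parallel to the ground.
    """
    return [(sensor_x + r * math.cos(a), sensor_y + r * math.sin(a))
            for a, r in scan]

# Hypothetical scan: readings at 0, 90 and 180 degrees
scan = [(0.0, 2.0), (math.pi / 2, 3.0), (math.pi, 1.0)]
print(scan_to_points(scan))
```

Accumulating these points as the vehicle moves (with its own position added in) yields the top-view map described above.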

The following diagram illustrates as to how the autonomous vehicle views the surroundings.

Picture of The Scene

The above figure depicts a view that is to be mapped by a robot.
Map As Viewed By The Robot

The thick black lines indicate the surface areas of the walls that were mapped by the autonomous vehicle located at the position shown in the figure.
Three dimensional mapping is basically the collection of a set of points on a plane perpendicular to the axis of the perception sensor; that is, the sensors employed here are made to scan the surrounding plane that is perpendicular to the sensor axis.
So how is it possible to inscribe three dimensional information onto a two dimensional plane without any loss? This can be explained as follows: the set of points obtained carries one extra feature, the distance of each point from the perception sensor, and this distance is denoted in color coded form, thereby retaining all three dimensional components on a two dimensional plane.
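The color coding of distance can be sketched with a simple mapping; the near-red/far-blue scheme below is an illustrative choice, not the scheme of any particular sonar package:

```python
def depth_to_color(distance, d_min, d_max):
    """Map a range reading to an RGB color (near = red, far = blue).

    Each pixel of the 2-D image then carries the third dimension
    (distance) in its color.
    """
    t = (distance - d_min) / (d_max - d_min)
    t = min(max(t, 0.0), 1.0)            # clamp to [0, 1]
    return (int(255 * (1 - t)), 0, int(255 * t))

print(depth_to_color(0.0, 0.0, 10.0))   # (255, 0, 0): nearest point
print(depth_to_color(10.0, 0.0, 10.0))  # (0, 0, 255): farthest point
```

Rendering every scanned point with its distance-derived color produces exactly the kind of color coded bathymetry image shown later in this report.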
Here the perception sensors are made to scan the plane that lies perpendicular to them. That is, if the x-y plane is the plane on which the autonomous vehicle rests, then scanning takes place in planes that contain the z axis.
Usually this is achieved either by using a number of sensors arranged in a particular fashion (multi-beam), or, more commonly, by a single sensor mounted on a precision motor and controlled through a micro-controller. In this way the sensor can scan every detail of the target, achieving high resolution: the higher the precision of the motor, the more points in the scan and the higher the resolution. The resolution also depends on factors such as beam width and azimuthal angle, which vary with the type of sensor used, as well as on the range to the target and the scanning rate.
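The single-sensor-on-a-servo arrangement can be sketched as a sweep over pan and tilt angles, with each (pan, tilt, range) reading converted to an (x, y, z) point; the grid of servo positions below is a hypothetical example:

```python
import math

def spherical_to_cartesian(pan, tilt, r):
    """Convert one (pan, tilt, range) reading from a sensor on a
    pan/tilt servo mount into an (x, y, z) map point.

    Angles in radians; pan rotates about the vertical axis, tilt is
    measured above the horizontal.
    """
    x = r * math.cos(tilt) * math.cos(pan)
    y = r * math.cos(tilt) * math.sin(pan)
    z = r * math.sin(tilt)
    return (x, y, z)

# Sweep a hypothetical 3x3 grid of servo positions at a fixed 5 m range
cloud = [spherical_to_cartesian(math.radians(p), math.radians(t), 5.0)
         for p in (-10, 0, 10) for t in (-10, 0, 10)]
print(len(cloud))  # 9 points
```

A finer servo step simply yields more (pan, tilt) samples and hence more points, which is the resolution/precision relationship stated above.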
The diagram below depicts a 3D map obtained using sonar as the perception sensor.

Interactive 3-D visualization of 2.5-m resolution bathymetry (color coded) and 30-m resolution land elevation data (gray scale) from San Francisco Bay, CA. The view is from the Golden Gate Bridge looking towards Angel Island. Bathymetry collected with an EM-1000 multibeam sonar by U.S.

Another example is an image scanned by a laser range finder with the help of a servo-motor:

The conventional video camera is basically an extension of a still camera, capturing images of the plane perpendicular to the camera at a rate of at least 26 images per second. Either way, both the video camera and the still camera capture images on the plane perpendicular to the camera, so all components along the protruding third axis are ignored; this results in a loss of 3D data and limits the camera to 2D data. Perception sensors, by contrast, can produce 3D data, since they record an extra feature: the component along the z-axis, i.e., the distance between the sensor and the object.
The conventional camera relies on light, which falls on the target, reflects, and travels back to the camera; if there is no light, no image can be captured. Perception sensors, on the other hand, emit their own high frequency signal for mapping, so they can be used for military purposes and serve as night vision.
For underwater purposes, light cannot be used, since even under ideal conditions it can view an object only to about 20 feet (water turbidity causes a scattering effect), whereas these sensors (sonars) give a range of about 200 feet.

Since the frequency of the sonar sensor is much lower than that of the laser, the scattering factor decreases, and sonar can therefore be used in places like turbid water. Where a laser can reach about 50 feet, a sonar can go all the way to 200 feet. In the case of a laser, the frequency of the wave is high; by the Rayleigh criterion we know that the scattering factor is inversely proportional to the fourth power of the wavelength, so the laser has a strong tendency to scatter, especially where its wavelength becomes comparable to that of the particles in the traveling medium. This is why lasers have short range in underwater applications, where turbidity causes scattering.
Also, since lasers are of high frequency (which implies that the wavelength is very small), their resolving power is greater than that of sonars, so they can be used to scan a target whose fine details are required. This is because resolving power is inversely proportional to the wavelength of the signal.
Another advantage of lasers is that, since they are of high frequency, their directivity is much greater than that of sonars; the beam width of the laser is therefore smaller than that of the sonar, which again results in better resolution.
However, since lasers are of high frequency, their signals attenuate very quickly, so they have limited range, unlike sonars, whose range is greater.
The scanning rate (the time taken for a perception sensor to scan the target) is much higher for lasers, since light travels at the speed of light, whereas the scanning rate of sonars is much lower, mainly because sound travels at the speed of sound, which is far less than the speed of light.
From the above points we can see that no single perception sensor can have both high range and high resolution, since high range can only be achieved at low frequency, while high resolution requires high frequency.
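The trade-off can be made concrete with two rules of thumb from the points above: Rayleigh scattering grows as 1/wavelength^4, and diffraction-limited beam width is roughly wavelength/aperture. The wavelengths and aperture sizes below are assumed example values (a 100 kHz sonar in water, a ~500 nm blue-green laser), so this is an illustrative sketch rather than a datasheet comparison:

```python
SONAR_WAVELENGTH = 1.5e-2   # m: 100 kHz sonar in water (~1500 m/s)
LASER_WAVELENGTH = 5.0e-7   # m: blue-green laser, ~500 nm

def rayleigh_ratio(lam_a, lam_b):
    """Relative Rayleigh scattering of wavelength a versus b:
    scattering is proportional to 1/wavelength^4."""
    return (lam_b / lam_a) ** 4

def beamwidth_rad(wavelength, aperture):
    """Diffraction-limited beam width estimate: about wavelength/aperture."""
    return wavelength / aperture

# The laser scatters enormously more, but its beam is far narrower
print(f"{rayleigh_ratio(LASER_WAVELENGTH, SONAR_WAVELENGTH):.3g}")
print(beamwidth_rad(SONAR_WAVELENGTH, 0.10))   # 10 cm sonar transducer
print(beamwidth_rad(LASER_WAVELENGTH, 0.01))   # 1 cm laser optic
```

The first number shows why the laser's range collapses in turbid water; the beam width numbers show why the laser resolves far finer detail, which is the core range-versus-resolution trade-off of this section.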
Usually, today's robotic systems contain both of these sensors, thereby fulfilling both criteria of high resolution and high range.

Sonars turn out to be the best option for mapping underwater, due to features like long range and the fact that they are not affected by water turbidity.
They are basically used to map the ocean floor for geological studies. Here either multiple beam sonars or one sonar with a precision device is used. The following diagram shows how it is used.

The above diagram depicts how a multibeam sonar scans the ocean floor

Lasers are used in many military applications, such as guiding missiles. They are also being implemented for face recognition analysis, where every depression or hump on the face is mapped; this pattern is unique to every person.
Lasers are also being implemented in today's vehicles, fitted with the two dimensional mapping feature discussed earlier, which eases the driver's task since the map shows the direction of travel. A laser range scanner is installed on the roof of the vehicle so that it profiles the surroundings HORIZONTALLY (that is, as a two dimensional map). The laser range finder serves not only as a mapping sensor but also as a means of positioning the automobile in the map, as well as positioning the destination in the map. The map it produces can be combined with GPS technology, which in turn gives the vehicle's current position in the world.
This has been installed in the latest Mercedes-Benz automobiles and in many research institutes' vehicles. The system gives the current position of the car and, via GPS satellite technology, the car is able to find all the current paths from the source to the destination, choosing one based on factors like the minimum path needed to reach the destination, traffic blocks, elevation, etc. With this, one can actually sit back and relax while the car, fitted with this mapping and positioning technology, automatically takes you to the destination.

The use of this mapping and positioning technology not only gives rise to maps on which any autonomous vehicle can be positioned along with the surrounding targets; it also gives rise to other concepts, like artificial vision, which may one day be implanted in us to help the blind, or even let us walk in pitch darkness without a torch!

1. Seafloor Map Generation for Autonomous Underwater Navigation - Andrew E. Johnson & Martial Hebert, The Robotics Institute, Carnegie Mellon University
2. Three Dimensional Map Generation From Side-Scan Sonar Images - J. M. Cuschieri & Martial Hebert, Transactions of the ASME
3. High Accurate Positioning & Mapping Using Laser Range Scanner - H. Zhao & R. Shibasaki, Proceedings of the IEEE Intelligent Vehicles Symposium

Post: #2
It's nice to see my paper on this site.. thank you!
Actually surprising... :)

