SLAM

What is SLAM?

Simultaneous localization and mapping. SLAM is the process of constructing or updating a map of an unknown environment while simultaneously keeping track of an agent's location within it, for example a robot. The main components of a SLAM process are sensing, processing of data, visualization of data, and decision making.

What does a robot need for SLAM?

The first thing a robot needs in order to map its environment is a set of sensors, which can be ultrasonic sensors, laser rangefinders, or any type of depth camera. A locomotion system is also needed, so that a bigger area can be mapped from different angles and perspectives. The last thing is a processing unit: a processor powerful enough to run an operating system such as ROS. An overview of the SLAM process can be seen in the figure on the left:
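The sense-move-update cycle described above can be sketched in a few lines. This is a minimal dead-reckoning illustration, not a full SLAM implementation: the odometry steps and range readings are made-up stand-ins for real sensor drivers, and a real system would also correct the pose estimate instead of trusting odometry alone.

```python
import math

def update_pose(pose, distance, turn):
    """Dead-reckon the robot pose (x, y, heading) from one odometry step."""
    x, y, theta = pose
    theta += turn
    x += distance * math.cos(theta)
    y += distance * math.sin(theta)
    return (x, y, theta)

def range_to_point(pose, bearing, rng):
    """Convert a range reading into a point in world coordinates."""
    x, y, theta = pose
    return (x + rng * math.cos(theta + bearing),
            y + rng * math.sin(theta + bearing))

pose = (0.0, 0.0, 0.0)   # start at the origin, facing along x
world_map = []            # the map: a growing list of observed points

# Hypothetical log: (odometry step, scan readings) per time step
steps = [((1.0, 0.0), [(0.0, 2.0)]),
         ((1.0, math.pi / 2), [(0.0, 1.5)])]

for (dist, turn), scan in steps:
    pose = update_pose(pose, dist, turn)          # localization step
    for bearing, rng in scan:
        world_map.append(range_to_point(pose, bearing, rng))  # mapping step
```

After the two steps the robot has driven forward, turned left, driven again, and placed two obstacle points into the shared world frame, which is the essence of the loop in the figure.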



SLAM in ROS


ROS is one of the most powerful software/firmware tools for robots nowadays. It already contains many implemented SLAM tools and algorithms, which are universal for any shape of robot and a wide range of sensor types. It can easily combine data from different sources and place it in the same software 3D environment. For example, in the image on the right you can see a ROS robot mapping a room based on laser sensors and a Kinect camera:
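Combining laser and camera data boils down to transforming each sensor's points into one common frame. The sketch below shows that geometry with plain 2D math; the mounting offsets are invented example values (on a real ROS robot, the tf2 library supplies these transforms between sensor frames).

```python
import math

def to_robot_frame(point, mount):
    """Rotate and translate a sensor-frame point into the robot frame."""
    px, py = point
    mx, my, yaw = mount  # sensor mounting pose relative to robot centre
    return (mx + px * math.cos(yaw) - py * math.sin(yaw),
            my + px * math.sin(yaw) + py * math.cos(yaw))

# Hypothetical mounting poses (x, y, yaw)
laser_mount = (0.2, 0.0, 0.0)           # laser 20 cm ahead of centre
kinect_mount = (0.0, 0.1, math.pi / 2)  # camera offset, rotated 90 degrees

laser_points = [(1.0, 0.0)]
kinect_points = [(1.0, 0.0)]

# Both scans end up as points in the same robot-centred frame
fused = ([to_robot_frame(p, laser_mount) for p in laser_points] +
         [to_robot_frame(p, kinect_mount) for p in kinect_points])
```

Once every reading lives in one frame, the SLAM back end can treat them as a single combined scan, which is what makes mixing sensor types in ROS straightforward.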


SLAM algorithms

SLAM is not based on a unique formula or general rule. However, there are some key elements that are present in any type of SLAM algorithm. Landmarks are one of these elements: features which can easily be re-observed and distinguished from the environment. These are used by the robot to find out where it is (to localize itself). One way to imagine how this works for the robot is to picture yourself blindfolded. If you move around a house blindfolded, you may reach out and touch objects or hug walls so that you don't get lost. Another important element is the approximation of the incoming data. Sensors cannot be trusted 100%: they produce many errors and imprecise values, which is why approximation algorithms are needed. For example, an array of points from a laser scan can be approximated to a line, which can be a wall or an obstacle. In the image below, there is a good example of this type of approximation:
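The simplest version of this point-to-line approximation is an ordinary least-squares fit. The sketch below fits y = a*x + b to a cluster of scan points; the point values are illustrative (noisy samples of a hypothetical wall along y = 0.5*x + 1), and a real scanner would first split the scan into clusters before fitting each one.

```python
def fit_line(points):
    """Least-squares fit y = a*x + b through a list of (x, y) points."""
    n = len(points)
    sx = sum(x for x, _ in points)
    sy = sum(y for _, y in points)
    sxx = sum(x * x for x, _ in points)
    sxy = sum(x * y for x, y in points)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)  # slope
    b = (sy - a * sx) / n                          # intercept
    return a, b

# Noisy scan points scattered around the wall y = 0.5*x + 1
scan = [(0.0, 1.02), (1.0, 1.49), (2.0, 2.01), (3.0, 2.48)]
slope, intercept = fit_line(scan)
```

The recovered slope and intercept land close to the true wall parameters despite the noise, turning dozens of raw points into one compact line feature the robot can store and re-observe.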





SLAM in cooperative robotics

Also called C-SLAM, this is a new technology which started developing in 2015. Most of the research and papers on this topic are not open to the general public, or are behind a paywall. The main idea of C-SLAM is to combine mapping data from multiple sources. The robots should share their landmarks, as well as information about their current positions, in such a way that an obstacle can be distinguished from another robot.
Merging of maps can be done in an environment like rViz using ROS. An example of map merging can be seen on the left:




In order to merge the maps, there must be common landmarks that define the merging points. That means the robots should meet at least once during the mapping process. To make the position calibration easier, the robots should start mapping from the same place, which is then used as a reference point for the overall map. Another important aspect is the efficiency of the robots' movement: the robots should avoid mapping the same landmarks and taking the same routes.
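Given a set of landmarks that both robots have observed, the merge reduces to finding the rotation and translation that aligns one map with the other. The sketch below estimates that rigid transform with a 2D Procrustes/Kabsch-style computation; the landmark coordinates are invented, and a real system would first match landmark descriptors to establish the correspondences.

```python
import math

def align_maps(landmarks_a, landmarks_b):
    """Rotation + translation mapping map-B coordinates onto map-A."""
    n = len(landmarks_a)
    cax = sum(x for x, _ in landmarks_a) / n   # centroid of map-A landmarks
    cay = sum(y for _, y in landmarks_a) / n
    cbx = sum(x for x, _ in landmarks_b) / n   # centroid of map-B landmarks
    cby = sum(y for _, y in landmarks_b) / n
    # Rotation from the centred correspondences (2-D Kabsch/Procrustes)
    s = c = 0.0
    for (ax, ay), (bx, by) in zip(landmarks_a, landmarks_b):
        ax, ay, bx, by = ax - cax, ay - cay, bx - cbx, by - cby
        c += bx * ax + by * ay
        s += bx * ay - by * ax
    theta = math.atan2(s, c)
    # Translation that takes the rotated B centroid onto the A centroid
    tx = cax - (cbx * math.cos(theta) - cby * math.sin(theta))
    ty = cay - (cbx * math.sin(theta) + cby * math.cos(theta))
    return theta, tx, ty

# Map B is map A rotated by 90 degrees and shifted by (1, 2)
a = [(0.0, 0.0), (2.0, 0.0), (2.0, 1.0)]
b = [(1.0, 2.0), (1.0, 4.0), (0.0, 4.0)]
theta, tx, ty = align_maps(a, b)
```

Applying the recovered transform to every point of map B drops it into map A's frame, after which the two occupancy grids can simply be overlaid, which is what a ROS map-merging node does under the hood.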
However, one thing to keep in mind is that the robots should be able to detect each other easily. A solution would be to place markers with different IDs on each robot, as in the picture on the right:
