Odometry General Information

Written by: Erick Mejia Uzeda

What is Odometry?

In short, odometry is the use of some form of motion data to calculate the change of a robot's pose (position and orientation) over time. Hence it enables one to localize a robot given its initial pose.

Note: One of the issues with odometry is that it is a form of dead reckoning. In other words, new pose information is based on previous pose information, so any errors that develop accumulate over time.

Odometry Techniques

Odometry can be performed using a multitude of sensors/technologies. We provide a brief list below, each with some advantages and disadvantages.

Note: The following table was reproduced from the paper Review of visual odometry: types, approaches, challenges, and applications and serves as a good summary.

| Sensor/technology | Advantages | Disadvantages |
| --- | --- | --- |
| Wheel odometry | Simple to determine position/orientation. Good short-term accuracy and high sampling rates. Low-cost solution. | Position drift due to wheel slippage. Error accumulation over time. Velocity estimation requires numerical differentiation, which produces additional noise. |
| Inertial Navigation System (INS) | Provides both position and orientation using a 3-axis accelerometer and gyroscope. Not subject to interference outages. | Position drift (position estimation requires a second-order integral). Long-term drift errors. |
| GPS/GNSS | Provides absolute position with a known error bound. No error accumulation over time. | Unavailable indoors, underwater, and in closed areas. Affected by RF interference. |
| Ultrasonic sensor | Provides a scalar distance measurement from sensor to object. Inexpensive solution. | Reflection of the signal wave depends on the material and orientation of the obstacle surface. Suffers from interference if multiple sensors are used. Low angular resolution and scan rate. |
| Laser sensor | Similar to sonar sensors but with higher accuracy and scan rate. Returns the distance to a single point (rangefinder) or an array of distances (scanner). | Reflection of the signal wave depends on the material and orientation of the obstacle surface. Expensive solution. |
| Optical camera | Images carry a large amount of meaningful information. Provides high localization accuracy. Inexpensive solution. | Requires image-processing and data-extraction techniques. High computational cost to process images. |

Rotary Encoders: (Used to measure Wheel Odometry)

In general, the wheel's angular rotation is encoded through some means (mechanical, optical) into a binary code. For each incremental rotation (the encoder's resolution), a change of state occurs within the mechanical (e.g. physical opening and closing of a circuit) or optical (e.g. different optical patterns read) sensing mechanism, which is then translated into a binary code. Knowing the resolution and the angle readings over time, one can integrate to determine the change in position (a short sketch follows the list below).

  • Suffers from position drift and inaccuracy due to wheel slippage, which leads to accumulated errors.
  • Good short term accuracy since accumulated error is small for small sample times.
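As a concrete illustration, here is a minimal sketch of differential-drive wheel odometry from encoder tick deltas. The tick resolution, wheel radius, and wheel base values are hypothetical placeholders; only the dead-reckoning arithmetic is the point.

```python
import math

# Hypothetical robot geometry (placeholders, not real hardware values).
TICKS_PER_REV = 2048        # encoder resolution: ticks per wheel revolution
WHEEL_RADIUS = 0.05         # metres
WHEEL_BASE = 0.30           # distance between the two drive wheels, metres

def update_pose(x, y, theta, d_ticks_left, d_ticks_right):
    """Dead-reckon a differential-drive pose from encoder tick deltas."""
    # Convert tick counts into distance travelled by each wheel.
    d_left = 2 * math.pi * WHEEL_RADIUS * d_ticks_left / TICKS_PER_REV
    d_right = 2 * math.pi * WHEEL_RADIUS * d_ticks_right / TICKS_PER_REV

    # The average gives forward motion, the difference gives rotation.
    d_center = (d_left + d_right) / 2.0
    d_theta = (d_right - d_left) / WHEEL_BASE

    # Integrate the motion in the world frame (small-step approximation).
    x += d_center * math.cos(theta + d_theta / 2.0)
    y += d_center * math.sin(theta + d_theta / 2.0)
    theta = (theta + d_theta) % (2 * math.pi)
    return x, y, theta

# Usage: feed tick deltas at each sample time; errors accumulate step by step.
x, y, theta = 0.0, 0.0, 0.0
x, y, theta = update_pose(x, y, theta, d_ticks_left=120, d_ticks_right=130)
```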

IMU [Inertial Measurement Unit]: (Used for INS [Inertial Navigation System])

Given a known initial position, orientation, and velocity, it continuously calculates the current position, orientation, and velocity using motion sensors (accelerometers) and rotation sensors (rate gyroscopes); a minimal sketch follows the list below.

  • Self-contained and thus unaffected by external sources.
  • Highly prone to drift accumulation because position/velocity are obtained by integration.
  • Position must be corrected periodically using another navigation system.
  • Hence poorly suited for long distances on its own, but it can supplement other systems (e.g. GPS) with extra accuracy.
  • Go here for more errors and their effects.
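Below is a minimal planar sketch of the INS idea, and of why drift accumulates: the gyro is integrated once for heading and the accelerometer twice for position, so even a small constant bias grows quadratically. The sample rate and bias value are illustrative assumptions.

```python
import numpy as np

def ins_step(p, v, theta, accel_body, gyro_z, dt):
    """One planar INS update: integrate gyro for heading, accel twice for position."""
    # First integral of angular rate: heading.
    theta += gyro_z * dt

    # Rotate body-frame acceleration into the world frame.
    c, s = np.cos(theta), np.sin(theta)
    accel_world = np.array([c * accel_body[0] - s * accel_body[1],
                            s * accel_body[0] + c * accel_body[1]])

    # First integral: velocity; second integral: position.
    v = v + accel_world * dt
    p = p + v * dt
    return p, v, theta

# Usage: a tiny constant accelerometer bias alone causes metres of drift.
p, v, theta = np.zeros(2), np.zeros(2), 0.0
bias = np.array([0.01, 0.0])            # assumed 0.01 m/s^2 bias
for _ in range(6000):                   # 60 s at an assumed 100 Hz
    p, v, theta = ins_step(p, v, theta, bias, 0.0, dt=0.01)
print(p)                                # roughly 18 m of drift from bias alone
```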

GPS [Global Positioning System]: (Used for Absolute Positioning)

Uses signals from at least 4 satellites to perform trilateration (similar to triangulation, but based on distances/spheres rather than angles) to determine an absolute position on the Earth.

  • This is a localization technique (not odometry specifically), but it is used to correct accumulated odometry errors; a rough sketch of the idea follows this list. See this Robotics Stack Exchange post for how GPS can supplement encoders.
  • Immunity to error accumulation.
  • Long-term stability.
  • Common GPS receivers have accuracies of around 1-5 m and can drift within that error range.
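As a rough illustration of how an absolute GPS fix can bound odometry drift, the sketch below simply blends the dead-reckoned estimate toward each fix with a fixed gain. This is not a full solution: a real system would use a proper estimator (e.g. a Kalman filter) that weights the two sources by their error covariances, and the gain and coordinates here are made-up values.

```python
def fuse_gps(odom_xy, gps_xy, gain=0.3):
    """Pull the dead-reckoned position toward an absolute GPS fix.

    A gain near 0 trusts odometry, a gain near 1 trusts GPS. A proper
    filter would derive this weighting from the error covariances
    instead of using a fixed constant.
    """
    x = (1 - gain) * odom_xy[0] + gain * gps_xy[0]
    y = (1 - gain) * odom_xy[1] + gain * gps_xy[1]
    return x, y

# Usage: odometry has drifted a couple of metres; the GPS fix (accurate to
# a few metres) bounds the error instead of letting it grow without limit.
print(fuse_gps((12.0, 5.0), (10.0, 4.0)))   # (11.4, 4.7)
```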

Sonar/Ultrasonic Sensors:

Uses the Time-of-Flight (TOF) technique, which uses the time it takes a signal (an acoustic pulse in this case) to travel to an obstacle and back to calculate distances. One can then triangulate or match against a model to determine position/displacement (a short sketch follows the list below).

  • High accuracy can be obtained using a sonar array.
  • Sensitive to noise from the environment and acoustic interference.
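A minimal sketch of the TOF range calculation is shown below. The speed of sound is taken at roughly room temperature, which is an assumption; a real driver would compensate for temperature and reject outlier echoes.

```python
SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 degrees C (assumed)

def tof_distance(echo_time_s):
    """Range from a round-trip echo time: the pulse travels out and back."""
    return SPEED_OF_SOUND * echo_time_s / 2.0

# Usage: a 5.8 ms round trip corresponds to roughly 1 m to the obstacle.
print(tof_distance(0.0058))   # ~0.995 m
```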

Lasers: (Used to Compute Point Clouds)

LIDARs generally use TOF to compute point clouds. Motion is determined by optimally matching consecutive point cloud readings, as sketched below. They have a much higher angular resolution than sonars.
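Below is a minimal 2-D iterative closest point (ICP) sketch of this matching step, using brute-force nearest neighbours and an SVD-based rigid alignment. It is illustrative only: the synthetic scans, iteration count, and lack of outlier rejection are simplifying assumptions, and real LIDAR pipelines use k-d trees and robust correspondence filtering.

```python
import numpy as np

def best_rigid_transform(A, B):
    """Least-squares rotation/translation mapping paired points A onto B (Kabsch/SVD)."""
    ca, cb = A.mean(axis=0), B.mean(axis=0)
    H = (A - ca).T @ (B - cb)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # guard against a reflection
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    t = cb - R @ ca
    return R, t

def icp(source, target, iterations=20):
    """Estimate the rigid transform aligning one 2-D scan onto the next."""
    src = source.copy()
    R_total, t_total = np.eye(2), np.zeros(2)
    for _ in range(iterations):
        # Pair each source point with its nearest target point (brute force).
        d = np.linalg.norm(src[:, None, :] - target[None, :, :], axis=2)
        matched = target[d.argmin(axis=1)]
        R, t = best_rigid_transform(src, matched)
        src = src @ R.T + t
        R_total, t_total = R @ R_total, R @ t_total + t
    return R_total, t_total

# Usage: the recovered transform between consecutive scans estimates the relative motion.
prev_scan = np.random.rand(200, 2) * 5.0
angle = np.deg2rad(3.0)
R_true = np.array([[np.cos(angle), -np.sin(angle)], [np.sin(angle), np.cos(angle)]])
curr_scan = prev_scan @ R_true.T + np.array([0.10, 0.02])
R_est, t_est = icp(prev_scan, curr_scan)
```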

Optical Camera: (Used for Visual Odometry)

A passive technique that captures images using a stereo camera pair to triangulate points in the scene. Point clouds from different frames are matched optimally to determine motion. Due to the large number of distinct visual features found in images, it is generally more robust and reliable than other sensors, and due to its passive nature, it does not face the interference issues that active TOF sensors do.

NOTE: it is possible to use a single camera and calculate motion data from pixel displacement between frames (a sketch of this monocular approach follows the list below).

  • Generally computationally expensive.
  • Sensitive to camera calibration.
  • Sensitive to lighting environment, lack of textured surfaces, presence of shadows, and blur.
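Below is a rough sketch of the monocular (single-camera) variant using OpenCV: ORB features are matched between two frames, an essential matrix is estimated with RANSAC, and the relative rotation and unit-scale translation are recovered. The camera intrinsics and file names in the usage comment are hypothetical, and a stereo pipeline would additionally triangulate points to recover absolute scale.

```python
import cv2
import numpy as np

def relative_pose(img_prev, img_curr, K):
    """Estimate relative camera rotation and (unit-scale) translation between two frames."""
    # Detect and describe ORB features in both frames.
    orb = cv2.ORB_create(2000)
    kp1, des1 = orb.detectAndCompute(img_prev, None)
    kp2, des2 = orb.detectAndCompute(img_curr, None)

    # Match descriptors; Hamming distance suits ORB's binary descriptors.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
    pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

    # Essential matrix with RANSAC, then decompose into rotation and translation.
    E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC, threshold=1.0)
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=mask)
    return R, t   # t is only known up to scale for a single camera

# Usage (hypothetical file names and intrinsics):
# K = np.array([[700.0, 0, 320.0], [0, 700.0, 240.0], [0, 0, 1]])
# img0 = cv2.imread("frame0.png", cv2.IMREAD_GRAYSCALE)
# img1 = cv2.imread("frame1.png", cv2.IMREAD_GRAYSCALE)
# R, t = relative_pose(img0, img1, K)
```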