# SLAM Presentation


# Motivation

With the never-ending march of progress and innovation, robotics has been identified as a technological area of particular interest by academia and industry alike.

The main motivating factor for researching and investing in robotics is that robots have the potential to change society in a variety of ways that help:

- Reduce costs for humans:
  - Self-driving taxi services reducing transportation costs.
- Take on the burden of physically intensive and/or dangerous work:
  - Automating heavy/dangerous machinery.
  - Exploring mines.
  - Cleaning up nuclear disaster sites.
- Make our world safer:
  - Self-driving cars remove human error from the equation and theoretically create safer roads.
- Improve quality of life:
  - Household robotic assistants that can help elderly people with mobility issues or people with disabilities.
- Improve efficiency in a variety of ways.

> **Actuation**: the action of causing a machine or device to operate.

In performing these tasks, robots perceive the world through their sensors and interact with the world through actuation.

Both sensing (i.e. measurement) and actuation (i.e. motion) are subject to uncertainty:

- Sensors are noisy and subject to random failures.
- Actuators can be imprecise and subject to sporadic failures.
- Automating robots requires embedding them with their own models of the world, or of the subset of the world they interact with. These models often simplify the world in ways that are not realistic (e.g. assuming a flat surface when in reality there are intermittent bumps and imperfections).

This uncertain behaviour can be modelled in our actuation with the following motion model
$$x_{t+1}=f(x_{t},u_{t},w_{t}), \quad w_{t}\overset{iid}{\sim}\mu,$$
where $x_{t+1}$ represents the state of our robot (e.g. pose, arm configuration, etc.). We also model our measurements in a similarly noisy fashion with what is called the measurement model
$$y_{t}= h(x_{t},v_{t}), \quad v_{t}\overset{iid}{\sim}\nu.$$
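As a concrete illustration of these two models, here is a minimal sketch that simulates one step of a noisy unicycle-style robot with a range-bearing measurement to a landmark at the origin. The particular choice of $f$, $h$, and the Gaussian noise covariances are assumptions for the example, not part of the general formulation above.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed zero-mean Gaussian noise distributions mu and nu.
Q = np.diag([0.02, 0.02, 0.01])   # motion noise covariance (x, y, heading)
R = np.diag([0.10, 0.05])         # measurement noise covariance (range, bearing)

def f(x, u, w):
    """Motion model x_{t+1} = f(x_t, u_t, w_t): unicycle with velocity/turn-rate input."""
    v, omega = u
    px, py, th = x
    dt = 0.1
    return np.array([px + v * np.cos(th) * dt,
                     py + v * np.sin(th) * dt,
                     th + omega * dt]) + w

def h(x, v):
    """Measurement model y_t = h(x_t, v_t): range and bearing to a landmark at the origin."""
    px, py, th = x
    dist = np.hypot(px, py)
    bearing = np.arctan2(-py, -px) - th
    return np.array([dist, bearing]) + v

# Simulate one step of the noisy dynamics.
x = np.array([1.0, 0.0, 0.0])
u = (1.0, 0.1)
w = rng.multivariate_normal(np.zeros(3), Q)   # w_t ~ mu
v = rng.multivariate_normal(np.zeros(2), R)   # v_t ~ nu
x_next = f(x, u, w)
y = h(x_next, v)
print(x_next, y)
```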

# Localization & Mapping

## Localization

In most of the aforementioned applications, robots are required to have an accurate representation of their state in order to function properly. For example, a household robotic assistant needs to know where it is relative to the house it resides in to navigate coherently. A self-driving car needs to know where it is in order to formulate the plan necessary to reach its programmed destination. This is called the “localization” problem.

Localization is akin to your standard state estimation problem. We have the following POMDP: $(\mathbb{X},\mathbb{U}, \mathbb{Y},\mathcal{T}, Q,c)$, defined with the dynamics given above (the motion and measurement models), where $\mathbb{E}\mu=0$ and $\mathbb{E}\nu=0$.

The goal is formulated either as:

1. a Bayesian inference problem, where the goal is to compute
$$\pi_{t+1}:=\mathbb{P}(x_{t+1}\mid y_{0:t},u_{0:t}),$$
or alternatively,
2. a Maximum A Posteriori (MAP) problem, where we would like to determine an estimate $\hat{x}_{t+1}$ such that
$$\hat{x}_{t+1}:=\underset{x\in \mathbb{X}}{\arg\max} \ \mathbb{P}(x_{t+1}=x\mid y_{0:t+1},u_{0:t}).$$

## Mapping

Another important problem in robotics is the mapping problem. Often, robots have no pre-existing knowledge of the environment they are to interact with, and hence constructing a map is one way of making sense of that environment in order to achieve whatever goal they may have. In other cases, a robot may already have some prior (like an old map) that needs to be updated. A good example of this in action is an autonomous vehicle mapping a mine over time: as a byproduct of mining, the mine will expand and be subject to changes that need to be incorporated over time.

Mapping is formulated as a Bayesian inference problem where we’re often dealing with similar dynamics as above, but in the strict mapping case we assume knowledge of the state. We define a map as a list of objects in the environment along with some associated properties:
$$\mathbf{m}=\{ m_{1},\dots, m_{N} \}.$$
Maps are indexed or represented in two main canonical forms:

1. Feature-based.
2. Location-based.

### Feature-based maps

In a feature-based map, the index ranges over features (landmarks), and each $m_{n}$ stores the location and properties of one feature.
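As a minimal sketch of what such a map might look like in code, the snippet below represents $\mathbf{m}$ as a list of landmark records; the specific `Landmark` fields (position plus a descriptor) are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class Landmark:
    """One map element m_n: a feature with a location and associated properties."""
    x: float
    y: float
    signature: str  # e.g. a descriptor or semantic label (assumed)

# A feature-based map m = {m_1, ..., m_N}: indexed by feature.
feature_map = [
    Landmark(x=2.0, y=1.5, signature="tree"),
    Landmark(x=-0.5, y=3.2, signature="doorway"),
    Landmark(x=4.1, y=-2.0, signature="pillar"),
]
```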

### Location-based maps

In a location-based map, the index corresponds to a location, so each $m_{n}$ describes what occupies that particular place (e.g. a cell of an occupancy grid).
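A minimal sketch of a location-based map as an occupancy grid follows; the grid resolution and the log-odds increment used for the update are illustrative assumptions.

```python
import numpy as np

# A location-based map: each cell index corresponds to a fixed location in the world.
resolution = 0.1                      # metres per cell (assumed)
grid = np.zeros((100, 100))           # log-odds of occupancy, initialised to "unknown" (0)

def world_to_cell(px, py):
    """Map a world coordinate to its grid cell index."""
    return int(px / resolution), int(py / resolution)

# Mark a sensed obstacle at (2.0 m, 1.5 m) as more likely occupied.
i, j = world_to_cell(2.0, 1.5)
grid[i, j] += 0.85                    # illustrative log-odds increment

# Convert log-odds back to an occupancy probability for inspection.
occupancy_prob = 1.0 - 1.0 / (1.0 + np.exp(grid[i, j]))
print(f"P(occupied) ~ {occupancy_prob:.2f}")
```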

# Problem definition

SLAM, as its name suggests, is the problem of localization and mapping, simultaneously. It is regarded as a chicken-or-egg problem due to the apparent interdependency between the two tasks: in order to construct an accurate map, the robot must know where it is located; conversely, in order to localize itself effectively, it must have an accurate map of its surroundings.

# Chronological account of SLAM methods

SLAM has three primary approaches: (Extended) Kalman filtering, particle filtering, and graph-based optimization (Graph SLAM).

## Kalman Filter

Recall that the Kalman filter is the optimal linear estimator for linear-Gaussian systems. Not only is it optimal in these cases, it is also powerful in the sense that it has easily computable closed-form update equations. In practice, though, many system models have nonlinearities that cannot be ignored, and hence the EKF takes the machinery developed for the KF and “extends” it to the nonlinear case.
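As a reminder of that closed form, here is a minimal sketch of one KF predict/update cycle for a linear-Gaussian system $x_{t+1}=Ax_t+Bu_t+w_t$, $y_t=Cx_t+v_t$; the constant-velocity matrices and noise covariances below are illustrative assumptions.

```python
import numpy as np

def kf_step(x, P, u, y, A, B, C, Q, R):
    """One Kalman filter cycle: predict with the motion model, correct with the measurement."""
    # Predict.
    x_pred = A @ x + B @ u
    P_pred = A @ P @ A.T + Q
    # Update.
    S = C @ P_pred @ C.T + R                 # innovation covariance
    K = P_pred @ C.T @ np.linalg.inv(S)      # Kalman gain
    x_new = x_pred + K @ (y - C @ x_pred)
    P_new = (np.eye(len(x)) - K @ C) @ P_pred
    return x_new, P_new

# Illustrative 1D constant-velocity example with state (position, velocity).
dt = 0.1
A = np.array([[1.0, dt], [0.0, 1.0]])
B = np.array([[0.0], [dt]])
C = np.array([[1.0, 0.0]])                   # only position is measured
Q = 0.01 * np.eye(2)
R = np.array([[0.1]])

x, P = np.zeros(2), np.eye(2)
x, P = kf_step(x, P, u=np.array([1.0]), y=np.array([0.05]), A=A, B=B, C=C, Q=Q, R=R)
print(x, P)
```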

### EKF

Given the previous dynamics, we linearize the motion and observation models about the current state estimate mean:
$$\begin{align*} f(x_{t},u_{t},w_{t})&\approx \check{x}_{t}+\mathbf{F}_{t}(x_{t}-\hat{x}_{t})+w'_{t}\\ h(x_{t},v_{t})&\approx \check{y}_{t}+\mathbf{H}_{t}(x_{t}-\check{x}_{t})+v'_{t} \end{align*}$$
where $\mathbf{F}_{t}$ and $\mathbf{H}_{t}$ are the Jacobians of $f$ and $h$ with respect to the state, evaluated at the operating points, and $w'_{t}$, $v'_{t}$ are the corresponding linearized noise terms.
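The sketch below shows how those Jacobians slot into one EKF predict/update cycle. The specific $f$, $h$, Jacobians, and noise covariances are toy assumptions chosen only so the example runs end to end.

```python
import numpy as np

def ekf_step(x, P, u, y, f, h, F_jac, H_jac, Q, R):
    """One EKF cycle: nonlinear predict, then correct using the Jacobians F_t and H_t."""
    # Predict with the nonlinear motion model; propagate covariance with F_t.
    x_pred = f(x, u)
    F = F_jac(x, u)
    P_pred = F @ P @ F.T + Q
    # Update with the nonlinear measurement model, linearized via H_t.
    H = H_jac(x_pred)
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    x_new = x_pred + K @ (y - h(x_pred))
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new

# Toy example: 1D state with additive motion and a nonlinear (squared-distance) measurement.
f = lambda x, u: x + u
h = lambda x: np.array([x[0] ** 2])
F_jac = lambda x, u: np.eye(1)
H_jac = lambda x: np.array([[2 * x[0]]])

x, P = np.array([1.0]), np.eye(1)
Q, R = 0.01 * np.eye(1), 0.1 * np.eye(1)
x, P = ekf_step(x, P, u=np.array([0.5]), y=np.array([2.4]), f=f, h=h,
                F_jac=F_jac, H_jac=H_jac, Q=Q, R=R)
print(x, P)
```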

## Particle Filter

## Graph SLAM

# Current state of the art (SOA)

# Future directions