🌱
We introduce a novel formulation of motion planning, for continuous-time trajectories, as probabilistic inference.

1. We first show how smooth continuous-time trajectories can be represented by a small number of states using sparse Gaussian process (GP) models.
2. We next develop an efficient gradient-based optimization algorithm that exploits this sparsity and GP interpolation. We call this algorithm the Gaussian Process Motion Planner (GPMP).
3. We then detail how motion planning problems can be formulated as probabilistic inference on a factor graph. This forms the basis for GPMP2, a very efficient algorithm that combines GP representations of trajectories with fast, structure-exploiting inference via numerical optimization.
4. Finally, we extend GPMP2 to an incremental algorithm, iGPMP2, that can efficiently replan when conditions change.

We benchmark our algorithms against several sampling-based and trajectory optimization-based motion planning algorithms on planning problems in multiple environments. Our evaluation reveals that GPMP2 is several times faster than previous algorithms while retaining robustness. We also benchmark iGPMP2 on replanning problems and show that it can find successful solutions in a fraction of the time required by GPMP2 to replan from scratch.

# Related Work

## Sampling-based algorithms

These can effectively find feasible trajectories for high-dimensional systems, but the trajectories often exhibit jerky and redundant motion and therefore require post-processing to address optimality.
## Trajectory optimization algorithms

These minimize an objective function that encourages trajectories to be both feasible and optimal. A drawback is that, in practice, a fine discretization of the trajectory is necessary to integrate cost information when reasoning about thin obstacles and tight constraints.
In addition, these algorithms are only locally optimal and hence may require different starting and ending points (perturbations to the initial conditions) across repeated runs before a feasible solution is found, which can incur a high computational cost.
The latter problem can be circumvented by using a feasible solution found by a sampling-based algorithm as the initial trajectory.
In contrast to sampling-based planners, trajectory optimization starts with an initial, possibly infeasible, trajectory and then optimizes the trajectory by minimizing a cost function.

- Covariant Hamiltonian optimization for motion planning (CHOMP) and related methods (Byravan et al., 2014; He et al., 2013; Marinho et al., 2016; Ratliff et al., 2009; Zucker et al., 2013) optimize a cost functional using covariant gradient descent.
- Stochastic trajectory optimization for motion planning (STOMP) (Kalakrishnan et al., 2011) optimizes non-differentiable costs by stochastic sampling of noisy trajectories.
- TrajOpt (Schulman et al., 2014, 2013) solves a sequential quadratic program and performs convex continuous-time collision checking.

Compared to sampling-based planners, trajectory optimization methods are very fast, but they only find locally optimal solutions. The computational bottleneck results from evaluating costs on a fine discretization of the trajectory or, in difficult problems, from repeatedly changing the initial conditions until a feasible trajectory is discovered.
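To make the "start with an initial trajectory and descend on a cost" idea concrete, here is a minimal toy sketch in NumPy. The smoothness term, the single circular obstacle, and all constants are hypothetical illustrations, not any specific planner's objective (CHOMP's actual functional uses a covariant gradient and a workspace obstacle field).

```python
import numpy as np

# Toy trajectory optimization in 2D: start from a straight-line initial
# trajectory and run plain gradient descent on a smoothness cost plus a
# simple repulsive obstacle cost. Illustrative only.

N = 50                                   # number of waypoints
start, goal = np.array([0.0, 0.0]), np.array([1.0, 0.0])
traj = np.linspace(start, goal, N)       # initial (possibly infeasible) trajectory

obstacle = np.array([0.5, 0.05])         # hypothetical obstacle center
radius = 0.2                             # obstacle influence radius

def cost_grad(traj):
    grad = np.zeros_like(traj)
    # Smoothness term: gradient of sum of squared finite differences.
    grad[1:-1] += 2 * traj[1:-1] - traj[:-2] - traj[2:]
    # Obstacle term: push waypoints radially away while inside the radius.
    diff = traj - obstacle
    dist = np.linalg.norm(diff, axis=1, keepdims=True)
    inside = (dist < radius).astype(float)
    grad += -0.05 * inside * diff / np.maximum(dist, 1e-9)
    grad[0] = grad[-1] = 0.0             # keep endpoints fixed
    return grad

for _ in range(500):
    traj -= 0.2 * cost_grad(traj)

# Minimum distance from the optimized trajectory to the obstacle center.
clearance = np.linalg.norm(traj - obstacle, axis=1).min()
```

After optimization the trajectory bends around the obstacle while the fixed endpoints and the smoothness term keep it short and smooth; this also illustrates why a fine discretization is needed, since a thin obstacle could slip between coarse waypoints.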
Our goal is to find trajectories $\boldsymbol{\theta}(t) : t \mapsto \mathbb{R}^D$, where $D$ denotes the dimension of the state, that

1. satisfy our constraints (are **feasible**), and
2. minimize the cost (are **optimal**).

We defined motion planning in Motion-Planning Algorithm.
We take continuous-time trajectories as samples from a vector-valued GP,

$$\boldsymbol{\theta}(t) \sim \mathcal{GP}\big(\boldsymbol{\mu}(t),\, \mathbf{K}(t, t')\big),$$

where

- $\boldsymbol{\mu}(t)$ is a vector-valued mean function, and
- $\mathbf{K}(t, t')$ is a matrix-valued covariance function.

A vector-valued GP is a collection of random variables, any finite number of which have a joint Gaussian probability density.
For any collection of times $\mathbf{t} = \{t_0, \dots, t_N\}$, the trajectory has a joint Gaussian distribution

$$\boldsymbol{\theta} \sim \mathcal{N}(\boldsymbol{\mu}, \mathbf{K}),$$

with mean vector $\boldsymbol{\mu}$ and covariance matrix $\mathbf{K}$ defined as

$$\boldsymbol{\mu} = \big[\boldsymbol{\mu}(t_0), \dots, \boldsymbol{\mu}(t_N)\big]^\top, \qquad \mathbf{K}_{ij} = \mathbf{K}(t_i, t_j).$$

We use the bold $\boldsymbol{\theta}$ to denote the matrix formed by the vectors $\boldsymbol{\theta}_i = \boldsymbol{\theta}(t_i)$, which are support states that parameterize the continuous-time trajectory $\boldsymbol{\theta}(t)$.
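The construction above can be sketched directly: evaluate the mean and kernel at a finite set of support times, stack them into a joint mean vector and block covariance matrix, and sample the support states jointly. The squared-exponential kernel and zero mean here are placeholder choices for illustration, not the structured prior developed below.

```python
import numpy as np

# A vector-valued GP evaluated at finitely many support times yields a
# joint Gaussian over the stacked support states. Kernel/mean choices
# are illustrative only.

D = 2                                    # state dimension
times = np.linspace(0.0, 1.0, 5)         # support times t_0 .. t_N
N = len(times)

def mu(t):
    return np.zeros(D)                   # zero mean function

def K(t, s):
    # Matrix-valued covariance: scalar SE kernel times the identity.
    return np.exp(-0.5 * (t - s) ** 2) * np.eye(D)

# Stack into the joint mean vector and block covariance matrix.
mean = np.concatenate([mu(t) for t in times])
cov = np.block([[K(t, s) for s in times] for t in times])

# One joint sample: N support states, each D-dimensional.
rng = np.random.default_rng(0)
sample = rng.multivariate_normal(mean, cov).reshape(N, D)
```

Because the support states are jointly Gaussian, the state at any other time can later be recovered in closed form by GP interpolation, which is what lets a small number of support states represent a continuous-time trajectory.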
We use a structured kernel generated by a linear time-varying stochastic differential equation (LTV-SDE)

$$\dot{\boldsymbol{\theta}}(t) = \mathbf{A}(t)\,\boldsymbol{\theta}(t) + \mathbf{u}(t) + \mathbf{F}(t)\,\mathbf{w}(t),$$

where $\mathbf{u}(t)$ is our control input and $\mathbf{w}(t)$ is a white noise process, i.e.

$$\mathbf{w}(t) \sim \mathcal{GP}\big(\mathbf{0},\; \mathbf{Q}_C\,\delta(t - t')\big).$$

Here $\mathbf{Q}_C$ is the power-spectral density matrix and $\delta(t - t')$ is the Dirac delta function. The solution to the initial value problem of this LTV-SDE is

$$\boldsymbol{\theta}(t) = \boldsymbol{\Phi}(t, t_0)\,\boldsymbol{\theta}_0 + \int_{t_0}^{t} \boldsymbol{\Phi}(t, s)\,\big(\mathbf{u}(s) + \mathbf{F}(s)\,\mathbf{w}(s)\big)\, ds,$$

where $\boldsymbol{\Phi}(t, s)$ is the state transition matrix. The mean and covariance functions of the GP defined by this LTV-SDE are calculated by taking first and second moments:

$$\boldsymbol{\mu}(t) = \boldsymbol{\Phi}(t, t_0)\,\boldsymbol{\mu}_0 + \int_{t_0}^{t} \boldsymbol{\Phi}(t, s)\,\mathbf{u}(s)\, ds,$$

$$\mathbf{K}(t, t') = \boldsymbol{\Phi}(t, t_0)\,\mathbf{K}_0\,\boldsymbol{\Phi}(t', t_0)^\top + \int_{t_0}^{\min(t, t')} \boldsymbol{\Phi}(t, s)\,\mathbf{F}(s)\,\mathbf{Q}_C\,\mathbf{F}(s)^\top\,\boldsymbol{\Phi}(t', s)^\top\, ds,$$

where $\boldsymbol{\mu}_0$ and $\mathbf{K}_0$ are the initial mean and covariance of the start state.
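As a concrete instance, the widely used constant-velocity (white-noise-on-acceleration) prior takes the state to be stacked position and velocity, with $\mathbf{A} = \begin{bmatrix} \mathbf{0} & \mathbf{I} \\ \mathbf{0} & \mathbf{0} \end{bmatrix}$, $\mathbf{F} = \begin{bmatrix} \mathbf{0} \\ \mathbf{I} \end{bmatrix}$, and $\mathbf{u} = \mathbf{0}$; the transition matrix and the noise covariance accumulated over an interval then have simple closed forms. The sketch below assumes this model and an identity $\mathbf{Q}_C$ for illustration.

```python
import numpy as np

# Constant-velocity (white-noise-on-acceleration) instance of the LTV-SDE
# prior: state = (position, velocity), A = [[0, I], [0, 0]], F = [[0], [I]],
# u = 0. For this system Phi and the accumulated noise covariance are
# available in closed form.

def transition(dt, D):
    """State transition matrix Phi(t + dt, t) for the constant-velocity model."""
    I = np.eye(D)
    return np.block([[I, dt * I], [np.zeros((D, D)), I]])

def process_noise(dt, Qc):
    """Closed form of int_0^dt Phi F Qc F^T Phi^T ds for this model."""
    return np.block([
        [dt ** 3 / 3 * Qc, dt ** 2 / 2 * Qc],
        [dt ** 2 / 2 * Qc, dt * Qc],
    ])

D = 2
Qc = np.eye(D)                  # power-spectral density (assumed identity)
dt = 0.5
Phi = transition(dt, D)
Q = process_noise(dt, Qc)

# Propagate the first and second moments of the start state one step:
# mu(t1) = Phi mu_0,  K(t1, t1) = Phi K_0 Phi^T + Q.
mu0 = np.zeros(2 * D)
K0 = 0.01 * np.eye(2 * D)
mu1 = Phi @ mu0
K1 = Phi @ K0 @ Phi.T + Q
```

Because the white noise enters independently on each interval, the resulting kernel is exactly sparse: the inverse of the joint covariance over the support states is block-tridiagonal, which is the structure the optimization later exploits.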