Probabilistic Robotics - Dalhousie University

Markov Localization & Bayes Filtering
with Kalman Filters, Discrete Filters, and Particle Filters
Slides adapted from Thrun et al., Probabilistic Robotics

Markov Localization

The robot does not know where it is. Thus, a reasonable initial belief about its position is a uniform distribution.

Markov Localization

A sensor reading is made (USE SENSOR MODEL), indicating a door at certain locations (USE MAP). This sensor reading should be integrated with the prior belief to update our belief (USE BAYES).

Markov Localization

The robot moves (USE MOTION MODEL), which adds noise.

Markov Localization

A new sensor reading (USE SENSOR MODEL) indicates a door at certain locations (USE MAP). This reading is again integrated with the prior belief to update our belief (USE BAYES).

Markov Localization

The robot moves (USE MOTION MODEL), which adds noise.

Bayes Formula

P(x, y) = P(x | y) P(y) = P(y | x) P(x)

⇒  P(x | y) = P(y | x) P(x) / P(y) = likelihood · prior / evidence

Bayes Rule with Background Knowledge

P(x | y, z) = P(y | x, z) P(x | z) / P(y | z)

Normalization

P(x | y) = η P(y | x) P(x),  with  η = 1 / P(y) = 1 / Σ_x P(y | x) P(x)

Algorithm:
  ∀x: aux_{x|y} = P(y | x) P(x)
  η = 1 / Σ_x aux_{x|y}
  ∀x: P(x | y) = η aux_{x|y}
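The normalization algorithm maps directly onto a few lines of Python. This is a minimal sketch; the prior and likelihood values below are made-up illustrations, not numbers from the slides:

```python
# Bayes rule with explicit normalization: P(x|y) = eta * P(y|x) * P(x).
# Illustrative values for a door-detection event y, not from the slides.

prior = {"at_door": 0.5, "not_at_door": 0.5}        # P(x)
likelihood = {"at_door": 0.6, "not_at_door": 0.2}   # P(y | x)

aux = {x: likelihood[x] * prior[x] for x in prior}  # aux_{x|y} = P(y|x) P(x)
eta = 1.0 / sum(aux.values())                       # eta = 1 / sum_x aux_{x|y}
posterior = {x: eta * aux[x] for x in aux}          # P(x|y) = eta * aux_{x|y}

print(posterior)  # posterior mass shifts toward "at_door"
```

Note that η is computed once from the unnormalized values, exactly as in the two-pass algorithm above.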

Recursive Bayesian Updating

P(x | z_1, …, z_n) = P(z_n | x, z_1, …, z_{n-1}) P(x | z_1, …, z_{n-1}) / P(z_n | z_1, …, z_{n-1})

Markov assumption: z_n is independent of z_1, …, z_{n-1} given x:

P(x | z_1, …, z_n) = P(z_n | x) P(x | z_1, …, z_{n-1}) / P(z_n | z_1, …, z_{n-1})
                   = η P(z_n | x) P(x | z_1, …, z_{n-1})
                   = η_{1…n} [ ∏_{i=1…n} P(z_i | x) ] P(x)

Putting observations and actions together: Bayes Filters

Given:

- Stream of observations z and action data u:  d_t = {u_1, z_1, …, u_t, z_t}
- Sensor model P(z | x)
- Action model P(x | u, x')
- Prior probability of the system state P(x)

Wanted:
- Estimate of the state x of the dynamical system.
- The posterior of the state is also called the belief:

Bel(x_t) = P(x_t | u_1, z_1, …, u_t, z_t)

Graphical Representation and Markov Assumption

p(z_t | x_{0:t}, z_{1:t}, u_{1:t}) = p(z_t | x_t)
p(x_t | x_{1:t-1}, z_{1:t}, u_{1:t}) = p(x_t | x_{t-1}, u_t)

Underlying assumptions:
- Static world
- Independent noise
- Perfect model, no approximation errors

Bayes Filters

z = observation, u = action, x = state

Bel(x_t) = P(x_t | u_1, z_1, …, u_t, z_t)
  (Bayes)       = η P(z_t | x_t, u_1, z_1, …, u_t) P(x_t | u_1, z_1, …, u_t)
  (Markov)      = η P(z_t | x_t) P(x_t | u_1, z_1, …, u_t)
  (Total prob.) = η P(z_t | x_t) ∫ P(x_t | u_1, z_1, …, u_t, x_{t-1}) P(x_{t-1} | u_1, z_1, …, u_t) dx_{t-1}
  (Markov)      = η P(z_t | x_t) ∫ P(x_t | u_t, x_{t-1}) P(x_{t-1} | u_1, z_1, …, u_t) dx_{t-1}
  (Markov)      = η P(z_t | x_t) ∫ P(x_t | u_t, x_{t-1}) P(x_{t-1} | u_1, z_1, …, z_{t-1}) dx_{t-1}
                = η P(z_t | x_t) ∫ P(x_t | u_t, x_{t-1}) Bel(x_{t-1}) dx_{t-1}

Prediction:  Bel⁻(x_t) = ∫ p(x_t | u_t, x_{t-1}) Bel(x_{t-1}) dx_{t-1}
Correction:  Bel(x_t) = η p(z_t | x_t) Bel⁻(x_t)

Bayes Filter:  Bel(x_t) = η P(z_t | x_t) ∫ P(x_t | u_t, x_{t-1}) Bel(x_{t-1}) dx_{t-1}

Bayes Filter Algorithm

Algorithm Bayes_filter( Bel(x), d ):
1.   η = 0
2.   If d is a perceptual data item z then
3.       For all x do
4.           Bel'(x) = P(z | x) Bel(x)
5.           η = η + Bel'(x)
6.       For all x do
7.           Bel'(x) = η⁻¹ Bel'(x)
8.   Else if d is an action data item u then
9.       For all x do
10.          Bel'(x) = ∫ P(x | u, x') Bel(x') dx'
11.  Return Bel'(x)
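The algorithm above translates almost line for line into Python for a discrete state space. The sketch below uses a hypothetical three-cell world with a door in cell 0; all models and numbers are illustrative assumptions, not from the slides:

```python
def bayes_filter(bel, d, kind, sensor_model=None, motion_model=None):
    """One Bayes filter update over a discrete state space.

    bel: dict state -> probability.
    kind: "z" for a perceptual data item, "u" for an action data item.
    sensor_model(z, x) returns P(z | x); motion_model(u, x, xp) returns P(x | u, xp).
    """
    if kind == "z":                                   # correction step
        new_bel = {x: sensor_model(d, x) * bel[x] for x in bel}
        eta = sum(new_bel.values())
        return {x: p / eta for x, p in new_bel.items()}
    else:                                             # prediction step
        return {x: sum(motion_model(d, x, xp) * bel[xp] for xp in bel)
                for x in bel}

# Illustrative world: 3 cells, a door in cell 0, robot tries to move right.
states = [0, 1, 2]
bel = {x: 1 / 3 for x in states}                      # uniform prior
door_at = {0}

def sensor(z, x):                                     # P(z | x)
    hit = x in door_at
    return (0.8 if hit else 0.1) if z == "door" else (0.2 if hit else 0.9)

def motion(u, x, xp):                                 # P(x | u="right", xp)
    nxt = min(xp + 1, 2)                              # wall at cell 2
    if nxt == xp:
        return 1.0 if x == xp else 0.0
    return 0.8 if x == nxt else (0.2 if x == xp else 0.0)

bel = bayes_filter(bel, "door", "z", sensor_model=sensor)   # see a door
bel = bayes_filter(bel, "right", "u", motion_model=motion)  # move right
```

After the perceptual update the belief concentrates on cell 0; the action update then shifts (and blurs) it toward cell 1, exactly the predict/correct alternation of the algorithm.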

Bayes Filters are Familiar!

Bel(x_t) = η P(z_t | x_t) ∫ P(x_t | u_t, x_{t-1}) Bel(x_{t-1}) dx_{t-1}

- Kalman filters
- Particle filters
- Hidden Markov models
- Dynamic Bayesian networks
- Partially Observable Markov Decision Processes (POMDPs)

Probabilistic Robotics
Bayes Filter Implementations: Gaussian filters

Gaussians

Univariate:  p(x) ~ N(μ, σ²):

p(x) = (1 / (√(2π) σ)) exp( -(x - μ)² / (2σ²) )

Linear transform of Gaussians:

X ~ N(μ, σ²),  Y = aX + b   ⇒   Y ~ N(aμ + b, a²σ²)

Multivariate Gaussians
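The linear-transform rule can be verified numerically with a quick Monte Carlo check (the parameter values here are arbitrary illustrations):

```python
import numpy as np

rng = np.random.default_rng(0)
mu, sigma = 2.0, 3.0
a, b = 0.5, 1.0

x = rng.normal(mu, sigma, size=1_000_000)   # X ~ N(mu, sigma^2)
y = a * x + b                               # Y = a X + b

# Theory: Y ~ N(a*mu + b, a^2 * sigma^2) = N(2.0, 2.25)
print(y.mean(), a * mu + b)
print(y.var(), a ** 2 * sigma ** 2)
```

The empirical mean and variance of y match aμ + b and a²σ² up to Monte Carlo error.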

X ~ N(μ, Σ),  Y = AX + B   ⇒   Y ~ N(Aμ + B, AΣAᵀ)

Product of Gaussians:

X₁ ~ N(μ₁, σ₁²),  X₂ ~ N(μ₂, σ₂²)   ⇒

p(X₁) · p(X₂) ~ N( (σ₂² / (σ₁² + σ₂²)) μ₁ + (σ₁² / (σ₁² + σ₂²)) μ₂ ,  (σ₁⁻² + σ₂⁻²)⁻¹ )

We stay in the Gaussian world as long as we start with Gaussians and perform only linear transformations.

Discrete Kalman Filter

Estimates the state x of a discrete-time controlled process that is governed by the linear stochastic difference equation

x_t = A_t x_{t-1} + B_t u_t + ε_t

with a measurement

z_t = C_t x_t + δ_t

Linear Gaussian Systems: Initialization

Initial belief is normally distributed:

bel(x₀) = N(x₀; μ₀, Σ₀)

Linear Gaussian Systems: Dynamics

Dynamics are a linear function of state and control plus additive noise:

x_t = A_t x_{t-1} + B_t u_t + ε_t
p(x_t | u_t, x_{t-1}) = N(x_t; A_t x_{t-1} + B_t u_t, R_t)

bel⁻(x_t) = ∫ p(x_t | u_t, x_{t-1}) · bel(x_{t-1}) dx_{t-1}
          = ∫ N(x_t; A_t x_{t-1} + B_t u_t, R_t) · N(x_{t-1}; μ_{t-1}, Σ_{t-1}) dx_{t-1}

Linear Gaussian Systems: Observations

Observations are a linear function of the state plus additive noise:

z_t = C_t x_t + δ_t
p(z_t | x_t) = N(z_t; C_t x_t, Q_t)

bel(x_t) = η p(z_t | x_t) bel⁻(x_t)
         = η N(z_t; C_t x_t, Q_t) · N(x_t; μ̄_t, Σ̄_t)

Kalman Filter Algorithm

Algorithm Kalman_filter( μ_{t-1}, Σ_{t-1}, u_t, z_t ):

Prediction:
  μ̄_t = A_t μ_{t-1} + B_t u_t
  Σ̄_t = A_t Σ_{t-1} A_tᵀ + R_t

Correction:
  K_t = Σ̄_t C_tᵀ (C_t Σ̄_t C_tᵀ + Q_t)⁻¹
  μ_t = μ̄_t + K_t (z_t - C_t μ̄_t)
  Σ_t = (I - K_t C_t) Σ̄_t

Return μ_t, Σ_t
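The five update equations translate directly into numpy. The sketch below tracks a hypothetical 1-D constant-velocity target from noisy position readings; the matrices and noise levels are illustrative choices, not from the slides:

```python
import numpy as np

def kalman_filter(mu, Sigma, u, z, A, B, C, R, Q):
    """One Kalman filter step: prediction, then correction."""
    # Prediction
    mu_bar = A @ mu + B @ u
    Sigma_bar = A @ Sigma @ A.T + R
    # Correction
    K = Sigma_bar @ C.T @ np.linalg.inv(C @ Sigma_bar @ C.T + Q)
    mu_new = mu_bar + K @ (z - C @ mu_bar)
    Sigma_new = (np.eye(len(mu)) - K @ C) @ Sigma_bar
    return mu_new, Sigma_new

# Illustrative constant-velocity model, state = [position, velocity], dt = 1.
A = np.array([[1.0, 1.0], [0.0, 1.0]])     # state transition
B = np.zeros((2, 1))                       # no control input
C = np.array([[1.0, 0.0]])                 # we observe position only
R = 0.01 * np.eye(2)                       # process noise covariance
Q = np.array([[0.25]])                     # measurement noise covariance

mu, Sigma = np.zeros(2), 10.0 * np.eye(2)  # vague prior
for z in [1.0, 2.1, 2.9, 4.2]:             # noisy positions of a unit-speed target
    mu, Sigma = kalman_filter(mu, Sigma, np.zeros(1), np.array([z]), A, B, C, R, Q)
```

After a few steps the estimated velocity settles near 1 and the covariance shrinks well below the prior, even though velocity is never observed directly.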

Kalman Filter Summary

- Highly efficient: polynomial in measurement dimensionality k and state dimensionality n: O(k^2.376 + n²)
- Optimal for linear Gaussian systems!
- Most robotics systems are nonlinear!

Nonlinear Dynamic Systems

Most realistic robotic problems involve nonlinear functions:

x_t = g(u_t, x_{t-1})
z_t = h(x_t)

Linearity Assumption Revisited

Non-linear Function

EKF Linearization (1)

EKF Linearization (2)

EKF Linearization (3)

EKF Linearization: First Order Taylor Series Expansion

Prediction:
g(u_t, x_{t-1}) ≈ g(u_t, μ_{t-1}) + (∂g(u_t, μ_{t-1}) / ∂x_{t-1}) (x_{t-1} - μ_{t-1})
g(u_t, x_{t-1}) ≈ g(u_t, μ_{t-1}) + G_t (x_{t-1} - μ_{t-1})

Correction:
h(x_t) ≈ h(μ̄_t) + (∂h(μ̄_t) / ∂x_t) (x_t - μ̄_t)
h(x_t) ≈ h(μ̄_t) + H_t (x_t - μ̄_t)

EKF Algorithm

Algorithm Extended_Kalman_filter( μ_{t-1}, Σ_{t-1}, u_t, z_t ):

Prediction:
  μ̄_t = g(u_t, μ_{t-1})                      (KF: μ̄_t = A_t μ_{t-1} + B_t u_t)
  Σ̄_t = G_t Σ_{t-1} G_tᵀ + R_t               (KF: Σ̄_t = A_t Σ_{t-1} A_tᵀ + R_t)

Correction:
  K_t = Σ̄_t H_tᵀ (H_t Σ̄_t H_tᵀ + Q_t)⁻¹     (KF: K_t = Σ̄_t C_tᵀ (C_t Σ̄_t C_tᵀ + Q_t)⁻¹)
  μ_t = μ̄_t + K_t (z_t - h(μ̄_t))            (KF: μ_t = μ̄_t + K_t (z_t - C_t μ̄_t))
  Σ_t = (I - K_t H_t) Σ̄_t                    (KF: Σ_t = (I - K_t C_t) Σ̄_t)

Return μ_t, Σ_t

with Jacobians  G_t = ∂g(u_t, μ_{t-1}) / ∂x_{t-1}  and  H_t = ∂h(μ̄_t) / ∂x_t
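A minimal EKF sketch for the algorithm above, assuming a hypothetical 2-D robot that measures its range to a landmark at the origin. The models g, h, their Jacobians, and all noise values are illustrative assumptions, not from the slides:

```python
import numpy as np

def ekf_step(mu, Sigma, u, z, g, G, h, H, R, Q):
    """One EKF step: g/h are the nonlinear models, G/H their Jacobians."""
    mu_bar = g(u, mu)                       # prediction through nonlinear motion
    Gt = G(u, mu)
    Sigma_bar = Gt @ Sigma @ Gt.T + R
    Ht = H(mu_bar)                          # linearize measurement at mu_bar
    K = Sigma_bar @ Ht.T @ np.linalg.inv(Ht @ Sigma_bar @ Ht.T + Q)
    mu_new = mu_bar + K @ (z - h(mu_bar))
    Sigma_new = (np.eye(len(mu)) - K @ Ht) @ Sigma_bar
    return mu_new, Sigma_new

# Illustrative models: state = 2-D position, control = a displacement,
# measurement = range to a landmark at the origin.
g = lambda u, x: x + u                                         # motion model
G = lambda u, x: np.eye(2)                                     # motion Jacobian
h = lambda x: np.array([np.hypot(x[0], x[1])])                 # range measurement
H = lambda x: np.array([[x[0], x[1]]]) / np.hypot(x[0], x[1])  # measurement Jacobian

R, Q = 0.01 * np.eye(2), np.array([[0.04]])
mu, Sigma = np.array([3.0, 4.0]), 0.5 * np.eye(2)              # prior near (3, 4)
mu, Sigma = ekf_step(mu, Sigma, np.array([1.0, 0.0]),          # command: move +x by 1
                     np.array([5.7]), g, G, h, H, R, Q)        # noisy range reading
```

Because h is nonlinear in x, the correction uses the Jacobian H_t evaluated at the predicted mean, which is exactly where the EKF can diverge if the linearization is poor.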

Localization

"Using sensory information to locate the robot in its environment is the most fundamental problem to providing a mobile robot with autonomous capabilities." [Cox 91]

Given:
- Map of the environment.
- Sequence of sensor measurements.

Wanted:
- Estimate of the robot's position.

Problem classes:
- Position tracking
- Global localization
- Kidnapped robot problem (recovery)

Landmark-based Localization

EKF Summary

- Highly efficient: polynomial in measurement dimensionality k and state dimensionality n: O(k^2.376 + n²)
- Not optimal!
- Can diverge if nonlinearities are large!
- Works surprisingly well even when all assumptions are violated!

Kalman Filter-based System

[Arras et al. 98]: Laser range-finder and vision; high precision (<1 cm accuracy). [Courtesy of Kai Arras]

Multihypothesis Tracking

Localization With MHT

- Belief is represented by multiple hypotheses.
- Each hypothesis is tracked by a Kalman filter.

Additional problems:
- Data association: which observation corresponds to which hypothesis?
- Hypothesis management: when to add / delete hypotheses?
- Huge body of literature on target tracking, motion correspondence, etc.

MHT: Implemented System (2)

Courtesy of P. Jensfelt and S. Kristensen

Probabilistic Robotics
Bayes Filter Implementations: Discrete filters

Piecewise Constant

Discrete Bayes Filter Algorithm

Algorithm Discrete_Bayes_filter( Bel(x), d ):
1.   η = 0
2.   If d is a perceptual data item z then
3.       For all x do
4.           Bel'(x) = P(z | x) Bel(x)
5.           η = η + Bel'(x)
6.       For all x do
7.           Bel'(x) = η⁻¹ Bel'(x)
8.   Else if d is an action data item u then
9.       For all x do
10.          Bel'(x) = Σ_{x'} P(x | u, x') Bel(x')
11.  Return Bel'(x)

Grid-based Localization

Sonars and Occupancy Grid Map

Probabilistic Robotics
Bayes Filter Implementations: Particle filters

Sample-based Localization (sonar)

Particle Filters

- Represent belief by random samples.
- Estimation of non-Gaussian, nonlinear processes.
- Also known as: Monte Carlo filter, survival of the fittest, condensation, bootstrap filter, particle filter.
- Filtering: [Rubin, 88], [Gordon et al., 93], [Kitagawa 96]
- Computer vision: [Isard and Blake 96, 98]
- Dynamic Bayesian networks: [Kanazawa et al., 95]

Importance Sampling

Weight samples: w = f / g
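The weighting rule w = f / g can be demonstrated directly: draw samples from a proposal g, weight each by f / g, and the weighted average estimates an expectation under the target f. A small sketch with arbitrarily chosen Gaussians (not from the slides):

```python
import math
import random

random.seed(0)

def pdf(x, mu, sigma):
    """Density of N(mu, sigma^2) at x."""
    return math.exp(-(x - mu) ** 2 / (2 * sigma ** 2)) / (math.sqrt(2 * math.pi) * sigma)

# Target f = N(1, 1); proposal g = N(0, 2^2). Estimate E_f[X] via samples from g.
samples = [random.gauss(0.0, 2.0) for _ in range(200_000)]         # drawn from g
weights = [pdf(x, 1.0, 1.0) / pdf(x, 0.0, 2.0) for x in samples]   # w = f / g

estimate = sum(w * x for w, x in zip(weights, samples)) / sum(weights)
print(estimate)   # close to 1.0, the mean of the target f
```

This self-normalized form, dividing by the sum of weights, is exactly what the particle filter's weight-normalization step does.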

Importance Sampling with Resampling: Landmark Detection Example

Particle Filters

Sensor Information: Importance Sampling

Bel(x) ← η p(z | x) Bel⁻(x)

w ← η p(z | x) Bel⁻(x) / Bel⁻(x) = η p(z | x)

Robot Motion

Bel⁻(x) ← ∫ p(x | u, x') Bel(x') dx'

Particle Filter Algorithm

1.   Algorithm particle_filter( S_{t-1}, u_{t-1}, z_t ):
2.       S_t = ∅,  η = 0
3.       For i = 1 … n                                (generate new samples)
4.           Sample index j(i) from the discrete distribution given by w_{t-1}
5.           Sample x_tⁱ from p(x_t | x_{t-1}, u_{t-1}) using x_{t-1}^{j(i)} and u_{t-1}
6.           w_tⁱ = p(z_t | x_tⁱ)                     (compute importance weight)
7.           η = η + w_tⁱ                             (update normalization factor)
8.           S_t = S_t ∪ { ⟨x_tⁱ, w_tⁱ⟩ }             (insert)
9.       For i = 1 … n
10.          w_tⁱ = w_tⁱ / η                          (normalize weights)

Particle Filter Algorithm

Bel(x_t) = η p(z_t | x_t) ∫ p(x_t | x_{t-1}, u_{t-1}) Bel(x_{t-1}) dx_{t-1}

- draw x_{t-1}ⁱ from Bel(x_{t-1})
- draw x_tⁱ from p(x_t | x_{t-1}ⁱ, u_{t-1})

Importance factor for x_tⁱ:

w_tⁱ = target distribution / proposal distribution
     = η p(z_t | x_t) p(x_t | x_{t-1}, u_{t-1}) Bel(x_{t-1}) / [ p(x_t | x_{t-1}, u_{t-1}) Bel(x_{t-1}) ]
     ∝ p(z_t | x_t)
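Put together, the particle filter algorithm can be sketched for a hypothetical 1-D robot. The Gaussian motion noise, the Gaussian range likelihood, and all numbers below are illustrative assumptions, not from the slides:

```python
import math
import random

random.seed(1)

def particle_filter(particles, u, z):
    """One particle filter update: resample, propagate, reweight, normalize."""
    n = len(particles)
    xs, ws = zip(*particles)
    # 1. Sample indices j(i) from the discrete distribution given by the old weights.
    new_xs = random.choices(xs, weights=ws, k=n)
    # 2. Sample x_t^i from p(x_t | x_{t-1}, u): move by u with Gaussian noise.
    new_xs = [x + u + random.gauss(0.0, 0.1) for x in new_xs]
    # 3. Importance weight w_t^i = p(z_t | x_t^i): Gaussian likelihood around z.
    new_ws = [math.exp(-(z - x) ** 2 / (2 * 0.5 ** 2)) for x in new_xs]
    # 4. Normalize the weights.
    eta = sum(new_ws)
    return [(x, w / eta) for x, w in zip(new_xs, new_ws)]

# Uniform initial belief over [0, 10]; robot moves +1 per step and senses position.
particles = [(random.uniform(0, 10), 1.0 / 500) for _ in range(500)]
for z in [3.0, 4.0, 5.1]:                  # noisy position readings
    particles = particle_filter(particles, u=1.0, z=z)

mean = sum(x * w for x, w in particles)
print(mean)   # the belief concentrates near the last reading
```

Note that the weight is just p(z_t | x_t), exactly the importance factor derived above, because the samples were proposed from the motion model.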

Motion Model Reminder

Proximity Sensor Model Reminder: laser sensor, sonar sensor

Initial Distribution

After Incorporating Ten Ultrasound Scans

After Incorporating 65 Ultrasound Scans

Estimated Path

Localization for AIBO robots

Limitations

The approach described so far is able to track the pose of a mobile robot and to globally localize the robot. But how can we deal with localization errors (i.e., the kidnapped robot problem)?

Approaches

- Randomly insert samples (the robot can be teleported at any point in time).
- Insert random samples proportional to the average likelihood of the particles (the robot has been teleported with higher probability when the likelihood of its observations drops).

Global Localization

Kidnapping the Robot

Summary

- Particle filters are an implementation of recursive Bayesian filtering.
- They represent the posterior by a set of weighted samples.
- In the context of localization, the particles are propagated according to the motion model.
- They are then weighted according to the likelihood of the observations.
- In a re-sampling step, new particles are drawn with a probability proportional to the likelihood of the observation.
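The re-sampling step in the summary is commonly implemented as low-variance (systematic) resampling: all n particles are drawn from a single random offset with evenly spaced pointers, which introduces less sampling noise than n independent draws. This refinement is not on the slides; a minimal sketch:

```python
import random

def low_variance_resample(particles):
    """Systematic (low-variance) resampling.

    particles: list of (state, weight) pairs with weights summing to 1.
    Returns len(particles) states, each selected with probability
    proportional to its weight, using only one random number.
    """
    n = len(particles)
    r = random.uniform(0.0, 1.0 / n)        # single random offset
    c = particles[0][1]                     # running cumulative weight
    i = 0
    out = []
    for m in range(n):
        u = r + m / n                       # evenly spaced pointers
        while u > c and i < n - 1:          # guard i against float round-off
            i += 1
            c += particles[i][1]
        out.append(particles[i][0])
    return out

# Example: a particle holding 70% of the weight dominates the output.
random.seed(0)
parts = [("a", 0.7), ("b", 0.2), ("c", 0.1)]
sample = low_variance_resample(parts)
```

Because the pointers are deterministic given the single offset, a particle with weight w is selected either ⌊nw⌋ or ⌈nw⌉ times, so the resampled set tracks the weight distribution much more tightly than independent draws.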
