Basic Sensor Mapping and Modeling


8/7/2019 Basic Sensor Mapping and Modeling

CENG 585: Fundamentals of Autonomous Robotics - Project 2

    Kadir Firat Uyanik

    KOVAN Research Lab.

    Dept. of Computer Eng.

    Middle East Technical Univ.

    Ankara, Turkey

    [email protected]


    Introduction

This report explains the problems that I have come across during the mapping experiment with the robotic platform Kobot. The overall procedure can be given as follows:

    1. Model the sensor characteristic by using the dataset given

2. Calculate the pose of the robot by using the encoder readings, and locate the robot in the world/reference coordinate system

3. Transfer possible object locations represented in the sensor coordinate system to the world coordinate system

4. Update the probabilities of the grids on which objects can be located by using Bayes' rule

    Sensor Modeling

    Figure 1: Average sensor reading for each grid is indicated

Rather than fitting a function that represents a radial distance and span angle for the IR range sensors, I have used the sensor reading dataset to obtain conditional probabilities that are going to be used during the Bayesian update stage. To do this, the algorithm makes use of the following property of probabilistic events:

\[ P(s = s) = \frac{r}{n} \tag{1} \]

where s represents the random variable/event, which can happen in r different ways out of a total of n possible equally-likely ways. This is the probability of a random event, or, as Frequentists argue, the relative frequency of occurrence of an experiment's outcome when the experiment is repeated.

In order to obtain the conditional probabilities of the sensor readings conditioned on the occupancy of the grids in the sensor's observation area, I utilize the property given in Equation 1 and obtain the following:

\[ P(s = s \mid H_{ij}) = \frac{\#\,\text{sensor readings equal to } s \text{ given that the } ij\text{-th grid is occupied}}{\#\,\text{all sensor readings given that the } ij\text{-th grid is occupied}} \tag{2} \]
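In practice, Equation 2 amounts to counting relative frequencies per grid. A minimal sketch in Python, assuming a hypothetical dataset layout (a mapping from grid index to the readings recorded while that grid was occupied); the function and variable names are illustrative, not the report's actual code:

```python
from collections import Counter

def build_sensor_model(readings_per_grid):
    """Estimate P(s = s | H_ij) for each grid via relative frequencies (Eq. 2).

    readings_per_grid maps a grid index (i, j) to the list of sensor
    readings collected while that grid was occupied by an object.
    """
    model = {}
    for grid, readings in readings_per_grid.items():
        counts = Counter(readings)  # r: occurrences of each reading value
        n = len(readings)           # n: all readings recorded for this grid
        model[grid] = {s: r / n for s, r in counts.items()}
    return model

# Toy dataset: one grid observed eight times while occupied.
model = build_sensor_model({(0, 0): [7, 7, 6, 7, 5, 7, 6, 7]})
print(model[(0, 0)][7])  # relative frequency of reading 7: 5/8 = 0.625
```

For each grid the estimated probabilities sum to one, which is what the Bayesian update stage later relies on.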



Equation 2 is used to obtain the probability of each grid producing a specific sensor measurement value when it is occupied by an object. This helps us to model the noise that comes from the electronic instability of the sensor itself or of the other components inside the robot. Besides, we obtain the sensor's observation area by using this method.

    Figure 2: Experimental setup being used during data set collection

Figure 3: From top left to bottom right, P(H_ij | s = s) for s taking the values from 7 to 0. Sub-figures consist of the small grid elements indicated earlier in Figure 1. The brighter a grid, the more probable the corresponding sensor reading value is, given that the grid is occupied by an object.

In Figure 3, the sub-figures show the probability of a grid being occupied given a specific sensor reading s. Please note that the probability values are not normalized in each sub-figure, for the sake of easy visualization. But we know that:

\[ \sum_{s=0}^{s=7} P(H_{ij} \mid s = s) = 1 \tag{3} \]

Hence, it is not possible to have more than one fully bright grid (indicating a probability of one). Please note that the posterior probabilities indicated in Figure 3 are obtained via Bayes' rule, since we already know the sensor probabilities conditioned on the occupancy of any grid by using the method explained in Equation 2. Another thing worth mentioning here is about the bottom-right sub-figure. In



this sub-figure, the sensor reading is given to be zero, and the probability of an arbitrary grid being occupied is investigated. Here, we can see that all the grids outside of the sensor's observation area become white. In other words, if the sensor returns a reading of 0 (= no object observed), the probability of these grids being occupied is higher than that of the ones located in the sensor's observation area. In the real world, such a result makes no sense at all, because we cannot argue about the grids that we are not observing.

    Kinematic Modeling

In this study, we have used the robotic platform Kobot.

Figure 4: A differential-drive robot moves around a point known as the ICC - Instantaneous Center of Curvature

The parameters shown in Figure 4 can be obtained by using the following equations:

\[ V_r = \omega \left( R + \frac{l}{2} \right) \tag{4} \]

\[ V_l = \omega \left( R - \frac{l}{2} \right) \tag{5} \]

where l is the distance between the two wheels, V_l and V_r are the left and right wheel velocities, respectively, R is the signed distance from the ICC, and ω is the rotational velocity around the ICC. The ICC can be found as follows:

\[ ICC = \begin{bmatrix} x - R\sin(\theta) \\ y + R\cos(\theta) \end{bmatrix} \]

By using the equations above, one can find the state transition function as in the following matrix operation:

\[
\begin{bmatrix} x' \\ y' \\ \theta' \end{bmatrix} =
\begin{bmatrix} \cos(\omega \Delta t) & -\sin(\omega \Delta t) & 0 \\ \sin(\omega \Delta t) & \cos(\omega \Delta t) & 0 \\ 0 & 0 & 1 \end{bmatrix}
\begin{bmatrix} x - ICC_x \\ y - ICC_y \\ \theta \end{bmatrix} +
\begin{bmatrix} ICC_x \\ ICC_y \\ \omega \Delta t \end{bmatrix}
\]

But this equation does not hold if the wheel velocities are equal, or equal in magnitude but opposite in sign. In these cases the pose update function becomes the following:



1. $v_l = v_r = v$:

\[ \begin{bmatrix} x' \\ y' \\ \theta' \end{bmatrix} = \begin{bmatrix} x + v\cos(\theta)\,\Delta t \\ y + v\sin(\theta)\,\Delta t \\ \theta \end{bmatrix} \]

2. $v_r = -v_l = v$:

\[ \begin{bmatrix} x' \\ y' \\ \theta' \end{bmatrix} = \begin{bmatrix} x \\ y \\ \theta + \frac{2v\,\Delta t}{l} \end{bmatrix} \]
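The pose-update cases above can be collected into a single function. The sketch below assumes the convention ω = (v_r − v_l)/l implied by Equations 4 and 5; the function name and interface are illustrative, not the actual Kobot code:

```python
import math

def update_pose(x, y, theta, v_l, v_r, l, dt):
    """One differential-drive pose update over a time step dt.

    l is the distance between the two wheels. Handles the two degenerate
    cases (straight-line motion and rotation in place) as well as the
    general case of rotation about the ICC.
    """
    if math.isclose(v_l, v_r):
        # Case 1: equal wheel speeds -> straight-line motion, heading unchanged.
        return (x + v_l * math.cos(theta) * dt,
                y + v_l * math.sin(theta) * dt,
                theta)
    if math.isclose(v_r, -v_l):
        # Case 2: opposite wheel speeds -> rotation in place about the axle midpoint.
        return x, y, theta + 2 * v_r * dt / l
    # General case: rotate about the Instantaneous Center of Curvature.
    omega = (v_r - v_l) / l
    R = (l / 2) * (v_r + v_l) / (v_r - v_l)
    icc_x = x - R * math.sin(theta)
    icc_y = y + R * math.cos(theta)
    c, s = math.cos(omega * dt), math.sin(omega * dt)
    return (c * (x - icc_x) - s * (y - icc_y) + icc_x,
            s * (x - icc_x) + c * (y - icc_y) + icc_y,
            theta + omega * dt)
```

Note that the rotation-in-place branch is actually a limit of the general case (R = 0, so the ICC coincides with the robot); only equal wheel speeds truly require a separate branch, since R then diverges.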

The problem with the pose update operation is that shaft encoders are notoriously inaccurate. Besides, the surface and the wheels are not ideal either. When these inaccuracies are summed up, the robot deviates from its path. What is worse is that this error accumulates over time if it is not reset by utilizing another sensor or features of the world or map. To demonstrate this error, I have set up an environment in the shape of a square, and the robot is programmed to follow the walls around it; the results are shown in Figure 5.

Figure 5: Updating the pose of the robot by taking encoder readings into consideration. This path is supposed to be a square, but due to the errors in the shaft encoders and other non-ideal circumstances, the robot deviates from its actual course.

    Sensor Mapping

After obtaining the probably-occupied grids with respect to the sensor coordinate system, these grids are mapped to the world/reference coordinate system via two successive homogeneous coordinate transformations:

\[ T = \mathrm{Trans}(x_r, y_r, 0)\,\mathrm{Rot}(z, \theta)\,\mathrm{Trans}(x_s, y_s)\,\mathrm{Rot}(z, \theta_s - 90^\circ) \tag{6} \]
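Equation 6 can be composed numerically with homogeneous 4×4 matrices. A sketch using NumPy; the helper names and the example pose values are assumptions, and angles are taken in degrees to match the equation:

```python
import numpy as np

def trans(x, y, z=0.0):
    """Homogeneous translation matrix."""
    T = np.eye(4)
    T[:3, 3] = [x, y, z]
    return T

def rot_z(deg):
    """Homogeneous rotation about the z-axis, angle in degrees."""
    a = np.radians(deg)
    c, s = np.cos(a), np.sin(a)
    T = np.eye(4)
    T[:2, :2] = [[c, -s], [s, c]]
    return T

def sensor_to_world(x_r, y_r, theta_deg, x_s, y_s, theta_s_deg):
    """Compose Equation 6: robot pose in the world, then sensor pose on the robot."""
    return (trans(x_r, y_r, 0.0) @ rot_z(theta_deg)
            @ trans(x_s, y_s) @ rot_z(theta_s_deg - 90.0))

# A grid point u in sensor coordinates maps to world coordinates as v = T u.
T = sensor_to_world(1.0, 2.0, 90.0, 0.05, 0.0, 90.0)
u = np.array([0.1, 0.0, 0.0, 1.0])
v = T @ u  # approximately (1.0, 2.15, 0.0, 1.0)
```

Reading the product right to left mirrors the report's chain: rotate the point from sensor into robot orientation, shift by the sensor's mounting offset, then rotate and translate by the robot's world pose.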



Hence, a grid v in the world coordinate system can be obtained by pre-multiplying the grid u in the sensor coordinate system by the transformation matrix given in Equation 6 (v = Tu). The final step is updating the grid occupancy probabilities by utilizing the following recursive Bayes rule:

\[ P(H_{ij} \mid s = s_t) = \frac{P(s = s_t \mid H_{ij})\,P(H_{ij} \mid s = s_{t-1})}{P(s = s_t \mid H_{ij})\,P(H_{ij} \mid s = s_{t-1}) + P(s = s_t \mid \lnot H_{ij})\,P(\lnot H_{ij} \mid s = s_{t-1})} \tag{7} \]
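Equation 7 reduces to a per-cell update that uses the cell's current estimate as the prior. A minimal sketch (the function name and the likelihood values are illustrative; in the actual system the likelihoods would come from the sensor model of Equation 2):

```python
def bayes_update(prior, p_s_given_occ, p_s_given_empty):
    """Recursive Bayes update (Eq. 7) for a single grid cell.

    prior:           P(H_ij | s = s_{t-1}), the cell's current estimate
    p_s_given_occ:   P(s = s_t | H_ij),  from the sensor model
    p_s_given_empty: P(s = s_t | !H_ij), from the sensor model
    """
    num = p_s_given_occ * prior
    den = num + p_s_given_empty * (1.0 - prior)
    return num / den

# Starting from an uninformed prior, repeated readings that are four
# times likelier under occupancy drive the estimate toward one.
p = 0.5
for _ in range(3):
    p = bayes_update(p, 0.8, 0.2)
print(p)  # ≈ 0.985
```

Because the update is recursive, each measurement folds into the running estimate and the full reading history never needs to be stored.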

    Conclusion

If the encoders and the surface are not too noisy, we hope that the robot will be able to extract a map of its environment by the occupancy-grid mapping method explained in this report. As a future plan, some kind of feature-based method can be added to this system to reset the errors accumulated during the exploration phase.

