Mobile Robotics Navigation Project: Testing RoboRealm & Kinect

This page is about testing the Kinect with RoboRealm; check out the project index for the rest of the project.

There is a YouTube video of this testing, or scroll down for pictures showing how I’ll probably do it.

I previously mentioned that I will be using RoboRealm to process the data from the Kinect. RoboRealm has many, many image-processing functions, which take the heavy lifting out of writing your own code.

Here’s a picture of the depth map with the Kinect looking down my hallway, and a similar image with a cardboard tube in the scene. You can see that closer objects appear darker in the depth view.
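For reference, the nearer-is-darker view boils down to a linear scale from distance to brightness. Here’s a minimal NumPy sketch (not the Kinect driver’s or RoboRealm’s actual code, and the 4 m maximum range is my own assumption) of turning a raw depth frame in millimetres into that kind of greyscale image:

```python
import numpy as np

def depth_to_grey(depth_mm: np.ndarray, max_range_mm: float = 4000.0) -> np.ndarray:
    """Scale depth (mm) to 0-255 grey so nearer objects come out darker.
    Pixels with no depth reading (0) are treated as 'far' and mapped to white."""
    grey = np.clip(depth_mm / max_range_mm, 0.0, 1.0) * 255.0
    grey[depth_mm == 0] = 255
    return grey.astype(np.uint8)
```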

I’m using several of RoboRealm’s processing functions to extract data from the image that’s useful for navigation. First I’ve reduced the colour depth to 64 colours using the ‘Colour_Depth’ function. This breaks the smooth gradient of colours up into several clear steps: a larger colour depth gives more, finer steps, while a smaller one gives fewer, coarser steps.
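Under the hood this step is just quantisation. A rough NumPy equivalent of what reducing the colour depth to 64 levels does (a sketch of the idea, not RoboRealm’s implementation):

```python
import numpy as np

def reduce_colour_depth(grey: np.ndarray, levels: int = 64) -> np.ndarray:
    """Quantise a 0-255 greyscale image into a fixed number of levels,
    turning the smooth depth gradient into clearly banded steps."""
    step = 256 // levels
    return (grey // step) * step
```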

Next up are ‘Blob_Separate’ and ‘Smooth_Hull’. These divide the coloured sections into blobs that RoboRealm’s blob-processing functions can identify, and smooth the image to get rid of any small errors:
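To give an idea of what these two modules are doing, here’s a rough stand-in in Python: connected-component labelling for the blob separation, and a median filter in place of the hull smoothing (RoboRealm’s Smooth_Hull works differently, so this is only an approximation, and the min_size threshold is my own assumption):

```python
import numpy as np
from scipy import ndimage

def separate_and_smooth(banded: np.ndarray, min_size: int = 200):
    """Rough stand-in for Blob_Separate + Smooth_Hull:
    median-filter the banded depth image to knock out small errors,
    then label connected regions of each depth band as blobs."""
    smoothed = ndimage.median_filter(banded, size=5)
    blobs = []
    for band in np.unique(smoothed):
        labels, n = ndimage.label(smoothed == band)   # connected components
        for i in range(1, n + 1):
            mask = labels == i
            if mask.sum() >= min_size:                # ignore tiny fragments
                blobs.append((band, mask))
    return blobs
```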

Once we are happy with the number of blobs and the granularity of the image (we don’t want too much data to process, after all), we can use the ‘Blob_Filter’ to identify each section. I’ve used ‘Darkest’ as a parameter to add a weight to each blob. As lighter objects are further away, this means I can identify the location of each blob and its distance from the robot.
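The idea behind the ‘Darkest’ weighting can be sketched like this: take the blobs from the previous step, treat the darkest band as the nearest, and report each blob’s position nearest-first. This only illustrates the principle; it isn’t RoboRealm’s internal Blob_Filter logic:

```python
import numpy as np

def rank_blobs_by_distance(blobs):
    """Each blob is (band, mask); a darker band means a nearer object.
    Return (band, centroid) pairs sorted nearest-first."""
    ranked = []
    for band, mask in blobs:
        ys, xs = np.nonzero(mask)
        centroid = (float(xs.mean()), float(ys.mean()))
        ranked.append((int(band), centroid))
    return sorted(ranked, key=lambda b: b[0])  # smallest grey value = closest
```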

All this data is available through the RoboRealm API, so it can be further processed and used by the robot to navigate.
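As an example, a variable set by the pipeline can be read over the API’s socket interface. The sketch below assumes the API server is enabled on RoboRealm’s default port 6060 and uses the XML-style request format from the API documentation; check your version’s docs (or use the RR_API wrapper that ships with RoboRealm) rather than taking this verbatim, and note that the ‘BLOBS’ variable name is just a placeholder:

```python
import socket

def get_variable(name: str, host: str = "127.0.0.1", port: int = 6060) -> str:
    """Ask the RoboRealm API server for a variable and return the raw XML reply."""
    request = f"<request><get_variable>{name}</get_variable></request>"
    with socket.create_connection((host, port), timeout=2.0) as sock:
        sock.sendall(request.encode("ascii"))
        reply = sock.recv(4096).decode("ascii", errors="replace")
    return reply  # parse the variable value out of the XML as needed

# e.g. print(get_variable("BLOBS"))  # 'BLOBS' is a placeholder variable name
```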