Friday, 6 November 2015

Parrot Rolling Spider Drone

Hey Friends,
I am back with some projects that I am currently working on. I would love to hear about new possibilities, as well as suggestions on how to improve them.

I recently got my hands on the Parrot Rolling Spider drone. It is a small drone, about the size of a palm, that can be controlled over Bluetooth Low Energy (BLE). I selected the following setup:



The round thingies are IR reflective markers, which I am using in the lab along with OptiTrack cameras to keep track of the drone's position. I intend the drone to be able to do certain tasks autonomously, such as going from one point to another, traversing a series of waypoints, or maybe following another ground bot. I also aim to develop a voice-controlled app to command the drones via speech.

In order to achieve my goal, I started off by controlling the drone manually from a PC. While looking into this, I came across a wonderful library:

This library can be used to hack the Rolling Spider, i.e. send commands to it. The linked documentation is self-sufficient, but I would like to reiterate the setup here. Follow these steps to be able to use the library:
  1. Install Node.js.
    The link for the same is:
    https://nodejs.org/en/
  2. Install Python v2.7.3 (recommended).
    The link for it is:
    https://www.python.org/download/releases/2.7.3/#download

    After this, we need to add the directory containing python.exe to the PATH environment variable.
  3. Install Microsoft Visual Studio C++ 2013 (Express Edition).
    The link:
    http://www.microsoft.com/en-gb/download/details.aspx?id=44914
  4. Install the Windows 7 SDK.
    The link:
    http://www.microsoft.com/en-us/download/details.aspx?id=8279
  5. Install node-gyp.
    Detailed installation instructions can be found at:
    https://github.com/nodejs/node-gyp#installation

    It can be installed by executing the following command in the command prompt:
    npm install -g node-gyp
  6. Next you need a Bluetooth 4.0 (BLE) adapter. Its driver then needs to be changed to WinUSB using the Zadig tool.
    The link for the same is:
    http://zadig.akeo.ie/
  7. Install noble. It is a Node.js BLE central module. The link and detailed information can be found at:
    https://github.com/sandeepmistry/noble#prerequisites

    It can be installed through the command prompt with the following command:
    npm install -g noble
  8. Now we come to the last step, installing the rolling-spider library.
    The library can be found at:
    https://github.com/voodootikigod/node-rolling-spider

    To install it, run the following in the command prompt:
    npm install rolling-spider
More information on the libraries can be found at the respective links given by the developers. It's simply amazing :)
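Before getting into the keypress controller below, I found it useful to run a minimal smoke test first, modelled on the basic rolling-spider usage: connect, calibrate, take off, hover for a moment and land. The delays here are just my guesses, so adjust them to taste.

'use strict';

var RollingSpider = require('rolling-spider');
var temporal = require('temporal');

var drone = new RollingSpider();

drone.connect(function () {
  drone.setup(function () {
    drone.flatTrim();
    drone.startPing();

    // Queue a short take-off / hover / land sequence.
    temporal.queue([
      { delay: 3000, task: function () { drone.takeOff(); } },
      { delay: 5000, task: function () { drone.land(); } },
      { delay: 3000, task: function () { temporal.clear(); process.exit(0); } }
    ]);
  });
});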

So, now we are all set to get our drone flying. We need to issue a set of actions through the rolling-spider library to command the drone. I started off by controlling the drone via keypress events. The code can be seen below:


'use strict';

var RollingSpider = require('rolling-spider');
var temporal = require('temporal');
var keypress = require('keypress');

var rollingSpider = new RollingSpider();

rollingSpider.connect(function () {
  rollingSpider.setup(function () {
    // Calibrate, start the keep-alive ping, and calibrate again for good measure.
    rollingSpider.flatTrim();
    rollingSpider.startPing();
    rollingSpider.flatTrim();

    console.log('Now active for receiving commands');
    console.log('\nGive your input');

    // Make process.stdin emit "keypress" events.
    keypress(process.stdin);

    process.stdin.on('keypress', function (ch, key) {
      if (!key) {
        return;
      }
      console.log('got "keypress"', key.name);

      if (key.ctrl && key.name === 'c') {
        process.stdin.pause();
      } else if (key.name === 'w') {
        console.log('Going forward');
        rollingSpider.forward({steps: 10, speed: 30});
        rollingSpider.flatTrim();
      } else if (key.name === 's') {
        console.log('Going backward');
        rollingSpider.backward({steps: 15, speed: 20});
      } else if (key.name === 'a') {
        console.log('Going left');
        rollingSpider.tiltLeft({steps: 15, speed: 20});
      } else if (key.name === 'd') {
        console.log('Going right');
        rollingSpider.tiltRight({steps: 15, speed: 20});
      } else if (key.name === 't') {
        console.log('Getting ready for takeOff!');
        rollingSpider.takeOff();
      } else if (key.name === 'l') {
        console.log('Time to land');
        rollingSpider.land();
      } else if (key.name === 'up') {
        console.log('Going up');
        rollingSpider.up({steps: 15, speed: 20});
      } else if (key.name === 'down') {
        console.log('Going down');
        rollingSpider.down({steps: 15, speed: 20});
      } else if (key.name === 'left') {
        console.log('Turn left');
        rollingSpider.turnLeft({steps: 15, speed: 20});
      } else if (key.name === 'right') {
        console.log('Turn right');
        rollingSpider.turnRight({steps: 15, speed: 20});
      } else if (key.name === 'f') {
        console.log('OMG Flip!');
        rollingSpider.frontFlip();
      } else if (key.name === 'q') {
        console.log('Bye bye');
        temporal.clear();
        process.exit(0);
      }
    });

    // Raw mode so single keypresses arrive without waiting for Enter.
    process.stdin.setRawMode(true);
    process.stdin.resume();
  });
});



After being able to fly the drone, I wanted to create a voice-controlled app to control it. I started learning Android Studio, and I have to say it's fun :)
This is when I came across MIT App Inventor. I was amazed at how good it is and how simple and easy they have made it to build an app. I was able to make the voice-controlled app, and the video of a run is below:


After this, I decided to use OptiTrack to keep track of my drone's position and make it follow a set of waypoints, which here are simply points on the track. It still needs to be made more stable and precise, but it works :D. The result is in the video below, and a rough sketch of the control loop follows.
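This is not my exact code, just a sketch of the idea: the host, port and the "x,y" message format of the position stream are assumptions, and the controller is a deliberately naive bang-bang one that nudges the drone towards the current waypoint on every position update.

'use strict';

var net = require('net');
var RollingSpider = require('rolling-spider');

var rollingSpider = new RollingSpider();

// Waypoints in OptiTrack coordinates (metres); values here are placeholders.
var waypoints = [{x: 0.0, y: 1.0}, {x: 1.0, y: 1.0}, {x: 1.0, y: 0.0}];
var current = 0;
var TOL = 0.15; // how close (in metres) counts as "reached"

function step(pos) {
  if (current >= waypoints.length) {
    rollingSpider.land();
    return;
  }
  var dx = waypoints[current].x - pos.x;
  var dy = waypoints[current].y - pos.y;
  if (Math.abs(dx) < TOL && Math.abs(dy) < TOL) {
    current++; // waypoint reached, move on to the next one
    return;
  }
  // Nudge the drone towards the waypoint, one small burst per update.
  if (dx > TOL) { rollingSpider.forward({steps: 2, speed: 20}); }
  else if (dx < -TOL) { rollingSpider.backward({steps: 2, speed: 20}); }
  if (dy > TOL) { rollingSpider.tiltLeft({steps: 2, speed: 20}); }
  else if (dy < -TOL) { rollingSpider.tiltRight({steps: 2, speed: 20}); }
}

rollingSpider.connect(function () {
  rollingSpider.setup(function () {
    rollingSpider.flatTrim();
    rollingSpider.startPing();
    rollingSpider.takeOff();

    // Position stream from the OptiTrack PC; address and format are assumed.
    var client = net.connect({host: '192.168.1.10', port: 5005});
    client.setEncoding('utf8');
    client.on('data', function (line) {
      var parts = line.trim().split(',');
      step({x: parseFloat(parts[0]), y: parseFloat(parts[1])});
    });
  });
});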


I faced many problems in this project. Sometimes the drone refused to move, and sometimes it froze. I am still working on it. Moreover, the OptiTrack PC and my laptop communicate over a socket, and sometimes it throws a "socket has been closed" error. Hope I get it fixed soon :D
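On the laptop side, one thing that helps is to at least survive the dropped connection instead of crashing. A small sketch, assuming the OptiTrack PC streams over a plain TCP socket (the host, port and data handler below are placeholders):

'use strict';

var net = require('net');

function connectToOptitrack(onData) {
  var socket = net.connect({host: '192.168.1.10', port: 5005});
  socket.setEncoding('utf8');
  socket.on('data', onData);
  socket.on('error', function (err) {
    // e.g. ECONNRESET when the other side goes away unexpectedly
    console.log('Socket error:', err.message);
  });
  socket.on('close', function () {
    console.log('Socket closed, retrying in 2 seconds...');
    setTimeout(function () { connectToOptitrack(onData); }, 2000);
  });
  return socket;
}

connectToOptitrack(function (line) {
  console.log('position update:', line.trim());
});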

I recently got my hands on the Parrot AR.Drone 2.0. Time to get to work :D

Friday, 15 February 2013

Robotic navigation in presence of static and dynamic obstacles


Hello friends.
Today I am posting the results of my final year project.
A Kinect has been used for image and depth analysis, and hence it serves as the eye of our autonomous vehicle.


Thursday, 9 February 2012

Go-visible

Finally got my Kinect to work, thanks to the CL NUI library :)
I am now able to access both the image and the depth map from the Kinect, in RGB format.

This is the first thing that I am trying with my Kinect. What I have done is threshold the depth map to find objects within a particular depth range.
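Conceptually the thresholding is nothing more than keeping the pixels whose depth falls inside a band of interest. A tiny illustrative sketch, in JavaScript rather than the CL NUI code I actually used, assuming the depth map is available as a Uint16Array of per-pixel depths in millimetres:

// Returns a binary mask marking pixels within [nearMm, farMm].
function thresholdDepth(depth, width, height, nearMm, farMm) {
  var mask = new Uint8Array(width * height);
  for (var i = 0; i < depth.length; i++) {
    mask[i] = (depth[i] >= nearMm && depth[i] <= farMm) ? 255 : 0;
  }
  return mask;
}

// e.g. objects roughly 0.5 m to 1 m from the sensor:
// var mask = thresholdDepth(depthFrame, 640, 480, 500, 1000);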


video

Saturday, 3 December 2011

Virtual keyboard

Virtual projection keyboards are quite the craze today. They come as modules that can be connected to computers, like the Celluon projection keyboard, and are used with mobiles, PDAs and other small devices where there is very little space for computing.
I myself am dying to have one, so, inspired by it, I took it up as my seminar topic. I was also keen on building one, but I had neither the red laser diode projector nor the IR laser diode for the detection plane. So I wrote a short program in OpenCV to imitate it, using just 11 keys.

video

The text being typed can be seen at the bottom of the video.
I have used color markers on the fingers, along with blob detection, to get these results.
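Once the fingertip blob centroid is known, the key decision itself is simple: check which key region it falls in. A toy sketch of that last step (the layout numbers are made up, and this is not the original OpenCV code):

// 11 keys laid out in a single row; positions are in image pixels (made up).
var KEYS = 'ABCDEFGHIJK'.split('');
var ROW = {x: 40, y: 300, keyWidth: 50, keyHeight: 60};

// Map a fingertip centroid (cx, cy) to the key it is over, or null.
function keyAt(cx, cy) {
  if (cy < ROW.y || cy > ROW.y + ROW.keyHeight) {
    return null; // fingertip is not on the key row
  }
  var index = Math.floor((cx - ROW.x) / ROW.keyWidth);
  return (index >= 0 && index < KEYS.length) ? KEYS[index] : null;
}

// e.g. keyAt(145, 320) -> 'C'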
 

My new Kinect :)

Hiii friends.
Finally I was able to get myself a Kinect.
Kinect is a motion-sensing input device developed by Microsoft for the Xbox 360 gaming console. Based around a webcam-style add-on peripheral for the Xbox 360 console, it enables users to control and interact with the Xbox 360 without the need to touch a game controller, through a natural user interface using gestures and spoken commands.
Now it is also being used with PCs for developing Kinect applications for easier human-computer interaction systems.
There's a trio of hardware innovations working together within the Kinect sensor:
  • Color VGA video camera - This video camera aids in facial recognition and other detection features by detecting three color components: red, green and blue. Microsoft calls this an "RGB camera" referring to the color components it detects.
  • Depth sensor - An infrared projector and a monochrome CMOS (complementary metal-oxide semiconductor) sensor work together to "see" the room in 3-D regardless of the lighting conditions.
  • Multi-array microphone - This is an array of four microphones that can isolate the voices of the players from the noise in the room. This allows the player to be a few feet away from the microphone and still use voice controls.
This is a snapshot of my Kinect :)


While I was trying to figure out how to use it, I came across the CL NUI platform for Kinect users, at http://codelaboratories.com/downloads/
It is just cool. No command line is required to use it: all you have to do is plug in your Kinect, install it and run it. It lets you see the camera image as well as the depth data. Moreover, it can also tilt the Kinect up and down through some angle using its motor, which was really cool.


video 




video




Friday, 29 July 2011

U.G.V(Unmanned Ground Vehicle)

It all started with the "DRDO DRIVING INNOVATION STUDENT ROBOTICS COMPETITION 2010", and it has been one hell of a time since then.

The basic requirements for making a UGV are:
  • Camera - to take visual input and find the path.
  • Laser range detector - for detecting obstacles and avoiding them.
  • Locomotion system, or the chassis along with motors and wheels, for making the bot move.
  • GPS - for tracking its position.
  • Wireless module for E-Stop.
  • A team with madly interested people. In my case, our team consisted of my friends Deepankar, Gigyanshu, Rahul, Keshav and Abhishek.
  • A lot of night outs and workshop deliveries :)   
We started off with MATLAB, but as it was very slow, we switched to OpenCV.
This is the video of our UGV following the lane.

video


An example of lane detection by our UGV can be seen below:


video

In the video, the bot is working through lane detection alone, with a wireless E-Stop for emergencies.
We have now added obstacle detection and avoidance, as well as GPS tracking.

Next time I will be posting videos regarding obstacle avoidance and other modifications.

Until then... next time...

Augmented reality

I have been trying to do augmented reality for a long time, and while surfing the web, I came across the ARToolkit and SUDARA.


video