
Processing meets Box2D and blob detection


For the Programming II Workshop at our Interface Culture department I decided to do a small experiment with Box2D. For a long time I have wanted to play around with Box2D; especially merging real-world objects with virtual objects fascinates me. I don't care much for the common Augmented Reality stuff, but some of it is really cool and inspires me. Here are some projects I got my inspiration from: EdgeBomber, Laser Sound test, Phun, Crayon Physics, 2d Sketches becomes 3d Reality, ILoveSketch, MotionBeam, Tangible Fire Controlls.

Now to the technical stuff. In my experiment I am using the Blobscanner library and Daniel Shiffman's Box2D code. It is really a small experiment; I just wanted to check how difficult it is to combine camera data with virtual data. For the first test I am using a simple .jpg file with 3 rectangles. This example works pretty well. The next map has some diagonal rectangles, and the first problems appear: the upper right rectangles are drawn in the wrong direction, which is why the physics simulation fails. At the moment I am using the Surface object of Box2D for drawing more complex objects. For the surface object the drawing direction is very important: you have to draw counter-clockwise, so that the normal vectors do not point inside the object (check chapter 4.4 Polygon Shapes). Using a polygon object instead of the surface object would probably make even more sense...
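To make the winding-order point a bit more concrete, here is a minimal Processing sketch of the idea, not the actual code from my experiment: it computes the signed area of the vertex list and reverses the list if the winding is wrong before the points are handed over to Box2D. The helper names are made up for this illustration.

    // Signed area of a closed polygon (shoelace formula). In a y-up world
    // (like the Box2D coordinates) a positive value means counter-clockwise;
    // in Processing's y-down pixel space the sign is flipped.
    float signedArea(ArrayList<PVector> pts) {
      float area = 0;
      for (int i = 0; i < pts.size(); i++) {
        PVector a = pts.get(i);
        PVector b = pts.get((i + 1) % pts.size());
        area += a.x * b.y - b.x * a.y;
      }
      return area / 2.0f;
    }

    // Reverse the vertex order if the polygon is wound the wrong way,
    // so the normal vectors end up pointing outwards.
    void ensureCounterClockwise(ArrayList<PVector> pts) {
      if (signedArea(pts) < 0) {
        java.util.Collections.reverse(pts);
      }
    }

With a check like this the diagonal rectangles should at least always arrive in the order Box2D expects, no matter how the blob edge was traced.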

Another issue I have is the correct recognition of the shapes. This problem comes down to two challenges. The first challenge is to order the array of edge points correctly. I only get edge points, and I don't know whether a point belongs to the right or the left side of the object. My sorting algorithm is not implemented very well, which is why some of my recognitions fail. But for a quick check on simple objects, just to get an idea, it was enough. However, this paper about edge detection could solve my problem, or I just have to implement a "find the shortest distance" algorithm. If you have better advice, please leave a comment. Thx! The second challenge is minimizing the size of the edge-point array. For this I found a nice article: Line Generalization (Smoothing, Simplifying). I ported the ActionScript code to Processing and it seems to work. Though a better approach could be to vectorize the camera data. Nicolas Barradeau wrote two nice blog posts about vectorization v0 and vectorization v1. I definitely have to check out his code; I guess there are some hidden solutions for my problems in there.
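Just to sketch what I mean by a "find the shortest distance" algorithm, here is a rough, unoptimized version in Processing (an illustration, not the code from my repository): starting from an arbitrary edge point it always jumps to the nearest point that has not been used yet, which gives an ordered contour as long as the edge points are reasonably dense.

    // Order a loose set of edge points into a contour by greedy
    // nearest-neighbour chaining. O(n^2), but fine for a few hundred points.
    ArrayList<PVector> orderEdgePoints(ArrayList<PVector> points) {
      ArrayList<PVector> remaining = new ArrayList<PVector>(points);
      ArrayList<PVector> ordered = new ArrayList<PVector>();
      if (remaining.isEmpty()) return ordered;

      PVector current = remaining.remove(0);   // arbitrary start point
      ordered.add(current);

      while (!remaining.isEmpty()) {
        int nearestIndex = 0;
        float nearestDist = Float.MAX_VALUE;
        for (int i = 0; i < remaining.size(); i++) {
          float d = PVector.dist(current, remaining.get(i));
          if (d < nearestDist) {
            nearestDist = d;
            nearestIndex = i;
          }
        }
        current = remaining.remove(nearestIndex);
        ordered.add(current);
      }
      return ordered;
    }

Such a greedy chain can still jump across the shape where two parts of the contour come close to each other, so it is only a stop-gap compared to a proper chain-code walk.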

So much for my experiment. My code is online on my Google project site, or you can download it directly. Please keep in mind that my code is far, far away from perfect. Big thanks to the great tutorial writers. I really love the ActionScript community and the Processing community 😉 Knowledge sharing, ahoi!!

  1. June 28th, 2011 at 22:54 | #1

    Hi Florian,

    There is an open-source C++ project http://outliner.codeplex.com/ that vectorizes edges in RGB pictures.

  2. admin
    June 29th, 2011 at 09:34 | #2

    Hi @wladik

    thank you very much for the link! 🙂

  3. July 14th, 2011 at 05:11 | #3

    Hi
    I just posted this on vimeo, anyway I thought you’d get it faster if I posted it here too.
    You should get a sharper blob contour if you apply a threshold filter to your camera image before sending it to Blobscanner for blob detection.
    e.g. imageFromTheWall.filter(THRESHOLD);
    bs.imageFindBlobs(imageFromTheWall);

    If you need to sort the edge points, here is a fast and robust chain code algorithm by Golan Levin: http://www.openprocessing.org/visuals/?visualID=30029. Hope this helps. Good luck!

  4. admin
    June 23rd, 2013 at 18:51 | #4

    Emanuele Feronato posted a very nice summary of tools for Flash / ActionScript 3. The edge and contour detection is done with a marching squares algorithm. For simplifying the polygons I prefer the Lang Simplification and McMaster’s Slide Averaging algorithms over Emanuele’s approach with the Ramer-Douglas-Peucker algorithm. However, for translating the data into polygons and doing the triangulation I really like his approach and his use of the PolygonClipper class.
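    McMaster’s slide averaging is simple enough to sketch in a few lines of Processing (my own rough notes, not Emanuele’s code and not a tested port): every interior vertex is averaged with its two neighbours and then slid part of the way towards that average.

    // McMaster's slide averaging: smooth a polyline by sliding each interior
    // point towards the average of itself and its two direct neighbours.
    // slide = 0 keeps the original points, slide = 1 moves them fully onto the average.
    ArrayList<PVector> slideAverage(ArrayList<PVector> pts, float slide) {
      ArrayList<PVector> out = new ArrayList<PVector>(pts);
      for (int i = 1; i < pts.size() - 1; i++) {
        float avgX = (pts.get(i - 1).x + pts.get(i).x + pts.get(i + 1).x) / 3.0f;
        float avgY = (pts.get(i - 1).y + pts.get(i).y + pts.get(i + 1).y) / 3.0f;
        PVector p = pts.get(i);
        out.set(i, new PVector(p.x + (avgX - p.x) * slide, p.y + (avgY - p.y) * slide));
      }
      return out;
    }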

  5. admin
    October 2nd, 2014 at 15:49 | #5

    This article by Felix Niklas provides some nice information about segmentation of objects within an image:

    http://felixniklas.com/imageprocessing/segmentation
