Dashpi #6 – Detected Sign

It has been a while since the last update. But the project is still alive, even with some time off between versions. At this stage, I’m checking for POIs (Pixels of Interest) with a simple threshold algorithm: I assume that every red pixel in the image belongs to a sign. This list of POIs is then sorted, and the four outermost points are connected to form a box that highlights the sign. You can see the output on the screenshots.
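The POI idea can be sketched in a few lines. This is a minimal illustration with made-up helper names and guessed channel cut-offs, not the project’s actual code (which works on OpenCV images):

```python
def is_red(pixel, cutoff=150):
    """Crude red test on a BGR pixel; the cut-off values are guesses."""
    b, g, r = pixel
    return r > cutoff and g < 100 and b < 100

def find_pois(image):
    """Collect the position of every pixel that passes the red threshold."""
    return [(x, y)
            for y, row in enumerate(image)
            for x, pixel in enumerate(row)
            if is_red(pixel)]

def bounding_box(pois):
    """Connect the outermost POIs to a box that frames the sign."""
    xs = [x for x, _ in pois]
    ys = [y for _, y in pois]
    return (min(xs), min(ys), max(xs), max(ys))
```

Drawing that box onto the frame highlights the sign. With a single sign and a clean background this works well, which is exactly why stray red pixels elsewhere in the image stretch and falsify the box.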

If other pixels in the image also pass the threshold, they are included in the highlight and falsify the result. A new approach to sort out the uninteresting pixels needs to be established. I’ll work with different AOIs (Areas of Interest), which will give me the ability to detect more than one sign at once.

[Screenshots: the detected sign highlighted by the box]

The idea of AOIs can be transferred to other POI-finding algorithms, like the separation algorithm mentioned before. But I’ll stick with the threshold algorithm and see how far I can get with it. Up to this point, I have only searched for red signs. I’m curious how searching for every possible sign colour influences the speed of the analysis. Just remember that the program should do this quite fast in order to highlight signs in a live video stream while driving at a certain speed.

But until I can work with camera input, I need to improve my algorithms. Until then,

Stay tuned, lyinch!

Dashpi #5 – Finding an AOI

The file structure as well as my helper files have changed. I’m using a second branch for my updates and only merge big updates into the master branch.

|–build/ (contains the compiled files as well as the log files)
|–clear (shell script that deletes the log files)
|–doc/ (additional documentation)
|–input/ (input data such as images and videos)
|–lib/ (additionally written extensions and libraries)
|–ocvcomp (shell script to compile the project)
|–src/ (source files of the project)
|–test/ (test classes and files written for further use or special test cases)

The compilation script deletes the log file, and the clear script now takes the log filename as an argument.


The sign detection should work as shown in this flow chart. I have two different approaches to find an “Area of Interest” (AOI) and two different approaches to finally detect the sign in this area.

[Flow chart of the sign detection process]

The initial idea of deleting everything that isn’t red is called colour thresholding. I’ll stick with this idea, but change the algorithm. The dominant colour in the image is often the sign. To split the image into an AOI, instead of checking the whole image for a sign, I have two options.

  • Either I create a colour map of my initial image. The image is scanned and a pixelized version of it is created: it has a smaller resolution, and every pixel is the average colour of a 5×5 block of pixels from the original image. The colour map is scanned, and the threshold algorithm searches for red parts. As soon as the scan is finished, the sign-searching algorithm searches for signs in the original image, using the cube data gained from the colour map.
  • Or I separate the dominant colour from the background. I take the average value of every column and compare each pixel with that average. If the result isn’t within the normal distribution, it may be a foreground colour. Every such pixel is afterwards analysed in order to find a sign. To find red signs, I need to compare the found pixels with the threshold algorithm.
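Both ideas can be sketched in a few lines. This is an illustrative simplification with hypothetical function names, not the project’s implementation: the colour map averages 5×5 blocks, and the separation flags per-column outliers against the column average.

```python
from statistics import mean, pstdev

def colour_map(image, cell=5):
    """Downsample the image: every output pixel is the average colour
    of a cell x cell block of the original (5x5 by default)."""
    h, w = len(image), len(image[0])
    small = []
    for y in range(0, h, cell):
        row = []
        for x in range(0, w, cell):
            block = [image[yy][xx]
                     for yy in range(y, min(y + cell, h))
                     for xx in range(x, min(x + cell, w))]
            row.append(tuple(sum(p[c] for p in block) // len(block)
                             for c in range(3)))
        small.append(row)
    return small

def column_outliers(column, sigmas=1.5):
    """Flag values that fall outside the normal spread around the column
    average; those pixels may belong to the foreground, not the background."""
    mu, sd = mean(column), pstdev(column)
    return [i for i, v in enumerate(column) if abs(v - mu) > sigmas * sd]
```

The `sigmas` cut-off is an assumption on my part; how wide the “normal” band should be is exactly the kind of parameter that needs tuning on real footage.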

The threshold algorithm compares every pixel to a certain value. It either saves the important pixels, or changes their colour to white and all others to black, creating a black-and-white representation of the image that highlights the searched colour.
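As a sketch, with a guessed stand-in for the colour test:

```python
def threshold_bw(image, passes):
    """Create a black-and-white map: pixels that pass the colour test
    become white, everything else becomes black."""
    WHITE, BLACK = (255, 255, 255), (0, 0, 0)
    return [[WHITE if passes(p) else BLACK for p in row] for row in image]

def looks_red(pixel):
    """Hypothetical red test on a BGR pixel (cut-off values guessed)."""
    b, g, r = pixel
    return r > 150 and g < 90 and b < 90
```

Passing the colour test in as a function means one threshold routine can serve every sign colour later on.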

More information will follow soon… Don’t forget to check out the project on GitHub to see the source code and the implementation of the algorithms!

Stay tuned, lyinch!


Dashpi #4 – The first algorithm

I’m not using a private git server, but GitHub, to make the project open source. You can fork me here. Most of the posts are written a few weeks in advance. If you want to be up to date, have a look at the GitHub repository.

To clean my log, I created a new shell script. Maybe I can combine them all into one…

In order to fully detect signs in street traffic, I need to start from the very basics. I’m analysing different signs on monochrome backgrounds to see how the different methods behave.

The first idea is to reduce the background by changing it to white (or another colour). I’m using 720p images (1280×720) in order to reduce calculation time. The quality should still be fine enough.

  • To detect a red sign, check each pixel of the image. If the pixel is not red, change it to white. With this method, I can isolate the traffic sign from the background. After this step, I still need some kind of sign detection.

The sample images are:

[Sample images: one sign on a green background, one on a blue-and-white background]

I’m using this loop to check whether a colour is red or not. Note that the image channels are not RGB but BGR. The colour values were actually guessed by trial and error in Photoshop; I still need to find the exact RGB values of a red tone. But you get the idea.
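A sketch of such a loop, with placeholder cut-off values guessed the same way as the originals:

```python
def remove_background(image):
    """Turn every pixel that is not red enough into white, isolating the
    sign. Channels are BGR; the cut-off values are guesses, as in the post."""
    WHITE = (255, 255, 255)
    return [[p if (p[2] > 150 and p[1] < 90 and p[0] < 90) else WHITE
             for p in row]
            for row in image]
```

On a 1280×720 frame this touches 921,600 pixels per colour, which is why a per-colour loop does not scale well.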

The final results are shown here. With this method, it’s quite easy to subtract a monochromatic background. But I need a new loop for every sign colour, which isn’t very efficient. Different lighting conditions and other red objects on the street may give the algorithm a hard time… A better approach is needed!

[Result images: backgrounds removed, signs isolated]

To improve the algorithms and, of course, the results, I’m reading papers and publications on sign detection. Hopefully, the next posts will cover some of those methods, where I try them out and maybe even modify and combine them. A lot of the publications assume that the reader has a solid mathematics and computer science background as well as knowledge of physics, visual computing and, of course, previous publications on sign detection. I try my best to understand each publication and implement its method. But so far, most of them aren’t really comprehensible or usable for me.

Stay tuned, lyinch!

Dashpi #3 – The setup is done

Instead of connecting to my Raspberry Pi directly with a keyboard, I’m using SSH from an Ubuntu computer. This lets me transfer data faster from one place to another. I also have more coding resources on my computer, which can be copied easily via SSH.

A new Ethernet hub connects my Raspberry Pi to my home network. But I haven’t set up a git server at home yet. As soon as I have one, I’ll synchronize my Raspberry Pi with it.

The compilation is done by a bash script which also measures the compilation time.

To ease debugging, I created a log file. The logging function takes two parameters: the first is the message it should output, and the second is the message type, which is used as a prefix in the log. The application closes with an error message if the user has no permission to write to the file. This also prevents further errors related to the required sudo rights.
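A minimal sketch of such a logging helper. The filename and the exact prefix format are my assumptions for this illustration, not the project’s real helper:

```python
import sys
from datetime import datetime

def log(message, msg_type="INFO", path="dashpi.log"):
    """Append a message to the log file, prefixed with its type.
    Exits with an error if the file is not writable, so permission
    problems surface immediately instead of causing follow-up errors."""
    line = "[%s] %s %s\n" % (msg_type, datetime.now().isoformat(), message)
    try:
        with open(path, "a") as handle:
            handle.write(line)
    except PermissionError as err:
        sys.exit("log: cannot write to %s (%s)" % (path, err))
```

Failing fast on the permission check is the design point: a silent logging failure would hide every later error as well.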

A second shell script connects my Raspberry Pi to my GoPro’s WiFi. It uses basic shell commands.

In order to work on the analysis algorithm, I’m using a pre-recorded video instead of the live stream. But before analysing the video file, I need to develop the algorithm on still images.


I hope that I can find enough resources on cross-compiling, because the compilation time is increasing tremendously.

Stay tuned, lyinch!

Dashpi #2 – GoPro Video Stream

I managed to get a live stream from my GoPro 3 Black Edition to my Raspberry Pi. The GoPro streams its video at http://10.5.5.9:8080/live/amba.m3u8 (apparently the same for every camera). I connected the Raspberry Pi’s WiFi to my GoPro’s WiFi.

After a lot of code and even more error messages, I finally got my first frame! It was a single fixed frame and the Raspberry Pi froze, but I still managed to get an image!

[Screenshot: the first captured frame]

With some modifications, I ended up with a stream that stayed stable for a few minutes. I guess that the crash occurs due to the streaming loop and the restricted RAM of my Raspberry Pi. A better algorithm is needed.


The stream has a delay of approximately 3.2s to 3.6s (measured by hand!) at 1080p and 60fps. Every few seconds a minor lag occurs, but it still works. Before the actual OpenCV programming can start, I need to set up a few more things…

Stay tuned, lyinch!

Dashpi #1 – The first steps

My Raspberry Pi model B 512MB just arrived! My project is to use the Raspberry Pi as a dashcam and analyse the street traffic. I installed Arch Linux ARM so as not to waste computational power, because video analysis requires a lot of it. My plan is to connect the GoPro Hero 3 Black Edition to it and use its video stream. The output should go to an LCD screen in the car. Until everything works fine, I’m working at home on a bigger screen. Here are a few screenshots of my first steps with the Raspberry Pi.
The first run of Arch Linux! Why is it called alarmpi?

After hours of installing packages and figuring out how to run OpenCV without installing a desktop environment, the first OpenCV output worked! Even if it only showed half a circle.

I still have to set-up an environment. I need to write compile scripts, install and configure git and connect to my GoPro. As soon as the video stream works, I’ll let you know!

Stay tuned, lyinch!