OSX Virtual Hosts Manager v.1 C++

If you set up your virtual hosts as shown in my previous post, you might enjoy this little application. It lists all your virtual hosts along with their paths and used disk space. Future versions will include creating hosts, deleting hosts and later on a GUI. It is written in C++, with Xcode as the IDE.

We start by creating our main. It’s quite simple. A loop controlled by a boolean flag lets you run multiple commands one after another. The switch is used to execute the chosen action. The struct is used later in the program.

Our first function is getHosts();

It starts by saving the config file names in a vector (used as a dynamic array). We’ll look at that function soon. An iterator is used to access the vector elements. The function simply looks for the substrings “DocumentRoot” and “ServerName”. If they are found, it saves them (with substr) into a new string and sends them to dataFound();

The function getFiles(); calls the function GetStdoutFromCommand(); which simply executes bash commands and saves the output in a string. So we execute the ls command on our vhosts folder and receive all the files. The function then saves the config files and ignores the others (and strips the \n line breaks).

The dataFound(); function currently only saves the data in the struct.

More information about GetStdoutFromCommand(); can be found here.

The last function, showData();, extracts the file size of our htdocs using a shell command. It then outputs the virtual hosts.

 

The whole program looks like this:

Don’t hesitate to ask questions or improve the code.

Stay tuned, lyinch!

OSX Apache2 Virtual Hosts

I assume that you have installed apache2 correctly and you’re able to access localhost.

For most commands you need sudo rights. Log in as root or prefix every command with sudo.

In order to use Virtual Hosts, modify your apache configuration file.

Add the following line

Under this line. (search with CTRL+W)
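The snippet itself didn’t survive the archive, but an Include directive for per-host config files typically looks like this (the vhosts folder path is an assumption matching the folder created below):

```apache
Include /private/etc/apache2/vhosts/*.conf
```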

Apache now scans the folder and includes every config file you add to it. We still need to create the folder and move into it with these two commands.
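The commands were lost in the archive; assuming the folder path from the Include directive, they would be something like:

```shell
sudo mkdir /private/etc/apache2/vhosts
cd /private/etc/apache2/vhosts
```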

Now we create the default config file for localhost.
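The exact command didn’t survive; with nano as an example editor it would look like:

```shell
sudo nano _localhost.conf
```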

And add the most basic configuration:
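The original snippet is missing; a minimal vhost block for localhost would look like this (the DocumentRoot is the OS X default and may differ on your system):

```apache
<VirtualHost *:80>
    ServerName localhost
    DocumentRoot "/Library/WebServer/Documents"
</VirtualHost>
```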

Apache includes this file first, because it has an underscore as prefix. This file is used if no other configuration matches, and it can later be used as a template for new hosts.

We can now create our first virtual host with the same commands.

I prefer the suffix .local to avoid conflicts with real webpages and to see at first glance that it’s a local link to my Apache server.

Now we add a small configuration:
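The configuration itself was lost; following the pattern above, it would look something like this (the server name and htdocs path are hypothetical examples):

```apache
<VirtualHost *:80>
    ServerName test.local
    DocumentRoot "/Users/you/Sites/test.local/htdocs"
</VirtualHost>
```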

We now only need to create the htdocs folder, if we don’t have one yet. To activate our changes, we need to restart the Apache server.
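The restart command didn’t survive the archive; on OS X it is:

```shell
sudo apachectl restart
```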

If we encounter errors, this command displays the error messages.
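The command itself is missing; a syntax check plus a look at the error log (the log path is the usual OS X location) would be:

```shell
sudo apachectl configtest
tail /var/log/apache2/error_log
```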

To use our local addresses, we need to map them into our hosts file.
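Opening the hosts file, with nano as an example editor:

```shell
sudo nano /etc/hosts
```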

And add the following line, which redirects the server name (given in the config file) to our local IP, which is then processed by Apache.
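The line is missing from the archive; for the hypothetical test.local host above it would be:

```
127.0.0.1    test.local
```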

Note that in some browsers (like Chrome) you need to prefix your server name with http:// when entering the address.

If you encounter a 403 error, there is a simple solution (only use it locally, not on web servers accessible from the internet).

Add your Username to the apache config file. Open the file.
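Again with nano as an example editor, and the usual OS X config path:

```shell
sudo nano /private/etc/apache2/httpd.conf
```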

Search for _www (with CTRL+W) and add your Mac username below User _www. It should now look like this.
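The snippet didn’t survive; with yourusername standing in for your actual Mac username, the section would read:

```apache
User _www
User yourusername
Group _www
```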

Now simply restart apache and it should work.

 

Stay tuned, lyinch!

Control Spotify with Media Keys in Awesome WM

I recently started using Awesome WM as my main desktop. One thing that really bothered me was that my media keys could neither control Spotify nor change the system volume. So I wrote these small key bindings for the rc.lua.

If you want to use different keys or your media keys don’t work, just run xev and use the key names from its output. Note that my media keys need an FN button, but it isn’t necessary to include it in the script (xev doesn’t even detect it).
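The original bindings were lost in the archive; a typical version, assuming Spotify’s MPRIS D-Bus interface and ALSA’s amixer for the system volume, looks like this (these entries go inside the globalkeys table in rc.lua):

```lua
-- Control Spotify via its MPRIS D-Bus interface.
awful.key({ }, "XF86AudioPlay", function ()
    awful.util.spawn("dbus-send --print-reply --dest=org.mpris.MediaPlayer2.spotify /org/mpris/MediaPlayer2 org.mpris.MediaPlayer2.Player.PlayPause")
end),
awful.key({ }, "XF86AudioNext", function ()
    awful.util.spawn("dbus-send --print-reply --dest=org.mpris.MediaPlayer2.spotify /org/mpris/MediaPlayer2 org.mpris.MediaPlayer2.Player.Next")
end),
awful.key({ }, "XF86AudioPrev", function ()
    awful.util.spawn("dbus-send --print-reply --dest=org.mpris.MediaPlayer2.spotify /org/mpris/MediaPlayer2 org.mpris.MediaPlayer2.Player.Previous")
end),
-- System volume via ALSA.
awful.key({ }, "XF86AudioRaiseVolume", function ()
    awful.util.spawn("amixer set Master 5%+")
end),
awful.key({ }, "XF86AudioLowerVolume", function ()
    awful.util.spawn("amixer set Master 5%-")
end),
```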

It’s also worth mentioning that you have many more options. You can start a specific album or song, or open and quit Spotify. If you still want more, there is a Spotify command line controller on GitHub. Give it a look!

With awesome, I had the problem that my monitor froze for a few seconds when the song changed. This was caused by the track-change notification. Just add this line to your ~/.config/spotify/Users/&lt;spotifylogin&gt;-user/prefs file and it should be fixed.
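The line itself is missing from the archive; the pref commonly reported to disable the track-change notification is:

```
ui.track_notifications_enabled=false
```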

 

Stay tuned, lyinch!

Dashpi #8 – Improved code

I improved the code by adding a boundary check and reducing the array accesses. Let’s compare an old line with an improved one:

It’s not much, but this change has a huge impact on the code. The whole expression is based on the logical AND operator (&&), which means that every single condition needs to be true. We could rewrite this with a series of nested if conditions.

If the first condition is false, every other condition is ignored. The && operator works by the same logic: it evaluates from left to right, and as soon as one condition is false, it stops the evaluation. By placing the boundary check first, invalid array accesses are prevented, because the two checks at the beginning guard the accesses that follow.

Another new feature is the colored console output. This gives a better overview of warnings and errors.

[Screenshot of the colored console output, 25.07.2014]

I finally included argument support. I can now turn the debugging mode on with an argument and change the image path on start-up. Before this feature, I needed to recompile the code every time I wanted another image.

Some minor changes, like different keys to close the application and some additional debugging information, have been added as well. The image dimensions are now also checked, to prevent a wrong evaluation. The project currently only supports 1280×720 images.

A bigger change is moving my logging function into an external file in the /lib/ folder. I plan to move more and more functions into external files, to later create a shared library which can be used by different algorithms.

Stay tuned, lyinch!

Dashpi #7 – Creating AOIs

I improved the threshold algorithm to find every red tone. In addition, the algorithm now searches for an Area of Interest (AOI) like a flood-fill algorithm, but without recursion. Every pixel that is connected to another one is now stored with the same value in a struct. This gives me the possibility to work better with the found values.

[Screenshots of the algorithm output, 23.07.2014]

 

I added some rules to highlight the areas. Yellow shows every area, pink is the biggest area, and the blue areas are potential signs. Those are currently filtered out by requiring that a sign contains between a minimum and a maximum number of pixels. The rule is nothing more than a basic if condition and works surprisingly well.

But I’m sure that it won’t work on every image. I need to create more test cases, check the values and create other conditions. One idea I have is looking at the area’s shape, to see the pixel scattering. In order to find signs other than red ones, I want to use several threads, to detect them all at the same time.

Don’t forget to check out the latest version on github!

Stay tuned, lyinch!

Student card check #1

I wanted to know how secure our school card system is. Every student and teacher has their own card, which is used to open doors, log into printers and pay for meals. Those cards use RFID to communicate. While reading through Wikipedia, I assumed that the cards use a frequency of 125 kHz. So I bought myself a cheap 125 kHz RFID card reader from a weird Japanese webpage.


I tried to read the card, but the reader gave no output, which means that the card doesn’t transmit on this frequency. To check that the reader works, I found another card which did work. That card is used to enter a certain building.


But to my frustration, the reader only gives me the first 10 digits of the card, also known as the card ID. This information is also printed on the card itself. What an astonishing result… The reader is useless and I still can’t read my student card.

To do this, I ordered an Arduino with an RFID reader/writer for 13.56 MHz (and one for 125 kHz as well, just in case). As soon as it arrives, I’ll make the next post. This should be in approximately a month; I didn’t want to pay for shipping, so it can take some time. Until then, I can still figure out what the printed numbers on my student card mean. I have two student cards in my name: one apparently doesn’t work anymore, and the other is the new one. It’s interesting to have both, to later compare the data found on each card and see what changed and what stayed the same.


 

It is interesting that this string also has exactly 10 characters. It’s quite easy to identify the meaning of this “code”. The first four digits represent the year the card was made, the ‘S’ means “student” and the other five represent a unique ID. As soon as I have all the components together, I should be able to read this card number. In hex and binary, it would be:

The front of the student card also has a bar code. After several attempts to read it, I finally found a webpage which could decode it. The bar code represents my social security number. (Obviously, I won’t post it here.) The bar code is encoded in Code 128. I honestly hope that this analog method isn’t used to verify the card owner and grant someone access to pay with this bar code, because it can easily be reproduced. And if the bar code uses data as obvious as the social security number, I also fear that the RFID identification uses a static value, such as that same number, to identify the owner, instead of encrypted data.

To log into the school computers, as well as the webpage and email service provided by the education ministry, the username is a combination of the student’s name and their social security number. In my country, the numbers are based on the birthday with three more randomly generated digits (from this year on, five more digits, but the usernames are still in the “old” format). Let’s say the student John Smith was born on 28 August 1991. The social security number and the username for the education system would be:

If you know the person’s birthday, you only need to look at the username, or even the provided email address, to recreate the whole social security number and fake this data. The email addresses the students receive are created from the username and the domain suffix.

If the data is that easy to recreate, it should be easy to create a fake card and pay from someone else’s bank account. But those are only assumptions until the pieces arrive. Until then…

Stay tuned, lyinch!

Dashpi #6 – Detected Sign

It has been a while since the last update and post. But the project is still alive, even with some time off between certain versions. At this stage, I’m checking for POIs (Pixels of Interest) with a simple threshold algorithm. I assume that every red pixel in the image belongs to a sign. This list of POIs is later sorted, and the four outermost are connected to form a box and highlight the sign. You can see the output in the screenshots.

If there are more pixels in the image that pass the threshold algorithm, they are included in the highlight and falsify the result. A new approach to sort out the uninteresting pixels needs to be established. I’ll work with different AOIs (Areas of Interest), which give me the ability to detect more than one sign at once.


The idea of AOIs can be transferred to other algorithms finding POIs, like the previously mentioned separation algorithm. But I’ll stick with the threshold algorithm and see how far I can get with that. Until this point, I only searched for red signs. I’m curious how the search for every possible sign colour influences the speed of the analysis. Just remember that the program should do this quite fast, in order to highlight signs in a live video stream while driving at a certain speed.

But until I can work with camera input, I need to improve my algorithms. Until then,

Stay tuned, lyinch!

Dashpi #5 – Finding an AOI

The file structure as well as my helper files have changed. I’m using a second branch for my updates and only merge big updates into the master branch.

|–\build (contains the compiled files as well as the log files)
|–clear (shell script deletes the log files)
|–\doc (additional documentation)
|–\input (data like images and videos determined as input)
|–\lib (additional written extensions and libraries)
|–ocvcomp (shell script to compile project)
|–\src (source files of the project)
|–\test (test classes and files written for further use or special test cases)

The compilation script deletes the log file and the clear script now takes the log filename as argument.

 

The sign detection should work as shown in this flow chart. I have two different approaches to find an “Area of Interest” (AOI) and two different approaches to finally detect the sign in this area.

[Flow chart of the sign detection pipeline]

The initial idea of deleting everything that isn’t red is called colour thresholding. I’ll stick with this idea, but change the algorithm. The dominant colour in the image is often the sign’s. To split the image into an AOI, instead of checking the whole image for a sign, I have two options.

  • Either I create a colour map of my initial image. The image is scanned and a pixelized version of it is created: it has a smaller resolution, and every pixel is the average colour of a 5×5 block of pixels from the original image. The colour map is scanned and the threshold algorithm searches for red parts. As soon as the scan is finished, the sign searching algorithm searches for signs on the original image, using the block data gained from the colour map.
  • Or I separate the dominant colour from the background. I take the average value of every column and compare each pixel with the average. If the result falls outside the normal distribution, it may be a foreground colour. Every such pixel is afterwards analysed in order to find a sign. To find red signs, I need to compare the found pixels with the threshold algorithm.

The threshold algorithm compares every pixel to a certain value. It either saves the important pixels, or changes their colour to white and the others to black, creating a black and white representation of the image that highlights the searched colour.

More information will follow soon… Don’t forget to check out the project on GitHub to see the source code and the implementation of the algorithms!

Stay tuned, lyinch!

 

Dashpi #4 – The first algorithm

I’m not using a private git server, but GitHub, to make the project open source. You can fork me here. Most of the posts are written a few weeks in advance; if you want to be up to date, have a look at the GitHub repository.

To clean my log, I created a new shell script. Maybe I can combine them all into one…

In order to fully detect signs in street traffic, I need to start from the very basics. I try to analyse different signs on monochrome backgrounds, to see how the different methods behave.

The first idea is to reduce the background by changing it to white (or another colour). I’m using 720p images (1280×720) in order to reduce calculation time. The quality should still be fine enough.

  • To detect a red sign, check each pixel of the image. If the pixel is not red, change it to white. With this method, I can isolate the traffic sign from the background. After this process, I still need some kind of sign detection.

The sample images are:

[Sample images: a single shield on a green background and on a blue-white background]

I’m using this loop to check whether the colour is red or not. Note that the image’s channels are not RGB but BGR. The colour values were actually guessed by trial and error in Photoshop; I still need to find the exact RGB values of a red tone. But you get the idea.

The final results are shown here. With this method, it’s quite easy to subtract a monochromatic background. But I need a new loop for every sign colour, which isn’t very efficient. Different lighting conditions and other red colours on the street may give the algorithm a hard time… A better approach is needed!


To improve the algorithms and, of course, the results, I’m reading papers and publications on sign detection. Hopefully, the next posts will cover some of those methods, where I try them out and maybe even modify and combine them. A lot of the publications assume that the reader has a solid base in mathematics and computer science, as well as knowledge of physics, visual computing and, of course, previous publications on sign detection. I try my best to understand each publication and implement its method, but until now, most of them aren’t really comprehensible and usable for me.

Stay tuned, lyinch!

Dashpi #3 – The setup is done

Instead of connecting directly to my Raspberry Pi with a keyboard, I’m using SSH from an Ubuntu computer. This gives me the possibility to transfer data faster from one place to another. I also have more coding resources on my computer which can be copied easily via SSH.

A new Ethernet hub connects my Raspberry Pi to my home network. But I haven’t set up a git server at home yet. As soon as I have one, I’ll synchronize my Raspberry Pi with it.

The compilation is done by a bash script which also measures the compilation time.

To ease debugging, I created a log file. The logging function uses two parameters: one is the message it should output, and the second is the message type, which is used as a prefix in the log. The application closes with an error message if the user has no permission to write to the file. This also prevents further errors related to the required sudo rights.

A second shell script connects my Raspberry Pi to my GoPro’s WiFi. It uses basic shell commands.

In order to work on the analysis algorithm, I’m using a pre-recorded video instead of my live stream. But before analysing the video file, I need to develop the algorithm on still images.


I hope that I can find enough resources on cross compiling, because the compilation time is increasing tremendously.

Stay tuned, lyinch!