AI Fun

Joe's blog about AI things!

Category: Gaze Location

Template Matching and the Power of Averaging.

So I have decided that the OpenCV function cvMatchTemplate() is just far too slow. Everything is too jerky, and just has no awesomeness. I decided to remove it from the program and work on the general jerkiness of everything instead. I figured that if I average everything over the last 10 or so values, the estimates will be smoothed out.

When initialising the centre eye position, I considered 20 estimates (more would be better, but would also take longer), took the standard deviation, and then threw out everything that was more than 2 standard deviations away from the mean. I then took the mean of the remaining estimates as the overall estimate.
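The post doesn't include code, but the outlier-rejection step above is simple enough to sketch in a few lines of Python (the function name and sample data are mine, not from the original program):

```python
from statistics import mean, stdev

def robust_mean(estimates, n_sigma=2.0):
    """Average a list of 1-D position estimates, discarding any
    that lie more than n_sigma standard deviations from the mean."""
    m = mean(estimates)
    s = stdev(estimates)
    if s == 0:
        return m
    kept = [e for e in estimates if abs(e - m) <= n_sigma * s]
    return mean(kept)

# e.g. 20 noisy x-coordinates with one wild outlier (160)
samples = [100, 101, 99, 100, 102, 98, 100, 101, 99, 100,
           100, 102, 98, 101, 99, 100, 100, 101, 99, 160]
centre_x = robust_mean(samples)  # the outlier gets thrown out
```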

For the ongoing eye estimates, I considered the last 10 estimates. I would consider more, but then old eye positions start influencing new ones. I didn't use standard deviations here, because if the eye does move, the new position might legitimately fall outside a standard deviation. I just used a simple mean. The same goes for the gaze location.
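The sliding-window mean described above could look something like this in Python (a minimal sketch; the class name and example values are mine):

```python
from collections import deque

class RollingMean:
    """Smooth a stream of eye-position estimates by averaging
    the last `size` values (a plain sliding-window mean)."""
    def __init__(self, size=10):
        self.window = deque(maxlen=size)  # old values fall off automatically

    def update(self, value):
        self.window.append(value)
        return sum(self.window) / len(self.window)

smoother = RollingMean(size=10)
# feed in noisy x-coordinates; each output is the smoothed estimate
smoothed = [smoother.update(x) for x in [10, 12, 11, 13, 50, 12, 11]]
```

Capping the deque at 10 entries is what keeps old positions from dragging on new ones, at the cost of the lag mentioned below.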

So here is a video of it at work. Once again there is no template matching, so we can't really look at where the gaze estimate is, but more at how smoothly it moves. The estimate lags, but never mind!

I also looked a bit at the calibration process. Up until now I had just been guessing the “eye radius”… this is what I call the distance the eye moves to look from one side of the screen to the other. But guessing is never acceptable. Ever. So I devised a simple calibration process. First the person looks to the centre, then the left, then the right. Throughout all this the computer finds the average distance moved (throwing out all the points more than 2 standard deviations away, etc.). This gives us a good guess at the eye radius.
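In code, the calibration might look roughly like this: the centre fixation gives the initial eye position, and the left/right fixations give the radius. This is a sketch of the idea, not the actual program, and all the names are mine:

```python
from statistics import mean, stdev

def filtered_mean(values, n_sigma=2.0):
    """Mean after discarding points more than n_sigma std devs out."""
    m, s = mean(values), stdev(values)
    kept = [v for v in values if abs(v - m) <= n_sigma * s] if s else values
    return mean(kept)

def calibrate_eye_radius(left_xs, right_xs):
    """Estimate the 'eye radius': the horizontal distance the pupil
    travels to look from one side of the screen to the other.
    Each argument is a list of pupil x-coordinates sampled while the
    user fixates the left and right screen edges respectively."""
    left = filtered_mean(left_xs)
    right = filtered_mean(right_xs)
    return abs(right - left)
```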

Quick Update

I have decided to use the OpenCV template matching function after all, just in Square Difference mode. It is not amazingly efficient, but it's not bad.
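For reference, squared-difference matching just slides the template over the image and scores each position by the sum of squared pixel differences, keeping the smallest. Here is a brute-force Python sketch of that scoring (OpenCV's matchTemplate with the squared-difference mode computes the same score map, only much faster; the images here are plain lists of lists standing in for greyscale pixels):

```python
def match_template_sqdiff(image, template):
    """Slide `template` over `image` and return the top-left
    (row, col) where the sum of squared differences is smallest."""
    ih, iw = len(image), len(image[0])
    th, tw = len(template), len(template[0])
    best_pos, best_score = None, float("inf")
    for r in range(ih - th + 1):
        for c in range(iw - tw + 1):
            # sum of squared pixel differences at this offset
            score = sum(
                (image[r + dr][c + dc] - template[dr][dc]) ** 2
                for dr in range(th) for dc in range(tw)
            )
            if score < best_score:
                best_pos, best_score = (r, c), score
    return best_pos
```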

It works quite well. I have made the box around the eyes stick in place on the face, so now I really just need to make everything more efficient and accurate. Then I can do a bit of maths and work out some estimates for where the person is looking. Hopefully.

When it is a bit better I will post a video, but for now here are some pictures of the template matching working:


I Track Eye Tracking.

As the title clearly states, I have used the powers of OpenCV to make some eye tracking happen in real time. I then used it to make a VERY rough estimate of the gaze location.

So I open up a stream from the webcam, and then draw around the general eye area. This box then remains fixed in space and does not move. The eyes are then located using my method. When the person is looking at the centre of the screen, a button is pressed, and this sets the initial eye position. From then on I just calculate the approximate eye displacement, and use the proportion displaced to work out the proportion of the screen the gaze has travelled. Please note that this is the most inaccurate thing ever, and will be fixed as soon as I have the time and motivation. Ok, I lied, it's not the very biggest problem…
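The proportional mapping described above might be sketched like this in Python (names and the clamping behaviour are my own guesses at the details, using the "eye radius" from the calibration post):

```python
def gaze_x(eye_x, initial_eye_x, eye_radius, screen_width):
    """Map horizontal pupil displacement to a screen x-coordinate.
    initial_eye_x is the pupil position captured while looking at
    screen centre; eye_radius is the pupil travel for a full
    left-to-right sweep of the screen (from calibration)."""
    # proportion of the full left-to-right sweep the pupil has moved
    proportion = (eye_x - initial_eye_x) / eye_radius
    # apply that proportion to the screen, relative to screen centre
    x = screen_width / 2 + proportion * screen_width
    # clamp so twitches can't send the estimate off-screen
    return max(0, min(screen_width, x))
```

The vertical coordinate would work identically with a second radius.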

The fact that the box around the eye area does not move is the BIGGEST problem! This means that if your head moves at all, the computer won't know whether it was your head or your eye. And it is very easy for your head to twitch by the radius of your eye. I will fix this using the power of template matching. Maybe. I still need to experiment. My plan is to take the initial box around the eye area, use it as a template, and make it stay fixed on the face, rather than on the webcam image. If that makes sense.

Once the eye box is fixed on the face, we can use that to tell us how the face moves, and the eye positions to tell us how the eyes move, and then all we need is some tasty maths!
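The "tasty maths" at its simplest is just a subtraction: take the head's displacement out of the pupil's displacement, and what's left is the eye's own motion. A minimal sketch, with all names mine:

```python
def relative_eye_motion(eye_pos, face_pos, eye_pos0, face_pos0):
    """Separate eye movement from head movement. The face box
    (tracked by template matching) tells us how the head moved;
    subtracting its displacement from the pupil's displacement
    leaves only the eye's own motion. Positions are (x, y) tuples;
    *_pos0 are the positions captured at initialisation."""
    eye_dx = eye_pos[0] - eye_pos0[0]
    eye_dy = eye_pos[1] - eye_pos0[1]
    face_dx = face_pos[0] - face_pos0[0]
    face_dy = face_pos[1] - face_pos0[1]
    return (eye_dx - face_dx, eye_dy - face_dy)
```

So if the head twitches 5 pixels right and the pupil moves 8, only 3 pixels count as actual eye movement.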

Here is a video of my work so far…

 

http://www.youtube.com/watch?v=5v9S9zfnE8k&feature=plcp

There are 3 different sensitivity levels in this.