The gift of sight is precious; that is why we tried to model an artificial retina with the properties of color detection, saccades, and pursuit tracking.
Structure of a Retina:
A retina lies at the back of the eye and consists of a layer of photoreceptor cells called rods and cones. The cones are responsible for detecting color in light, and there exist three kinds of cones that respond to long, medium, and short wavelengths, with typical peak sensitivities at 564, 534, and 430 nanometers respectively. When the correct stimulus is applied, distinct action potentials are sent through the retinal ganglion cells, which the brain interprets as perceived color.
RGB Light Representation:
In terms of colored light, almost any color can be formed by adding different amounts of red, green, and blue light. This technique is familiar to almost everybody and is used ubiquitously in televisions, graphics-editing software, and so on. We model our retina using simple red, green, and blue photodiodes, and we use this color-detection ability to implement a scheme that tracks specific colors through saccades and pursuit movements with the help of servomotors and photovoltaic light sensors.
When choosing a project topic, we were both interested in something that had to do with neural modeling. We asked Professor Land if he had any recommendations, and he directed us to the project page on his website (which can be found simply by Googling his name). We thought the Lateral Inhibition project was interesting, so we decided to try to model an artificial retina. We also created a control interface to the eye using an LCD touch screen; the driver for this graphical LCD was obtained from the Spring 2006 final project, the HDD clock. We also learned how to use servomotors by looking at the Spring 2006 Swing Bot project. Other than these, we did not use any other sources when designing our project, although we did find some sources that we tried basing additional features on, which ended up not working.
This project exists for the purpose of biological and neural modeling; it is not intended to be a toy (though it may amuse some people). There are many reasons for neural modeling. The primary one is to create something that exhibits the behavior of a particular neural system; in our case, we model how the visual system processes image movement. There are two basic structures for eyes in nature: one that uses a waveguide structure to detect light, and one that uses lenses. Typically, insects use the former and humans the latter. Since we found it easier to build, we created a quadra-sector retina, with each sector having its own set of photodetectors. In this way, we can decouple the sectors from direct light and achieve crude position sensing. With this physical structure, comparing opposite-ended detectors gives differential input values, which makes edge detection of objects possible.
There are no relevant patents or standards that involve color tracking artificial retinas related to our purposes.
There is no complicated math required beyond scaling and comparing ADC values, plus some geometric intuition for how light reflects off surfaces. We also use a form of proportional-derivative control. The eye processes movement by constantly analyzing derivatives of edges in time and space. In our program, we analyze each position sensor's derivative in time, as well as the difference between the left & right position sensors and between the top & bottom position sensors. In this sense, we decouple the yaw and pitch movements of the eye and move them independently.
By comparing the magnitudes of the ADC readouts of each pair of position photodetectors, we can determine the direction in which the retina needs to turn in order to center itself on a presented object. However, this proportional difference alone tends to make the movement overshoot. This is rectified by adding derivative terms to minimize the overshoot error. More on how this was implemented is described in the software section.
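As a rough illustration, the differential update described above can be sketched as follows. The gains kp and kd and the sensor readings are hypothetical values for illustration, not the project's tuned constants.

```c
#include <assert.h>

/* Sketch of the proportional-derivative position update described above.
   left and right are the current photodiode ADC readings; prev_diff is the
   left-right difference from the previous sample. kp and kd are
   illustrative gains, not the project's tuned values. */
int yaw_correction(int left, int right, int prev_diff, int kp, int kd) {
    int diff  = left - right;      /* proportional term: which side is brighter */
    int deriv = diff - prev_diff;  /* derivative term: damps the overshoot */
    return kp * diff + kd * deriv; /* positive means turn toward the left sensor */
}
```

The pitch axis would use the same computation on the top & bottom sensors, which is what lets the two axes move independently.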
Since our prototype eye had to cost under 50 dollars, we could not follow any of the artificial-eye projects we found on the web. We decided to use cheap LEDs to model the red, green, and blue cone receptors in the retina. Realizing that we could not detect color accurately with cheap LEDs, since they are not the most sensitive things in the world, we decided to quantize the representable color values for each channel. We had initially tried to use 4 levels of intensity for each RGB value to represent 4^3 = 64 different colors. However, since the resolution of our LED voltages was so low, we decided it was realistic to achieve 9 different colors: White, Red, Green, Blue, Dark Red, Dark Green, Teal, Yellow, and Pink.
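A minimal sketch of this nine-color quantization, assuming illustrative two-level thresholds and level-to-name mappings rather than the project's calibrated values:

```c
#include <assert.h>
#include <string.h>

/* Sketch of the coarse color quantization described above. Each channel is
   scored 0 (absent), 1 (dim), or 2 (bright) against thresholds LO and HI;
   the thresholds and the exact level combinations are illustrative
   assumptions, not the project's calibrated values. */
const char *classify_color(int r, int g, int b) {
    const int LO = 40, HI = 120;
    int rl = r > HI ? 2 : r > LO ? 1 : 0;
    int gl = g > HI ? 2 : g > LO ? 1 : 0;
    int bl = b > HI ? 2 : b > LO ? 1 : 0;
    if (rl == 2 && gl == 2 && bl == 2) return "White";
    if (rl == 2 && gl == 2)            return "Yellow";
    if (rl == 2 && bl == 2)            return "Pink";
    if (gl == 2 && bl == 2)            return "Teal";
    if (rl == 2)                       return "Red";
    if (gl == 2)                       return "Green";
    if (bl == 2)                       return "Blue";
    if (rl == 1)                       return "Dark Red";
    if (gl == 1)                       return "Dark Green";
    return "Unknown";
}
```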
The logical flow of our program is as follows. Upon reset, a title menu offers two options: Search Mode or Tracking Mode. Search Mode allows the user to move the retina freely by pressing or dragging a finger on the LCD touch screen; the coordinates on the touch screen map onto the ranges through which the retina can move. When a detectable color is in sight of the retina, the name of that color is displayed on the screen. In Tracking Mode, the user is presented with nine colors to choose from. After the user chooses a color, the retina moves in a square pattern looking for that color; once the color is recognized, it begins tracking it and will not move unless it sees that specific color.
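The touch-to-servo mapping in Search Mode amounts to a linear rescale of each touch coordinate into the retina's range of motion. Here is a minimal sketch; the 240-count touch axis and 1000-2000 servo range are illustrative assumptions, not the project's calibrated values.

```c
#include <assert.h>

/* Sketch of the touch-to-servo coordinate mapping described above: a touch
   coordinate is linearly rescaled into the range the retina can move
   through. The ranges used here are illustrative, not the project's. */
long map_range(long x, long in_min, long in_max, long out_min, long out_max) {
    return (x - in_min) * (out_max - out_min) / (in_max - in_min) + out_min;
}
```

The same helper would be applied once per axis, since yaw and pitch are driven by separate servos.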
The hardware construction, and the construction of the eye itself in particular, was by far the most time-consuming and irritating part of this project. Since we were building a color sensor that needed to be directed toward an object, we had to find a way to block out interference (anything emitting unwanted light, such as room light).
We wanted to show that we could model a retina (and build a decent color sensor) using the bare-bones LEDs that were in the lab cabinets. We spent several days performing rigorous proof-of-concept tests to determine whether this scheme would work.
The results of these tests showed that red, green, and blue LEDs could indeed form a crude color sensor. A red LED will produce a current if light with a wavelength less than or equal to the red wavelength (~630 nm) is shined on it, and the same goes for blue and green LEDs (~470 nm and ~525 nm). If an LED has a colored cap, the cap has the effect of band-passing the light that can enter it. Remarkably, we found that red LEDs are band-passed very well and respond only to red light. Green LEDs respond to green wavelengths and everything below (including blue light). Blue LEDs came with clear caps, so they were not band-passed, but since blue is at the short-wavelength end of the visible spectrum, the gap below blue is so large that blue LEDs detect only blue.
By shining a very bright source on a single LED of any color, the LED can produce up to a volt between its leads by generating a very small current; in this sense, LEDs act somewhat like photoactive current sources. Since at room light a single LED operating in photovoltaic mode produces only a very small current (on the order of microamperes), we needed to connect many LEDs of the same color in parallel to produce enough current to be detectable by the MCU's ADC. We used 16 red LEDs in parallel, 16 green LEDs in parallel, and 24 blue LEDs in parallel. We found that blue LEDs are much weaker at detecting blue light than the red and green LEDs are at detecting their colors, so we clustered more blue LEDs in parallel to increase their relative current output. We then needed to mount these parallel LED clusters on an appropriate surface to create our color sensor.
The current output is run through a large resistor (7.5 MΩ) so that it produces a voltage drop across the resistor, which is used as an input to the ADC. A capacitor was placed in parallel with this resistor to make the voltage across it more stable. The capacitance had to be very small so that it could charge and discharge rapidly in accordance with changes in the LED current output; we set it to 2.2 nanofarads, which yielded fairly stable results on the order of hundreds of millivolts. The voltages across these resistor-capacitor pairs, one per color, were amplified using noninverting amplifiers powered by a 5 V supply, and the amplifier outputs are fed into the ADC of the MCU. The governing noninverting-amplifier equation is Vout = Vin × (1 + Rf/R1).
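The numbers above can be sanity-checked with the standard noninverting gain relation Vout = Vin × (1 + Rf/R1) and the RC time constant of the sense network. The resistor ratios below reproduce the gains of 11 and 16 used in the project; the absolute resistor values are illustrative assumptions.

```c
#include <assert.h>
#include <math.h>

/* Numeric check of the noninverting amplifier gain and the sense network's
   RC time constant. Ratios of 10:1 and 15:1 give gains of 11 and 16; the
   individual resistor values are illustrative, not the project's. */
double noninv_gain(double rf, double r1) {
    return 1.0 + rf / r1;  /* standard noninverting gain */
}

double rc_time_constant(double r, double c) {
    return r * c;          /* seconds */
}
```

With the quoted R = 7.5 MΩ and C = 2.2 nF, rc_time_constant(7.5e6, 2.2e-9) comes out to about 16.5 ms, fast enough to follow changes in the LED current while still smoothing the reading.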
It turns out that the retina is most sensitive to green, since green wavelengths stimulate two cone types instead of only one. Not surprisingly, our green LEDs were the most sensitive as well, due in part to the fact that they also picked up blue wavelengths. For this reason, the green LEDs were amplified by a factor of 11, whereas the other colors were amplified by 16. The photodiodes were also amplified by a factor of 11, since they were quite sensitive.
We found an Easter-egg-shaped container in the trash and decided to use it as our retina shell. We painted it white on the outside to reflect outside light and black on the inside to absorb stray light internally. The color sensor was built on a solder board cut into a circular shape to fit into the shell of the eye. This solder board was segmented into 4 sections representing up, down, left, and right, and the sections were partitioned with aluminum foil to help isolate the light in each section. Clusters of red, green, and blue LEDs were fit onto the board in an evenly distributed pattern; these are used for RGB detection and calculation. Four photodetectors were placed on the top, bottom, left, and right sides of the solder-board segments for position detection. The segments were hot-glued into place inside the retina shell, and a four-sided reflective divider (aluminum foil) was used to separate them.
We used servomotors to implement precise movements and positioning. Because we wanted movement with 2 degrees of freedom (yaw and pitch rotation), we used 2 standard Futaba servomotors. To better simulate the human eye, we initially tried using rubber cords together with the servos, similar to the tendons of the eyeball. However, we encountered many mechanical problems: the rubber cords would not position or move the eye the way we wanted. After several futile attempts, we settled on a much simpler mechanical scheme: one servo mounted directly on the other, with the eye attached to the outer servo, giving 2 degrees of freedom. This scheme gave us precise movements, and the servo's torque was enough to overcome the weight of the other servo plus the weight of the eye. The retina shell was hot-glued onto these servomotors, and the servomotors were hot-glued onto a wooden frame for height and stability.
To prevent the spiking outputs of the servomotors from damaging or resetting the MCU, we built optoisolator circuits to electrically isolate the MCU from the motors. A separate power supply was used for the motors.
While detecting color with the LEDs, we found that they give off more current when a concentrated light source is directed outward from behind the retina: the intensity of the light reflected off the object is greater, making the sensors more responsive to presented objects. We therefore mounted large, bright LEDs rated up to 12 V, which helped us get better outputs from the sensing LEDs and the photodiodes. These white LEDs were highly directional, so they did not affect the measured readings unless an object was directly in front of the retina shell.
The State Machine:
Since we wanted our eye to make motion-related decisions after every new set of ADC values, and the state machine did not have to cycle quickly for our timing scheme, it made sense to group the tasks of sampling, tracking, and searching into one big state machine. This state machine is controlled by a variable named MainState, which takes the state values RedScan, GreenScan, BlueScan, Photoscanbottom, Photoscanleft, Photoscanright, Photoscantop, and MoveState.
The ADC condition was initialized by setting the following:
ADMUX = 0b01100000; // REFS1:0 = 01 (AVCC reference), ADLAR = 1 (left-adjusted result), channel 0
ADCSR = 0b11000111; // ADEN = 1 (enable ADC), ADSC = 1 (start conversion), prescaler = 128
Here, we want to point out that ADMUX.7 and ADMUX.6 were set to 0 and 1 respectively, instead of the values 0 and 0 used in previous labs. Because the PCB connects AVCC to the AREF input of the Atmega32 through an external capacitor, this ADMUX setting lets us use AVCC (5 V) as the voltage reference for analog-to-digital conversion.
To implement our sensors (LEDs and photodiodes), we needed 7 different ADC inputs to poll: the red, green, and blue LED clusters, and the left, right, top, and bottom photodiodes. Since the MCU can only read from one ADC input at a time, we designed a state machine that quickly cycles through the ADC inputs in real time. The 7 ADC values are converted in the correspondingly named states; motor movement is handled inside MoveState depending on whether the flag ControlMove or TrackMove is set. If ControlMove is set to 1, control mode is entered and the user can move the eye with his finger on the touch screen. If TrackMove is set to 1, tracking mode is entered and a color is searched for.
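A minimal sketch of how MainState can cycle through the sampling states and MoveState. The state names follow the text; the cycling helper itself is an illustrative assumption about the implementation (the real code also branches on the ControlMove and TrackMove flags inside MoveState).

```c
#include <assert.h>

/* Sketch of the main sampling state machine described above. The seven
   sampling states each convert one ADC channel; MoveState then acts on the
   fresh readings before the cycle repeats. */
enum MainStateT { RedScan, GreenScan, BlueScan, Photoscanbottom,
                  Photoscanleft, Photoscanright, Photoscantop, MoveState };

enum MainStateT next_state(enum MainStateT s) {
    /* advance through the seven sampling states, then MoveState, then wrap */
    return (s == MoveState) ? RedScan : (enum MainStateT)(s + 1);
}
```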
|Part|Quantity|Cost|Source|
|---|---|---|---|
|Atmega32|1|$0|Sampled from AVNET|
|Custom protoboard|1|$6.00| |
|4N35 optoisolator|2|$0|Free from lab|
|Servo motor|1|$0|Sampled from Parallax|
|5 mm blue LEDs|24|$1.00|eBay|
|5 mm green LEDs|16|$0|Free from lab|
|5 mm red LEDs|16|$0|Free from lab|
|10 mm white LEDs|9|$4.05|Purchased from eBay|
|Photodiodes|4|$0|Sampled from Vishay|
|Wooden stand|1|$0|Free (junk)|
|LCD touch screen|1| |Previously owned (purchased Summer 2006)|
|Power supply for motor|1|$5.00| |
|Power supply for op-amps and white LEDs|1|$5.00| |
|Solder board (large)|1|$2.50| |
|Eyeball shell (Easter egg)|1|$0|Free (junk)|
|LM358 op-amps|7|$0|Free from lab|
|TOTAL COST| |$40.05| |
For more detail: Model retina: color tracker Using Atmega32