Vision Logic Open File Report

Updated 10/25/09

Key Search Words: ROBOT, ROBOTICS, ROBOTIC VISION, ARTIFICIAL INTELLIGENCE, AI

 Here we are going to initiate a new type of project in which, in book-like fashion, I will be adding chapter write-ups as I progress through this very difficult subject. Robotic vision is at this point in time one of the most challenging aspects of modern robotics. Literature and research papers on simplified methods for home robots to navigate or gather information about their environments using vision are extremely scarce, except at a very basic level.

Non-technical people seem to think the problem is really quite simple: just connect a TV camera to the robot's processor and we're done! Unfortunately, this isn't going to work with today's digital processors. Even if you could digitize the image coming from a camera, the most powerful supercomputers can only interpret this stream of digital ones and zeros in the most simplified and basic way. No one really knows how to program robotic vision like the neural nets in a living brain, and progress in this highly specialized field has been excruciatingly slow. Even worse, there is almost no hope that a small home robot, which typically has a stripped-down microcontroller for its center of intelligence, can make any sense of complex visual data. Herein lies the challenge!

The direction I am heading on this problem will be to evolve the visual process, much as 3.8 billion years of evolution in the natural world have done with biological systems. In an escalating process, my thought is to start at the most basic level, such as a simple light-sensitive spot on a bacterium, then methodically escalate the visual complexity upward in small steps, carefully paralleling an evolving biological system. How far I will get remains to be seen; however, I feel this bio-mimetic approach will enable my home robots to use visual information at a far greater level, at an affordable cost in both time and money. Let's start this journey.

Project chapters are listed with the most recent at the bottom.

CHAPTER 1: The Vision Logic Base Robot

1/4/09 - Base Robot Overview
3/4/09 - Level 1 base finished
3/15/09 - All Levels Done for Base Robot
CHAPTER 2:  Single Pixel Vision
4/3/09 - One Eyed Monsters
4/16/09 - Euglena AI Demonstration
5/28/09 -  Cave Hiding Demonstration
CHAPTER 3:  Dual Pixel Vision
7/24/09 - The Rotifer Project: Aim at a light source, follow a light source, follow a black line, and now frame differencing - all using 10-bit greyscale vision and two adjustable phototransistor sensor "eyes".
8/29/09 - The Proto-Trilobite Project: Here we delve into a one-dimensional scanning vision system, with 9 pixels of 1024 grey shades on a motorized scanning platform. This emulates nature's first multi-element vision system.

9/21/09 - Proto-Trilobite Project 2: Now we move and talk. The robot drives toward both black and illuminated targets and docks. It also demonstrates its ability to measure the distance to the target with its crude 9-pixel vision.

10/25/09 - Counting Dark Objects: Here we introduce flat fielding and the counting of edges in a wide, low-resolution visual field.