Final Design

Introduction:
The final competition was a culmination of not only the concepts and labs of ECE 3400: Intelligent Physical Systems, but also the material covered in many foundational electrical and computer engineering classes. We explored and practiced circuitry, digital logic, programming microcontrollers, sensors, signal processing, image processing, mechanical design, and teamwork on a daily basis and were able to enhance and refine our skills in these topics. By the day of the final competition, our robot should have been able to:

  1. Begin exploring the maze upon hearing a 660 Hz start tone.
  2. Follow lines and traverse the maze autonomously using depth first search.
  3. Detect walls and transmit the maze layout to the base station GUI.
  4. Detect treasure shapes and colors using the camera and FPGA.
  5. Detect and avoid other robots using their IR hats.

While we were unable to demonstrate all of these capabilities on the day of the final competition, we can confidently say that we learned how to implement each of them over the course of the semester. We left out the camera feature in order to improve our chances of winning on competition day. Our final robot was able to perceive its environment using several kinds of sensors, reason about the information it gathered by processing data with fast Fourier transforms and updating the appropriate data structures, and act according to the logic and algorithms we had set up in our code.

Pictures of our robot:
Figure 1: Final Robot Design
Figure 1 shows our final robot design with everything mounted, including the camera. The IR hat sits under the top level and the camera and FPGA sit on top.

Figure 2: Robot at Competition
Figure 2 shows the robot design at competition. The camera has been removed, and the IR hat placed on the top level.

Final Schematics
Figure 3: Arduino Schematic
Figure 4: Mux Schematic

Robot Cost:
The breakdown of our robot cost is as follows:

(Cost breakdown table)

Our final robot was $14 under budget.

Design:

Start Tone:
To begin at a start tone, we used the microphone circuit designed in lab 2. In our original robot integration code, we ran a 256-point FFT to detect peaks in the frequency data at 660Hz, indicating a start tone. For our final code, however, a 256-point FFT used far too much memory. We therefore reduced the FFT from 256 points to 64 by changing the FFT_N constant from 256 to 64 and the sampling for-loop limit from 512 to 128. We then recalculated which bin would contain the 660Hz peak using the calculation described at the beginning of lab 2, and found that a peak above 120 in bin 3 indicated that the robot should start.
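
A minimal sketch of this check, using the Open Music Labs FFT library from lab 2, is shown below. The analog pin and the use of analogRead() are simplifications (our actual code sampled the ADC directly); the bin and threshold come from the values above.

```cpp
// Minimal sketch of the 64-point start-tone check using the Open Music Labs
// FFT library. Pin A0 and the analogRead() sampling are simplifications.
#define LOG_OUT 1        // enable the log-magnitude output (fft_log_out)
#define FFT_N 64         // reduced from 256 to save memory
#include <FFT.h>

bool detectStartTone() {
  for (int i = 0; i < 128; i += 2) {            // 64 samples -> 128 entries (real, imaginary)
    fft_input[i]     = analogRead(A0) - 512;    // center the 10-bit ADC reading
    fft_input[i + 1] = 0;                       // imaginary part is zero
  }
  fft_window();       // window the samples
  fft_reorder();      // reorder for the in-place FFT
  fft_run();          // run the 64-point FFT
  fft_mag_log();      // fill fft_log_out[0..31] with log magnitudes
  return fft_log_out[3] > 120;                  // 660Hz falls in bin 3
}
```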

Movement:
In order to move, we had one servo connected to each wheel of the robot. These servos controlled forward motion, turning left or right, and turning around. To move forward, both servos were sent the same signal to turn the wheels clockwise. If the robot needed to turn, one servo turned clockwise and the other counterclockwise, depending on whether the turn was to the left or the right.
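
A rough sketch of these motion primitives is shown below, assuming the standard Arduino Servo library and continuous-rotation servos; the pin numbers and write() values are illustrative and depend on how the servos are mounted and calibrated.

```cpp
#include <Servo.h>

Servo leftServo, rightServo;     // one continuous-rotation servo per wheel

const int STOP_VAL = 90;         // continuous-rotation servos stop near 90
const int FWD_VAL  = 95;         // offsets from 90 set speed; the direction each
const int REV_VAL  = 85;         // value produces depends on servo mounting

void setup() {
  leftServo.attach(5);           // pin numbers are illustrative
  rightServo.attach(6);
}

void driveForward() {            // same command to both wheels: go straight
  leftServo.write(FWD_VAL);
  rightServo.write(FWD_VAL);
}

void turnRightInPlace() {        // wheels driven in opposite directions: rotate
  leftServo.write(FWD_VAL);
  rightServo.write(REV_VAL);
}

void stopRobot() {
  leftServo.write(STOP_VAL);
  rightServo.write(STOP_VAL);
}

void loop() {}                   // motion primitives are called from the maze logic
```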

Line Following:
We had three QTR sensors positioned at the front of our robot. To go straight, we used the sensor values to choose one of three options in order to stay on a white line (see the sketch at the end of this section):

  1. If the middle line sensor indicated it was on white, while the left and right sensors indicated they were on black, we set both servos to the same speed and went straight.
  2. If the left sensor saw white while the right sensor saw black, we set the right servo to be slightly faster than the left servo, thus correcting left.
  3. If the right sensor saw white while the left sensor saw black, we set the left servo to be slightly faster than the right servo, thus correcting right.

In order to turn left or right, we ultimately chose to implement a mix of a timing-based and a sensor-based turn. We began by moving slightly forward and turning briefly in the chosen direction in order to shift the robot off its original line. The robot would then continue to turn until its middle sensor detected white, indicating it had reached the next line. Achieving a smooth and reliable turn took a great deal of trial and error.
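
The sketch below illustrates both the three-way correction and the hybrid turn; the QTR pins, white threshold, servo values, and delay lengths are illustrative rather than our exact calibrated numbers, and the servos are assumed to be attached as in the movement sketch above.

```cpp
#include <Servo.h>

Servo leftServo, rightServo;              // drive servos (attached in setup as above)
const int LEFT_QTR = A1, MID_QTR = A2, RIGHT_QTR = A3;  // line-sensor pins (illustrative)
const int WHITE_THRESHOLD = 500;          // QTR sensors read low on white (value illustrative)

bool seesWhite(int pin) { return analogRead(pin) < WHITE_THRESHOLD; }

// Three-way correction used while driving between intersections.
void followLine() {
  bool l = seesWhite(LEFT_QTR);
  bool m = seesWhite(MID_QTR);
  bool r = seesWhite(RIGHT_QTR);

  if (m && !l && !r) {                    // 1: centered on the line, go straight
    leftServo.write(95);  rightServo.write(95);
  } else if (l && !r) {                   // 2: drifted right, correct left
    leftServo.write(93);  rightServo.write(97);   // right wheel slightly faster
  } else if (r && !l) {                   // 3: drifted left, correct right
    leftServo.write(97);  rightServo.write(93);   // left wheel slightly faster
  }
}

// Hybrid timed + sensor-based left turn at an intersection.
void turnLeft() {
  leftServo.write(95);  rightServo.write(95);     // nudge forward off the intersection
  delay(200);
  leftServo.write(85);  rightServo.write(95);     // start rotating left
  delay(300);                                     // timed part: rotate off the original line
  while (!seesWhite(MID_QTR)) { }                 // sensor part: stop on the next line
  leftServo.write(95);  rightServo.write(95);     // resume driving forward
}
```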

Detecting Walls:
For our final design, our wall sensors were connected to a mux. We read the mux channels by setting the appropriate select bits high whenever we were at an intersection, as indicated by all three line sensors reading white. We implemented software debouncing in order to get more consistent wall sensor readings. Based on which walls were present, we updated a case variable used by our DFS algorithm.
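
A sketch of reading one wall sensor through the mux with simple majority-vote debouncing is shown below; the select/output pins, channel assignments, distance threshold, and bit layout of the case variable are illustrative assumptions.

```cpp
const int MUX_S0 = 7, MUX_S1 = 8;          // mux select lines (pins illustrative)
const int MUX_OUT = A4;                    // mux common output
const int WALL_THRESHOLD = 300;            // IR distance reading that means "wall"

void setup() {
  pinMode(MUX_S0, OUTPUT);
  pinMode(MUX_S1, OUTPUT);
}

// Read one wall sensor through the mux, debounced with a majority vote.
bool readWall(int channel) {
  digitalWrite(MUX_S0, channel & 1);       // set the select bits for this channel
  digitalWrite(MUX_S1, (channel >> 1) & 1);
  int votes = 0;
  for (int i = 0; i < 5; i++) {            // sample 5 times; majority wins
    if (analogRead(MUX_OUT) > WALL_THRESHOLD) votes++;
    delay(2);
  }
  return votes >= 3;
}

// Pack the detected walls into the case variable used by the maze logic.
byte wallCase() {
  byte c = 0;
  if (readWall(0)) c |= 0b001;             // front wall
  if (readWall(1)) c |= 0b010;             // left wall
  if (readWall(2)) c |= 0b100;             // right wall
  return c;
}

void loop() {}
```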

Maze Traversal:
Our final competition algorithm was very similar to the algorithm described in lab 3. We stored all visited nodes in an array and favored visiting unvisited nodes. If several nodes surrounding the robot were unvisited, we favored going straight first, then left, then right, provided that moving in that direction would not cause us to crash into a wall, as determined by the case variable generated by our wall detection. If all nodes around the robot had been visited, we randomly chose a direction to move, given that it was allowed by the surrounding walls. This randomness reduced our risk of getting stuck in an infinite loop in any one part of the maze. We kept track of our current and next node using a stack: the current node was popped from the stack every time we reached an intersection, and we used it to calculate the coordinates of the possible forward, back, left, and right nodes. We then pushed the node selected by the algorithm above onto the stack to visit next.
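
The sketch below illustrates the direction preference at each intersection; the grid size, wall-case bit layout, and orientation bookkeeping are illustrative, and the stack push/pop described above is omitted for brevity.

```cpp
#define ROWS 9
#define COLS 9

enum Dir { NORTH, EAST, SOUTH, WEST };       // absolute heading of the robot
bool visited[ROWS][COLS];                    // nodes we have already explored
int curX, curY;                              // current node, popped from the stack
Dir heading;

const int DX[] = {0, 1, 0, -1};              // x/y offsets for N, E, S, W
const int DY[] = {-1, 0, 1, 0};

Dir absoluteDir(int rel) {                   // rel: 0 = straight, 1 = left, 2 = right
  if (rel == 0) return heading;
  if (rel == 1) return (Dir)((heading + 3) % 4);
  return (Dir)((heading + 1) % 4);
}

bool blocked(byte wallCase, int rel) {       // wall-case bits: front, left, right
  return wallCase & (1 << rel);
}

bool inBounds(int x, int y) { return x >= 0 && x < COLS && y >= 0 && y < ROWS; }

// Choose the next relative move: prefer an unvisited, unblocked neighbor
// (straight, then left, then right); otherwise pick a random legal direction.
int chooseMove(byte wallCase) {
  for (int rel = 0; rel < 3; rel++) {
    Dir d = absoluteDir(rel);
    int nx = curX + DX[d], ny = curY + DY[d];
    if (!blocked(wallCase, rel) && inBounds(nx, ny) && !visited[ny][nx]) return rel;
  }
  if ((wallCase & 0b111) == 0b111) return 3; // dead end: turn around
  while (true) {                             // everything visited: random allowed move
    int rel = random(3);
    if (!blocked(wallCase, rel)) return rel;
  }
}
```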

Camera:
Our camera implementation was nearly identical to the implementation described in lab 4 and milestone 4. We mounted the camera and FPGA on the robot and rewired them to the onboard Arduino. We programmed the FPGA's non-volatile configuration memory so that it would automatically reload its configuration each time it was powered on, rather than needing to be reprogrammed by hand. We then slightly tuned our threshold values for image processing in order to accurately detect shape and color, given the height of the mounted camera and the average distance of our robot from walls at intersections.

Transmitting Maze Information:
We transmitted maze information for each square we visited in order to update our GUI appropriately. We calculated our coordinates based on the current node popped from the stack, as described in the maze traversal section above, and transmitted wall and treasure data using the information from our wall and treasure detection. In order to generate a correct GUI, the robot needed to keep track of its orientation throughout the maze, so that we could mark the north, east, south, and west walls of each square as true or false based on the case variable generated by our wall detection. For example, if the robot was facing towards the top of the maze (using the GUI as reference) and detected a front wall, we needed to place a north wall for that square on the GUI. If the robot was instead facing left and detected a front wall, we needed to place a west wall for that square on the GUI. We therefore implemented an orientation function to reorient our robot after each turn by iterating through the enum representing the orientation information.
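
A sketch of how the orientation enum can map a detected wall onto an absolute (north/east/south/west) wall for the GUI is shown below; the names are illustrative of the approach rather than our exact code.

```cpp
enum Dir { NORTH, EAST, SOUTH, WEST };     // absolute directions, as drawn on the GUI
Dir heading = NORTH;                       // facing the top of the maze at the start

// A front wall lies in whatever direction we are currently facing; left and
// right walls are one step around the enum in either direction.
Dir frontWallDir() { return heading; }
Dir leftWallDir()  { return (Dir)((heading + 3) % 4); }
Dir rightWallDir() { return (Dir)((heading + 1) % 4); }

// Reorient after each turn by stepping through the enum.
void turnedRight() { heading = (Dir)((heading + 1) % 4); }
void turnedLeft()  { heading = (Dir)((heading + 3) % 4); }
```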

Robot Avoidance:
To successfully avoid robots, we used the phototransistor circuit we designed in lab 2. This circuit was placed on the lower tier of our robot, so we had to angle the phototransistor upwards in order to detect IR at the appropriate distance. Our original robot integration code implemented a 256-point fast Fourier transform (FFT) to detect peaks in the frequency data at 6kHz, indicating the presence of an IR hat and thus another robot. For the final code, however, we reduced this to a 64-point FFT as described above. We recalculated the bin for IR using the calculation modeled in our lab 2 description, and found that a peak above 170 in bin 11 indicated that we should avoid an oncoming robot. For the final competition, we decided to stop for 10 seconds whenever we detected a robot, which worked out well.
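
The IR check reuses the same 64-point FFT pipeline as the start-tone sketch above; sampleIRAndRunFFT() is a hypothetical helper that samples the phototransistor instead of the microphone, and stopRobot() is the halt helper from the movement sketch.

```cpp
// 6kHz IR-hat check; reuses the 64-point FFT pipeline from the start-tone sketch,
// but samples the phototransistor pin instead of the microphone.
bool robotDetected() {
  sampleIRAndRunFFT();              // hypothetical helper: fills fft_log_out[] as above
  return fft_log_out[11] > 170;     // 6kHz falls in bin 11
}

void avoidRobots() {
  if (robotDetected()) {
    stopRobot();                    // halt both servos (movement sketch above)
    delay(10000);                   // wait 10 seconds before resuming
  }
}
```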

Competition Day:
On the day of the competition, we decided not to bring all of these working components together, in order to maximize the number of points we could earn. Because detecting an incorrect treasure could penalize our robot, we did not want to risk running image processing, even though we had demonstrated that the camera and FPGA could detect colors and shapes a good majority of the time. In this situation, the risk of losing points for false positives outweighed the potential benefit of detecting treasures. We also chose to leave out the 660 Hz start-tone feature because we did not want to risk the robot starting early by picking up other sounds; we decided that taking the 5 second penalty for starting with the push button was the safer option in our case. Finally, we knew that our GUI would sometimes show walls that did not exist as the robot traversed the maze: the GUI would update with the correct maze shape, but occasionally add a spurious wall. Due to time constraints and the infrequency of this issue, we were not able to fully address this bug.

After a long semester of learning, developing, and debugging, we started competition day excited to see our hard work pay off. In the first round, our robot kept getting stuck looping through our corner of the maze because of the way we implemented depth first search, and we only explored and mapped 6 squares. The second round was much more successful: we explored 15 squares and successfully avoided another robot. Even though we did not advance to the final round, we had a great time showcasing our efforts to the engineering community at Cornell. We’d like to thank Kirsten and the TAs for organizing this class and competition, and for guiding us as we built our brainchild, LEAK!

Future improvements:
One of the main things that we would focus on if we had more time would be implementing depth first search with backtracking, so that we could avoid getting stuck in open mazes with fewer walls. We would also have liked to improve treasure detection so that it performed more consistently across different lighting conditions and wall distances.

it's a leek