The goal of this lab was to connect our Arduino, camera, and FPGA in order to detect treasures and distinguish between red and blue ones.
FPGA Group: Emma & Avisha
We first connected the VGA driver and the M9K RAM and wrote a test pattern to memory. In an always block, we set the write-enable bit, W_EN, high whenever the PIXEL_X and PIXEL_Y outputs from the VGA driver were within the screen bounds. We hard-coded the RGB332 pixel data to be red wherever PIXEL_X was equal to PIXEL_Y, and green everywhere else. When we uploaded this code to the FPGA, we saw our expected test pattern.
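The test-pattern logic above can be illustrated with a small software model. C++ stands in here for the hardware description on the FPGA, and the QCIF screen bounds (176×144) are an assumption for illustration:

```cpp
#include <cstdint>

// Software model of the test-pattern writer (the real logic lived in an
// FPGA always block). QCIF screen bounds are assumed.
const int SCREEN_W = 176;
const int SCREEN_H = 144;

// RGB332 packing: {R[2:0], G[2:0], B[1:0]}
const uint8_t RGB332_RED   = 0b11100000;  // 0xE0
const uint8_t RGB332_GREEN = 0b00011100;  // 0x1C

// Returns the pixel value written at (x, y), or -1 where W_EN would stay
// low (coordinates outside the screen bounds).
int testPatternPixel(int x, int y) {
    if (x < 0 || x >= SCREEN_W || y < 0 || y >= SCREEN_H) return -1;
    return (x == y) ? RGB332_RED : RGB332_GREEN;  // red diagonal, green elsewhere
}
```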
Downsampler
The next part of this lab depended on several clocks. We used a phase-locked loop to derive, from the 50 MHz clock generated by the FPGA, a 24 MHz clock, a 25 MHz clock, and a 50 MHz clock, all in phase with each other. The M9K RAM writes using the 50 MHz clock and reads using the 25 MHz clock. The VGA driver uses the 25 MHz clock to read the contents of memory onto the display screen. The camera takes the 24 MHz clock as its XCLK input. The camera additionally outputs three synchronization signals: PCLK, whose positive edge indicates that eight bits of pixel data are available; HREF, whose positive edge indicates the start of a horizontal line of pixels and whose negative edge indicates the end of one; and VSYNC, whose positive edge indicates the end of a frame.
We used the signals output by the camera to write a downsampler. The camera outputs sixteen bits of pixel data over two PCLK cycles, eight bits per cycle. This data is only valid while HREF is high, meaning the camera is outputting data from a horizontal line of pixels. The downsampler takes the three most significant bits of red, the three most significant bits of green, and the two most significant bits of blue from two valid cycles of data, and saves them into pixel_data_RGB332.
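The bit selection can be sketched as a software model (C++ here for testability; the FPGA logic was an always block). The byte order — {R[4:0], G[5:3]} in the first PCLK byte and {G[2:0], B[4:0]} in the second — is the usual RGB565 layout and an assumption here:

```cpp
#include <cstdint>

// Software model of the downsampler's bit selection. Assumes RGB565
// arrives as firstByte = {R[4:0], G[5:3]}, secondByte = {G[2:0], B[4:0]}.
uint8_t downsampleRGB565(uint8_t firstByte, uint8_t secondByte) {
    uint8_t r = (firstByte >> 5) & 0x07;   // top 3 bits of red
    uint8_t g = firstByte & 0x07;          // top 3 bits of green (G[5:3])
    uint8_t b = (secondByte >> 3) & 0x03;  // top 2 bits of blue (B[4:3])
    return (r << 5) | (g << 2) | b;        // pack into RGB332
}
```

For example, a pure-red RGB565 pixel (bytes 0xF8, 0x00) maps to RGB332 0xE0.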
We needed to save this downsampled pixel data into memory. To do this, we had to update the X and Y components of our write address so that the data was saved to the correct location, and set W_EN high when the data was ready to be written. The X address had to be incremented every two data cycles, once we had obtained all the pixel data for a given location. The Y address had to be incremented on every negative edge of HREF, indicating the end of a line, and reset on every positive edge of VSYNC, indicating the end of a frame. Every time the Y address was incremented or reset, the X address had to be reset to 0. We enabled writing by setting W_EN high every time we incremented our X address. This logic was implemented using the code below.
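The address bookkeeping just described can be summarized as a small software model (C++ for illustration; on the FPGA this was edge-triggered logic, and W_EN is only high for the single write cycle rather than held as a flag):

```cpp
// Software model of the write-address bookkeeping driven by the camera's
// PCLK, HREF, and VSYNC signals.
struct WriteAddress {
    int x = 0;
    int y = 0;
    bool wEn = false;

    // Called after the second of the two PCLK data bytes (one full pixel):
    // advance X and enable the write.
    void onPixelReady() { x++; wEn = true; }

    // Negative edge of HREF: end of a horizontal line.
    void onHrefFall() { y++; x = 0; }

    // Positive edge of VSYNC: end of a frame.
    void onVsyncRise() { y = 0; x = 0; }
};
```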
With this code, we were able to first view the color bar test, and then adjust our registers to view the live camera output on our display screen. An image of our color bar test is shown below, and our live output is shown in our video.
We then had to implement image processing. In our image processing module, we began by checking all pixels on the screen and determining their color. If the pixel we were checking was more red than blue and more red than green, we incremented our REDCOUNT variable; if it was more blue than red and more blue than green, we incremented our BLUECOUNT variable. At the end of each frame, as signaled by VSYNC, we checked which count was larger and output our result to the Arduino through GPIO pins, as described in the Arduino Team section below. This method consistently determined whether a treasure was red or blue; however, we also needed a way to determine whether a treasure was present at all. To accomplish this, we added a threshold to our comparison of REDCOUNT and BLUECOUNT: the larger count additionally had to be above 17,000, meaning the color took up more than half of the pixels on the screen, or else the result indicated no treasure. This is implemented with the following code, and the working system is shown in our video below.
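The end-of-frame decision can be sketched as a software model (C++ for illustration; the FPGA version evaluates this on the VSYNC edge). The return codes match the 2-bit GPIO encoding described in the Arduino section: 0 for no treasure, 1 for red, 2 for blue.

```cpp
// Software model of the end-of-frame treasure decision.
// The 17,000-pixel threshold is the value from our lab.
const int COLOR_THRESHOLD = 17000;

int classifyFrame(int redCount, int blueCount) {
    if (redCount > blueCount && redCount > COLOR_THRESHOLD) return 1;   // red treasure
    if (blueCount > redCount && blueCount > COLOR_THRESHOLD) return 2;  // blue treasure
    return 0;  // dominant color below threshold: no treasure
}
```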
Arduino Group: Katarina & Liam
For the Arduino, it took a long time to figure out which registers to set for the camera to function properly. Below is a screenshot of the registers we ended up setting.
First, we set a bit in COM7 (address 0x12) to reset all of the other registers, then delay to make sure the reset can complete. Next, we set more bits in COM7 to enable the color bar, select QCIF, and output in RGB format; setting this register to 0x0C sets the same bits but disables the color bar, which we use for the color test. We then set a bit in CLKRC to use the external clock, a bit in COM3 (address 0x0C) to enable scaling, bits in COM15 (address 0x40) to select the RGB565 format, and a bit in COM17 (address 0x42) to enable the color bar test (explicitly setting COM17 to 0x00 turns the color bar test off). Finally, we set bits in COM9 (address 0x14).
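The sequence above can be summarized as a register table. The addresses are the ones named in this writeup; the values marked "assumed" follow the usual OV7670 bit assignments and are not necessarily the exact bytes from our screenshot (the COM9 value is not recorded here, so it is omitted):

```cpp
#include <cstdint>

// Hedged reconstruction of the camera register sequence described above.
struct RegWrite { uint8_t addr; uint8_t value; };

const RegWrite CAMERA_INIT[] = {
    {0x12, 0x80},  // COM7: reset all registers (assumed: bit 7), then delay
    {0x12, 0x0E},  // COM7: QCIF + RGB + color bar (0x0C sets the same bits
                   //       with the color bar disabled, for the color test)
    {0x11, 0x40},  // CLKRC: use external clock (assumed: bit 6)
    {0x0C, 0x08},  // COM3: enable scaling (assumed: bit 3)
    {0x40, 0x10},  // COM15: RGB565 output (assumed: bits [5:4] = 01)
    {0x42, 0x08},  // COM17: color bar test on (0x00 turns it off)
};
```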
After this, we hooked the Arduino up to a circuit that connects its SCL and SDA lines to the camera. We ensured that the internal pull-ups were disabled in this version of the IDE. We also set up the camera and plugged it into the FPGA. We then wrote a small program that reads two digital inputs as high or low. These inputs come from the FPGA, which drives two GPIO pins high or low depending on which color it has detected.
A 00 indicates that neither blue nor red has been detected, a 01 indicates red, and a 10 indicates blue. Currently we have enough digital pins for this to work, but once we also need to detect shapes we will need three bits to encode the result. For that, we will use the 8:1 mux we already have on board to conserve pins.
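The Arduino-side decode of the two GPIO bits can be sketched as follows; the function name and the single-character return codes are our own for illustration:

```cpp
// Decode the two FPGA GPIO bits using the encoding described above.
// Returns 'R' (red), 'B' (blue), or 'N' (no treasure).
char decodeTreasure(int bit1, int bit0) {
    if (bit1 == 0 && bit0 == 1) return 'R';  // 01: red detected
    if (bit1 == 1 && bit0 == 0) return 'B';  // 10: blue detected
    return 'N';                              // 00 (or unused 11): nothing
}
```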
Note about video: the “camera detecting blue” clip is edited to slow down the serial monitor printout, because when we took the video we accidentally filmed it too quickly. The camera can detect blue, though!