CMOS Video Cameras and EECS 452
K.Metzger, February 24, 2009
Figure 1: C3088/OV6620 CMOS video camera mounted on Spartan-3 Starter
Board via connector B1.
1 Introduction
Several past EECS 452 projects have made use of video cameras. A significant
amount of project time was consumed researching possibilities, choosing a
camera, physically interfacing it to either a DSK or FPGA, and then programming
and/or designing support logic. Only at that point was the team ready to get on
with its project in the time remaining in the semester. Usually not much. There
are a number of significant steps involved in going from saying "I'm going to
locate tennis balls" to actually getting an image on which to start trying.
The primary purpose of this note is to gather in one place overview information
about video options that might be useful to an EECS 452 project. The
information in this note can be augmented by studying the relevant data sheets,
searching out information on the web and experimenting with actual hardware.
I have learning-exercise-level OV6620 VHDL support that can be had for the
asking. It was written mostly to help me learn what is involved in working with
these small cameras.
A useful source of information for vision oriented student projects can be
found at http://www.cs.cmu.edu/~cmucam/home.html.
I have had at least three groups come by my office this semester to ask about
cameras and their use. I’ve given out copies of a CD holding materials that I’ve
collected and the VHDL for my camera/monitor experimental set up which I also
demonstrated. Unfortunately there isn’t any real way to open-the-hood and see
what’s going on inside. It seemed to me that a more coherent presentation would
be useful, hence this document. The writing is a bit on the rough side and the
content could be more complete. It was written/assembled in a two-day period at
the start of Spring Break. If it proves useful and desirable, more effort will be put into
it for possible use by future teams. For now, it contains much more information
on the topic than was available for any previous EECS 452 video oriented project
team at the start of their project. By a considerable amount.
2 A short walk through
Scene—Camera—Interface—Frame Buffer—DSP—Frame Buffer–Interface—Display.
1. Scene content and lighting (sun, incandescent, fluorescent, etc.) will have a
major effect on an image's color quality. What the camera sees is generally
not what the user sees, color-wise. This is because of the adaptability of
the human eye/brain. We see what we expect to see. In the old days, pho-
tographers used films matched to the color temperature of the expected
lighting. There existed separate films for daylight and for incandes-
cent lighting. Modern digital cameras monitor the light characteristics
and adjust their color sensitivity accordingly. Generally, but not always,
this works well.
2. A camera has a lens that focuses an image onto a CMOS imaging surface.
The surface contains photosensitive diodes. These accumulate charge depending
on the light falling on them. A picture is started by clearing the stored
charge. The diodes are then allowed to accumulate charge. These charges
can be read and transferred into other, non-light-sensitive, diodes or
transistors, effecting an electronic shutter.
3. The image frame is organized in terms of rows of pixels. Some of the
photodiodes are sensitive to red light, some to green and the remainder to
blue. The color sensitivities are commonly arranged in a specific pattern
(Bayer). The Bayer pattern uses 50% green-sensitive pixels, 25% red-sensitive
pixels and 25% blue-sensitive pixels. The camera electronics
interpolates these pixels in order to generate a full color image. There is a
loss of resolution, compared to a “black and white” image generated using
the same pixel information.
"The Bayer arrangement of color filters on the pixel array of an image sen-
sor." Image and text from the Wikipedia.
4. The image frame is scanned out to the outside world. Typically by the
camera itself. This has to be done in a timely manner before the charges
on the photodiodes deteriorate significantly. The data is interpolated and
reformatted in this process.
5. A typical CMOS camera to user interface has 8 or 16 lines for pixel data,
a clock, and synchronization signals. Synchronization signals typically in-
clude start of frame and start of scan line (CMOS sensor row). Most CMOS
cameras are configured using an I2C or similar (proprietary) interface. A
reset line is also likely present.
6. The simple CMOS cameras available to us (i.e., left over from prior projects)
encode the pixel color/intensity values using 8 bits of luminance informa-
tion and 8 bits of chrominance information. It takes a bit of work to convert
these values back to RGB. It is possible to scan the RGB values out as well
but these generally use the Bayer pattern. In this case a pixel is either red,
or green or blue only. The luminance values come from each pixel and can
be used to generate a black and white image with higher resolution than
has a full color picture. The chrominance (color) information results from
an interpolation of near-by pixels. In some sense one is given 16 bits of
combined information and is asked to extract three 8-bit color values. This
is not likely to be an exact process.
7. If the image frame is not immediately consumed it must be placed into
memory where it can be worked on. Quite often this memory is sized to
hold two image frames, one portion being loaded while
the other is being worked on. If the camera outpaces the processing one
needs to either get a more powerful processor or perhaps drop the data
rate by skipping frames.
8. The processing of an image is application dependent. In normal operation
(frame after frame) there isn't a lot of time to do much using our hard-
ware. Moving an image into the C5510 can be accomplished either using
the S3SB via the McBSP (running a fire hose into a straw) or via a DIY or
purchased DMA interface (garden hose). Either way, frames likely will need
to be skipped.
9. The remaining steps involve generating a display frame to be sent to a
video monitor.
10. Displays can be generated on the fly, from information contained in a
display buffer, or from some combination of the two. Most monitors have three
analog color inputs, red, green and blue, and two inputs for synchronization
signals, vertical sync and horizontal sync. The vertical sync waveform sig-
nals the start of a display frame. The horizontal sync waveform signals the
start of a row of pixels.
11. Almost all modern LCD monitors support a wide range of display sizes
and have significant image synchronization abilities. If driven with
somewhat off-standard timing they will generally complain. The old analog
monitors often could be easily damaged by out-of-tolerance timings.
The above is a reasonably accurate broad stroke description of what happens
going from camera to display via a processing application. Lots and lots of de-
tails need to be considered when creating a working system. When implementing
your own system, your best friends will be the relevant data sheets/manuals and
a systematic step-by-step implement-and-test process.
3 CMOS video cameras on hand
• OV6620 CMOS, nominally quarter VGA
• OV7620 CMOS, nominally VGA
• Samsung/Sparkfun, E700 (MagnaChip HV7131GP), VGA
We also have a Digilent VDEC1 composite-video-in to digital data stream
converter. This has been used several times in conjunction with the Digilent
Virtex-2 FPGA board.
When reading data sheets (and any code that I supply) stay alert for incon-
sistencies and errors. For example, the OV6620 data sheet claims an image size
of 101,376 pixels but specifies a frame size of 356 × 292 = 103,952 pixels. These
values tripped up a student who didn't check the product and assumed
that the values were consistent.
Projects are not required to use these specific cameras or camera interfaces.
Other choices exist. In particular the Altera DE2 board used in EECS 270 has a
composite video input. The follow-on version, the DE2-70, has two such inputs.
Potential image capture/processing units are:
• DIY frame buffer/SPI,
• Spartan-3 Starter Board,
• Digilent XUP Virtex-2 Pro,
• Altera DE2 and DE2-70,
• Other?
Some “standard” image sizes:
Format h × v pixel count
VGA 640 × 480 307200
Quarter VGA 320 × 240 76800
XVGA 1024 × 768 786432
When working with images keep in mind that what you see and perceive in terms
of color and what a camera sees and perceives are quite often different.
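One reason for the difference is that a camera has to guess the illuminant and correct for it. The simplest automatic white balance scheme is the so-called gray-world assumption: assume the scene averages out to gray and scale each color channel so the channel means match. Real cameras use more elaborate algorithms; this Python sketch just illustrates the principle.

```python
def gray_world_balance(pixels):
    """Gray-world automatic white balance over a list of (R, G, B)
    tuples: scale each channel so all three channel means become
    equal.  An illustration only; real camera AWB is fancier."""
    n = len(pixels)
    means = [sum(p[c] for p in pixels) / n for c in range(3)]
    gray = sum(means) / 3
    gains = [gray / m if m else 1.0 for m in means]
    return [tuple(min(255, round(p[c] * gains[c])) for c in range(3))
            for p in pixels]
```

A scene lit by a strongly reddish source, for example `[(200, 100, 50)]`, comes back neutral gray after balancing.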
Cameras and their image sizes:
Camera frame dimensions pixels pixel clock
OV6620 356 × 292 103952 8.86 MHz
OV7620 664 × 492 326688 13.5 MHz
E700 652 × 488 318176 12.5 (?) MHz
The pixel clock numbers for the OmniVision parts are for 16-bit pixels and
are generated by the included interface board. The clock value doubles when
using 8-bit pixels.
Camera generated pixel rates are asynchronous to the S3SB clock. A clock
boundary is crossed moving pixel data from the camera into the FPGA.
Metastability can be a problem and should be taken into account.
Figure 2: C3088 camera board with OV6620 camera mounted along with the
camera adapter board used to connect to the Spartan-3 Starter Board B1 connec-
tor.
3.1 OV6620 and OV7620
These cameras require a 5 Volt supply. However, the output drivers can be
powered by 3.3 Volts, allowing direct interfacing to the Spartan-3 Starter
Board. Camera boards need to be modified to allow powering the output drivers
using 3.3 Volts. The camera itself still needs to be powered using a 5 Volt
supply. I have been modifying cameras on an as-needed basis.
The cameras are configured using an I2C-like serial interface (SCCB, per the data sheet).
3.2 Samsung E700
The camera is configured using an I2C interface. The user supplies the clock. A
maximum 25 MHz clock is allowed, resulting in a 30 frames per second image rate.
The pixel clock is likely 12.5 MHz; the data is multiplexed, giving a 25 MHz
8-bit byte rate. I think. Lower clocks can be used to drop the frame and pixel
rates. I couldn't locate a minimum value.
(From the figure: outputs UV(7:0) are not available on the OV6120.)
Figure 3: Block diagram showing the internal organization of the OV6620 CMOS
camera. From the OV6620 data manual.
Figure 4: Sparkfun breakout board with E700 camera mounted. Image from the
Sparkfun catalog page.
4 Capturing frames
One needs either to process on the fly or capture a frame, or perhaps a portion
of a frame. The Spartan-3 Starter Board has 1 Mbyte of 10 ns static RAM. This can be
used to capture an image. Then the S3SB-McBSP link can be used to transfer the
image into a file on the PC. One can use the USB-PC link to move images directly
to a PC file and/or MATLAB.
Students have captured images in the past. I presently don’t have any VHDL
and/or C and/or MATLAB code for doing this. I do have pieces that I would use
were I to do this. At present, this is a task left as an exercise for the student.
4.1 Data formats
The default format consists of Y/UV values. The Y refers to luminance (bright-
ness) information and the UV values are chrominance values: U is blue minus
luminance and V is red minus luminance. Pixels arrive from
the camera as 16-bit values of the form YU, YV, YU, YV, . . . . In order to generate
an RGB display, pairs of these are used to generate two RGB pixels. One needs to
study the camera data sheet intently to understand how to do this correctly.
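For reference, the conversion from a Y/U/V triple to R, G and B can be sketched as follows. The coefficients here are the commonly used ITU-R BT.601 ones; the values a particular camera actually assumes must be confirmed against its data sheet.

```python
def yuv_to_rgb(y, u, v):
    """Convert one Y/U/V triple (each 0-255, chroma biased by 128)
    to an (R, G, B) tuple using the common BT.601 coefficients.
    The camera's exact mapping may differ slightly."""
    d, e = u - 128, v - 128          # remove the chroma bias
    r = y + 1.402 * e
    g = y - 0.344 * d - 0.714 * e
    b = y + 1.772 * d
    clamp = lambda x: max(0, min(255, int(round(x))))
    return clamp(r), clamp(g), clamp(b)
```

Note that a neutral pixel (U = V = 128) maps to R = G = B = Y, which is a quick sanity check on any implementation.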
The luminance information changes pixel to pixel. The same B and R infor-
mation is used to generate the RB components on two RGB pixels. The G value
Figure 5: E700 camera mounted on adapter board used to connect it to the
Spartan-3 Starter Board B1 connector.
is generated using the same U and V values and so also affects two RGB output
pixels. Essentially the color information has less resolution than the intensity
information.
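The chroma sharing is easiest to see in code. Assuming the UYVY byte order shown in Figure 7, each four-byte group carries one U, one V and two Y values, so two output pixels share the same chroma; this Python sketch takes any Y,U,V to RGB converter as a parameter.

```python
def unpack_uyvy(stream, yuv_to_rgb):
    """Expand a UYVY byte stream into a list of RGB pixels.

    Each 4-byte group U, Y0, V, Y1 yields two pixels that share
    the same U and V (chroma) but have their own Y (luma), which
    is why color resolution is half the luma resolution
    horizontally.  yuv_to_rgb is any (y, u, v) -> (r, g, b)
    converter."""
    pixels = []
    for i in range(0, len(stream) - 3, 4):
        u, y0, v, y1 = stream[i:i + 4]
        pixels.append(yuv_to_rgb(y0, u, v))
        pixels.append(yuv_to_rgb(y1, u, v))
    return pixels
```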
I found the data sheets confusing and had my share of problems implementing
the conversion from YUV to RGB. I found the task a challenge.
The OV6620 cameras allow acquiring the raw RGB pixels. One could use these
pretty much directly, taking into account the Bayer pattern. I've not tried
this. I've not checked the E700 data manual to see if this is an option, but I
expect it likely is.
4.2 Signals and their timing
The basic set consists of:
• Reset: A signal to the camera placing it into a known state. The OmniVision
cameras are put into 16-bit bus mode by reset.
• Pixel clock: Generated by the camera in order to allow clocking the data off of
the pixel bus.
• Pixel bus: The pixel information. The OV cameras can be operated in either
16- or 8-bit mode. The 8-bit bus data is clocked at twice the rate used
with the 16-bit bus. The E700 uses an 8-bit bus.
(Diagram signals: VCLK, Y[7:0], C[7:0], VSYNC, HSYNC, RESETB, MCLK, ENB, SCK, SDA; pixel array 652 x 492.)
Figure 6: Block diagram showing the internal organization of the HV7131GP
CMOS camera used in the Samsung E700 cell phone camera. From the HV7131GP
data manual. The Y and C lines are time multiplexed onto a single set of 8 pins.
(Timing diagram signals: Y[7:0], PCLK, HREF; the PCLK rising edge latches the data bus.)
Figure 7: OV6620 pixel timing diagram. From the OV6620 data manual.
• H ref: Indicates the start of a row of pixels.
• V sync: Indicates the start of a new frame.
Figure 7 shows the timing associated with scanning out a single row using
8-bit pixels. The pixel data is sent out starting UYVY. . . . The pixel clock
period is nominally 56 ns and the maximum value of Tsu and Thd is 15 ns.
4.3 Frame buffer
For the OV6620 the pixel rate is nominally 8.9 MHz and a new frame is generated
approximately every 0.0166 seconds (a 60.1 Hz frame rate). A frame contains
103,952 pixels.
How much memory is needed? Should one implement a DIY frame buffer on
a separate board? Should one use two buffers, one to hold a frame while the
other is being loaded with a new frame? Things to think about.
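The two-buffer ("ping-pong") idea can be modeled in a few lines; in hardware the same decisions would be a small state machine. This Python sketch just tracks which half is being filled and counts the frames that get skipped when the processor is still busy.

```python
class PingPongBuffer:
    """Bookkeeping for a two-frame buffer: the camera fills one
    half while the processor works on the other.  Frames that
    complete while processing is still busy are skipped."""
    def __init__(self):
        self.write_half = 0          # half currently being filled
        self.busy = False            # processor still working?
        self.frames_skipped = 0

    def frame_done(self):
        """Camera finished a frame; hand it over if possible."""
        if self.busy:
            self.frames_skipped += 1 # overwrite / skip this frame
            return False
        self.write_half ^= 1         # swap roles of the two halves
        self.busy = True
        return True

    def processing_done(self):
        self.busy = False
```

If `frame_done` keeps returning False, the processing is too slow for the camera's frame rate and either the algorithm or the frame rate has to give.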
The SRAM memory on the S3SB is organized as two banks of 256K of 16-bit
words. With reasonable care it should be possible to make use of up to eight
OV6620 frames. Or two E700 or OV7620 frames.
If two 8-bit pixels are placed into a 16-bit word then the data rate into mem-
ory is about 5M words per second, for the OV6620.
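A back-of-envelope sizing calculation, ignoring row padding, alignment, and any memory reserved for other uses (which is why the usable frame count comes out lower in practice):

```python
# S3SB SRAM: two banks of 256K 16-bit words.
SRAM_WORDS = 2 * 256 * 1024            # 524,288 words

def frames_that_fit(pixels_per_frame, bytes_per_pixel=1):
    """Whole frames fitting in the SRAM when pixel bytes are
    packed two per 16-bit word.  Ignores all layout overhead."""
    words_per_frame = (pixels_per_frame * bytes_per_pixel + 1) // 2
    return SRAM_WORDS // words_per_frame

ov6620_8bit = frames_that_fit(103952)      # packed 8-bit pixels
ov6620_16bit = frames_that_fit(103952, 2)  # 16-bit pixels
```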
4.4 Digilent VDEC1 Video Decoder Board
A camera must be supplied separately. The board uses a Hirose FX2 data
connector and is compatible with the Virtex-2 FPGA board and the Nexys-2. This
board has been used by three past projects, with reasonable success.
The ADV7183 is configured ... To be added someday.
WARNING: be very careful when assigning pins in your FPGA design based on
the pins on the VDEC1 schematic! I’ve not checked but they are likely to be
mirror imaged compared to the pin names on the FPGA board with even and
odd possibly being interchanged as well. There is actually a good reason why
this might have been done.
4.5 Altera DE2 and DE2-70
These boards have not been previously used by an EECS 452 project. The DE2
has support for one video input channel. The DE2-70 has support for two input
video channels. The Altera DE2 is used in EECS 270 and there are boards
available that can be borrowed for use by an EECS 452 project.
The DE2 uses the ... video decoder chip.
Figure 8: VDEC1 Video Decoder board and associated block diagram. Compatible
camera to be supplied by the user. Images from the Digilent web pages.
The DE2-70 uses the ... video decoder chip.
5 Displaying an image
Something left to write about. Not much more complicated than was done in
lab. A single entity was used to generate the sync pulses and the addresses of
the next pixel to be sent to the monitor.
Figure 9: Block diagram of the Analog Devices ADV7183 Multiformat SDTV Video
Decoder. From the Analog Devices ADV7183 data manual.
6 My practice video VHDL
(Modules: MemManager, Camera0Top, DisplayMan, CamCom, OV6620, VGAtiming.)
Figure 10: OV6620 camera in to video display VHDL modules. Seven segment
display not shown.
The goal was to use the C3088/OV6620 as a video camera with the video
being displayed on a computer monitor.
The main challenges were:
• moving pixels from the camera output into a display memory.
• converting the camera Y UV pixel format into RGB pixels for the monitor.
• moving data from the frame memory to the display.
• managing the frame memory.
I cheated and did not synchronize the framing of the data between the camera
and the display monitor. Pixels are written into the frame memory as they arrive
from the camera and are sent to the display as needed by the monitor. In a real
sense the frame buffer is used to convert between frame rates. It was felt that
this was a reasonable first-cut solution for dealing with non-synchronous pixel
rates. The result works well and, thinking more about it, I don't know what
else one would rationally do.
My original effort used a 16-bit pixel interface adapter to go from the camera
board to the S3SB B1. This was later replaced by an 8-bit pixel interface board
which also allowed generation of 8-bit VGA output, something the S3SB does not
have provision for. This board was also designed to allow use with the E700
camera.
6.1 CamCom
This entity is used to interact with a user via the S3SB slide switches, push but-
tons, LEDs and seven-segment display.
When the OmniVision cameras are first powered or reset they go into 16-bit
mode. The camera control subsystem is initialized such that all that needs to
be done in order to switch into 8-bit mode is to push push-button 0. A somewhat
creaky and inelegant procedure; however, it was quick and expedient. Someday
it will be replaced.
This interface was intended to allow an experimenter to have read and write
access to the command/control registers contained on the OV cameras. It was
meant to allow one to ask and answer questions along the lines of "What happens
if . . . ?".
6.2 OV6620
Handles the transition between the clock used by the camera and the clock used
by the S3SB.
It accepts two 16-bit pixel values and converts them into two 8-bit RGB pixels
that are sent to the memory manager as a 16-bit word to be written into the
frame memory.
This module also generates the frame address assuming a 640 × 480 display. It
also flips the image in order to make it show right side up.
A simple, nearest 1/16th, approximation to values found in a Xilinx appli-
cation note is used to convert Y/UV values to RGB values. Values were sign
extended and added/subtracted in combinatorial fashion in order to keep the
computational time to a minimum.
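The flavor of that approximation can be sketched as follows. The exact constants in my VHDL are not reproduced here; the ones below are the BT.601 coefficients rounded to the nearest 1/16th, evaluated with integer multiplies, adds and a final shift, the way combinatorial hardware would do it.

```python
def yuv_to_rgb_q4(y, u, v):
    """Fixed-point Y/UV -> RGB using coefficients rounded to the
    nearest 1/16th (an assumed set, for illustration):
        1.402 -> 22/16, 0.344 -> 6/16,
        0.714 -> 11/16, 1.772 -> 28/16
    All arithmetic is on integers scaled by 16, with one final
    right shift, as in combinatorial hardware."""
    d, e = u - 128, v - 128            # sign-extended chroma
    r = (16 * y + 22 * e) >> 4
    g = (16 * y - 6 * d - 11 * e) >> 4
    b = (16 * y + 28 * d) >> 4
    clamp = lambda x: max(0, min(255, x))
    return clamp(r), clamp(g), clamp(b)
```

As with the exact formulas, a neutral input (U = V = 128) must come back as R = G = B = Y, which makes a cheap self-test for the hardware version too.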
6.3 VGAtiming
The display parameters are pretty much those used in lab, a 60 Hz frame rate
and a 25 MHz pixel clock.
This module generates the vertical sync and horizontal sync pulses as well
as the horizontal and vertical components of the address of the next pixel to be
displayed.
The VGA pixel rate is 25 MHz.
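For reference, the standard 640 × 480 at 60 Hz figures are 800 pixel clocks per line (640 visible + 16 front porch + 96 sync + 48 back porch) and 525 lines per frame (480 + 10 + 2 + 33); 800 × 525 at 25 MHz gives roughly a 59.5 Hz frame rate. A Python model of the counters such a timing module implements:

```python
# VGA 640x480@60 timing, mirroring what a VHDL timing module does:
# count pixel clocks, derive sync pulses and the current address.
H_VISIBLE, H_FP, H_SYNC, H_TOTAL = 640, 16, 96, 800
V_VISIBLE, V_FP, V_SYNC, V_TOTAL = 480, 10, 2, 525

def vga_state(clock):
    """Return (hsync, vsync, visible, x, y) for a pixel-clock
    count since the start of the frame."""
    x = clock % H_TOTAL
    y = (clock // H_TOTAL) % V_TOTAL
    hsync = H_VISIBLE + H_FP <= x < H_VISIBLE + H_FP + H_SYNC
    vsync = V_VISIBLE + V_FP <= y < V_VISIBLE + V_FP + V_SYNC
    visible = x < H_VISIBLE and y < V_VISIBLE
    return hsync, vsync, visible, x, y
```

In hardware the `x` and `y` values double as the address of the next pixel to be fetched from the frame memory.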
6.4 DisplayMan
Picks up 16-bit words containing pixel pairs and sends them to the VGA sequen-
tially. The 8-bit values are unpacked into 3 bits of red, 3 bits of green and 2 bits
of blue.
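A sketch of that unpacking, assuming the high byte of each word is the first pixel and the red field occupies the top bits (the actual bit layout is whatever the camera-side module packed):

```python
def unpack_rgb332_word(word):
    """Split one 16-bit frame-memory word into two RGB332 pixels
    and expand each into its 3-bit red, 3-bit green and 2-bit blue
    fields, matching the adapter board's DAC widths.  High byte
    first is an assumption here."""
    def fields(p):
        return (p >> 5) & 0x7, (p >> 2) & 0x7, p & 0x3
    return fields(word >> 8), fields(word & 0xFF)
```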
6.5 MemManager
The Spartan-3 Starter Board contains 1M bytes of static RAM. This is organized
as two banks of 256K 16-bit words. The two banks are not totally independent and
Figure 11: Lighting and whatever caused my grey sweater to show as blue. It
shows grey/green at night when the sun has gone down. Image is placed into
the top left corner of the display buffer. Whatever was left over in memory from
whenever was used to generate the rest of the display. The C5510 DSK just hap-
pened to be sitting there when I took the picture. It had no other involvement.
can be used to form 32-bit words. The SRAM chips have a nominal minimum 10
ns cycle time. In actual practice the cycle time needs to be somewhat larger.
The frame memory was configured as 640 × 480 pixels.
The memory manager allows the camera to view memory as write only and
the display manager as read only.
The camera pixel rate is 8.86 MHz. The monitor pixel rate is 25 MHz. The
memory manager deals with it.
7 Summary
Capturing an image from a video camera is a non-trivial task. It is a task that
has been accomplished a number of times in EECS 452, almost always with
significant effort. The data rates are high and the amounts of data
are relatively large compared to the resources available on the lab equipment.
Asking for high resolution either on input or output comes with an associated
price. We have become accustomed to high quality video on our PCs. There is a
significant history and infrastructure that has evolved behind this. The proces-
sors used in a modern graphics interface are so powerful that researchers are
networking these processors together to create state-of-the-art super computers.
All that we have in the lab are low end FPGAs and a garden variety DSP system.
These can accomplish a lot but they are not in the same league.
What follows is advice. It might be good advice, it might be bad advice. I have
good intentions, but the decisions (and the consequences) are yours to make.
• Choose your camera carefully. Generally this will determine the number of
pixels that need to be contained in the frame buffer and the pixel data rate.
One does not have to accept all frames.
• Don't try to build and test the system as a single entity. Divide and conquer.
Try to organize the design in self-contained units that share a minimum
amount of information with each other.
• Design for testing. Whenever reasonable, test individual sections separately.
Start simple, then add complexity, evolving the system into its final form.
• Read the data sheets/manuals carefully BEFORE you start designing. Try to
understand what is being said, and perhaps what isn’t. Pay close attention
to timings, which edges are doing what and set-up and hold times.
8 References
OmniVision Advanced Information Preliminary, OV6620/OV6120.
CMOS Image Sensor with Image Signal Processing HV7131GP V3.4, MagnaChip.
Multiformat SDTV Video Decoder ADV7183B data manual, Analog Devices.
Chu’s book about VHDL on the Spartan-3 Starter Board. There is at least one
copy on reserve in the library.
Pedroni’s VHDL book. A must to own! Copies are on reserve in the library.
Google. Choosing the right key words can lead to a wealth of information.
A Camera adapter board for S3SB
A PC board was designed and manufactured to interface the OmniVision camera
as well as the E700 to the S3SB B1 connector. In addition, an 8-bit resistor-only
multichannel D/A converter is included for generating multi-bit color. (The S3SB
Figure 12: Schematic for the OV/E700 adapter board. Plugs into Spartan-3
Starter Board connector B1.
Figure 13: The OV/E700 adapter board, left is camera side view, right is B1 plug
side view.
uses one bit each for R, G and B.) The red D/A channel has 3 bits, the green
channel has 3 bits and the blue channel has 2 bits. The resulting display is
reasonably acceptable.
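The resistor values in Figure 14 can be checked against the VGA input standard: a monitor terminates each color input with 75 Ω and expects roughly 0.7 V at full scale. Treating each asserted FPGA pin as an ideal 3.3 V source through its resistor, a Thevenin reduction gives about that level with all three red bits high. A Python check:

```python
def channel_voltage(bit_resistors, bits, v_drive=3.3, r_load=75.0):
    """Output voltage of the resistor DAC into the monitor's
    75-ohm termination.  Asserted pins drive v_drive through their
    resistors; deasserted pins pull to ground through theirs.
    Assumes ideal rail-to-rail 3.3 V outputs."""
    g_load = 1.0 / r_load
    g_on = sum(1.0 / r for r, b in zip(bit_resistors, bits) if b)
    g_off = sum(1.0 / r for r, b in zip(bit_resistors, bits) if not b)
    return v_drive * g_on / (g_on + g_off + g_load)

red = [511.0, 1000.0, 2000.0]       # R1..R3: red2 (MSB) .. red0
v_full_scale = channel_voltage(red, [1, 1, 1])   # about 0.68 V
```

The roughly 511/1000/2000 progression also gives the approximately 4:2:1 current weighting a binary-weighted DAC needs.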
Four B1 camera interface boards are available. Two are configured for use
with the OmniVision cameras. These have been tested. One board is configured
for use with the E700 camera. This has not been tested. The last board does not
have any components mounted. The boards can be set up to work with either
camera; I just haven't done so.
JP1 contains the pins that plug into the S3SB socket B1. Because of physical
laws the even pins on JP1 connect to the odd pins on B1. As should be expected
the odd pins on JP1 connect to the even pins on B1.
JP3 and JP4 are used for the E700. Be careful to plug the Sparkfun adapter
board in correctly! You have two choices.
JP2 is used for the OV camera. The camera goes up. Plugging in the camera
should be hazard free. Note: only a 3.3 Volt driver modified camera board should
be used! If you purchase your own, modify it!
JP5 is to allow connecting the analog composite luminance signal to a moni-
tor. This allows black and white monitoring of the camera’s operation.
X1 is the 16 pin connector that goes to the monitor.
The board is designed to support only 8-bit transfers for the OV. The OV
comes up in 16-bit mode so it is necessary to use the serial configuration
interface to set 8-bit mode.
resistor function value (ohms)
R1 red2 511
R2 red1 1000
R3 red0 2000
R4 grn0 2000
R5 grn1 1000
R6 grn2 511
R7 blu0 1000
R8 blu1 511
R9 H sync 100
R10 V sync 100
R11 SDA 200
R12 SCL 200
Figure 14: Resistor signals and Ohm values.
The E700 only uses 8-bit pixel transfers. This camera should simply power
up and work.
Have not worked out the UCF pin assignments for the E700 camera. To be
done. It was my intention that the pin assignments and usage would be very
similar.
# OV Camera board in B1
#
NET "camera<0>" LOC = "E10" | IOSTANDARD = LVCMOS33; # Y0
NET "camera<1>" LOC = "T3" | IOSTANDARD = LVCMOS33; # Y1
NET "camera<2>" LOC = "C11" | IOSTANDARD = LVCMOS33; # Y2
NET "camera<3>" LOC = "N11" | IOSTANDARD = LVCMOS33; # Y3
NET "camera<4>" LOC = "D11" | IOSTANDARD = LVCMOS33; # Y4
NET "camera<5>" LOC = "P10" | IOSTANDARD = LVCMOS33; # Y5
NET "camera<6>" LOC = "C12" | IOSTANDARD = LVCMOS33; # Y6
NET "camera<7>" LOC = "R10" | IOSTANDARD = LVCMOS33; # Y7
#
NET "cam_PCLK" CLOCK_DEDICATED_ROUTE = FALSE;
NET "cam_PWDN" LOC = "D12" | IOSTANDARD = LVCMOS33; # PWDN
NET "cam_RST" LOC = "T7" | IOSTANDARD = LVCMOS33; # RST
NET "cam_SDAS" LOC = "E11" | IOSTANDARD = LVCMOS33 | PULLUP; # SDAS
#NET "cam_FODD" LOC = "R7" | IOSTANDARD = LVCMOS33; # FODD
NET "cam_SCL" LOC = "N6" | IOSTANDARD = LVCMOS33; # SCL
NET "cam_HREF" LOC = "B16" | IOSTANDARD = LVCMOS33;# HREF
NET "cam_PCLK" LOC = "R3" | IOSTANDARD = LVCMOS33; # PCLK
NET "cam_VSYN" LOC = "M6" | IOSTANDARD = LVCMOS33; # VSYN
# Camera board VGA connector
#
NET "red2<0>" LOC = "C15" | IOSTANDARD = LVCMOS33;
NET "red2<1>" LOC = "C16" | IOSTANDARD = LVCMOS33;
NET "red2<2>" LOC = "D15" | IOSTANDARD = LVCMOS33;
NET "grn2<0>" LOC = "D16" | IOSTANDARD = LVCMOS33;
NET "grn2<1>" LOC = "E15" | IOSTANDARD = LVCMOS33;
NET "grn2<2>" LOC = "E16" | IOSTANDARD = LVCMOS33;
NET "blu2<1>" LOC = "F15" | IOSTANDARD = LVCMOS33;
NET "blu2<2>" LOC = "G15" | IOSTANDARD = LVCMOS33;
NET "hs2" LOC = "G16" | IOSTANDARD = LVCMOS33;
NET "vs2" LOC = "H15" | IOSTANDARD = LVCMOS33;
Figure 15: S3SB UCF file additions for use with the OV camera boards.