
SODSys: Solar Optical Data Transfer System
K. Hicks, E. Kolker, K. Pletcher, and A. Whitcombe

July 30, 2010

Abstract

Present methods of space-bound communication require laser modulation, a power-hungry process that can consume a significant portion of a system's power budget. This study investigates the feasibility and characterizing metrics of a communication system which modulates sunlight. Two such proof-of-concept systems were built. The first used an unmodified monitor and webcam. It achieved a maximum speed of 20 bits per second; the limiting component was the detector (webcam). The second used a hacked monitor to allow the use of an external light source (a sun analog) and a laboratory photodetector. It achieved a maximum speed of 7.5 bits per second; the limiting factor was the monitor, a limit we feel is shallow.

I. INTRODUCTION

Communication systems are vitally important to space exploration; however, laser-based systems are inefficient. Solar cells convert light into electricity, and laser diodes convert electricity into light. If the conversion to electricity and back can be skipped, tremendous gains in efficiency can be had.

This project draws inspiration from the heliograph, a nineteenth-century invention that uses two mirrors to precisely aim sunlight at a target. Morse code or some other form of on-off keying is used to modulate the beam.

This study investigates adapting the heliograph for use in a modern communication system. For our present-day heliograph, we strove for accuracy and speed in spite of a low signal-to-noise ratio. We measure performance using four metrics: bits per second, bits successfully received per bit sent, signal amplitude to noise amplitude, and sample error rate.

II. STATEMENT OF WORK

The goal for this project is to establish and analyze a communications link which modulates light intensity to send information over long distances.

III. CAMERA-BASED SYSTEM

A. Transmitting Data

Data transmission is accomplished by a Python program using the Pygame package for graphics. The program takes four options: levels, the size of the transmit alphabet (2^bits); speed, which specifies how many characters from the alphabet will be transmitted per second (in Hz); minimum, the pixel value used for the lowest character; and maximum, the pixel value used for the highest character. Both minimum and maximum are arbitrarily scaled, monotonically increasing units of intensity between 0 and 255.

The program first transmits true black (regardless of the setting defined by minimum) for at least 5 seconds, followed by a calibration sequence which consists of all of the alphabet's characters in ascending order. It reads lines from stdin, and expects exactly one integer per line, between 0 and levels - 1. The level specified by this integer is then transmitted.
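
As a usage illustration (the data file name here is hypothetical), the transmitter in Appendix B is driven from a shell pipeline, with one integer per line on stdin:

    python grayscale_tr.py --levels=16 --speed=5 < data.txt

Each line of data.txt selects one of the 16 levels (0 through 15); minimum and maximum default to 0 and 255 unless overridden with -m and -w.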

B. Receiving Data

The receive code, written in LabVIEW using the IMAQ toolkit, used a webcam as a photodetector to sample the luminance of the monitor's image. Due to the properties of the camera, the maximum sampling rate that could be achieved was 15 Hz. The PC averaged the values of all the pixels in a sample to produce one value characteristic of the frame, a process which dramatically reduced noise. This spatial average is recorded to a .csv file for each frame. Figures 1 and 2 diagram the logging process and the effects of averaging, respectively.
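
The receiver itself is a LabVIEW VI, but the spatial-averaging step is simple enough to sketch in a few lines of Python with OpenCV (OpenCV was not part of this project; the camera index and output file name below are assumptions):

import cv2  # assumes the OpenCV bindings are installed

cap = cv2.VideoCapture(0)                 # camera index 0 is an assumption
with open("frames.csv", "w") as log:      # output file name is an assumption
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        # spatial average over every pixel in the frame (or a cropped region of interest)
        log.write("%f\n" % gray.mean())
cap.release()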

After the end of the transmission, we locked the data array and began the processing phase. The data were examined by the VI until a string of zeroes of an adjustable length was found. The array was then copied and cropped from the beginning of the “zero band” to the end of the array: the region that corresponded to the actual transmission. This enabled the receiver to sync up with the start of the transmission and not waste processing power on the preceding noise.


Fig. 1. The live video is cropped to a user-specified region of interest. This region of the image is then converted to grayscale and its pixels' brightnesses are averaged, yielding a “discretized analog brightness value” for the frame. Although this value will always be a floating-point number, its range is 0.0 to 255.0 and its minimum increment is 1/(xy), where x and y are the dimensions of the region of interest. These values are stored in an array while the code runs and logged to file at its completion.

Fig. 2. This diagram illustrates the effect of averaging the brightness across the entire screen compared to taking the brightness of a single pixel. Notice the decrease in noise, despite the increased distance of transmission, as more brightnesses are factored into the received data.

C. Data Handling

Once the raw data (values ranging from 0 to 255, correlating with the perceived brightness seen by the webcam) were captured using LabVIEW, a MATLAB program was used to sort these data back into the discrete levels of the original transmission. To account for the nonlinearity of the monitor's brightness, the program identifies the levels solely from the characteristics of the raw data, rather than from a linear distribution of the intended levels.

To identify which raw data values corresponded with the intended levels, the raw data were put through a series of manipulations illustrated in Fig. 3.

First, a histogram of the raw data was generated, giving a distribution of the number of data points falling within bins spanning 0 to 255 (Fig. 3A). The envelope of the histogram reveals which raw data values were measured most frequently and therefore correspond to the sixteen intended levels, rather than to brief transition points. To make the peaks clearer, the dataset was smoothed by integrating the histogram along a fixed width. This width was 3/4 of the intended separation between levels, to ensure that peaks were not obscured by the filtering (Fig. 3B and 3C). The integrated values were then compared to a fixed threshold corresponding to a quarter of the maximum peak height on the filtered plot. Transition points were identified by finding the points at which the integrated values crossed the threshold value (Fig. 3D). Level boundaries were then determined by finding the average distance between two adjacent transition points (Fig. 3E), and the data were thresholded into the levels determined by these boundaries (Fig. 3F). After the continuous raw data (from 0 to 255) were sorted into the intended number of discrete levels (from 0 to n − 1) in MATLAB, a Python program was used to identify the relevant data.
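
The same level-identification procedure can be sketched in Python with NumPy (the original implementation was in MATLAB; names and the bin count are illustrative):

import numpy as np

def find_level_boundaries(raw, n_levels):
    # Fig. 3A: histogram of the raw 0-255 samples
    counts, _ = np.histogram(raw, bins=256, range=(0, 256))
    # Fig. 3B/3C: integrate along a width of 3/4 of the intended level separation
    width = int(round(0.75 * 256.0 / n_levels))
    smoothed = np.convolve(counts, np.ones(width), mode="same")
    # Fig. 3D: find where the smoothed histogram crosses a quarter of its peak
    threshold = smoothed.max() / 4.0
    crossings = np.where(np.diff((smoothed > threshold).astype(int)) != 0)[0]
    # Fig. 3E: place a boundary midway between each falling edge and the next rising edge
    return [(crossings[i] + crossings[i + 1]) / 2.0
            for i in range(1, len(crossings) - 1, 2)]

# Fig. 3F: sort each raw sample into a level, e.g. levels = np.digitize(raw, boundaries)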

In addition to sorting the raw data back into the intended discrete levels, post-processing of the raw data includes a method of detecting actual level changes from the data. The camera's response to the monitor's change in brightness is not instantaneous, creating transition points within the data that do not correspond to actual transmitted levels. This is shown in the difference between the middle (non-ideal input) and top (processed) sections of Fig. 4.

D. Results

Initial results exhibited a low signal-to-noise ratio, even at distances as close as 8 m. After the implementation of averaging, data smoother than any that had been previously acquired were obtained at a distance of 45 m. In both cases, tests were successfully run with 16 levels; however, 64 levels could be observed at 8 m.


Fig. 3. The figure above shows the process by which raw data are separated into discrete levels. A histogram of the raw data was created and then smoothed by integrating along a fixed width correlating with the number of intended levels. To identify level boundaries, these integrated values were then compared to a threshold proportional to the maximum integrated value. Once these level boundaries were established, the raw data were sorted into discrete levels.

Transmission speed was also varied, and smoother, cleaner data were generally obtained at slower rates. The system's absolute maximum speed was limited by the camera's acquisition rate of 15 frames per second. This was further limited to 5 Hz by the need to be able to distinguish between data and transition bits; without a possible three and guaranteed two consecutive bits, it would have been impossible for the system to accurately discern one character from the next, or a character from noise. Fig. 4 diagrams a few possible data acquisition and interpretation scenarios. The bottom frame shows that at a transmission frequency too high in comparison to the data acquisition rate, individual levels cannot be discerned from transition points. The system is incapable of differentiating signal from noise unless the data stay at a consistent level for two or more timesteps, denoted by vertical lines. The middle frame shows the effects of measuring transition points that are not actually part of the intended transmitted signal between levels. To accurately distinguish the intended transmitted levels, post-processing techniques separate actual data from these transition points, as demonstrated in the last frame.
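
The two-consecutive-sample rule can be expressed as a short Python sketch (illustrative only; it separates held levels from transition frames, and repeated identical characters would need additional framing):

def accept_levels(samples):
    accepted = []
    for i in range(1, len(samples)):
        if samples[i] == samples[i - 1]:                   # held for at least two frames
            if not accepted or accepted[-1] != samples[i]:
                accepted.append(samples[i])                # record each held level once
        # values seen for only a single frame are treated as transitions and ignored
    return accepted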

E. Discussion

Although the applications of this system are numerous, no effort was made to use the system to send real data. Two adjacent 16-level characters, however, could be used to encode the entire extended ASCII alphabet. Most of the development time was spent establishing a working link, testing the system, and gauging its robustness.

The bit error rate (BER) of the system was measured to be 3.5 · 10^−4. We found that the bit error rate was often either very near to zero or very near to one. When the MATLAB level-sorting code successfully thresholded the data, the rest of the decoding was likely to go perfectly. When that code failed, it would output no meaningful data. Therefore, we can only say that our system performed acceptably as long as the signal-to-noise ratio was above roughly 2, and unacceptably otherwise.

IV. DISCRETE SYSTEM

A. Transmitting Data

The program used to transmit data in the camera-based system was used unmodified in the discrete system; for more details, see Appendix B.

B. Receive Code

The receive code for the discrete system is a modified version of the camera system's code. The two codebases differ in that the discrete system relies on a National Instruments DAQ (NI USB-6009) and a ThorLabs photodetector (DET110) in place of a webcam. Modifications to the code mainly accommodate these changes, but also help to prevent memory overflow by allowing the VI to log data to file incrementally. The user can also specify the data acquisition rate and the amount of data in each incremental subset.

Fig. 4. This figure shows the effects of synchronization and transmission rate on the interpretation of the received data. Because of possible asynchronization of the Tx-Rx link, the data handling system does not interpret a value as data unless the detector records at least two identical, consecutive bits. Note that at an acquisition rate of 15 Hz, the fastest interpretable transmission rate free from aliasing is 5 Hz, regardless of Tx-Rx synchronization (top and middle graphs). At transmission rates higher than 5 Hz, data cannot be discerned from the transitions between bits, and values may be improperly interpreted, repeated, or dropped entirely (bottom graph).

Live data interpretation is implemented with a varying threshold based on the average of the maximum and minimum values in the current subset. The interpreted results, along with a version of the data with adjustable-length smoothing, are logged to the same file as the raw data. In order to maximize the speed at which the VI operates, the data are mapped and saved in parallel with the DAQ's collection of the next subset. In addition to ensuring that every test yields some good data, the incremental logging also allows an outside program to monitor the system's progress and conduct preliminary analysis on the incoming data. MATLAB was found to be much more useful than Excel for these purposes.
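
A minimal sketch of that live threshold (names illustrative; the real implementation is part of the LabVIEW VI):

def interpret_subset(subset):
    threshold = (max(subset) + min(subset)) / 2.0          # midpoint of the subset's range
    return [1 if sample > threshold else 0 for sample in subset]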

C. Data Handling

As with the camera-based system, the discrete system required that data be converted from a continuous analog reading into digital form. To accomplish this, the raw data are first smoothed by integrating along a fixed width (as seen in part C of the camera-based system's data-handling diagram). These analog data are sorted into two levels by comparing the integrated values to a threshold. The sorting threshold for any single point is defined by averaging a fixed number of points (proportional to the total length of the data) following the data point to be sorted. To generate binary data, a value of 1 is assigned to the point if it falls above the threshold, whereas a value of 0 is assigned to data below this threshold. (See Fig. 5.)
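
In outline, this stage looks like the following NumPy sketch (the 20-point smoothing width follows Fig. 5; the look-ahead length, which in the actual script is proportional to the data length, is a placeholder here):

import numpy as np

def binarize(raw, smooth_width=20, lookahead=200):
    smoothed = np.convolve(raw, np.ones(smooth_width), mode="same")   # integrate along a fixed width
    bits = np.zeros(len(smoothed), dtype=int)
    for i in range(len(smoothed)):
        threshold = smoothed[i:i + lookahead].mean()                  # average of the following points
        bits[i] = 1 if smoothed[i] > threshold else 0
    return bits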

To smooth the data further, regions of noise in the data (sample errors) are located by detecting pulse widths too small to be actual data, as shown in the spikes in Fig. 6. The maximum noise pulse width was found experimentally to be one tenth of the maximum pulse width that occurs at least once per interval of one thousand data points.
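
A sketch of this despiking step (illustrative; max_pulse stands for the maximum pulse width that occurs at least once per thousand points, and short runs are simply folded into the surrounding level here):

from itertools import groupby

def remove_spikes(bits, max_pulse):
    cleaned = []
    for value, group in groupby(bits):
        run = list(group)
        if len(run) < max_pulse / 10.0:       # too narrow to be real data: a sample error
            value = 1 - value                 # merge the spike into its surroundings
        cleaned.extend([value] * len(run))
    return cleaned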

These binary data must then be sorted back into the Manchester-encoded data. In order to accomplish this, the data handling program must distinguish between two lengths of transmission signal: a short pulse one unit long and a long pulse that lasts two units. As with the camera-based system, finding the threshold pulse width that distinguishes between these two levels is accomplished by creating a histogram of all detected pulse widths. This histogram reveals two dominant peaks corresponding to the two transmission lengths. To locate the transition point between these two peaks, we employ the same process used to detect levels in the camera-based system (see frames B through E in the previous data-handling figure). This transition threshold is used to determine whether each high or low stretch of data is assigned a length of one or two bits in the Manchester-encoded data. To interpret the original binary data from the encoded signal, every other bit is taken, as per Manchester decoding.
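
A sketch of this decoding stage (illustrative; width_threshold is the pulse-width threshold obtained from the histogram described above):

from itertools import groupby

def manchester_decode(bits, width_threshold):
    units = []
    for value, group in groupby(bits):
        run_length = len(list(group))
        n_units = 2 if run_length > width_threshold else 1   # long pulse = two units, short = one
        units.extend([value] * n_units)
    return units[0::2]   # every other unit carries the data; the phase depends on framing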


Fig. 5. This figure demonstrates the process of sorting raw data into two levels using MATLAB. First, the raw data are smoothed by integrating over a fixed interval of twenty preceding points. The threshold that determines whether each integrated point is assigned a high or low value is likewise computed by taking a local average of points surrounding the data value to be sorted. Using this information, the raw data are sorted into a series of zeroes and ones.

Fig. 6. To smooth the data further, sections of data that do not have a duration beyond a certain width (defined as a fraction of a relatively common maximum width) are identified as sample errors and assigned a high value.

D. Evaluating System Performance

In order to characterize the performance of our system, we estimated the fraction of noise in the sorted data (sample error ratio) to evaluate the capabilities of the data receiving process. We decided not to gauge the BER of the full transmit and receive system because it was evident that simple improvements to the transmission process could be made to vastly improve its accuracy. Moreover, we hoped to measure how effectively an LCD screen used in transmission could send signals, something that could be accomplished simply by analyzing the system's receive capabilities.

To generate a rough estimate of the sample error ratio in the receive system, we used intrinsic properties of the data. Understanding that the ideal data transmission would consist of a long string of zeroes or a long string of ones, shown in the first frame of Fig. 7, we estimated error in the system by counting the number of bits of binary information generated in LabVIEW that did not match adjacent values. This type of error is illustrated in the second frame of Fig. 7.

Fig. 7. At the left is a representation of data received without errors, which clearly fall into extended regions of high and low bits. At the right, however, is a representation of data with sample errors, in which some points fall outside of the continuous region of high or low values.


To estimate these occasional errors, we wrote a MATLAB script to check whether a given point differs from its neighbors. Instead of simply sampling directly adjacent neighbors, the algorithm compares each point with index n against the points of index n ± 2 in the binary dataset, for the reasons presented in Fig. 8. It counts an error if the data value of interest matches neither of the surrounding points. While this error-estimation approach is certainly imperfect (as seen in the miscounted errors in Fig. 8), it served as a starting point to characterize the transmission capabilities of our communications setup.
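
The error counter itself reduces to a few lines (a Python rendering of the MATLAB script's rule; names illustrative):

def count_sample_errors(bits):
    # a point is counted as an error if it matches neither the point two before nor two after
    return sum(1 for n in range(2, len(bits) - 2)
               if bits[n] != bits[n - 2] and bits[n] != bits[n + 2])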

Fig. 8. The above figure compares sample errors predicted (shown as red points) by two error-detection methods in various scenarios. When errors are counted by comparing the value of each point to adjacent data points, errors longer than one bit wide go unnoticed (see left). By modifying the points of comparison to be two points out instead of only one, these errors can be accounted for (see right). Nevertheless, both systems record false positives when the error is at the very edge of the pulse.

E. Results

When comparing the sample error ratio found at different signal-to-noise ratios, an approximately logarithmic correlation was found between the two values, shown in Fig. 9. Although our datasets are not large enough to arrive at firm conclusions, it was found that the sorting process in MATLAB vastly improves the performance of the system, as quantified by the sample error ratio. When raw data with a signal-to-noise ratio of around 1.14 were simply compared to a threshold calculated live in LabVIEW with a rolling average, the resulting binary data had a sample error ratio of 0.3283. However, after the data were sorted in MATLAB, this sample error ratio dropped from 0.3283 to 1.954 · 10^−5. All sorted binary data generated from data with higher signal-to-noise ratios were found to be error-free, suggesting that a longer sampling time is necessary to fully evaluate the system's limitations.

The system's BER was repeatedly measured to be extremely high (0.21) even after filtering in MATLAB. The largest contributing factor to the BER appeared to be the lack of vertical sync between the Python code and the video card. At high speeds (roughly 15 Hz and above) the receiver saw evidence of tearing, a behavior which caused the receiver to lose the clock in spite of the signal's Manchester encoding. We believe that we would be able to eliminate this interference entirely and transmit at 60 Hz (or the monitor's refresh rate, whichever is higher) if we were able to drive the LCD more directly.

Fig. 9. Sample error ratio vs. signal-to-noise ratio for several trials. Note the log scale and strong relationship between low values. We defined the sample error rate to be the rate at which the system interprets noise as data and reads one timestep incorrectly. Although this can be problematic if the ratio is sufficiently high, there exists a difference of several orders of magnitude between the sample error ratio and BER.


V. DISCUSSION

Although the systems we developed are not on par with industry standards, our findings suggest that further improvement could produce valuable results. Before we discuss where we would like to go, let us first look at the considerations that must be weighed before moving forward.

First is the relationship between speed and reliability. As can be expected, slower transmit speeds correlate with smoother, more readable, and more consistent data. It should be noted that our transmission rate was limited first by our receiver and later by our transmitter, the camera and LCD, respectively. As the system approached its maximum transmit speed, the reliability of the signal degraded quite rapidly.

Second is the interplay between the two different types of error: the error stemming from noise (sample error rate) and that from malfunctions somewhere in the link pathway. Higher sample error rates generally came with lower SNRs, regardless of the source of the errors. Although the correlation between SNR and sample error rate is clearly negative (Fig. 9), we did not collect enough data to find a correlation between the sample error rate and the BER. As a rule, more data are needed in order to draw firm conclusions regarding the interplay between these two sources of error (for example, our current data suggest that SNRs above 20 yield perfect reception, a feat which we consider to be statistically impossible).

VI. FURTHER DIRECTIONS

In the event that research continues on a small scale, we suggest that the developers focus on tuning the current discrete system instead of starting from scratch. We suggest utilizing the vsync function to drive an LCD, an approach which will eliminate the screen's tearing and allow transmit frequencies to be raised to the refresh rate of the monitor. We also suggest that the developers explore more advanced encoding techniques and write code which allows the data to be parsed in real time. We expect that adaptive error correction and dynamic thresholding would make the resulting data much more accurate. Finally, we propose that the team's end goal be to implement a standard data transmission protocol, such as IP, over heliograph.

If research continues on a larger scale, we suggest that an entirely different approach be taken. We recommend that all consumer electronics be replaced with their high-speed laboratory counterparts. We suggest that the project move away from liquid crystals and on to electro-optical modulators (EOMs). Although specialized liquid-crystal arrays would be faster than consumer monitors, EOMs have modulation capabilities in the GHz range, as opposed to kHz. Further research might also be conducted on the most efficient means of collecting and pointing vast amounts of sunlight, and on how well EOMs modulate this incoherent light.

VII. ACKNOWLEDGEMENTS

We would like to thank Steve Holt and Jeff Livas for their mentorship throughout the duration of this project.


APPENDIX

A. MATLAB Data-Handling Code and LabVIEW Receiving Code


B. Transmit Code

#!/usr/bin/python
# -*- coding: utf-8 -*-

# grayscale_tr.py

USAGE = """Grayscale Transmitter.

grayscale_tr.py -l=levels -s=speed

Options:
-l --levels=LEVELS: Specifies the number of different shades of gray to display in
    transmission. Valid numbers are between 2 and 255 inclusive.
-s --speed=SPEED: Specifies the speed of transmission in hertz.

Grayscale Transmitter will display shades of gray one after another. The number of
shades is determined by the levels option: if --levels=2 then black and white frames
will be randomly displayed, if --levels=3 then black, 50% gray, and white frames will
be used, etc. Bandwidth in bits will be log(LEVELS) * SPEED.

Grayscale Transmitter will output the random data being transmitted to stdout. Black is
0, white is (LEVELS - 1), and grays are intermediate values."""

import sys
from optparse import OptionParser
import pygame
from random import randint

_most_recent_time = 0


def main():
    parser = OptionParser()
    parser.add_option("-l", "--levels", dest="levels",
                      help="Specifies the number of different shades of gray to display in "
                           "transmission. Valid numbers are between 2 and 255 inclusive.",
                      metavar="LEVELS")
    parser.add_option("-s", "--speed", dest="speed",
                      help="Specifies the speed of transmission in hertz.", metavar="SPEED")
    parser.add_option("-m", "--minimum", dest="min_", default="0",
                      help="Specifies the pixel value of the lowest transmitted level (0, 255)",
                      metavar="VAL")
    parser.add_option("-w", "--maximum", dest="max_", default="255",
                      help="Specifies the pixel value of the highest transmitted level, "
                           "must be greater than minimum.")
    (options, args) = parser.parse_args()
    # print options

    clock_delay = int(1000 / float(options.speed))
    # this calculates the period from the user input (frequency) in milliseconds.
    levels = int(options.levels) - 1
    # 1 is subtracted because numbering the levels from zero is much easier.
    max_ = int(options.max_)
    min_ = int(options.min_)
    # max() and min() are builtin functions in python, the _ is to avoid name conflicts

    pygame.init()  # Boilerplate
    screen = pygame.display.set_mode((1024, 768))
    # our monitor was 1024x768, change this if you need to. This really should
    # read from a config file, but the lazy hard-coded solution is fine for now.
    pygame.display.set_caption("Transmitter Gray")
    background = pygame.Surface(screen.get_size())
    background = background.convert()
    # initialize the background and ready it for use.
    background.fill((255, 255, 255))
    # Set the color of the background to white

    screen.blit(background, (0, 0))
    # put the background directly on top of the screen so that it can be displayed.
    # the second argument (0, 0) is the offset; since we want the background to
    # fill the entire screen, we set it to zero.
    pygame.display.flip()
    # update the display, make the screen actually show what is in memory.

    clock = pygame.time.Clock()  # initialize the clock

    # pygame.time.wait(10)  # used to be included for calibration purposes
    # ramp(levels, clock_delay, clock, background, screen, pygame.display, min_, max_)
    # once included for calibration purposes

    pygame.time.set_timer(pygame.USEREVENT + 1, clock_delay)
    # push a pygame.USEREVENT + 1, which we interpret as a 'transmit new level'
    # event, once every period.

    while 1:
        clock.tick(60)  # run the clock, but limit the framerate to 60Hz
        for event in pygame.event.get():  # main event loop
            if event.type == pygame.QUIT:
                # If someone presses the big shiny x, quit.
                return
            if event.type == pygame.USEREVENT + 1:
                lstr = sys.stdin.readline()
                # Read a line from standard input; it will tell us what to transmit.
                if len(lstr) == 0:
                    # If there's nothing on the line, it means an end of file
                    # was hit and we need to exit.
                    return
                level = int(lstr)
                # Convert the string input into an integer we can deal with.
                transmit(level, levels, background, screen, pygame.display, min_, max_)


def transmit(level, levels, background, screen, display, min_, max_):
    global _most_recent_time
    now = pygame.time.get_ticks()
    # This was part of prior functionality to print how long each frame
    # was actually displayed, as a check to make sure that the computer
    # was not screwing up the data collection. A global variable is a bit
    # of a hack, but an elegant solution would have required way too much
    # work for what was essentially a removable bug tracer.
    colorval = int((float(level) / levels) * (max_ - min_) + min_)
    # Determine what value between 0 and 255 should be sent as a function of level.
    print level
    # output the level so output can be checked against what was actually sent.
    # This was far more relevant when the levels were dynamically randomly generated.
    # sys.stderr.write(str(now - _most_recent_time) + '\n')
    _most_recent_time = now
    background.fill((colorval, colorval, colorval))
    # We only care about grayscale, so we set the same value for red, green and blue
    screen.blit(background, (0, 0))
    display.flip()
    # Put the background in screen memory, and send it to the screen.


def ramp(levels, clock_delay, clock, background, screen, display, min_, max_):
    # ramp(...) was called once at each startup for calibration purposes.
    # It transmits each level in ascending order. This gave us a clear
    # worst-case scenario, as the difference between each level was at a minimum,
    # and gave us a clear and quick 'acid test'.
    clock.tick(60)
    background.fill((0, 0, 0))
    screen.blit(background, (0, 0))
    display.flip()
    pygame.time.wait(clock_delay * 5)
    # start by transmitting black, true #000000 black, regardless of the
    # setting of min_, and then keep it black for a while. This helps identify
    # the start of transmission.
    for i in range(levels + 1):
        clock.tick(60)
        pygame.time.wait(clock_delay)
        transmit(i, levels, background, screen, display, min_, max_)
        # Transmit each of the levels in ascending order, using the
        # transmit(...) function to ensure that it works identically
        # to the sending of data.
    clock.tick(60)
    pygame.time.wait(clock_delay)
    background.fill((0, 255, 0))
    screen.blit(background, (0, 0))
    pygame.display.flip()
    # Set the screen to green to signal the end of calibration, and the start of
    # actual data transmission.


if __name__ == "__main__":
    sys.exit(main())
