
A Low-power SoC-based Moving Target Detection System for Amphibious Spherical Robots

Shaowu Pan 1, 2, 3, Liwei Shi 1, 2, 3,*, Shuxiang Guo 1, 2, 3, 4, Ping Guo 1, 2, 3, Yanlin He 1, 2, 3 and Rui Xiao 1, 2, 3 1The Institute of Advanced Biomedical Engineering System, School of Life Science, Beijing Institute of Technology, No.5,

Zhongguancun South Street, Haidian District, Beijing 100081, China 2Key Laboratory of Convergence Medical Engineering System and Healthcare Technology, the Ministry of Industry and Infor-

mation Technology, Beijing Institute of Technology, No.5, Zhongguancun South Street, Haidian District, Beijing 100081, China 3Key Laboratory of Biomimetic Robots and Systems, Ministry of Education, Beijing Institute of Technology, No.5, Zhongguan-

cun South Street, Haidian District, Beijing 100081, China 4Faculty of Engineering, Kagawa University, 2217-20 Hayashi-cho, Takamatsu, Kagawa, Japan

[email protected], [email protected], [email protected] *Corresponding author

Abstract – A moving target detection and tracking system is critically important for an autonomous mobile robot to accomplish complicated tasks. To meet the application requirements of the amphibious spherical robot proposed in our previous research, a low-power, portable moving target detection system was designed and implemented in this paper. A Xilinx Zynq-7000 SoC (System on Chip) was used to build the image processing system of the robot for detection and tracking. An OmniVision OV7670 CMOS image sensor controlled by customized IP cores in the PL (Programmable Logic) of the SoC was adopted to acquire 640×480 RGB images at 30 frames per second. The Gaussian background modeling method was implemented in the PL with Vivado HLS to detect moving targets, and an FCT (Fast Compressive Tracking) tracker with a motion estimation mechanism ran in the PS (Processing System) of the SoC to track the targets captured by the detection subsystem. Besides, dynamical power management (DPM) and dynamical voltage and frequency scaling (DVFS) mechanisms were used to achieve higher power efficiency. Experimental results verified the validity and performance of the detection system. The design in this paper may serve as a reference for vision-based mobile robots or vehicles.

Index Terms – Moving target detection; Gaussian background modeling; Low-power; Zynq-7000 SoC; Amphibious spherical robot.

I. INTRODUCTION

As one of the most effective and feasible tools to sense the surroundings [1], digital cameras have been widely used in mobile robots to realize intelligent functions. Among the various vision systems of a mobile robot, the moving target detection and tracking system often plays a critically important role in realizing robotic functions such as autonomous navigation [2], visual servoing [3], path planning [4] and human–robot interaction [5].

Target detection is the process of probing the motions or behaviors of targets by analyzing the scenarios in image sequences and then marking the coordinates of the moving targets [6]. The most common detection algorithms include the background subtraction method [7], the optical flow method [8], the template matching method [9], etc. The background subtraction method extracts the foreground or target by calculating the difference between an image from the video and the background image [10]. The optical flow method detects moving targets by calculating the optical flow field of images and then analyzing the velocity vector features [11]. The template matching method, which is usually used for stationary target detection or works as an assist module for robust target tracking, locates specific targets by comparing features such as color [12], contour [13], SIFT (Scale Invariant Feature Transform) [14], etc. In recent years, most researchers have combined studies of target detection with machine learning theories and tried to improve the precision of detection with pattern classification algorithms [15]. Some state-of-the-art detection algorithms, built upon SVM (Support Vector Machine) [17], Adaboost [18], wavelets [19], etc., have been proposed.

However, most related studies mainly aimed at improving the precision of detection, while the computational consumption and usability of the algorithms were sometimes overlooked. Unlike theoretical studies conducted on high-performance desktops or workstations, target detection and tracking for robotic applications are usually based on embedded microprocessors and a limited power supply. Thus the real-time performance of the detection algorithm and the power consumption of the whole system have to be taken into consideration. FPGAs (Field Programmable Gate Arrays) [19] and DSPs (Digital Signal Processors) [20] were adopted to implement real-time portable detection and tracking systems in some studies, but few studies paid close attention to low-power system design and global power optimization [21].

To meet the requirements of our amphibious spherical robots, which impose even stricter constraints on the power consumption and heat of circuits, a novel low-power moving target detection system was designed and implemented in this paper. The latest Xilinx Zynq-7000 SoC (System on Chip) was adopted to build the electrical system of our robot for control and vision applications. An OmniVision OV7670, a compact CMOS camera controlled by IP cores in the PL (Programmable Logic) of the SoC, was used to capture 640×480 RGB images at 30 fps (frames per second). The Gaussian background modeling method was implemented with Vivado HLS in the PL of the SoC to detect moving targets, and an FCT (Fast Compressive Tracking) tracker with a motion estimation mechanism was implemented in the PS (Processing System) of the SoC to track the targets captured by the detection subsystem. A working mode switching mechanism was used to save power and manage the detection and tracking subsystems. In the standby or detecting mode, only the detection subsystem worked to sense moving targets, while most robotic functional modules, including the tracking subsystem, were switched to a low-power mode. In the working or tracking mode, the tracking subsystem and the other robotic functional modules were activated by a positive detection result. Besides, by introducing DPM (Dynamical Power Management) and DVFS (Dynamical Voltage and Frequency Scaling) techniques, the utilization rate of the power supply was increased, which may lead to longer battery life and an expanded motion range.

The rest of this paper is organized as follows. An overview of our amphibious spherical robots and power optimization techniques is given in Section II. Design details of the SoC-based detection system are elaborated in Section III. Power optimization design and experimental results are described in Section IV. Section V concludes the paper and outlines future work.

II. RELATED WORK AND APPLICATION REQUIREMENTS

A. Amphibious Spherical Robot

Fig. 1 Diagram of the improved amphibious spherical robot [22]

As introduced in reference [22], an amphibious spherical robot for delicate operations in narrow spaces was proposed by our team in 2012. As shown in Fig. 1, the robot consisted of a waterproof hemispheric upper hull (250 mm in diameter), in which the electronic instruments were installed, and two openable quarter-sphere lower shells (266 mm in diameter). In the land mode, the robot walked on four legs driven by servo motors, and in the underwater mode, it swam with water-jet motors. Different from most mobile robots or autonomous underwater vehicles, the robot worked in more complex environments and had a more compact size, and a large number of sensors, driving circuits and information processing units were installed in the small, closed upper hull. Consequently, the power consumption and heat generated by the electronic system had to be taken into consideration.

As shown in Fig. 2, in order to decrease the power of ICs (Integrated Circuits) and promote modular design, an Avnet MicroZed core-board carrying a Xilinx all programmable Zynq-7000 SoC (Z-7020) was adopted to fabricate the electronic system of the improved version of the spherical robot in 2014 [23]. However, a moving target detection and tracking system for the robot was not ready yet. Meanwhile, the power optimization problem was not fundamentally solved, because the Zynq is a powerful but power-hungry processor consisting of a large-scale FPGA (the PL) and a dual-core ARM Cortex-A9 processor (the PS) [24].

Fig. 2 Diagram of the Zynq SoC-based electronic system of the robot [23]

B. Power Optimization of Integrated Circuits

The total power consumption of an IC consists of the static power consumption, which is generated by leakage current and is independent of device operations, and the dynamical power consumption, which is caused by device activities. In general, the dynamical power consumption forms the majority of the total power consumption, and numerous studies have been carried out to decrease the dynamical power consumption by optimizing the compiling process and improving the IC configuration [25]. Currently, the commonest and most effective power optimization techniques include DPM and DVFS. The main idea of DPM is to switch the system or some peripherals to a low-power mode by powering them down or gating off their clocks when they are not used. The main idea of DVFS is to reduce the supply voltage or clock frequency of the system or some peripherals when they are not at full work load [26]. For programmable hardware devices such as FPGAs, reconfiguration technology can also reduce power consumption, because it decreases the utilized logic area [21].

As to our robot, there is a lot of room for power optimization, which means longer battery life, less heat dissipation and an expanded robotic motion range. On the one hand, the Zynq SoC provides an abundant set of power reduction mechanisms covering various DPM and DVFS techniques, as it is a hybrid processor combining an ARM processor and an FPGA, as shown in Fig. 2. On the other hand, moving target detection and tracking are, to some degree, two separate functions, and the robot is idle, waiting to detect potential targets, most of the time, which makes it possible to design a sleeping mode.

III. DESIGN OF THE DETECTION SYSTEM

A. General Design of the Detection System

As shown in Fig. 3, the detection system was implemented in the Zynq SoC and consisted of a detection subsystem and a tracking subsystem. To facilitate system integration and power optimization, all functional modules of the detection system were packaged into Xilinx IP cores which communicated with each other through standard AXI buses. Considering that the Gaussian background modeling-based detection subsystem and the FCT-based tracking subsystem were relatively independent and processed images of different resolutions at different frame rates, multiple image processing channels were constructed in parallel in the PL of the SoC. The modules marked in green constituted the shared image processing channel, which acquired 640×480 RGB565 original images from the CMOS camera (OmniVision OV7670) at 30 fps and accomplished gray conversion and image enhancement. The modules marked in blue constituted the detection subsystem and were enabled only in the standby or detecting mode, processing 160×120 gray images at 15 fps. The modules marked in orange constituted the tracking subsystem, which processed 320×240 gray images at 30 fps and was activated when the detection subsystem provided a positive result. From a system point of view, the detection subsystem and the tracking subsystem worked alternately. Accordingly, the sensors and motors of the robot were powered on and off to realize systemic working mode switching.

Fig. 3 Diagram of the Zynq SoC-based detection system
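The alternating working-mode switching described above can be sketched as a small controller. The class and method names below (`ModeController`, `Subsystem`) are illustrative stand-ins for the real clock-gating and power-management calls, not the robot's firmware API.

```python
class Subsystem:
    """Placeholder for a real subsystem handle (e.g. a gated PL clock)."""
    def __init__(self, name):
        self.name, self.enabled = name, False
    def enable(self):
        self.enabled = True
    def disable(self):
        self.enabled = False

class ModeController:
    """Alternates between the detecting and tracking subsystems."""
    DETECTING, TRACKING = "detecting", "tracking"

    def __init__(self, detection, tracking):
        self.detection, self.tracking = detection, tracking
        self.mode = None
        self.enter_detecting()

    def enter_detecting(self):
        # Standby/detecting mode: only the detector runs.
        self.tracking.disable()
        self.detection.enable()
        self.mode = self.DETECTING

    def enter_tracking(self):
        # Working/tracking mode: a positive detection wakes the tracker.
        self.detection.disable()
        self.tracking.enable()
        self.mode = self.TRACKING

    def on_detection_result(self, positive):
        if positive and self.mode == self.DETECTING:
            self.enter_tracking()

    def on_tracking_lost(self):
        if self.mode == self.TRACKING:
            self.enter_detecting()
```

In the real system the `enable`/`disable` calls would map to powering peripherals and gating FCLKs, as described in Section IV.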

B. PL-based Detection Subsystem

The Gaussian background modeling method was adopted to probe moving targets in the detection subsystem. As a classical but effective detection algorithm, it senses the motion of objects by modeling pixel values with probability density functions and includes four steps. Firstly, the observation value of a pixel in a frame is assumed to be independent of the others and is modeled with a Gaussian probability density function denoted as:

$P(X_t) = G(X_t, \mu_t, \sigma_t)$  (1)

$G(X_t, \mu_t, \sigma_t) = \frac{1}{\sqrt{2\pi}\,\sigma_t}\, e^{-\frac{(X_t - \mu_t)^2}{2\sigma_t^2}}$  (2)

where $X_t$ is the value of a pixel in the t-th frame, $\mu_t$ is the mean of the Gaussian distribution and $\sigma_t$ is the standard deviation of the Gaussian distribution. Secondly, in the first frame, $\mu_t$ is set to the pixel value and $\sigma_t$ is set to an initial value $\sigma_{\mathrm{init}}$. Thirdly, a matching check is conducted for each pixel, which can be denoted as:

$I = \begin{cases} 0, & |X_{t+1} - \mu_t| < 2.5\,\sigma_t \\ 1, & |X_{t+1} - \mu_t| > 2.5\,\sigma_t \end{cases}$  (3)

where I indicates whether the pixel has a high likelihood of being part of a moving target. Fourthly, the parameters of the Gaussian model are updated, which can be described as:

$\mu_{t+1} = \begin{cases} (1-\alpha)\,\mu_t + \alpha X_{t+1}, & I = 0 \\ X_{t+1}, & I = 1 \end{cases}$  (4)

$\sigma_{t+1}^2 = \begin{cases} (1-\alpha)\,\sigma_t^2 + \alpha\,(X_{t+1} - \mu_{t+1})^2, & I = 0 \\ \sigma_{\mathrm{init}}^2, & I = 1 \end{cases}$  (5)

where α is the learning parameter. Finally, the position information of a moving target can be obtained by analyzing the connected regions of unmatched pixels ($I = 1$).
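As a software reference for the four steps above, the per-pixel Gaussian background model can be sketched as follows. This is a NumPy sketch; the class name and the default values of α and $\sigma_{\mathrm{init}}$ are illustrative, not taken from the paper's HLS implementation.

```python
import numpy as np

class GaussianBackgroundModel:
    """Per-pixel single-Gaussian background model (Eqs. 1-5)."""

    def __init__(self, first_frame, sigma_init=5.0, alpha=0.1):
        # Step 2: initialize the mean to the first frame's pixel values.
        self.mu = first_frame.astype(np.float64)
        self.sigma = np.full(first_frame.shape, float(sigma_init))
        self.sigma_init = float(sigma_init)
        self.alpha = float(alpha)

    def update(self, frame):
        x = frame.astype(np.float64)
        # Step 3 (Eq. 3): matching check; I = 1 marks unmatched pixels.
        fg = np.abs(x - self.mu) > 2.5 * self.sigma
        bg = ~fg
        # Step 4 (Eqs. 4-5): blend matched pixels, reset unmatched ones.
        self.mu = np.where(bg, (1 - self.alpha) * self.mu + self.alpha * x, x)
        var = np.where(bg,
                       (1 - self.alpha) * self.sigma ** 2
                       + self.alpha * (x - self.mu) ** 2,
                       self.sigma_init ** 2)
        self.sigma = np.sqrt(var)
        return fg.astype(np.uint8)   # binary foreground mask
```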

As shown in Fig. 4, the principal part of the detection subsystem was realized as AXI IP cores in the PL of the SoC, and the main part of these IP cores was implemented in C/C++ with Xilinx Vivado HLS. It consisted of the Gaussian modeling detector, the image convertor, the image resizer, the target positioning module, the AXI DMA (Direct Memory Access) modules and the BRAM (Block Random Access Memory) controller. Because the inner BRAM of the Zynq SoC (only 560 KB in the design of this paper) is insufficient to process 640×480 RGB565 images with the Gaussian background modeling method, color images from the OV7670 were converted to 160×120 gray images using bilinear interpolation by the image convertor and resizer. At the same time, the counterpart 320×240 gray images, which would be used for the initialization process of the FCT tracker, were buffered into the DDR3 by the DMA module connected to the AXI_HP1 port. The position information of the detected moving target was transmitted by the DMA module connected to the AXI_HP2 port. A PL-to-PS interrupt would be raised by the target positioning module if the detection result was positive.

Fig. 4 Diagram of the detection subsystem
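The bilinear downsampling performed by the image convertor and resizer can be illustrated in software as follows. This is an illustrative NumPy analogue of the hardware resizer, not the HLS code; the function name is ours.

```python
import numpy as np

def bilinear_resize(img, out_h, out_w):
    """Bilinear-interpolated resize of a 2-D grayscale image."""
    in_h, in_w = img.shape
    # Source-image sample coordinates for every output pixel.
    ys = np.linspace(0, in_h - 1, out_h)
    xs = np.linspace(0, in_w - 1, out_w)
    y0 = np.floor(ys).astype(int)
    x0 = np.floor(xs).astype(int)
    y1 = np.minimum(y0 + 1, in_h - 1)
    x1 = np.minimum(x0 + 1, in_w - 1)
    wy = (ys - y0)[:, None]          # vertical interpolation weights
    wx = (xs - x0)[None, :]          # horizontal interpolation weights
    img = img.astype(np.float64)
    # Interpolate horizontally along the top and bottom rows, then vertically.
    top = img[np.ix_(y0, x0)] * (1 - wx) + img[np.ix_(y0, x1)] * wx
    bot = img[np.ix_(y1, x0)] * (1 - wx) + img[np.ix_(y1, x1)] * wx
    return top * (1 - wy) + bot * wy
```

In the pipeline described above this operation maps 640×480 inputs to the 160×120 detection stream and the 320×240 tracking stream.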

Figure 5 shows the basic structure of the Gaussian modeling detector, which mainly segmented the moving target from the background. Its working status and functional parameters can be configured by register access through an AXI-Lite bus controlled by the application program running in the PS. The storage control module and the image receive module respectively read $(\mu_t, \sigma_t)$ and $X_t$ from the BRAM and from the image convertor and resizer through AXI-Stream buses. For higher computational efficiency, multiple processing channels were instantiated inside the Gaussian background modeling module to construct superscalar arithmetic units. The storage control module and the image receive module provided pixel data to these channels and triggered the computation process successively. The dilate and erode module was implemented directly with the video processing functions of the Vivado HLS video library. The 160×120 foreground image from the dilate and erode module was resized to 320×240 and then transmitted to the target positioning module by the binary image transmit module.

Fig. 5 Diagram of the Gaussian modeling detector

Fig. 6 State machine diagram of the target positioning module

Figure 6 shows the state machine of the target positioning module. A pipeline mechanism was adopted for a higher processing speed. In the data receiving part, the module read the binary image from the Gaussian modeling detector and stored it in the BRAM. In the calculating part, it checked the pixel values, scanned the connected regions and calculated the area of each connected region. If the area of a connected region was large enough, a positive detection result and an interrupt signal would be generated to wake up the PS and switch the robot to the working or tracking mode.
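The connected-region scan with an area threshold can be sketched in software as a flood fill over the binary foreground mask. The function name and the 4-connectivity choice are ours; the hardware module pipelines this differently.

```python
from collections import deque

def largest_region_bbox(mask, min_area):
    """Scan a binary mask, measure 4-connected regions and return
    (area, (y0, x0, y1, x1)) of the largest region whose area is at
    least min_area, or None if no region is large enough."""
    h, w = len(mask), len(mask[0])
    seen = [[False] * w for _ in range(h)]
    best = None
    for sy in range(h):
        for sx in range(w):
            if not mask[sy][sx] or seen[sy][sx]:
                continue
            # BFS flood fill over one 4-connected foreground region.
            q = deque([(sy, sx)])
            seen[sy][sx] = True
            area, y0, x0, y1, x1 = 0, sy, sx, sy, sx
            while q:
                y, x = q.popleft()
                area += 1
                y0, x0 = min(y0, y), min(x0, x)
                y1, x1 = max(y1, y), max(x1, x)
                for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ny, nx = y + dy, x + dx
                    if (0 <= ny < h and 0 <= nx < w
                            and mask[ny][nx] and not seen[ny][nx]):
                        seen[ny][nx] = True
                        q.append((ny, nx))
            if area >= min_area and (best is None or area > best[0]):
                best = (area, (y0, x0, y1, x1))
    return best
```

A non-None result here corresponds to the positive detection that raises the PL-to-PS interrupt.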

C. PS-based Tracking Subsystem

As shown in Fig. 3, the tracking subsystem was centered on an FCT tracker running in the PS of the SoC [27], which tracked the moving target according to the detection result provided by the detection subsystem. Once it was activated, the RGB565-to-gray convertor and the AXI DMA module connected to the AXI_HP0 port were enabled. Then the FCT tracker read 320×240 gray images buffered in the DDR3 at 30 fps and executed the tracking procedure starting from the initial target position provided by the detection subsystem. Considering that the FCT algorithm does not work well on high-speed targets, a second-order Kalman motion estimation mechanism was added to the tracker to enhance its effectiveness and robustness. The location and motion tendency of the target were predicted with the linear dynamic model denoted as:

$\mathbf{X}_{n+1} = \boldsymbol{\Phi}\mathbf{X}_n + \beta\mathbf{W}_n$  (6)

$\mathbf{Y}_{n+1} = \mathbf{H}\mathbf{X}_{n+1} + \alpha\mathbf{V}_n$  (7)

$\boldsymbol{\Phi} = \begin{pmatrix} 1 & 0 & \Delta t & 0 \\ 0 & 1 & 0 & \Delta t \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}$  (8)

$\mathbf{H} = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \end{pmatrix}$  (9)

$\mathbf{X}_n = (x_n,\; y_n,\; x_n - x_{n-1},\; y_n - y_{n-1})^{\mathrm{T}}$  (10)

$\mathbf{Y}_n = (x_n,\; y_n)^{\mathrm{T}}$  (11)

where $\boldsymbol{\Phi}$ is the transfer matrix, $\mathbf{H}$ is the measurement matrix, α and β are adjustable parameters of the Kalman filter, $\mathbf{W}_n$ is the prediction noise vector and $\mathbf{V}_n$ is the measurement noise vector [28]. $\mathbf{X}_n$ contains the real position and velocity information of the target, and $\mathbf{Y}_n$ contains the measured position information of the target. Because the second-order Kalman motion estimation mechanism has proved effective for most objects in smooth motion, the improved FCT tracker adopted in this design was reliable.
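The predict/correct cycle implied by Eqs. (6)-(11) can be sketched with a standard constant-velocity Kalman filter. The class name and the noise covariances `q` and `r` below are illustrative tuning values not given in the paper.

```python
import numpy as np

class MotionEstimator:
    """Constant-velocity Kalman predictor for a 2-D target position."""

    def __init__(self, x0, y0, dt=1.0, q=1e-2, r=1.0):
        # Transfer matrix Phi (Eq. 8) and measurement matrix H (Eq. 9).
        self.Phi = np.array([[1, 0, dt, 0],
                             [0, 1, 0, dt],
                             [0, 0, 1, 0],
                             [0, 0, 0, 1]], dtype=float)
        self.H = np.array([[1, 0, 0, 0],
                           [0, 1, 0, 0]], dtype=float)
        self.X = np.array([x0, y0, 0.0, 0.0])  # state: position + velocity
        self.P = np.eye(4)                     # state covariance
        self.Q = q * np.eye(4)                 # process noise covariance
        self.R = r * np.eye(2)                 # measurement noise covariance

    def predict(self):
        # Time update (Eq. 6): propagate state and covariance.
        self.X = self.Phi @ self.X
        self.P = self.Phi @ self.P @ self.Phi.T + self.Q
        return self.X[:2]                      # predicted (x, y)

    def correct(self, zx, zy):
        # Measurement update (Eq. 7): blend in the measured position.
        z = np.array([zx, zy])
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.X = self.X + K @ (z - self.H @ self.X)
        self.P = (np.eye(4) - K @ self.H) @ self.P
        return self.X[:2]
```

Fed with the FCT tracker's measured positions, the predicted (x, y) can pre-position the search window for fast-moving targets.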

IV. OPTIMIZATION AND EXPERIMENTS

A. Power Optimization

As mentioned in Section II, there is plenty of room for power optimization of the detection system by utilizing the power reduction mechanisms provided by the Zynq SoC. Three techniques were mainly adopted for power optimization in this design.

(1) CPU Frequency Scaling: As elaborated in Section III, the whole detection system was divided into the PL-based detection subsystem and the PS-based tracking subsystem. In the standby or detecting mode, only the detection subsystem was active, and the PS was set to a low-frequency mode through the cpufreq framework of Linux, in which the PS or ARM processor ran at 333.34 MHz rather than 666.67 MHz. Similarly, the detection subsystem would be shut down in the working or tracking mode. As a result, the computing load and power consumption of the SoC were decreased by reducing the number of peripherals working simultaneously.

(2) DPM: Different from most SoC products, the PL of the Zynq SoC is a special peripheral of the PS, which means the PL gets its configuration data and major clock signals (FCLK0 to FCLK3) from the PS. It is therefore convenient to enable or disable HDL modules by gating off PL clocks. As shown in Fig. 3, the firmware of the detection subsystem and the tracking subsystem was packaged into several AXI IP cores which used FCLK1 and FCLK2 separately. In the tracking and detecting modes, FCLK1 and FCLK2 were respectively gated off by the power manager through the video timing controller and the sysfs interface of Linux.

(3) DVFS: The PL of the Zynq SoC adopts the latest Xilinx 7-series FPGA technologies and supports logic running even faster than 100 MHz, which may exceed the real-time processing demand of the modules in the system. The dynamical power consumption of an IC can be modeled as:

$P_{\mathrm{dynamic}} = C f V^2$  (12)

where C represents the load capacitance, f the clock frequency and V the core voltage. Consequently, scaling the clock frequency of modules in the PL is a feasible way to optimize power. For instance, according to the project reports of Xilinx Vivado and Vivado HLS, the detection subsystem took about 3,729,000 clock cycles to process an image. Considering that the frame rate was only 15 fps, the clock frequency of the detection subsystem (FCLK1) was set to 60 MHz (62.15 ms per frame) rather than 100 MHz (37.29 ms per frame).
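The frequency choice above is simple budget arithmetic: the scaled clock only needs to be fast enough for the per-frame processing time to fit within the 15 fps frame period.

```python
# Cycle budget per frame reported by the Vivado / Vivado HLS project reports.
CYCLES_PER_FRAME = 3_729_000
FRAME_PERIOD_MS = 1000.0 / 15          # ~66.7 ms available per frame at 15 fps

def frame_time_ms(f_hz):
    """Processing time for one frame at clock frequency f_hz."""
    return CYCLES_PER_FRAME / f_hz * 1000.0

t60 = frame_time_ms(60e6)    # 62.15 ms: still meets the 15 fps deadline
t100 = frame_time_ms(100e6)  # 37.29 ms: faster than the pipeline needs
```

Since Eq. (12) scales linearly with f, running FCLK1 at 60 MHz instead of 100 MHz cuts that clock domain's dynamic power by roughly 40% while keeping real-time behavior.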

Because the power consumption of the SoC cannot be measured separately, the power analysis tool of Xilinx Vivado was used for a quantitative analysis of the power optimization in this design. As shown in Fig. 7, the average power of the SoC dropped from 1991 mW to 1698 mW (nearly 15%) in the standby or detecting mode after the power optimization techniques were applied. So it was feasible to reduce the dynamic power of the SoC, even though the static power decreased only slightly. Meanwhile, an Agilent 34410A multimeter was used to evaluate the average power consumption of the whole robotic detection system by measuring the current and voltage. In the standby or detecting mode, the average power of the detection system was 3.36 W and most robotic devices were powered off. In the working or tracking mode, the average power of the detection system was 3.84 W and most robotic devices were powered on. Given that the robot was idle, waiting for commands or detecting, in most circumstances, the battery life of our robot was extended by adopting the optimization techniques mentioned above and shutting down unused devices.

Fig. 7 Test results of SoC power optimization

B. Experimental Results

Figure 8 (a) shows a picture of the detection system in this paper, which was implemented with an Avnet MicroZed (Z-7020) core-board, a peripherals board and an OmniVision OV7670. The total weight of the system was 125 g. As shown in Figure 9 (a) to (f), a standard benchmark video named "bike.avi" was input into the detection system to test its detection and tracking performance. In the detecting mode, images from the video were resized to 160×120 and a Gaussian background model was established and updated. The downsampling process and the dilate and erode operations eliminated the detection noise caused by background disturbances. Then the contour and position of the moving target were located by analyzing the connected regions. In the tracking mode, the tracking subsystem was woken up, and the FCT tracker tracked the target with a rectangle until the tracking process failed.

Fig. 8 Picture of the detection system

(a) Original image (320×240) (b) Detection result (160×120)

(c) Dilate and erode result (160×120) (d) Positioning result(320×240)

(e) Start tracking (320×240) (f) Tracking result (320×240)

Fig. 9 Test results of the detection system

V. CONCLUSION AND FUTURE WORK

A low-power, portable visual detection system was designed and implemented for our amphibious spherical robots in this paper. To meet the needs of compact size, low power consumption and low heat dissipation, a CMOS camera and a Zynq SoC were adopted in this design. For power optimization, the whole system was packaged into several AXI IP cores and was divided into a detection subsystem and a tracking subsystem which worked successively. Meanwhile, power reduction techniques including DPM and DVFS were adopted to decrease the dynamical power consumption. The power analysis showed that the power consumption of the SoC dropped noticeably, and the detection test verified the effectiveness of the system.

The studies in this paper may be meaningful for the design of vision-based mobile robots. However, they did not thoroughly solve the power and robustness problems of the detection and tracking system for our robots. The detection and tracking algorithms adopted in this design were not able to track multiple moving targets, and some state-of-the-art real-time algorithms may perform better in robotic applications. Moreover, the reconfiguration technique, which is a key feature of the Zynq and may also reduce dynamical power, was not utilized in this paper. Our future work will aim at solving the above problems.

ACKNOWLEDGMENT

This work was supported by the Excellent Young Scholars Research Fund of Beijing Institute of Technology (No. 3160012331522) and the Basic Research Fund of the Beijing Institute of Technology (No. 3160012211405). This research project was also partly supported by the National Natural Science Foundation of China (61375094), the Key Research Program of the Natural Science Foundation of Tianjin (13JCZDJC26200) and the National High Tech. Research and Development Program of China (No. 2015AA043202).

Yuan Wang, Weili Peng and Zhe Wang also contributed to the fabrication work of the amphibious spherical robot used in this paper.

REFERENCES

[1] S. Ballesta, G. Reymond, M. Pozzobon and J. R. Duhamel, "A real-time 3D video tracking system for monitoring primate groups," Journal of Neuroscience Methods, vol. 234, pp. 147-152, August 2014.

[2] D. Lee, G. Kim, D. Kim, H. Myung and H.-T. Choi, “Vision-based object detection and tracking for autonomous navigation of underwater robots,” Ocean Engineering, vol. 48, pp. 59-68, July 2012.

[3] V. Rosenzveig, S. Briot, P. Martinet, E. Ozgur, and N. Bouton, "A method for simplifying the analysis of leg-based visual servoing of parallel robots," Proceedings of 2014 IEEE International Conference on Robotics and Automation (ICRA2014), pp. 5720-5727, Hong Kong, 2014.

[4] H. Mekki and M. Letaief, "Path planning for 3D visual servoing: For a wheeled mobile robot," Proceedings of 2013 International Conference on Individual and Collective Behaviors in Robotics (ICBR2013), pp. 86-91, Sousse, 2013.

[5] W. Jia, W.-J. Yi, J. Saniie and E. Oruklu, "3D image reconstruction and human body tracking using stereo vision and Kinect technology," Proceedings of 2012 IEEE International Conference on Electro/Information Technology (EIT2012), pp. 1-4, Indianapolis, 2012.

[6] Z. Wang, H. Song, H. Xiao, W. He, J. Gu and K. Yuan, "A real-time small moving object detection system based on infrared image," Proceedings of 2014 IEEE International Conference on Mechatronics and Automation (ICMA2014), pp. 1149-1154, Tianjin, 2014.

[7] B.-b. Wang, Z.-X. Chen, J. Wang and L. Zhang, "Pedestrian detection based on the combination of HOG and background subtraction method," Proceedings of 2011 International Conference on Transportation, Mechanical, and Electrical Engineering (TMEE2011), pp. 527-531, Changchun, 2011.

[8] C. Toepfer, M. Wende, G. Baratoff and H. Neumann, "Robot navigation by combining central and peripheral optical flow detection on a space-variant map," Proceedings of Fourteenth International Conference on Pattern Recognition (Volume: 2), pp. 1804-1807, Brisbane, 1998.

[9] M. Awais, N. Badruddin and M. Drieberg, "Automated eye blink detection and tracking using template matching," Proceedings of 2013 IEEE Student Conference on Research and Development (SCOReD2013), pp. 79-83, Putrajaya, 2013.

[10] M. Berger and L. M. Seversky, "Subspace Tracking under Dynamic Dimensionality for Online Background Subtraction," Proceedings of 2014 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 1274-1281, Columbus, 2014.

[11] L. Shindler, M. Moroni and A. Cenedese, “Using optical flow equation for particle detection and velocity prediction in particle tracking,” Applied Mathematics and Computation, vol. 218, no. 17, pp. 8684-8694, May 2012.

[12] Y.-T. Pai, L.-T. Lee, S.-J. Ruan, Y.-H. Chen, S. Mohanty and E. Kougianos, "Honeycomb Model Based Skin Color Detector for Face

Detection," Proceedings of 15th International Conference on Mechatronics and Machine Vision in Practice (M2VIP2008), pp. 11-16, Auckland, 2008.

[13] A. Kazlouski and R. K. Sadykhov, "Plain objects detection in image based on a contour tracing algorithm in a binary image," Proceedings of 2014 IEEE International Symposium on Innovations in Intelligent Systems and Applications (INISTA), pp. 242-248, Alberobello, 2014.

[14] W.-L. Zhao and C.-W. Ngo, “Flip-Invariant SIFT for Copy and Object Detection,” IEEE Transactions on Image Processing, vol. 22, no. 3, pp. 980-991, March 2013.

[15] Z. Lin and L. S. Davis, “Shape-Based Human Detection and Segmentation via Hierarchical Part-Template Matching,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 32, no. 4, pp. 604-618, April 2010.

[16] S. Mirghasemi and E. Banihashem, "Sea target detection based on SVM method using HSV color space," Proceedings of 2009 IEEE Student Conference on Research and Development (SCOReD), pp. 555-558, UPM Serdang, 2009.

[17] A. Shashua, Y. Gdalyahu and G. Hayun, "Pedestrian detection for driving assistance systems: single-frame classification and system level performance," Proceedings of 2004 IEEE Intelligent Vehicles Symposium, pp. 1-6, New York, 2004.

[18] J. Wang, X. Wang, J. Wu, Y. Fan and G. Huang, "Research on moving target detection algorithm based on MRA and wavelet threshold," Proceedings of The 26th Chinese Control and Decision Conference (2014 CCDC), pp. 4396-4400, Changsha, 2014.

[19] D. Kiran, A. I. Rasheed, and H. Ramasangu, "FPGA implementation of blob detection algorithm for object detection in visual navigation." Proceedings of 2013 International conference on Circuits, Controls and Communications (CCUBE), pp. 1-5, Bengaluru, 2013.

[20] W. Ahmed, M. Irfan, Muzammil and Yaseen, "Pointing and target selection of object using color detection algorithm through DSP processor TMS320C6711," Proceedings of 2011 International Conference on Information and Communication Technologies (ICICT), pp. 1-3, Karachi, 2011.

[21] M. Imran, K. Shahzad, N. Ahmad, M. O'Nils, N. Lawal and B. Oelmann, “Energy-Efficient SRAM FPGA-Based Wireless Vision Sensor Node: SENTIOF-CAM,” IEEE Transactions on Circuits and Systems for Video Technology, vol. 24, no. 12, pp. 2132-2143, December 2014.

[22] S. Guo, S. Mao, L. Shi and M. Li, "Development of an amphibious mother spherical robot used as the carrier for underwater microrobots," Proceedings of 2012 ICME International Conference on Complex Medical Engineering (CME), pp. 758-762, Kobe, 2012.

[23] S. Pan, S. Guo, L. Shi, Y. He, Z. Wang and Q. Huang, "A spherical robot based on all programmable SoC and 3-D printing," Proceedings of 2014 IEEE International Conference on Mechatronics and Automation (ICMA), pp. 150-155, Tianjin, 2014.

[24] M. Al Kadi, P. Rudolph, D. Gohringer and M. Hubner, "Dynamic and partial reconfiguration of Zynq 7000 under Linux," Proceedings of 2013 International Conference on Reconfigurable Computing and FPGAs (ReConFig), pp. 1-5, Cancun, 2013.

[25] Y. Wei, “Study of Embedded System Low-Power Software Technology,” Chinese Computer Technology and Development, vol. 1, no. 6, pp. 27-31, June 2011.

[26] M. Salajegheh, “Software techniques to reduce the energy consumption of low-power devices at the limits of digital abstractions,” Computer Science, vol. 23, no. 2, February 2013.

[27] K. Zhang, L. Zhang, M.-H. Yang, " Fast Compressive Tracking," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 36, no. 10, pp. 2002-2015, October 2014.

[28] S. Pan, L. Shi, S. Guo, "A Kinect-Based Real-Time Compressive Tracking Prototype System for Amphibious Spherical Robots," Sensors, vol. 15, no. 4, pp. 8232-8252, April 2015.