
JOURNAL OF SEMICONDUCTOR TECHNOLOGY AND SCIENCE, VOL.18, NO.4, AUGUST, 2018 ISSN(Print) 1598-1657 https://doi.org/10.5573/JSTS.2018.18.4.491 ISSN(Online) 2233-4866

Manuscript received Nov. 29, 2017; accepted Mar. 8, 2018 This work was partly presented in Korean Conference on Semiconductor (KCS) 2017. 1 School of Electronics and Information Engineering, Korea Aerospace University, Goyang-si, Korea 2 SoC Platform Research Center, Korea Electronics Technology Institute, Seongnam-si, Korea 3 Department of Information and Communication Engineering, Sejong University, Seoul, Korea E-mail : [email protected]

Design of Moving Object Detector Based on Modified GMM Algorithm for UAV Collision Avoidance

Jaechan Cho1, Yongchul Jung1, Dongsun Kim2, Seongjoo Lee3, and Yunho Jung1

Abstract—In this paper, we propose a moving object detection (MOD) algorithm for unmanned aerial vehicle (UAV) collision avoidance, also presenting its hardware design and implementation results. The proposed MOD algorithm is based on a modified Gaussian mixture model (GMM) based background subtraction (BS) algorithm. Typical GMM based BS algorithms show superior performance in situations where the camera is kept stationary, but their performance is significantly degraded when the camera is moving, which occurs in UAV applications. Therefore, we propose a modified GMM based BS algorithm that is able to compensate for the camera ego-motion by using UAV motion information obtained from an inertial measurement unit (IMU). The proposed moving object detector was designed with Verilog-HDL, and its real-time operation was verified and evaluated using an FPGA based test system. The proposed moving object detector was implemented with 475 logic slices, 5 DSP48s, and a block memory of 3,686.4 Kbits, and it can support real-time processing at an operating frequency of 170 MHz for 1280×720 HD images.

Index Terms—Background subtraction, FPGA, Gaussian mixture model, moving object detection, UAV

I. INTRODUCTION

Unmanned aerial vehicles (UAVs) have attracted a lot of attention because of their applications and multiple possibilities of commercialization in various fields [1]. In order for UAVs to perform their mission safely, it is of paramount importance to implement a robust collision avoidance function, which can detect and avoid different obstacles such as birds and other UAVs. A reliable moving object detection (MOD) technology, based on active sensor and image processing techniques, is the essential part of such a collision avoidance function [2, 3]. Image processing based MOD techniques are particularly suitable for UAVs because of their reasonable power consumption and lighter weight when compared to other active sensor techniques such as radar and lidar [4].

Temporal difference (TD), Gaussian mixture model (GMM) based background subtraction (BS), and optical flow estimation (OFE) are typical algorithms used in image processing based MOD techniques [5-10]. The TD algorithm detects moving objects by exploiting the difference between the input frame and the previous frame. Among all the techniques, it is probably the one characterized by the lowest degree of complexity, but its detection performance is very poor [5]. The OFE algorithm, in general, estimates the change of motion between two image frames assuming that the brightness is kept constant from one frame to the next [6, 7]. Although this kind of algorithm has high accuracy when detecting moving objects, its complexity is too high to be implemented in hardware because it requires excessive computations such as matrix inversion, gradient calculation, and pyramid operation to estimate the motion of each pixel. The GMM algorithm defines the distribution of each pixel as a mixture of multiple Gaussian distributions and estimates the background by updating the distributions according to the intensity of the input pixel [8-10]. Moving objects are then detected by the difference between the estimated background and the input frame through the BS algorithm. Since the GMM algorithm estimates the background using multiple Gaussian models, it is efficient for detecting backgrounds that actively change, such as traffic lights, flags, and leaves. Therefore, the GMM based BS algorithm shows higher accuracy than the TD algorithm, and it has a significantly lower implementation complexity than the OFE algorithm because it does not require matrix inversion, gradient calculation, or pyramid operation. As a result, it is best suited for UAV applications, where the weight and power consumption are the main limiting factors. However, the GMM based BS algorithm shows good performance in fixed camera environments, whereas in mobile camera environments, such as those in UAV applications, the performance is significantly degraded due to the camera movement, called ego-motion.

In this paper, we propose an efficient GMM based BS technique that can compensate for the ego-motion in mobile camera environments. This is accomplished by using UAV motion information from the inertial measurement unit (IMU) sensor mounted on the UAV [11]. We present the design and implementation results of the proposed MOD hardware for real-time processing. The remainder of this paper is organized as follows: Section 2 explains the GMM algorithm for background model generation and a technique for detecting moving objects using the BS algorithm. Section 3 explains the proposed MOD algorithm. Section 4 describes the hardware architecture of the proposed moving object detector. Section 5 describes the results obtained implementing the system on an FPGA. Finally, Section 6 concludes the paper.

II. GAUSSIAN MIXTURE MODEL BASED BACKGROUND SUBTRACTION ALGORITHM

1. Statistical Model

The GMM algorithm estimates the background image by dealing with a statistical model of the intensity of each pixel in the image frame. The statistical model for each pixel is composed of a mixture of K Gaussian distributions, each represented by three parameters: weight (w), mean (μ), and variance (σ²). The Gaussian distributions of each pixel have different parameters, which change from frame to frame. Therefore, these quantities are identified by three symbols, X, k, and t, where X denotes the pixel intensity, k indexes the Gaussian distribution (k = 1, …, K), and t refers to the time of the considered frame.

2. Parameters Update

The parameters are updated differently depending on the match condition, which indicates whether the pixel is suitable for the background model. The match condition is checked against the K Gaussian distributions that model the pixel:

m_k = 1,  if (X_t − μ_{k,t})² ≤ D² · σ²_{k,t},    (1)

where D is a threshold whose value is experimentally chosen equal to 2.5. A Gaussian distribution matching the pixel (m_k = 1) is considered a "matched distribution," and its parameters are updated as follows:

w_{k,t+1} = (1 − α_w) · w_{k,t} + α_w,    (2)

μ_{k,t+1} = (1 − α_{k,t}) · μ_{k,t} + α_{k,t} · X_t,    (3)

σ²_{k,t+1} = (1 − α_{k,t}) · σ²_{k,t} + α_{k,t} · (X_t − μ_{k,t})².    (4)

The parameter α_w is the learning rate for the weight, while α_{k,t} is the learning rate for the mean and variance, which is derived from

α_{k,t} = α_w / w_{k,t}.    (5)

For the unmatched Gaussian distributions, the mean and variance are unchanged, while the weights are updated as

w_{k,t+1} = (1 − α_w) · w_{k,t}.    (6)

When the pixel does not match any of the Gaussian distributions, a specific "no-match" procedure is executed, and the Gaussian distribution with the smallest w_{k,t} is updated as


w_{k,t+1} = w_0,   μ_{k,t+1} = X_t,   σ²_{k,t+1} = σ_0²,    (7)

where w_0 and σ_0² are constant values. The background model of each pixel is generated in a gray scale from 0 to 255 using the means and weights of the Gaussian models as follows:

B_t = Σ_{k=1}^{K} w_{k,t} · μ_{k,t}.    (8)
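For reference, the per-pixel update of Eqs. (1)-(8) can be sketched in software. This is an illustrative Python sketch, not the paper's Verilog design; only D = 2.5 comes from the text, while the learning rate and initialization constants (ALPHA_W, W0, VAR0) are assumed example values.

```python
import numpy as np

D = 2.5                 # match threshold, chosen experimentally as in the text
ALPHA_W = 0.01          # weight learning rate alpha_w (assumed example value)
W0, VAR0 = 0.05, 36.0   # no-match constants w_0 and sigma_0^2 (assumed values)

def gmm_update(x, w, mu, var):
    """Update one pixel's K Gaussians for intensity x and return B_t.

    w, mu, var are length-K arrays holding w_{k,t}, mu_{k,t}, sigma^2_{k,t};
    they are updated in place to the t+1 values.
    """
    match = (x - mu) ** 2 <= (D ** 2) * var           # Eq. (1)
    if match.any():
        k = int(np.argmax(match))                     # a matched distribution
        alpha_k = ALPHA_W / w[k]                      # Eq. (5)
        w *= (1 - ALPHA_W)                            # Eq. (6) for every k
        w[k] += ALPHA_W                               # Eq. (2) for the match
        mu_old = mu[k]
        mu[k] = (1 - alpha_k) * mu[k] + alpha_k * x   # Eq. (3)
        var[k] = (1 - alpha_k) * var[k] + alpha_k * (x - mu_old) ** 2  # Eq. (4)
    else:
        k = int(np.argmin(w))                         # Eq. (7): reinitialize the
        w[k], mu[k], var[k] = W0, x, VAR0             # smallest-weight Gaussian
    return float(np.sum(w * mu))                      # Eq. (8): background B_t
```

Note that the hardware described later also normalizes the weights (the "weight normalizer" block); that step is omitted here for brevity.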

3. Moving Object Detection Using BS Algorithm

A black and white (BW) image is generated by using the difference between the background model and the image frame, as follows:

BW_t = 1, if |X_t − B_t| > T;  0, otherwise,    (9)

where T is a fixed threshold value. If the absolute value is larger than the threshold value, it is classified as a moving object and stored as a white value (binary 1), and the rest is classified as background and stored as a black value (binary 0). Then, the BW image is compensated by removing the noise with a median filter, and, finally, moving objects are detected by searching the coordinates of the white area.
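A minimal sketch of the thresholding in Eq. (9), assuming NumPy arrays for the frame and background model; the threshold value used here is illustrative, not the one used in the paper:

```python
import numpy as np

T = 30  # fixed threshold (assumed example value)

def bw_image(frame, background, thresh=T):
    """Return 1 (white) where |X_t - B_t| > T, else 0 (black), per Eq. (9)."""
    diff = np.abs(frame.astype(np.int32) - background.astype(np.int32))
    return (diff > thresh).astype(np.uint8)
```

The cast to a signed type avoids unsigned-underflow artifacts when the frame and background are stored as 8-bit pixels.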

III. PROPOSED MOVING OBJECT DETECTION ALGORITHM

1. Moving Object Detection Using the Compensated Background Model

Fig. 1 shows a flow chart of the proposed MOD algorithm. Since the UAV moves freely according to its mission, the input frame includes not only the motion of the object but also the motion of the background. Since this background motion can be recognized as an object, it is necessary to remove the background motion information before the object is identified. For this purpose, the background model generated by the GMM needs to be compensated by using the motion information obtained from the IMU sensor mounted on the UAV. Initially, the motion information dx and dy in the x-axis and y-axis directions, respectively, is extracted using two extended Kalman filters (EKFs), as in [11]. Subsequently, dx and dy are each divided into an integer part and a fractional part. The integer parts I_dx and I_dy are obtained by rounding up dx and dy, while the fractional parts f_dx and f_dy are computed as

f_dx = I_dx − dx,    (10)

f_dy = I_dy − dy.    (11)
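Under the rounding-up convention above, the decomposition of Eqs. (10) and (11) can be sketched as follows (a hypothetical helper, not from the paper); the fractional part then always lies in [0, 1):

```python
import math

def decompose(d):
    """Split a motion value d into integer and fractional parts, Eqs. (10)-(11)."""
    i = math.ceil(d)   # integer part by rounding up, as stated in the text
    f = i - d          # fractional part f = I - d, so 0 <= f < 1
    return i, f
```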

Compensation for the integer part is performed by shifting the GMM parameters w_{k,t}, μ_{k,t}, and σ²_{k,t} generated from the previous frame by I_dx and I_dy, as shown in Fig. 2. In the case of μ_{k,t}, the empty space generated by the shift operation is filled with the X_t of the same position, as shown in Fig. 2(a). On the other hand, the empty spaces are filled with the fixed initialization values w_0 and σ_0² in the case of w_{k,t} and σ²_{k,t}, respectively, as depicted in Fig. 2(b) and (c). The fractional part is compensated by interpolating the previously compensated GMM parameters w_{k,t}, μ_{k,t}, and σ²_{k,t} in the x-axis and y-axis directions, respectively.

Fig. 1. Flow chart of the proposed MOD algorithm.

Fig. 2. Compensation for the integer part. The shaded region denotes the empty space generated by the shift operation: (a) μ_{k,t} memory, (b) w_{k,t} memory, (c) σ²_{k,t} memory.


The interpolation for f_dx is performed in the x-axis direction as shown in Eqs. (12)-(14), and the same is done for f_dy in the y-axis direction, as presented in Eqs. (15)-(17):

w^x_{k,t}(i,j) = f_dx · w_{k,t}(i,j) + (1 − f_dx) · w_{k,t}(i+1,j)    (12)

μ^x_{k,t}(i,j) = f_dx · μ_{k,t}(i,j) + (1 − f_dx) · μ_{k,t}(i+1,j)    (13)

(σ²)^x_{k,t}(i,j) = f_dx · σ²_{k,t}(i,j) + (1 − f_dx) · σ²_{k,t}(i+1,j)    (14)

w_{k,t}(i,j) = f_dy · w^x_{k,t}(i,j) + (1 − f_dy) · w^x_{k,t}(i,j+1)    (15)

μ_{k,t}(i,j) = f_dy · μ^x_{k,t}(i,j) + (1 − f_dy) · μ^x_{k,t}(i,j+1)    (16)

σ²_{k,t}(i,j) = f_dy · (σ²)^x_{k,t}(i,j) + (1 − f_dy) · (σ²)^x_{k,t}(i,j+1)    (17)
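The two interpolation passes can be sketched as follows, assuming each GMM parameter plane (w, μ, or σ²) is stored as a 2D array already shifted by the integer part. The axis convention (axis 0 as i/x, axis 1 as j/y) and the border handling (edge pixels keep their shifted values) are assumptions for illustration:

```python
import numpy as np

def interpolate_param(p, f_dx, f_dy):
    """Fractional-part compensation of one parameter plane, Eqs. (12)-(17)."""
    q = p.astype(np.float64).copy()
    # x-axis pass, Eqs. (12)-(14): blend each row entry with its (i+1) neighbor
    q[:-1, :] = f_dx * q[:-1, :] + (1 - f_dx) * q[1:, :]
    # y-axis pass, Eqs. (15)-(17): blend each column entry with its (j+1) neighbor
    q[:, :-1] = f_dy * q[:, :-1] + (1 - f_dy) * q[:, 1:]
    return q
```

Each assignment evaluates its right-hand side fully before writing, so the two passes behave like the separate x- and y-direction steps of the equations.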

In the next step, the BW image is generated by the difference between the compensated background model and the current frame. The median filter is applied to the generated BW image, and finally, the moving object is detected by finding the coordinates of the white area.

2. Performance Evaluation

We used the VIVID dataset to evaluate the MOD performance of the proposed algorithm. The VIVID dataset is an image dataset provided to evaluate object tracking algorithms and includes frames taken with an aerial camera [12]. Fig. 3 shows the result of applying an existing GMM-based BS algorithm and the proposed algorithm to images of the VIVID dataset. When the existing GMM-based BS algorithm is applied in moving camera environments, it can be seen that there are many false positives (FPs), which represent the total number of pixels in which a background pixel is recognized as an object. When the proposed algorithm is applied to compensate for the camera motion, the FPs are significantly reduced compared to the existing algorithm.

Table 1 shows a numerical comparison between the existing and the proposed algorithms for precision (Pr), recall (Re), and F-measure (Fm), which are defined as follows:

P_r = TP / (TP + FP),    (18)

R_e = TP / (TP + FN),    (19)

F_m = (2 · P_r · R_e) / (P_r + R_e).    (20)

True positive (TP) represents the total number of pixels in which an actual object pixel is recognized as an object, and false negative (FN) represents the total number of pixels where an actual object pixel is erroneously recognized as background. Therefore, P_r quantifies the precision of the actual object pixels among the pixels recognized by the algorithm as objects, and R_e quantifies the detection rate as the ratio of pixels recognized by the algorithm as objects among the actual object pixels.

Fig. 3. MOD performance of the proposed algorithm for the VIVID dataset.

Table 1. MOD performance comparison between the proposed algorithm and other algorithms

Algorithm     | Recall  | Precision | F-measure
MD [13]       | 0.58352 | 0.58618   | 0.58484
HOMO [14]     | 0.65987 | 0.66373   | 0.66179
BMS [15]      | 0.83242 | 0.80376   | 0.83159
GMM based BS  | 0.20687 | 0.13549   | 0.16373
Proposed      | 0.64582 | 0.64153   | 0.64367
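A sketch of how Eqs. (18)-(20) can be computed from binary ground-truth and detection masks (an illustrative helper, not the paper's evaluation code; it assumes at least one detected and one actual object pixel so the denominators are nonzero):

```python
import numpy as np

def mod_metrics(gt, det):
    """Pixel-level precision, recall, and F-measure, Eqs. (18)-(20)."""
    tp = int(np.sum((gt == 1) & (det == 1)))  # object detected as object
    fp = int(np.sum((gt == 0) & (det == 1)))  # background detected as object
    fn = int(np.sum((gt == 1) & (det == 0)))  # object detected as background
    pr = tp / (tp + fp)                       # Eq. (18)
    re = tp / (tp + fn)                       # Eq. (19)
    fm = 2 * pr * re / (pr + re)              # Eq. (20)
    return pr, re, fm
```

As a consistency check, the Proposed row of Table 1 satisfies Eq. (20): 2·0.64582·0.64153 / (0.64582 + 0.64153) ≈ 0.64367.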

We presented the results of the comparisons with motion decomposition (MD) [13], homography based background subtraction (HOMO) [14], and background motion subtraction (BMS) [15] algorithms that can detect objects in mobile camera environments. The Fm performance of the proposed algorithm is about 48% better than the existing GMM based BS algorithm, it is superior to the highly complex MD algorithm, and has almost the same performance as the HOMO algorithm. Although the BMS algorithm shows better performance than the proposed algorithm, it requires very complicated calculations such as optical flow estimation, trajectory tracking, and reduced singular value decomposition.

Fig. 4 shows experimental results for images taken by our UAV. Fig. 4(b) shows the results achieved with the existing GMM-based BS algorithm. It is clear that there are many FPs, i.e., background is misinterpreted as objects, while actual objects are not detected. On the other hand, Fig. 4(c) shows that only actual objects are detected with the proposed algorithm.

IV. HARDWARE ARCHITECTURE DESIGN

Fig. 5 shows the block diagram of the proposed moving object detector, which consists of a camera motion compensator, a GMM based background generator, and an object finder. The camera motion compensator receives the image data and the motion information of the camera in the x and y directions, and compensates the GMM parameters in the mean, variance, and weight memories. Subsequently, the GMM based background generator creates a background model using the compensated GMM parameters and the image data. Finally, the object finder generates the coordinates of the moving objects using the background model and the image data.

Fig. 6 shows a timing diagram of the proposed moving object detector. In this figure, "integer shift" refers to the compensation using the integer part in the camera motion compensator, and it takes a number of clock cycles proportional to the resolution of the input image. That is, if MOD is performed for a 1280×720 HD resolution image, it takes 921.6K cycles. The interpolation is performed along the x-axis and y-axis directions, taking 1.84M cycles, corresponding to two frame cycles. The GMM based background generator takes 2.76M cycles, corresponding to three frame cycles, to calculate the Gaussian parameters for the background model. The object finder requires 5.53M cycles to generate the BW image and to find the coordinates of the white region after applying the median filter. However, while the object finder is operating, the camera motion compensator and the background generator can process the second image frame, as shown in Fig. 6. Therefore, since the object coordinates are generated at intervals of 5.53M cycles, 30 fps real-time processing for 1280×720 HD images is possible at an operating frequency of 170 MHz.
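The quoted cycle counts can be sanity-checked with a few lines of arithmetic (all values taken from the text above):

```python
# One HD frame is 1280 x 720 = 921,600 pixels, i.e. ~921.6K cycles per frame pass.
frame_cycles = 1280 * 720

# Object coordinates are produced every 5.53M cycles (six frame passes).
object_finder_cycles = 6 * frame_cycles

# Throughput at a 170 MHz clock.
fps = 170e6 / object_finder_cycles  # ~30.7 fps, enough for 30 fps real time
```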

1. Camera Motion Compensator

The camera motion compensator shown in Fig. 7 consists of a decomposition unit (DU), an interpolator, and a GMM memory controller. dx and dy are divided into their integer and fractional parts by the DU; the integer parts constitute the input to the GMM memory controller, while the fractional parts are the input to the interpolator. The GMM memory controller generates the memory read/write addresses according to the timing of the integer part compensation, the interpolation, and the background generation.

Fig. 4. MOD performance of the proposed algorithm for images taken by our UAV: (a) input image, (b) GMM based BS, (c) proposed algorithm.

Fig. 5. Block diagram of the proposed moving object detector.

2. GMM based Background Generator

Fig. 8 shows the architecture of the GMM based background generator, which consists of a weight calculator, weight normalizer, mean calculator, variance calculator, background calculator, and shift registers. Parallel and pipelined structures are applied to improve the processing speed. Initially, the processes of updating the data in the weight calculator and checking the match condition are performed in parallel.

Afterwards, the processes of normalizing the weight and updating the mean data in the mean calculator are performed at the same time. Finally, the variance data in the variance calculator are updated while generating the background model data.

3. Object Finder

The object finder shown in Fig. 9 consists of a background subtractor, a BW image memory, a median filter, and a boundary finder. Initially, the BW image is generated by comparing the output of the background subtractor with the predefined threshold. These data are then stored in the BW image memory, and the median filter operation starts. Since the BW image has binary values, the median filter design is based on a counter, as shown in Fig. 10. The output of the median filter is determined by counting the white values in the area to which the filter is applied. In addition, since the number of operations for a two-dimensional (2D) median filter is much larger than

Fig. 6. Timing diagram of the proposed moving object detector.

Fig. 7. Block diagram of the camera motion compensator.

Fig. 8. Block diagram of the background generator.

Fig. 9. Block diagram of the object finder.


that for two one-dimensional (1D) median filters, two 1D median filters are chosen, after confirming that there is no performance degradation, as shown in Fig. 11. Finally, the boundary finder generates the object coordinates by detecting the white region in the median-filtered BW image.
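The counter-based, separable filtering described above can be sketched as follows. The 3-tap window size is an assumption for illustration, since the text does not restate the window size; for binary data, the median of a window is white exactly when the count of white pixels exceeds half the window length:

```python
import numpy as np

def median1d(row, size=3):
    """Counter-based 1D median of a binary row (window size assumed to be 3)."""
    half = size // 2
    padded = np.pad(row, half, mode='edge')                 # replicate the borders
    counts = np.array([padded[i:i + size].sum() for i in range(len(row))])
    return (counts > size // 2).astype(np.uint8)            # median = majority vote

def median2d_separable(bw):
    """Apply the 1D median horizontally, then vertically, to a binary image."""
    h = np.array([median1d(r) for r in bw])                 # horizontal pass
    return np.array([median1d(c) for c in h.T]).T           # vertical pass
```

Because the pixels are binary, no sorting network is needed; the counter-and-compare structure is what allows the hardware-friendly design described above.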

V. FPGA IMPLEMENTATION RESULTS

The proposed moving object detector was synthesized and implemented on a Xilinx Virtex6 (xc6vlx760) FPGA. Moreover, in order to compare with previous research results, the circuit has also been implemented on Virtex6 (xc6vlx75t) and Virtex5 (xc5vlx50) FPGAs. Table 2 summarizes the results of the implementation based on the Virtex6 (xc6vlx760) FPGA. The proposed moving object detector was implemented with 475 logic slices, 5 DSP48s, and 167 block RAMs. Additionally, using the FPGA test system at 170 MHz, we have verified that real-time processing is possible at a 1280×720 HD image resolution.

Table 3 shows comparison results with the previous research results. Since only the GMM algorithm is implemented in the moving object detectors of [16] and [17], their detection performance degrades in mobile camera environments. In addition, only BW images are generated in [16] and [17], and therefore an additional object finder block that finds the coordinates of the objects is needed. Although the proposed moving object detector requires more logic slices and supports a lower frame rate than the existing detectors, it shows good performance even in mobile camera environments, as depicted in Figs. 3 and 4, and it does not need an additional object finder to find the coordinates of objects. In addition, Table 3 confirms that the GMM based background (BG) generator in the proposed design, which corresponds to [16] and [17], has similar complexity and processing speed.

Fig. 10. Block diagram of the median filter.

Fig. 11. Performance comparisons according to median filtering schemes: (a) input image, (b) 2D median filter, (c) two 1D median filters.

Table 2. Performance of the proposed moving object detector circuit implemented on a Virtex6 FPGA (xc6vlx760)

          | Camera Motion Compensator | Background Generator | Object Detector
LUT       | 59                        | 399                  | 80
Flip Flop | 119                       | 794                  | 158
Slice     | 53                        | 352                  | 70
DSP48s    | 1                         | 3                    | 1
BRAM: 167 (total), Frequency: 176.3 MHz

Table 3. Comparison of the proposed moving object detector and previous research results

Target FPGA       | Circuit  | LUT  | Slice | DSP48s | Freq (MHz) | HD (fps)
Virtex6 xc6vlx75t | Proposed | 1071 | 475   | 5      | 176.3      | 32
                  | BG Gen.  | 794  | 352   | 3      | 192.3      | 92
                  | [16]     | 788  | 349   | 3      | 189.3      | 91
Virtex5 xc5vlx50  | Proposed | 984  | 443   | 5      | 120.7      | 21
                  | BG Gen.  | 729  | 325   | 3      | 132.8      | 64
                  | [16]     | 724  | 323   | 3      | 130.9      | 63
                  | [17]     | 1066 | 346   | 10     | 50.5       | 24

VI. CONCLUSIONS

A novel moving object detector for UAV applications was proposed, and the results of its implementation were presented. The proposed MOD scheme has higher accuracy than the existing GMM based BS algorithms even in mobile camera environments, and it has been experimentally proven that it can be used for UAV applications. The proposed moving object detector was implemented on an FPGA, and its 30 fps real-time processing for 1280×720 HD resolution images was verified at an operating frequency of 170 MHz.

ACKNOWLEDGMENTS

This work was supported by the Civil-Military Technology Cooperation Program, 16-CM-RB-12, funded by the Ministry of Trade, Industry and Energy (MOTIE, Korea) and Defense Acquisition Program Administration (DAPA, Korea), and CAD tools were supported by IDEC.

REFERENCES

[1] H. Menouar and A. Tuncer, "UAV-enabled intelligent transportation systems for the smart city: applications and challenges," IEEE Communications Magazine, Vol. 55, pp. 22-28, Mar. 2017.

[2] N. Gageik, P. Benz, and S. Montenegro, "Obstacle detection and collision avoidance for a UAV with complementary low-cost sensors," IEEE Access, Vol. 3, pp. 599-609, May 2015.

[3] Y. A. Nijsure, G. K. Kaddoum, and N. K. Mallet, "Cognitive chaotic UWB-MIMO detect-avoid radar for autonomous UAV navigation," IEEE Transactions on Intelligent Transportation Systems, Vol. 17, No. 11, Nov. 2016.

[4] A. Ferrick, J. Fish, E. Venator, and G. S. Lee, "UAV obstacle avoidance using image processing techniques," in Proc. IEEE International Conference on Technologies for Practical Robot Applications, pp. 73-78, Apr. 2012.

[5] Z. Chaohui, D. Xiaohui, X. Shuoyu, and S. Zheng, "An improved moving object detection algorithm based on frame difference and edge detection," in Proc. Fourth International Conference on Image and Graphics (ICIG 2007), pp. 519-523, Aug. 2007.

[6] N. Sharmin and R. Brad, "Optimal filter estimation for Lucas-Kanade optical flow," Sensors, Vol. 12, pp. 12694-12709, Sep. 2012.

[7] T. Brox, A. Bruhn, N. Papenberg, and J. Weickert, "High accuracy optical flow estimation based on a theory for warping," in Proc. 8th European Conference on Computer Vision (ECCV), Springer LNCS 3024, Vol. 4, pp. 25-36, May 2004.

[8] C. Stauffer and W. E. L. Grimson, "Adaptive background mixture models for real-time tracking," in Proc. IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR), Vol. 2, pp. 246-252, Jun. 1999.

[9] D. Lee, "Effective Gaussian mixture learning for video background subtraction," IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 27, No. 5, pp. 827-832, May 2005.

[10] P. Suo and Y. Wang, "An improved adaptive background modeling algorithm based on Gaussian mixture model," in Proc. 9th International Conference on Signal Processing (ICSP 2008), pp. 1436-1439, Oct. 2008.

[11] G. Ligorio and A. M. Sabatini, "Extended Kalman filter-based methods for pose estimation using visual, inertial and magnetic sensors," Sensors, Vol. 13, pp. 1919-1941, Jan. 2013.

[12] http://vision.cse.psu.edu/data/vividEval/datasets/datasets.html

[13] S. Wu, O. Oreifej, and M. Shah, "Action recognition in videos acquired by a moving camera using motion decomposition of Lagrangian particle trajectories," in Proc. IEEE International Conference on Computer Vision (ICCV), pp. 1419-1426, Nov. 2011.

[14] R. Hartley and A. Zisserman, Multiple View Geometry in Computer Vision, second edition, Cambridge University Press, ISBN: 0521540518, pp. 123-126, 2004.

[15] Y. Wu, X. He, and T. Q. Nguyen, "Moving object detection with freely moving camera via background motion subtraction," IEEE Transactions on Circuits and Systems for Video Technology, Vol. 27, No. 2, pp. 236-248, Feb. 2017.

[16] M. Genovese and E. Napoli, "ASIC and FPGA implementation of the Gaussian mixture model algorithm for real-time segmentation of high definition video," IEEE Transactions on Very Large Scale Integration (VLSI) Systems, Vol. 22, No. 3, Mar. 2014.

[17] M. Genovese and E. Napoli, "FPGA-based architecture for real time segmentation and denoising of HD videos," Journal of Real-Time Image Processing, pp. 1-13, 2012.


Jaechan Cho received the B.S. and M.S. degrees in the School of Electronics and Information Engineering from Korea Aerospace University, Goyang, Korea, in 2015 and 2017, respectively. He is currently working towards the Ph.D. degree in the School of Electronics and Information Engineering, Korea Aerospace University. His research interests include signal processing algorithms and VLSI implementation for image processing systems.

Yongchul Jung received the B.S. and M.S. degrees in the School of Electronics and Information Engineering from Korea Aerospace University, Goyang, Korea, in 2015 and 2017, respectively. He is currently working towards the Ph.D. degree in the School of Electronics and Information Engineering, Korea Aerospace University. His research interests include signal processing algorithms and VLSI implementation for image processing systems.

Dongsun Kim received B.S. and M.S. degrees from the School of Electronics and Electrical Engineering at INHA University, Incheon, Korea, in 1997 and 1999, respectively. In 2005, he received his Ph.D. degree from the School of Information and Telecommunication Engineering at INHA University, Incheon, Korea. Since 1999, he has been with the Korea Electronics Technology Institute (KETI), Gyeonggi-do, Korea, working on R&D at the SoC Platform Research Center, where he is currently a senior researcher and director. He is a member of the IEEE. His research interests are in the areas of wireless/wired communication systems, wireless sensor networks, VLSI & SoC design, multimedia codec design, computer architecture, and embedded system design.

Seongjoo Lee received his B.S., M.S., and Ph.D. degrees in the Department of Electrical and Electronic Engineering from Yonsei University, Seoul, Korea, in 1993, 1998, and 2002, respectively. From 2002 to 2003, he was a senior research engineer at the IT SoC Research Center, Yonsei University, Seoul, Korea. From 2003 to 2005, he was a senior engineer at Samsung Electronics Co. Ltd., Suwon, Korea. He was a research professor at the IT Center and the IT SoC Research Center, Yonsei University, Seoul, Korea, from 2005 to 2006. He is currently a professor in the Department of Information and Communication Engineering at Sejong University, Seoul, Korea. His current research interests include PN code acquisition algorithms, cdma2000 modem SoC design, CDMA communication, and SoC design for image processing.

Yunho Jung received the B.S., M.S., and Ph.D. degrees in the Department of Electrical and Electronic Engineering from Yonsei University, Seoul, Korea, in 1998, 2000, and 2005, respectively. From 2005 to 2007, he was a senior engineer at Samsung Electronics, Suwon, Korea. From 2007 to 2008, he was a research professor at the Institute of TMS Information Technology, Yonsei University, Seoul, Korea. He is currently a professor in the School of Electronics and Information Engineering, Korea Aerospace University, Goyang, Korea. His research interests include signal processing algorithms and VLSI implementation for wireless communication and image processing systems.