City University of Hong Kong
Moving Picture Experts Group - established in 1988 by the Joint ISO/IEC Technical Committee on IT.
Mission - to develop standards for the coded representation of motion pictures and audio at bit rates of up to 1.5 Mb/s.
MPEG-1 was issued in 1992.
MPEG-2 (1994) - higher quality (not lower than NTSC and PAL) with bit rates between 2 and 10 Mb/s.
Applications - digital CATV and terrestrial digital broadcasting distribution, video recording and retrieval.
Lossy compression
Trades off image quality against bit rate according to objective or subjective criteria
Video sequences usually contain large statistical redundancies in both the temporal and spatial directions
Intraframe coding
Interframe coding
Subsampling of Chrominance - Human eye is more sensitive to luminance than chrominance
Encoding of a single picture
Similar to JPEG
Discrete Cosine Transform (DCT) - converts the spatial domain to the frequency domain
Quantization of spectral coefficients
DPCM to encode DC terms
Zigzag scan to group zeros into long sequences, followed by run-length coding
Lossless, Variable Length Coding to encode AC coefficients
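The zigzag scan and run-length step above can be sketched in code. This is a minimal illustration, not the standard's exact entropy coder: the function name zigzag_rle and the (run, value) pair output are assumptions for the example, and the VLC stage that would follow is omitted.

```c
#define N 8

/* Zigzag-scan an N x N coefficient block into 1-D order, then
   run-length code the AC terms as (zero_run, value) pairs.
   The DC term (first coefficient) is skipped; it would be DPCM coded.
   Returns the number of pairs written into runs[]. */
int zigzag_rle(int block[N][N], int runs[][2], int max_pairs)
{
    int order[N * N];           /* zigzag order of (row*N + col) indices */
    int i, x = 0, y = 0;

    /* build the zigzag index order */
    for (i = 0; i < N * N; i++) {
        order[i] = x * N + y;
        if ((x + y) % 2 == 0) {          /* moving up-right */
            if (y == N - 1)      x++;
            else if (x == 0)     y++;
            else               { x--; y++; }
        } else {                         /* moving down-left */
            if (x == N - 1)      y++;
            else if (y == 0)     x++;
            else               { x++; y--; }
        }
    }

    /* run-length code the AC coefficients */
    int npairs = 0, run = 0;
    for (i = 1; i < N * N; i++) {
        int v = block[order[i] / N][order[i] % N];
        if (v == 0) {
            run++;                       /* extend the current zero run */
        } else {
            if (npairs < max_pairs) {
                runs[npairs][0] = run;
                runs[npairs][1] = v;
                npairs++;
            }
            run = 0;
        }
    }
    return npairs;
}
```

The zigzag order groups the (mostly zero) high-frequency coefficients into long runs, which is what makes the run-length stage effective.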
Remove temporal redundancies between frames
Used extensively in MPEG-1 and MPEG-2
Based on estimation of motion between video frames
Use of motion vectors to describe displacement of pixels from one frame to the next
Spatial correlation between motion vectors is high
One motion vector can represent the motion of a block of pixels.
Figure 1: Block matching between the current frame and the previous frame.
For each image block in the current frame:
Find its nearest counterpart in the previous frame.
Record the displacement vector.
Figure 2: A motion vector (mv) maps the current block location in frame N back to the matching block location within the search window of frame N-1.
Only the prediction error (residual) images are transmitted
Good prediction reduces information content in residual images
Partition the previous and the current images into non-overlapping square blocks of size N×N (e.g., N = 8).
Represent each block with a 2D matrix:
f(x,y) for the previous frame
g(x,y) for the current frame
The difference between any two blocks is given by

D(f, g) = (1/N²) · Σₓ Σᵧ [f(x,y) − g(x,y)]²,  x, y = 0, …, N−1

The lower the difference, the more similar the pair of blocks.
A motion vector is computed for EVERY block in the current frame. How?
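The block-difference measure translates directly into code. A minimal sketch; block_difference is a hypothetical helper operating on row-major N×N blocks:

```c
/* Mean-squared block difference from the slides:
   D(f, g) = (1/N^2) * sum over x, y of (f(x,y) - g(x,y))^2.
   f and g are n x n pixel blocks stored row-major. */
double block_difference(const unsigned char *f, const unsigned char *g, int n)
{
    double sum = 0.0;
    for (int x = 0; x < n; x++)
        for (int y = 0; y < n; y++) {
            double d = (double)f[x * n + y] - (double)g[x * n + y];
            sum += d * d;     /* squared pixel difference */
        }
    return sum / (double)(n * n);
}
```

A value of 0 means the blocks are identical; the best match for a block is the candidate minimizing this measure.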
Exhaustive search: each block in the current frame is matched against all the blocks in the previous frame, and the closest one is taken to be its counterpart (yielding motion vectors such as MV = (−2,−3), (−1,−3) or (−1,−2)).
The method is slow, especially if the image resolution and N are large.
To speed up the search, a motion vector is still computed for every block, but only candidates in the near neighborhood are considered: for example, only the blocks adjacent to the current one are tested (the Search Window). The method is faster but the search area is restricted.
Assumption: changes between frames are small and are restricted within the search window.
However, the search time is still long.
A further speed-up: given a block in the current frame, first search for the best match in the previous frame along the vertical direction of the search window; then, starting from that best match, search along the horizontal direction for the solution.
This yields a non-optimal solution, under the assumption of a smooth intensity distribution.
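The windowed exhaustive search can be sketched as a small C routine. The function name full_search and the SAD (sum of absolute differences) matching criterion are assumptions for illustration; the slides' squared-difference measure could be substituted without changing the structure.

```c
#include <limits.h>
#include <stdlib.h>

/* Exhaustive block matching within a +/-r search window (a sketch;
   candidates falling outside the image are simply skipped).
   prev/cur are w x h images stored row-major; the current block's
   top-left corner is (bx, by) and the block size is n.
   Returns the best SAD and writes the motion vector to (*mvx, *mvy). */
long full_search(const unsigned char *prev, const unsigned char *cur,
                 int w, int h, int bx, int by, int n, int r,
                 int *mvx, int *mvy)
{
    long best = LONG_MAX;
    for (int dy = -r; dy <= r; dy++)
        for (int dx = -r; dx <= r; dx++) {
            int px = bx + dx, py = by + dy;
            if (px < 0 || py < 0 || px + n > w || py + n > h)
                continue;                 /* candidate outside the image */
            long sad = 0;
            for (int y = 0; y < n; y++)
                for (int x = 0; x < n; x++)
                    sad += labs((long)cur[(by + y) * w + (bx + x)]
                              - (long)prev[(py + y) * w + (px + x)]);
            if (sad < best) { best = sad; *mvx = dx; *mvy = dy; }
        }
    return best;
}
```

The cost is O(n² · (2r+1)²) per block, which is why the restricted window and the two-step (vertical then horizontal) searches matter in practice.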
Applications - multimedia and video transmission
Based on the JPEG and H.261 standards
Flexible picture size and frame rate specified by users
Video source - non-interlaced video signals
Minimum requirements on decoders:
- resolution of 720×576
- 30 frames/s
- 1.86 Mb/s
Layer structure in MPEG bitstream
Video Sequence → Group Of Pictures (GOP) → Picture → Slice → Macroblock → Block
Partitioning of images into Macroblocks (MB)
Intraframe coding on one out of every K images
Motion estimation on MBs
Generate (K−1) predicted frames
Encode residual error images
Conditional replenishment of Macroblocks
An image is partitioned into Macroblocks of size 16×16
1 MB = 4 luminance (Y) blocks and 2 chrominance blocks (U, V)
The sampling ratio between Y, U and V is 4:1:1
Figure 3: A macroblock (Y1 Y2 / Y3 Y4, plus U and V; Y:U:V = 4:1:1) within an I P P P picture sequence.
Chrominance formats: 4:4:4, 4:2:2, 4:1:1, 4:2:0
Assuming 8 bits for the Y, U and V components, per group of 4 pixels:
4:4:4 - 4×8 (Y) + 4×8 (U) + 4×8 (V) = 96 bits; bits per pixel = 96/4 = 24 bpp
4:2:2 - 4×8 (Y) + 2×8 (U) + 2×8 (V) = 64 bits; bits per pixel = 64/4 = 16 bpp
4:1:1 - 4×8 (Y) + 1×8 (U) + 1×8 (V) = 48 bits; bits per pixel = 48/4 = 12 bpp
4:2:0 - 4×8 (Y) + 1×8 (U) + 1×8 (V) = 48 bits; bits per pixel = 48/4 = 12 bpp
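The bits-per-pixel arithmetic above reduces to a one-line calculation; bits_per_pixel is a hypothetical helper that counts samples per 4-pixel group, as in the slides:

```c
/* Bits per pixel for a chrominance format, given the number of Y, U
   and V samples per group of 4 pixels and 8 bits per sample.
   4:4:4 -> 24 bpp, 4:2:2 -> 16 bpp, 4:1:1 and 4:2:0 -> 12 bpp. */
int bits_per_pixel(int y_samples, int u_samples, int v_samples)
{
    int bits = 8 * (y_samples + u_samples + v_samples); /* bits per group */
    return bits / 4;                                    /* 4 pixels/group */
}
```

Note that 4:1:1 and 4:2:0 carry the same number of chroma samples and therefore the same bpp; they differ only in where the samples are taken (horizontally vs. both directions).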
DCT
Weighted (I-frame) / uniform (P-frame) quantization
DPCM on DC terms
Zigzag scan + run-length + VLC
Figure 4: I-frame block encoding. Each 8×8 block of a macroblock passes through the DCT and the quantizer Q (step size sz); the DC term is DPCM coded (JPEG-encoded DC) and the AC terms go through zigzag scanning, run-length encoding and VLC (JPEG-encoded AC).
Previous I or P frame is stored in both the encoder and the decoder
Motion compensation is performed on a macroblock basis
One motion vector (mv) is generated for each macroblock
The mvs are coded and transmitted to the receiver
The motion prediction error of the pixels in each macroblock is calculated
Error blocks (size 8×8) are encoded in the same manner as those in the I-Picture
A video buffer plus step-size adjustment maintains a constant target bit rate
DPCM loop (coder, predictor and embedded decoder):
• The current signal x(n) is predicted from the previous sample x(n−1); the predicted value is xp(n).
• The prediction error e(n) = x(n) − xp(n) is compressed (encoded) and transmitted.
• The encoded error is decoded and added back to xp(n) to reconstruct the current signal. However, there are losses in the codec, so the reconstructed signal xr(n) is not identical to x(n).
• xr(n) is used to predict the next sample x(n+1).
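The loop can be sketched as follows; dpcm_codec and the mid-tread integer quantizer are illustrative assumptions. The key point from the bullets above is that the predictor uses the reconstructed sample, so the encoder and decoder predictors stay in sync despite the quantization loss.

```c
/* Minimal DPCM loop: each sample is predicted by the previous
   *reconstructed* sample; the prediction error is quantized to a
   multiple of q (the lossy step), and the reconstruction mirrors
   what a decoder would compute. Writes xr(n) for each x(n). */
void dpcm_codec(const int *x, int *xr, int len, int q)
{
    int pred = 0;                      /* xp(0): predictor initial state */
    for (int n = 0; n < len; n++) {
        int e = x[n] - pred;           /* e(n) = x(n) - xp(n)            */
        /* round e to the nearest multiple of q (the lossy step) */
        int eq = (e >= 0 ? (e + q / 2) / q
                         : -((-e + q / 2) / q)) * q;
        xr[n] = pred + eq;             /* reconstruction xr(n)           */
        pred = xr[n];                  /* xr(n) predicts x(n+1)          */
    }
}
```

Because the quantized error is fed back, the reconstruction error stays bounded by the step size instead of accumulating from sample to sample.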
Figure 5: P-frame encoder. Each 8×8 block to be encoded is predicted via MC + frame store; the residual goes through DCT and Q (step size sz set by the video buffer (VB) control), then RLC and VLC to form the encoded residual error; an inverse path (Q⁻¹, DCT⁻¹) feeds the frame store, and the motion vector is transmitted alongside.
I-Pictures are encoded independently; they can therefore be used as access points for random access, fast-forward (FF) or fast-reverse (FR)
P-Pictures cannot be decoded alone, hence cannot be used as access points
B-Pictures are constructed from the nearest I or P Pictures
Backward prediction requires the presence of the start and end frames; both can be used as access points
                      Compression   Random Access   Coding Delay
I Pictures only       Low           Highest         Low
I and P Pictures      Medium        Low             Medium
I, P and B Pictures   High          Medium          High

Figure 6a: A GOP of I, P and B pictures, with the order of coding (1 2 3 4 7 6 5) differing from the display order.
Figure 6b: Example GOP structures - I-pictures only; I and P pictures; I, P and B pictures.
Only macroblocks that have changed are updated in the decoder
Three types of MB are classified in the MPEG standard:
Skipped MB - zero motion vector; the MB is neither encoded nor transmitted
Inter MB - motion prediction is valid; the MB type and address, the motion vector and the coded DCT coefficients are transmitted
Intra MB - the encoded DCT coefficients of the MB are transmitted; no motion compensation is used
Macroblock type decision tree (P-pictures): an MB is first classified as intra (I) or not, then as motion-compensated (MC) or not, then as coded (C) or not, then by quantizer choice (defined Q or default quantization):
Pred-mcq - non-zero motion vector, error coded with defined quantization
Pred-mc - non-zero motion vector, error coded with default quantization
Pred-m - non-zero motion vector, error not coded
Pred-cq - MV = 0 (not predicted), error coded with defined quantization
Pred-c - MV = 0 (not predicted), error coded with default quantization
Intra-q - macroblock intra-coded with defined quantization
Intra-d - macroblock intra-coded with default quantization
Skipped - macroblock copied from the predictor picture
For B-pictures the predicted types generalize to forward/backward/interpolated prediction: Pred-f/b/i cq, Pred-f/b/i c and Pred-f/b/i, together with Intra-q, Intra-d and Skipped.
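The decision tree can be summarized as a small classifier; mb_type and its flag arguments are hypothetical names for illustration (P-picture types only):

```c
#include <string.h>   /* strcmp, for quick checks of the returned names */

/* Map the decision-tree flags from the slides to a macroblock type
   name (P-picture tree): intra vs. predicted, motion-compensated or
   not, residual coded or not, defined vs. default quantizer. */
const char *mb_type(int intra, int mc, int coded, int defined_q)
{
    if (intra)
        return defined_q ? "Intra-q" : "Intra-d";
    if (!mc && !coded)
        return "Skipped";                 /* copied from predictor */
    if (mc && !coded)
        return "Pred-m";                  /* MV only, no residual  */
    if (mc)
        return defined_q ? "Pred-mcq" : "Pred-mc";
    return defined_q ? "Pred-cq" : "Pred-c";   /* MV = 0 */
}
```

Encoding the type compactly matters because it is transmitted for every non-skipped macroblock.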
The decoder is the reverse process of the encoder.
Figure 7: MPEG decoder. The encoded DCT data pass through the video buffer (VB), VLD and RLD, then Q⁻¹ and DCT⁻¹; the motion vector drives MC + frame store (MC for P-frames only), and the prediction is added to produce each decoded 8×8 block.
A superset of MPEG-1 and backward compatible with the latter
Supports interlaced video signals
Scalable video-coding property: can be decoded by receivers with different capabilities
Permits partial implementation, defined by Profiles and Levels
A Profile defines a new set of algorithms added as a superset to the algorithms in the profile below it
A Level specifies the range of parameters supported by the implementation
LEVEL       Pels/Line   Lines/Frame   Frame Rate (f/s)   Bit-rate (Mb/s)
High        1920        1152          60                 80
High 1440   1440        1152          60                 60
Main        720         576           30                 15
Low         352         288           30                 4
Figure 8: MPEG-2 Profiles (each a superset of the one below):
SIMPLE - Main profile without B-picture prediction
MAIN - non-scalable coding, interlaced video, B-pictures, 4:2:0 YUV format
SNR SCALABLE - Main + 2-layer SNR scalable coding
SPATIALLY SCALABLE - SNR Scalable + 2-layer spatial scalable coding
HIGH - Spatially Scalable + 3-layer SNR and spatial scalable coding, 4:2:2 YUV format
Main Profile: MPEG-2 non-scalable coding mode
A straightforward extension of MPEG-1 to accommodate interlaced video signals
Field/frame macroblocks; two types of prediction:
Frame prediction: prediction based on one or more previously decoded frames
Field prediction: prediction of an individual field based on one or more previously decoded fields
Figure 9a: The four sub-blocks of a frame macroblock for a stationary scene, with odd (o) and even (e) field lines interleaved.
Figure 9b: The four sub-blocks of a frame macroblock for a moving scene, with odd (o) and even (e) field lines interleaved.
The object shape changes with motion because of the interlacing mechanism
The same object may therefore appear different in successive frames - prediction is not accurate
Simple image patterns may become complicated
More AC coefficients are required to describe each component in the frame macroblock
Figure 9c: The four sub-blocks of a frame macroblock for a moving scene (odd (o) and even (e) field lines).
Compute the field-based variance and the frame-based variance. If the field-based variance is less than the frame-based variance, the MB is coded with the field-based DCT.
/* Frame-based variance: differences between vertically
   adjacent lines, with the two fields interleaved */
var1 = 0;
for (m = 0; m < COL; m++)
    for (n = 0; n < ROW-2; n++)
    {
        D1 = x(m,n) - x(m,n+1);
        D2 = x(m,n+1) - x(m,n+2);
        var1 += (D1*D1) + (D2*D2);
    }
/* Field-based variance: differences between lines two apart,
   i.e., within one field; the sum is kept in var2 so it can be
   compared against var1, and the loop stops at ROW-3 so that
   the n+3 access stays inside the block */
var2 = 0;
for (m = 0; m < COL; m++)
    for (n = 0; n < ROW-3; n++)
    {
        D1 = x(m,n) - x(m,n+2);
        D2 = x(m,n+1) - x(m,n+3);
        var2 += (D1*D1) + (D2*D2);
    }
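The two variance loops can be combined into one decision routine. use_field_dct is a hypothetical name, and following the slide logic it returns true when the field-based variance is smaller; x is stored column by column so that x[m*rows + n] is line n of column m.

```c
/* Field vs. frame DCT decision: compare the frame-based variance
   (differences between vertically adjacent lines, which mixes the
   two fields) with the field-based variance (differences between
   lines two apart, within one field). Returns 1 if the macroblock
   should be coded with the field-based DCT. */
int use_field_dct(const int *x, int cols, int rows)
{
    long var_frame = 0, var_field = 0;
    for (int m = 0; m < cols; m++) {
        const int *col = x + m * rows;          /* one column of the MB */
        for (int n = 0; n < rows - 3; n++) {
            long d1 = col[n]     - col[n + 1];  /* adjacent lines (frame) */
            long d2 = col[n + 1] - col[n + 2];
            var_frame += d1 * d1 + d2 * d2;
            long e1 = col[n]     - col[n + 2];  /* same-field lines */
            long e2 = col[n + 1] - col[n + 3];
            var_field += e1 * e1 + e2 * e2;
        }
    }
    return var_field < var_frame;
}
```

For a moving interlaced scene, adjacent lines come from different time instants, so the frame-based variance blows up while the field-based one stays small, and the routine selects the field DCT.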
A top field is predicted from either the previously coded top or bottom field with motion compensation (MC)
Bottom fields are predicted from the previously coded top field with MC
Combined frame and field prediction is used in MPEG-2
Provides interoperability between different services and systems
Base layer - encodes the downscaled video as a reduced bitstream
Enhancement layer - encodes the difference between the original signal and the upscaled base-layer video
Figure 10: Scalable two-layer encoder. The video input is downscaled and fed to the base-layer encoder, producing the base-layer bitstream; the base layer is upscaled and subtracted from the input, and the enhancement encoder produces the enhancement-layer bitstream.
A 2-layer DCT, VLC and MC encoder
Both layers encode the video signal at the same resolution
Base layer - DCT coefficients are coarsely quantized, and the layer is protected from transmission errors
Enhancement layer - DCT coefficients are finely quantized, and their difference with the base layer is transmitted
Quantization with step size s maps x to a quantized level L; the de-quantizer reconstructs xQ = s·L, leaving a quantization error.
In the two-layer scheme, the base layer quantizes x with a coarse step size S, giving xQ = S·L and quantization error E = x − xQ. The enhancement layer re-quantizes E with a fine step size s to level LR, giving EQ = s·LR. EQ can be used to compensate the error in xQ.
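The two-layer quantization can be tried numerically; snr_quantize/snr_dequantize and the round-to-nearest rule are illustrative assumptions, not the standard's exact quantizers.

```c
/* Two-layer (SNR scalable) quantization sketch: the base layer
   quantizes x with coarse step S; the quantization error
   E = x - S*L is re-quantized with fine step s in the enhancement
   layer, and the decoded EQ refines the base reconstruction. */
typedef struct {
    int base_level;   /* coarse level L  */
    int enh_level;    /* fine level LR   */
} snr_levels;

snr_levels snr_quantize(double x, double S, double s)
{
    snr_levels q;
    q.base_level = (int)(x / S + (x >= 0 ? 0.5 : -0.5));  /* round to L */
    double e = x - S * q.base_level;                      /* error E    */
    q.enh_level = (int)(e / s + (e >= 0 ? 0.5 : -0.5));   /* round to LR*/
    return q;
}

double snr_dequantize(snr_levels q, double S, double s, int with_enh)
{
    double xq = S * q.base_level;       /* base reconstruction xQ       */
    if (with_enh)
        xq += s * q.enh_level;          /* EQ compensates the error     */
    return xq;
}
```

A receiver that gets only the base layer reconstructs a coarse value; one that also receives the enhancement layer adds EQ back and lands much closer to x, which is the essence of SNR scalability.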
Figure 11: SNR scalable encoder and decoder. Encoder: each image block is DCT transformed and coarsely quantized (Q), then VLC coded through a video buffer (VB) into the base-layer bitstream; the coarse quantization error is finely re-quantized and VLC coded into the enhancement-layer bitstream; an inverse path (Q⁻¹, DCT⁻¹) with frame store and motion compensation (FS MC) closes the prediction loop. Decoder: both bitstreams are VLD decoded, summed, inverse transformed (DCT⁻¹) and motion compensated (FS MC) to reconstruct the image blocks.
Temporal prediction from the previous frame
Estimation of the mv of a lost MB from neighbouring MBs
Adding mvs to the MBs of I-frames for error concealment
2-layer coding using data partitioning, spatial and frequency scalability
Applications of the MPEG encoding standard:
Cable and interactive TV distribution
Satellite and digital terrestrial TV broadcasting
Remote surveillance
Video conferencing/telephony
Standalone or computer-based multimedia systems
HDTV
DVD
VCD
Digital camera