Lecture 3
Transcript of Lecture 3
Lecture 3
Mei-Chen Yeh
2010/03/16
Announcements (1)
• Assignment formats:
  – A Word or a PDF file
  – Subject (主旨): Multimedia System Design-Assignment #X-your student id-your name
    e.g., Multimedia System Design-Assignment #2-697470731-游宗毅
  – File name (檔名): Assignment #X-your student id-your name
    e.g., Assignment #2-697470731-游宗毅.doc
Announcements (2)
• For assignment #2: if you did not use either a PDF or a DOC file, please re-send your report to the TA in the required format.
• Due 03/16 (today)… based on the TA's clock.
Announcements (3)
• The reading list is finally released!
• Sources:
  – Proceedings of ACM MM 2008
  – Proceedings of ACM MM 2009
  – The best papers of MM 2006 and MM 2007
• Interesting papers not on the list? Let me know!
Announcements (4)
• So you need to…
  – Browse the papers
  – Discuss the paper choice with your partners
  – …and do it as soon as possible!
("I love that paper!" "That paper sucks…")
How to access the papers?
• The ACM digital library
  – http://portal.acm.org/
  – You should be able to download the papers if you connect to the site on campus.
• MM08 papers on the web
  – http://mpac.ee.ntu.edu.tw/multimediaconf/acmmm2008.html
• Google search
Next week in class
• Paper bidding! (論文競標)
• Each team will get a ticket, where you put your points.

Example ticket:
  Ticket #7
  Team name: 夜遊隊
  Team members: 葉梅珍, 游宗毅
  paper1: 50, paper2: 10, paper3: 15, paper4: 20, …, paper25: 5
  Total: 100 points
Bidding rules
• The team with the most points gets the paper.
• Every team gets one paper.
• When a tie happens… …the winner takes the paper.
More about the bidding process
• Just, fair, and open! (公平 公正 公開)
• I will assign a paper to teams in which no one shows up for the bid.

Questions?
Multimedia Compression (1)
Outline
• Introduction
• Information theory
• Entropy (熵) coding
  – Huffman coding
  – Arithmetic coding
Why data compression?
• Transmission and storage
  – For uncompressed video:
    • A CD-ROM (650 MB) could store 650 MB × 8 / 221 Mbps ≈ 23.5 seconds
    • A DVD-5 (4.7 GB) could store about 3 minutes
[Table: approximate bit rates for uncompressed sources]
What is data compression?
• To represent information in a compact form (as few bits as possible)
• Technique:
  Original data → [Compression] → Compressed data → [Reconstruction] → Reconstructed data
• Codec = encoder + decoder
Technique (cont.)
• Lossless
  – The reconstruction is identical to the original.
  – Needed for data such as text: reconstructing "Do not send money!" as "Do now send money!" would be a disaster.
• Lossy
  – Involves loss of information (example: an image codec).
Performance Measures
How do we say a method is good or bad?
• The amount of compression
• How close the reconstruction is to the original
• How fast the algorithm performs
• The memory required to implement the algorithm
• The complexity of the algorithm
• …
Two phases: modeling and coding
• Modeling
  – Discover the structure in the data
  – Extract information about any redundancy
• Coding
  – Describe the model and the residual (how the data differ from the model)

  Original → [Encoder] → Compressed data (fewer bits!)
Example (1)
• 5 bits × 12 samples = 60 bits
• Can we find a representation using fewer bits?
Example: Modeling
Model: x̂_n = n + 8,  n = 1, 2, …
Example: Coding
• Original data: x_n
• Model prediction: x̂_n = n + 8 → 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20
• Residual: e_n = x_n − x̂_n, with values in {−1, 0, 1}
• 2 bits × 12 samples = 24 bits (compared with 60 bits before compression)
We use the model to predict the value, then encode the residual!
Another Example
• Morse Code (1838)
Shorter codes are assigned to letters that occur more frequently!
A Brief Introduction to Information Theory
Information Theory (1)
• A quantitative (量化的) measure of information
  – "You will win the lottery tomorrow."
  – "The sun will rise in the east tomorrow."
• Self-information [Shannon 1948]
  – P(A): the probability that the event A will happen
  – i(A) = log_b (1/P(A)) = −log_b P(A)
  – b determines the unit of information (b = 2 → bits)
  – The amount of surprise or uncertainty in the message
Information Theory (2)
• Example: flipping a coin
  – If the coin is fair:
    P(H) = P(T) = ½; i(H) = i(T) = −log₂(½) = 1 bit
  – If the coin is not fair:
    P(H) = 1/8, P(T) = 7/8; i(H) = 3 bits, i(T) = 0.193 bits
    The occurrence of a HEAD conveys more information!
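The coin numbers above can be checked directly. A minimal sketch (Python is used here only for illustration; the function name is ours):

```python
import math

def self_information(p):
    """i(A) = -log2 P(A): the surprise, in bits, of an event with probability p."""
    return -math.log2(p)

# Fair coin: both outcomes carry 1 bit.
print(self_information(0.5))                 # 1.0
# Unfair coin (P(H)=1/8, P(T)=7/8): a HEAD conveys more information.
print(self_information(1 / 8))               # 3.0
print(round(self_information(7 / 8), 3))     # 0.193
```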
Information Theory (3)
• For a set of independent events A_i:
  – Entropy (the average self-information):
    H(S) = Σ_{A_i ∈ S} P(A_i) i(A_i) = −Σ_{A_i ∈ S} P(A_i) log_b P(A_i)
  – The coin example:
    • Fair coin (1/2, 1/2): H = P(H)i(H) + P(T)i(T) = 1
    • Unfair coin (1/8, 7/8): H = 0.544
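The entropy formula can be verified against both coin examples (an illustrative sketch; the function name is ours):

```python
import math

def entropy(probs):
    """H(S) = -sum P(A_i) * log2 P(A_i): average self-information in bits."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

print(entropy([0.5, 0.5]))            # 1.0   (fair coin)
print(round(entropy([1/8, 7/8]), 3))  # 0.544 (unfair coin)
```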
Information Theory (4)
• Entropy
  – The best a lossless compression scheme can do
  – Not possible to know for a physical source
  – Estimate (guess)!
• The estimate depends on our assumptions about the structure of the data
Estimation of Entropy (1)
• Sequence: 1 2 3 2 3 4 5 4 5 6 7 8 9 8 9 10
  – Assume the sequence is i.i.d.:
    • P(1) = P(6) = P(7) = P(10) = 1/16; P(2) = P(3) = P(4) = P(5) = P(8) = P(9) = 2/16
    • H = 3.25 bits
  – Assume sample-to-sample correlation exists:
    • Model: x_n = x_{n−1} + r_n
    • Residuals: 1 1 1 −1 1 1 1 −1 1 1 1 1 1 −1 1 1
    • P(1) = 13/16, P(−1) = 3/16
    • H = 0.7 bits
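Both estimates can be reproduced with a short sketch (illustrative; to match the slide's sixteen residuals we assume the predecessor of the first sample is 0):

```python
import math
from collections import Counter

def entropy_bits(counts):
    """Entropy, in bits, of a distribution given by raw counts."""
    total = sum(counts)
    return -sum(c / total * math.log2(c / total) for c in counts)

seq = [1, 2, 3, 2, 3, 4, 5, 4, 5, 6, 7, 8, 9, 8, 9, 10]

# i.i.d. assumption: estimate probabilities from symbol frequencies.
h_iid = entropy_bits(Counter(seq).values())

# Correlation assumption: model x_n = x_{n-1} + r_n and look at residuals
# (assumption: the first sample is differenced against 0).
residuals = [x - prev for prev, x in zip([0] + seq[:-1], seq)]
h_resid = entropy_bits(Counter(residuals).values())

print(round(h_iid, 2), round(h_resid, 1))   # 3.25 0.7
```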
Estimation of Entropy (2)
• Sequence: 1 2 1 2 3 3 3 3 1 2 3 3 3 3 1 2 3 3 1 2
  – One symbol at a time:
    • P(1) = P(2) = ¼, P(3) = ½
    • H = 1.5 bits/symbol → 30 (1.5 × 20) bits are required in total
  – In blocks of two:
    • P(1 2) = ½, P(3 3) = ½
    • H = 1 bit/block → 10 (1 × 10) bits are required in total
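The two estimates for this sequence can be reproduced as follows (illustrative sketch):

```python
import math
from collections import Counter

def entropy_bits(counts):
    """Entropy, in bits, of a distribution given by raw counts."""
    total = sum(counts)
    return -sum(c / total * math.log2(c / total) for c in counts)

seq = [1, 2, 1, 2, 3, 3, 3, 3, 1, 2, 3, 3, 3, 3, 1, 2, 3, 3, 1, 2]

# One symbol at a time: 1.5 bits/symbol -> 30 bits for 20 symbols.
h1 = entropy_bits(Counter(seq).values())

# Blocks of two: 1 bit/block -> 10 bits for 10 blocks.
pairs = list(zip(seq[0::2], seq[1::2]))
h2 = entropy_bits(Counter(pairs).values())

print(h1 * len(seq), h2 * len(pairs))   # 30.0 10.0
```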
Coding
Coding (1)
• The assignment of binary sequences to elements of an alphabet
  – Each letter of the alphabet is assigned a codeword; the set of codewords is the code.
• Rate of the code: the average number of bits per symbol
• Fixed-length codes and variable-length codes
Code classes (figure): an ambiguous code is not uniquely decodable; a prefix code is instantaneous; some uniquely decodable codes can only be decoded with a delay (e.g., one symbol).
Coding (3)
• Example of a code that is not uniquely decodable:

  Letter | Codeword
  a1     | 0
  a2     | 1
  a3     | 00
  a4     | 11

  The string 100 decodes as both a2 a3 and a2 a1 a1.
Coding (4)
• A code that is not instantaneous, but uniquely decodable
  – Oops! The decoder may have to read far ahead (e.g., through a2 a3 a3 a3 …) before it can output the first letter.
Prefix Codes
• No codeword is a prefix of another codeword
• Uniquely decodable
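The prefix property is easy to test mechanically (a minimal sketch; the function name is ours):

```python
def is_prefix_code(codewords):
    """A code is a prefix code if no codeword is a prefix of another."""
    for c in codewords:
        for d in codewords:
            if c != d and d.startswith(c):
                return False
    return True

print(is_prefix_code(["0", "1", "00", "11"]))     # False (the ambiguous code earlier)
print(is_prefix_code(["0", "10", "110", "111"]))  # True
```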
Huffman Coding
• Basic algorithm
• Extended form
• Adaptive coding
Huffman Coding
• Observations about prefix codes:
  – Frequent symbols have shorter codewords.
  – The two least frequent symbols have codewords of the same length.
• Huffman procedure: make the two least frequent symbols differ only in the last bit, e.g., m0 and m1.
Algorithm
• A = {a1, a2, a3, a4, a5}
• P(a1) = 0.2, P(a2) = 0.4, P(a3) = 0.2, P(a4) = 0.1, P(a5) = 0.1
• Tree construction (merge the two least probable nodes at each step):
  a4(0.1) + a5(0.1) → a'4(0.2); a3(0.2) + a'4(0.2) → a'3(0.4); a1(0.2) + a'3(0.4) → a'1(0.6); a2(0.4) + a'1(0.6) → root (1)
• Resulting codewords: a2 = 1, a1 = 01, a3 = 000, a4 = 0010, a5 = 0011
Algorithm
• Entropy: 2.122 bits/symbol
• Average length: 2.2 bits/symbol
• Two different Huffman codes reach the same average length:
  – Code 1: a2 = 1, a1 = 01, a3 = 000, a4 = 0010, a5 = 0011
  – Code 2: a2 = 00, a1 = 10, a3 = 11, a4 = 010, a5 = 011
• Which code is preferred? The one with minimum variance!
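A sketch of the Huffman procedure using a heap; it reproduces the 2.2 bits/symbol average (the exact codewords depend on tie-breaking, but the average length does not — this code and its names are illustrative):

```python
import heapq
import itertools

def huffman_lengths(probs):
    """Build a Huffman tree with a heap; return the codeword length per symbol."""
    counter = itertools.count()          # tie-breaker so equal probabilities compare safely
    heap = [(p, next(counter), [s]) for s, p in probs.items()]
    lengths = {s: 0 for s in probs}
    heapq.heapify(heap)
    while len(heap) > 1:
        p1, _, syms1 = heapq.heappop(heap)
        p2, _, syms2 = heapq.heappop(heap)
        for s in syms1 + syms2:          # every symbol under the merge gains one bit
            lengths[s] += 1
        heapq.heappush(heap, (p1 + p2, next(counter), syms1 + syms2))
    return lengths

probs = {"a1": 0.2, "a2": 0.4, "a3": 0.2, "a4": 0.1, "a5": 0.1}
lengths = huffman_lengths(probs)
avg = sum(probs[s] * lengths[s] for s in probs)
print(round(avg, 1))   # 2.2 (entropy is 2.122 bits/symbol)
```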
Exercise
Letter:      e      h      l      o      p      t      w
Probability: 30.5%  13.4%  9.8%   16.1%  5.0%   22.8%  2.4%
Length of Huffman Codes
• A source S with alphabet A = {a1, …, aK} and probabilities {P(a1), …, P(aK)}
  – Average codeword length: l̄ = Σ_{i=1}^{K} P(a_i) l_i
• Lower and upper bounds (H(S): entropy of the source; l̄: average code length):
  H(S) ≤ l̄ < H(S) + 1
Extended Huffman Codes (1)
• Consider a small alphabet and skewed probabilities, for example:

  Symbol | Prob. | Codeword
  a      | 0.9   | 0
  b      | 0.1   | 1

  1 bit/letter: no compression!

• Block multiple symbols together:

  Symbol | Prob. | Codeword
  aa     | 0.81  | 0
  ab     | 0.09  | 10
  ba     | 0.09  | 110
  bb     | 0.01  | 111

  0.645 bits/letter
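The 0.645 bits/letter figure follows directly from the tables above (illustrative sketch):

```python
# Average bits per letter for the single-letter code vs the two-letter blocks.
single = {"a": (0.9, "0"), "b": (0.1, "1")}
blocks = {"aa": (0.81, "0"), "ab": (0.09, "10"),
          "ba": (0.09, "110"), "bb": (0.01, "111")}

bits_single = sum(p * len(code) for p, code in single.values())
bits_block = sum(p * len(code) for p, code in blocks.values()) / 2  # 2 letters per block

print(bits_single, round(bits_block, 3))   # 1.0 0.645
```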
Extended Huffman Codes (2)
Another example:
• Single symbols: H = 0.816 bits/symbol; l̄ = 1.2 bits/symbol
• Blocks of two: H = 1.6315 bits/block = 0.816 bits/symbol; l̄ = 1.7228 bits/block / 2 = 0.8614 bits/symbol
• Bound: H(S) ≤ l̄ < H(S) + 1/(block size)
Adaptive Huffman Coding (1)
• No initial knowledge of the source distribution
• One-pass procedure
• Based on the statistics of the symbols encountered so far
• Maintain a Huffman tree!
Adaptive Huffman Coding (2)
• Huffman tree: each node carries (id, weight); weight = number of occurrences
• Sibling property:
  – w(parent) = sum of w(children)
  – ids are ordered by non-decreasing weight
• Example:
  Id: 1  2  3  4  5  6  7   8   9   10  11
  w:  2  3  5  5  5  6  10  11  11  21  32   (non-decreasing!)
Adaptive Huffman Coding (3)
• NYT (not yet transmitted) node
  – w(NYT) = 0
  – Sent when a new letter is seen
  – Has the smallest id in the tree
• A new letter itself is sent in an uncompressed code (e.g., a fixed code for an alphabet of m letters)
Adaptive Huffman Coding: Encode (1)
Input: [a a r d v a r k] (alphabet: 26 lowercase letters)
• Start from the initial tree (a single NYT node).
• 'a' is new: send its uncompressed code 00000 → output: 00000
• second 'a': send its code in the tree, 1
• 'r' is new: send the NYT code 0 + uncompressed code 10001 → output: 010001
Adaptive Huffman Coding: Encode (2)
Input: [a a r d v a r k] (alphabet: 26 lowercase letters)
• 'd' is new: send the NYT code 00 + uncompressed code 00011
  Output so far: 0000010100010000011
• 'v' is new: send the NYT code + uncompressed code 1011 → 001011
Adaptive Huffman Coding: Encode (3)
• Tree update after 'v': increment the weights along the path to the root; when the sibling property is violated, swap nodes (here, nodes 47 and 48).
Adaptive Huffman Coding: Encode (4)
• Tree update (cont.): swap nodes 49 and 50, then continue updating the weights up to the root.
Adaptive Huffman Coding: Decode (1)
Input: 0000010100010000011001011…
• Start from the initial tree.
• Read 0000: not a complete uncompressed code; get one more bit.
• 00000 → decode 'a'; output: a
• Read 1 → decode 'a'; output: a a
Adaptive Huffman Coding: Decode (2)
Input: 0000010100010000011001011…
• Read 0 → reach the NYT node.
• Read 1000: not a complete uncompressed code; get one more bit.
• 10001 → decode 'r'
Output: a a r …
Arithmetic Coding
• Basic algorithm
• Implementation
Cases where Huffman coding doesn't work well: a small alphabet with skewed probabilities.

  Letter | Probability | Codeword
  a1     | 0.95        | 0
  a2     | 0.02        | 11
  a3     | 0.03        | 10

• H = −0.95·log(0.95) − 0.02·log(0.02) − 0.03·log(0.03) = 0.335 bits/symbol
• Average length = 0.95×1 + 0.02×2 + 0.03×2 = 1.05 bits/symbol
• Blocking two symbols: average length = 1.222 bits/block = 0.611 bits/symbol
Huffman codes for large blocks
• The number of codewords grows exponentially with the block size
  – N symbols, grouped m to a block → N^m codewords
• Codes must be generated for all sequences of length m
• Not efficient!
Arithmetic Coding: Generate a tag
• View the entire sequence as one big block
  – Step 1: map the sequence to a unique tag
    Ex: A = {a1, a2, a3}, P(a1) = 0.7, P(a2) = 0.1, P(a3) = 0.2; encode a1, a2, a3, …
  – Step 2: generate a binary code for the tag
• Interval narrowing:
  [0, 1) is split at 0.7 and 0.8 → a1 selects [0, 0.7)
  [0, 0.7) is split at 0.49 and 0.56 → a2 selects [0.49, 0.56)
  [0.49, 0.56) is split at 0.539 and 0.546 → a3 selects [0.546, 0.56)
  The tag (e.g., 0.553) lies in the final interval.
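The interval narrowing can be reproduced with a short sketch (illustrative; the function name `narrow` is ours):

```python
def narrow(seq, cum):
    """Narrow [0, 1) once per symbol; the final interval identifies the sequence."""
    low, high = 0.0, 1.0
    for s in seq:
        span = high - low
        c_lo, c_hi = cum[s]
        low, high = low + span * c_lo, low + span * c_hi
    return low, high

# Cumulative intervals for P(a1)=0.7, P(a2)=0.1, P(a3)=0.2.
cum = {"a1": (0.0, 0.7), "a2": (0.7, 0.8), "a3": (0.8, 1.0)}
low, high = narrow(["a1", "a2", "a3"], cum)
print(round(low, 3), round(high, 3))   # 0.546 0.56
# Any number in the final interval, e.g. 0.553, serves as the tag.
```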
Arithmetic Coding: Interpret the tag
Ex: tag = 0.553, with the same model
• 0.553 ∈ [0, 0.7) (l(1) = 0.0, u(1) = 0.7) → decode a1; update the number: (0.553 − 0.0)/(0.7 − 0.0) = 0.79
• 0.79 ∈ [0.7, 0.8) → decode a2; equivalently, 0.553 ∈ [0.49, 0.56), where l(2) = 0.0 + (0.7 − 0.0) × 0.7 = 0.49 and u(2) = 0.0 + (0.7 − 0.0) × 0.8 = 0.56; update: (0.553 − 0.49)/(0.56 − 0.49) = 0.9
• 0.9 ∈ [0.8, 1.0) → decode a3
One more example
• A = {a1, a2, a3}, P(a1) = 0.8, P(a2) = 0.02, P(a3) = 0.18
• Encode a1, a3, a2, a1:
  [0, 1) → a1 → [0, 0.8)
  [0, 0.8) → a3 → [0.656, 0.8)
  [0.656, 0.8) → a2 → [0.7712, 0.77408)
  [0.7712, 0.77408) → a1 → [0.7712, 0.773504)
• Tag: (0.7712 + 0.773504)/2 = 0.772352
Ex: tag = 0.772352
• 0.772352 ∈ [0, 0.8) (l(1) = 0.0, u(1) = 0.8) → decode a1; update: (0.772352 − 0.0)/(0.8 − 0.0) = 0.96544
• 0.96544 ∈ [0.82, 1.0) → decode a3 (l(2) = 0.0 + (0.8 − 0.0) × 0.82 = 0.656, u(2) = 0.0 + (0.8 − 0.0) × 1.0 = 0.8); update: (0.772352 − 0.656)/(0.8 − 0.656) = 0.808
• 0.808 ∈ [0.8, 0.82) → decode a2 (l(3) = 0.656 + (0.8 − 0.656) × 0.8 = 0.7712, u(3) = 0.656 + (0.8 − 0.656) × 0.82 = 0.77408); update: (0.772352 − 0.7712)/(0.77408 − 0.7712) = 0.4
• 0.4 ∈ [0, 0.8) → decode a1
Generating a binary code
• Use the binary representation of the tag
  – Truncate it to ⌈log2(1/P)⌉ + 1 bits
  – Probability ↑ → interval ↑ → required bits ↓
• Ex:

  Symbol | Prob. | Cdf   | Tag    | In binary | ⌈log2(1/P)⌉+1 | Code
  a1     | 0.5   | 0.5   | 0.25   | .0100     | 2             | 01
  a2     | 0.25  | 0.75  | 0.625  | .1010     | 3             | 101
  a3     | 0.125 | 0.875 | 0.8125 | .1101     | 4             | 1101
  a4     | 0.125 | 1.0   | 0.9375 | .1111     | 4             | 1111

  (an extreme case where the sequence has only one letter)
• Bounds: H(S) ≤ l̄ < H(S) + 2/(sequence length)
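The table's tags and truncated codes can be generated as follows (illustrative sketch; the helper name `to_binary` is ours):

```python
import math

def to_binary(frac, nbits):
    """First nbits of the binary expansion of a fraction in [0, 1)."""
    bits = ""
    for _ in range(nbits):
        frac *= 2
        bit = int(frac)
        bits += str(bit)
        frac -= bit
    return bits

probs = [0.5, 0.25, 0.125, 0.125]
codes = []
cdf = 0.0
for p in probs:
    tag = cdf + p / 2                        # midpoint of the symbol's interval
    nbits = math.ceil(math.log2(1 / p)) + 1  # truncation length
    codes.append(to_binary(tag, nbits))
    cdf += p

print(codes)   # ['01', '101', '1101', '1111']
```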
Adaptive Arithmetic Coding
• A = {a1, a2, a3}, all counts initialized to 1
• Input sequence: a2 a3 a3 a2
• Step by step (update the counts after each symbol):
  1. Counts (1, 1, 1): P = (1/3, 1/3, 1/3); [0, 1) splits at 0.33, 0.67 → a2 → [0.33, 0.67)
  2. Counts (1, 2, 1): P = (1/4, 2/4, 1/4); [0.33, 0.67) splits at 0.42, 0.58 → a3 → [0.58, 0.67)
  3. Counts (1, 2, 2): P = (1/5, 2/5, 2/5); [0.58, 0.67) splits at 0.60, 0.63 → a3 → [0.63, 0.67)
  4. Counts (1, 2, 3): P = (1/6, 2/6, 3/6); [0.63, 0.67) splits at 0.64, 0.65 → a2 → [0.64, 0.65)
  5. Counts are now (1, 3, 3): P = (1/7, 3/7, 3/7)
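A sketch of the count-updating loop; it reproduces the final interval [0.64, 0.65) (the function name is ours):

```python
def adaptive_intervals(seq, alphabet):
    """Narrow the interval while updating symbol counts after each input symbol."""
    counts = {s: 1 for s in alphabet}     # all counts start at 1
    low, high = 0.0, 1.0
    for s in seq:
        total = sum(counts.values())
        span = high - low
        c = 0
        for sym in alphabet:              # cumulative counts give the sub-interval
            if sym == s:
                low, high = (low + span * c / total,
                             low + span * (c + counts[sym]) / total)
                break
            c += counts[sym]
        counts[s] += 1                    # model update: one more occurrence seen
    return low, high

low, high = adaptive_intervals(["a2", "a3", "a3", "a2"], ["a1", "a2", "a3"])
print(round(low, 2), round(high, 2))   # 0.64 0.65
```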
Implementation
• Two problems:
  – Finite precision → synchronized rescaling!
  – We don't want to send the first bit only after seeing the entire sequence → incremental encoding!
Implementation: Encode (1)
• Incremental coding
  – Send the MSB as soon as l(k) and u(k) share a common prefix
• Rescaling: map the half interval containing the code to [0, 1)
  – E1: [0, 0.5) → [0, 1); E1(x) = 2x; send 0 (left shift by 1 bit)
  – E2: [0.5, 1) → [0, 1); E2(x) = 2(x − 0.5); send 1 (left shift by 1 bit)
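A minimal incremental encoder with E1/E2 rescaling only (no E3), using a simplified termination that sends 0.5 (binary .10…0) once it lies inside the final interval. On the example a1, a3, a2, a1 with P(a1) = 0.8, P(a2) = 0.02, P(a3) = 0.18, it emits .1100011, the tag from the following slides (sketch; names are ours):

```python
def encode_incremental(seq, cum):
    """Emit bits as soon as low and high share their most significant bit."""
    low, high, bits = 0.0, 1.0, ""
    for s in seq:
        span = high - low
        c_lo, c_hi = cum[s]
        low, high = low + span * c_lo, low + span * c_hi
        while True:
            if high <= 0.5:               # E1: interval in [0, 0.5) -> send 0
                bits += "0"
                low, high = 2 * low, 2 * high
            elif low >= 0.5:              # E2: interval in [0.5, 1) -> send 1
                bits += "1"
                low, high = 2 * (low - 0.5), 2 * (high - 0.5)
            else:
                break
    bits += "1"   # simplified termination: 0.5 = .10...0 lies in [low, high) here
    return bits

cum = {"a1": (0.0, 0.8), "a2": (0.8, 0.82), "a3": (0.82, 1.0)}
print(encode_incremental(["a1", "a3", "a2", "a1"], cum))   # 1100011
```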
Implementation: Encode (2)
Example: a1, a3, a2, a1 (tag: 0.7734375)
E1: [0, 0.5) → [0, 1); E1(x) = 2x
E2: [0.5, 1) → [0, 1); E2(x) = 2(x − 0.5)
• a1: [0, 0.8); no common prefix yet
• a3: [0.656, 0.8) → send 1, rescale (E2) → [0.312, 0.6)
• a2: [0.5424, 0.54816) → send 1, 0, 0, rescaling each time
Implementation: Encode (3)
• (a2, cont.) → send 0, then 1 → interval [0.3568, 0.54112)
• a1: [0.3568, 0.504256); no further common prefix
• To finish, use 0.5: send 10…0
• The stream is the tag: 0.7734375 = (.1100011)₂
• How to stop?
  1. Send the stream size
  2. Use an EOF symbol (00…0)
Implementation: Decode (1)
• Find the smallest symbol interval (0.82 − 0.8 = 0.02) → read 6 bits at a time (2⁻⁶ < 0.02)
Input: 11000110…0
• .110001 = 0.765625 ∈ [0, 0.8) → decode a1; update: (0.765625 − 0)/(0.8 − 0) = 0.957 ∈ [0.82, 1.0) → decode a3
• Shift in new bits: .100011 = 0.546875; update: (0.546875 − 0.312)/(0.6 − 0.312) = 0.8155 ∈ [0.8, 0.82) → decode a2
Implementation: Decode (2)
• Read .10 = 0.5; update: (0.5 − 0.3568)/(0.54112 − 0.3568) = 0.7769 ∈ [0, 0.8) → decode a1
Enhancement (optional)
• One more mapping:
  – E3: [0.25, 0.75) → [0, 1); E3(x) = 2(x − 0.25)
• How do we transfer information about an E3 mapping to the decoder?
E3 mapping
• E3 followed by E1 maps [¼, ½) to [0, 1): send 0 1
• E3 followed by E2 maps [½, ¾) to [0, 1): send 1 0
E3 mapping
• E3…E3 (m times) followed by E1: [¼, ½) → 01; [¼+⅛, ½) → 011; [¼+⅛+…, ½) → 011…1 — send 0 1 1 … 1 (m ones)
• E3…E3 (m times) followed by E2: send 1 0 0 … 0 (m zeros)
E3 mapping: Encode
Example: a1, a3, a2, a1 (tag: 0.7734375)
E1: [0, 0.5) → [0, 1); E1(x) = 2x
E2: [0.5, 1) → [0, 1); E2(x) = 2(x − 0.5)
E3: [0.25, 0.75) → [0, 1); E3(x) = 2(x − 0.25)
• a1: [0, 0.8)
• a3: send 1 (E2); the result straddles 0.5 → E3, m = 1
• a2: send 1 0 (E2 plus the pending E3), m = 0; then send 0, send 0
E3 mapping: Encode
• (a2, cont.) send 1 (E2); then E3, m = 1
• a1: the interval straddles 0.5; use 0.5: send 10…0
Output: 11000110…0
E3 mapping: Decode
• Find the smallest symbol interval (0.82 − 0.8 = 0.02) → read 6 bits (2⁻⁶ < 0.02)
Input: 11000110…0
• .110001 = 0.765625 ∈ [0, 0.8) → decode a1; update: (0.765625 − 0)/(0.8 − 0) = 0.957 → decode a3
• .100011 = 0.546875; E3 (m = 1): 2 × (0.546875 − 0.25) = 0.5938
• update: (0.5938 − 0.124)/(0.7 − 0.124) = 0.8155 → decode a2
E3 mapping: Decode
• .10…0 = 0.5; E3 (m = 1): 2 × (0.5 − 0.25) = 0.5; update: (0.5 − 0.2136)/(0.58224 − 0.2136) = 0.7769 → decode a1
Output: a1 a3 a2 a1
Next week
• In-class paper bidding
• Decide with your partners how to distribute your points before coming to class!