VARIABLE TO FIXED LENGTH SOURCE CODING - TUNSTALL CODES

6.441 Supplementary Notes 1, 2/10/94

So far, we have viewed source coding as a mapping from letters of a source alphabet into strings from a code alphabet. We saw that Huffman coding minimized the expected number of code letters per source letter, subject to the requirement of unique decodability. We then saw that by accumulating L source letters at a time into a "super letter" from the alphabet of source L-tuples, the expected number of code letters per source letter, n_L, could typically be reduced with increasing L. This reduction arose in two ways: first by taking advantage of any statistical dependencies that might exist between successive source letters, and second by reducing the effect of the integer constraint on code word length. We saw that (for stationary sources) the expected number of binary code letters per source letter, n_L, satisfies

    H∞ ≤ n_L ≤ H_L + 1/L,   where H_L → H∞ as L → ∞.

By taking L to be arbitrarily large, and viewing it as the lifetime of the source, we see that in a very real sense H∞ is the minimum expected number of binary code letters per source letter that can be achieved by any source coding technique, and that this value can be approached arbitrarily closely by Huffman coding over L-tuples of source letters for sufficiently large L.

We see from this that, from a purely theoretical standpoint, there is not much reason to explore other approaches to source coding. From a slightly more practical viewpoint, however, there are two important questions that must be asked. First, are there computationally simpler techniques for approaching the limit H∞? Note that a Huffman code on L-tuples from a source alphabet of size K contains K^L code words; this is not attractive computationally for large L. The second question has to do with the assumption of known source probabilities. Usually such probabilities are not known, and thus one would like to have source coding techniques that are "universal" in the sense that they work well independent of the source probabilities, or "adaptive" in the sense that they adapt to the existing source probabilities. There is not much difference between universal and adaptive source coding, and we discuss these topics later. The important point for now, however, is that we need a richer set of tools than just Huffman coding in order to address both the problem of computational complexity and the problem of adaptation.

A broader viewpoint of a source coder than that taken for Huffman codes is given in Figure 1, where the encoder is broken into two parts, a parser and a string encoder. The parser segments the source output sequence into a concatenation of strings. As an example:

    a b a a c b a a b a c a a a b c ...
    (ab)(aac)(b)(aab)(ac)(aaa)(b)(c)...

The string encoder then maps the set of possible strings produced by the parser into binary code words (or, more generally, D-ary code words). For example, the parser might simply parse the source output into L-tuples, as analyzed earlier. The primary concern here, however, is the case where the parser output is a set of variable length strings (as in the example above). We assume that there is a finite dictionary of M allowable strings and that the string encoder maps the set of such strings into binary code words. If these binary code words are uniquely decodable (which we assume), then a decoder can reproduce the parsed strings, from which the original source output is obtained.

[Figure 1: Model of source encoder (source → parser → string encoder → binary code words).]
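As a concrete illustration of this parser/string-encoder model, the short Python sketch below segments a source sequence with a small prefix-condition dictionary and maps each parsed string to a fixed-length binary code word. The dictionary and the code word assignment here are hypothetical, chosen only for this example, and are not taken from the notes.

    # Illustrative sketch of the parser + string encoder model (dictionary and
    # code word assignment are hypothetical, chosen only for this example).
    DICTIONARY = ["aaa", "aab", "ab", "b"]                  # M = 4 strings, prefix condition
    CODEWORDS = {s: format(i, "02b") for i, s in enumerate(DICTIONARY)}   # n = 2 bits each

    def parse_and_encode(source):
        """Segment `source` into dictionary strings and emit their code words."""
        out, i = [], 0
        while i < len(source):
            for s in DICTIONARY:                            # at most one entry can match here,
                if source.startswith(s, i):                 # because of the prefix condition
                    out.append(CODEWORDS[s])
                    i += len(s)
                    break
            else:
                break                                       # leftover tail shorter than any entry
        return out

    print(parse_and_encode("aabaaaabb"))                    # ['01', '00', '10', '11']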
We see from the above description that an essential element of a parser is its dictionary of strings. Assuming that any combination of source letters (from the given K-letter alphabet) is possible, the dictionary must have the property that every possible source sequence has a prefix that is in the dictionary (for otherwise the parser could not produce an initial string). Such a dictionary is said to be a valid dictionary. We shall also restrict our attention temporarily to dictionaries that satisfy the prefix condition, i.e., in which no dictionary entry is a prefix of any other dictionary entry.

[Figure 2: Dictionary trees for a ternary source alphabet: (a) a non-valid dictionary, (b) a valid prefix-condition dictionary, (c) a valid non-prefix-condition dictionary.]

A dictionary can be represented as a rooted tree, as illustrated for a ternary source alphabet in Figure 2. Note that the non-valid dictionary in Figure 2a is not capable of parsing ab..., illustrating that parsers must use valid dictionaries. Note also that the non-prefix-condition dictionary in Figure 2c allows the sequence aaab... to be parsed either as (aaa)(b) or as (aa)(ab). This is not a serious problem, except that the parser is not fully described by such a dictionary; it also needs a rule for choosing between such alternatives. Practical adaptive source coders often use such non-prefix-condition dictionaries. The rule usually used, given the dictionary of strings y1, y2, ..., yM, is for the parser to pick the longest prefix of the source sequence u1, u2, ... that is in the dictionary (say u1, ..., uL). For a prefix-condition dictionary, of course, the dictionary fully specifies the parsing process.

Note that the dictionary tree here is analogous to the code tree for a Huffman code. A valid prefix-condition dictionary corresponds to a complete prefix-condition code tree. It is interesting to observe that validity is necessary for the parser, whereas completeness is only desirable for efficiency in the code tree.

We now restrict our attention to valid prefix-condition dictionaries. As we saw before, each intermediate node in such a dictionary tree has K immediate descendants (where K is the source alphabet size), and the total number of leaves in such a tree is of the form M = a(K-1) + 1 for some integer a, where a is the number of intermediate nodes, including the root. We also now restrict our attention to variable to fixed length codes in which the string encoder maps each dictionary string into a fixed-length binary string of length n. This requires the number of strings to satisfy M ≤ 2^n, and for efficiency we make M as large as possible subject to M = a(K-1) + 1 and M ≤ 2^n. That is, M lies in the range 2^n - (K-2) ≤ M ≤ 2^n.
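The two dictionary properties defined above are easy to test mechanically. The following sketch (an illustration of mine, not part of the notes) checks the prefix condition directly, and checks validity by verifying that every source sequence as long as the longest dictionary entry has some prefix in the dictionary, which is equivalent to the definition above since no entry is longer than that.

    from itertools import product

    def prefix_condition(dictionary):
        """No dictionary entry may be a prefix of any other entry."""
        return not any(x != y and y.startswith(x)
                       for x in dictionary for y in dictionary)

    def valid(dictionary, alphabet):
        """Every source sequence must have a prefix in the dictionary; it suffices
        to check all sequences whose length equals that of the longest entry."""
        m = max(len(x) for x in dictionary)
        return all(any("".join(seq).startswith(x) for x in dictionary)
                   for seq in product(alphabet, repeat=m))

    ternary = "abc"
    print(valid(["aa", "ab", "ac", "b", "c"], ternary))           # True
    print(valid(["aa", "b", "c"], ternary))                       # False: 'ab...' has no parse
    print(prefix_condition(["aa", "aaa", "ab", "ac", "b", "c"]))  # False: 'aa' prefixes 'aaa'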
It is intuitively plausible that the appropriate objective, given the constraints above, is to find that dictionary of at most 2^n strings that maximizes the expected number, E[L], of source letters per dictionary string. We shall see that there is a remarkably simple algorithm, due to Tunstall (B. Tunstall, "Synthesis of Noiseless Compression Codes," Ph.D. thesis, Georgia Tech, 1968), for constructing such a dictionary.

To justify maximizing E[L], consider a memoryless source (i.e., a source with statistically independent letters). For a given dictionary, let L1, L2, ... be the lengths of successive strings used by the parser in encoding the source. The number of source letters per code letter encoded by the first m parser strings is (L1 + L2 + ... + Lm)/(mn). By the law of large numbers, this approaches E[L]/n with probability 1 as m → ∞, and thus the number of code letters per source letter approaches n/E[L] with probability 1. A somewhat cleaner way of seeing this, for those familiar with renewal theory, is to view a renewal as occurring at each parsing point in the source sequence.

We now recall that E[L] is the expected length of the dictionary tree and that this expected length can be found simply by summing the probabilities associated with each intermediate node in the tree, including the root. In particular, given the dictionary tree, label each leaf by the probability of the corresponding string and label each intermediate node by the sum of the probabilities of all leaves growing from that node. E[L] is then the sum of the probabilities of all intermediate nodes, counting the root (see Figure 3).

[Figure 3: Dictionary tree for P(a) = 0.7, P(b) = 0.2, P(c) = 0.1; the intermediate nodes (root, a, aa) have probabilities 1, 0.7, and 0.49, so E[L] = 1 + 0.7 + 0.49 = 2.19.]

Our problem now is to choose a valid prefix-condition dictionary tree with M = a(K-1) + 1 leaves so as to maximize E[L], which means maximizing the sum of the probabilities of the intermediate nodes. Visualize starting with a full K-ary tree including all nodes out to level M, labelled with the probabilities of the corresponding strings. Then prune the tree down to M leaves (a intermediate nodes, counting the root) in such a way as to maximize E[L]. We maximize E[L] by choosing the a nodes of maximum probability and using all of them as intermediate nodes in the pruned tree. This is simple, however; pick out the highest probability nodes one by one, starting at the root. Each node picked has all of its ancestors already picked, since each ancestor has higher probability than the given node.

Tunstall algorithm:
1) Start with the root as an intermediate node and all K level-1 nodes as leaves.
2) Pick the highest probability leaf, make it an intermediate node, and grow K leaves on it.
3) If the number of leaves is less than M, go to step 2; else stop.

Figure 4 gives an example of this algorithm for M = 4.

[Figure 4a: Tunstall algorithm for a source with P(a) = 0.7, P(b) = 0.3, M = 4.]

[Figure 4b: Extension of Figure 4a to M = 8. Observe that each leaf node is less probable than each intermediate node, and thus the intermediate nodes selected are those of maximum probability.]

Note that this algorithm was demonstrated to maximize E[L] only for the case of a memoryless source. There is a subtle, but very serious, problem in trying to generalize this algorithm to sources with memory. The problem is that the probability of a given string of letters, starting at a parsing point, depends on how the parsing points are chosen, which depends on the dictionary itself. For example, in Figure 4a, a parsing point appears after the letter b with probability 0.657, whereas the letter b appears in the source sequence with probability 0.3. It appears to be a reasonable heuristic to maximize E[L] by the algorithm above, assuming that the source starts in steady state, but this is not optimum since, as seen above, such a tree will not leave the source in steady state.

Note also that the assumption of the prefix condition is essential in the demonstration above. One can easily find examples in which E[L] can be increased beyond its value in the Tunstall algorithm by using a valid dictionary of the same size that does not satisfy the prefix condition. If the dictionary does not satisfy the prefix condition, however, the conditional probability distribution on the first source letter after a parse might be different from the unconditional distribution (see Figure 5).

[Figure 5: P(a) = 0.7, P(b) = 0.2, P(c) = 0.1. The string (a) is used only when the following letter is either b or c; as a result, the strings (b) and (c) appear more often than one might expect, and E[L] = 1.7/1.21.]
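The three-step Tunstall algorithm above is short enough to state as executable code. The sketch below (function and variable names are mine) builds the dictionary for a memoryless source using a heap of leaf probabilities and accumulates E[L] as the sum of intermediate-node probabilities; run with P(a) = 0.7, P(b) = 0.3 and M = 4, it reproduces the construction of Figure 4a and gives E[L] = 1 + 0.7 + 0.49 = 2.19.

    import heapq

    def tunstall(probs, M):
        """Grow a valid prefix-condition dictionary with M leaves for a memoryless
        source by repeatedly splitting the most probable leaf (M of the form
        a(K-1)+1 assumed). Returns the leaves with their probabilities and E[L]."""
        heap = [(-p, s) for s, p in probs.items()]          # max-heap via negated probabilities
        heapq.heapify(heap)
        E_L = 1.0                                           # the root is always an intermediate node
        while len(heap) < M:
            neg_p, s = heapq.heappop(heap)                  # most probable leaf becomes intermediate
            E_L += -neg_p                                   # E[L] = sum of intermediate-node probabilities
            for letter, p in probs.items():
                heapq.heappush(heap, (neg_p * p, s + letter))
        return sorted((s, -np) for np, s in heap), E_L

    leaves, E_L = tunstall({"a": 0.7, "b": 0.3}, M=4)
    print(leaves)   # [('aaa', 0.343), ('aab', 0.147), ('ab', 0.21), ('b', 0.3)]  (up to float rounding)
    print(E_L)      # 2.19 = 1 + 0.7 + 0.49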
We now analyze Tunstall codes (again assuming a memoryless source). Let Q be the probability of the last intermediate node chosen in the algorithm. For each leaf v_i, P(v_i) ≤ Q, since Q was the probability of the most probable leaf immediately before that leaf was turned into an intermediate node. Similarly, each intermediate node has probability at least Q. Since any leaf v_i can be reached from its immediate ancestral intermediate node with probability at least Pmin (the smallest source letter probability), P(v_i) ≥ Q·Pmin. Thus, for each leaf,

    Q·Pmin ≤ P(v_i) ≤ Q.    (1)

Summing the left side of (1) over the M leaf nodes, we have Q·Pmin·M ≤ 1, so that Q ≤ (M·Pmin)^(-1). Combining this with the right side of (1),

    P(v_i) ≤ (M·Pmin)^(-1).    (2)

We can now use (2) to lower bound the entropy of the ensemble of leaf nodes:

    H(V) = Σ_{i=1..M} P(v_i) log[1/P(v_i)] ≥ Σ_{i=1..M} P(v_i) log(M·Pmin) = log(M·Pmin).    (3)

The entropy of the ensemble of leaf nodes is also equal to the source entropy H(U) times the expected number of source letters E[L] in a dictionary entry (see homework set 3, problem 2). Thus

    E[L] = H(V)/H(U) ≥ log(M·Pmin)/H(U).    (4)

We have already argued that n, the number of binary code letters per dictionary entry, satisfies n ≤ log(M+K-2). Combining this with (4), we finally obtain

    n/E[L] ≤ H(U)·log(M+K-2)/log(M·Pmin).    (5)

We see that the right-hand side of (5) approaches H(U) in the limit as M → ∞ (in particular, the limit is approached as 1/log M). Thus the number of binary code letters per source letter can be made as close to H(U) as desired by variable to fixed length coding with a sufficiently large dictionary. Note, however, that our result here is weaker than our earlier coding theorem using fixed to variable length codes, since the result here does not apply to sources with memory.

Although the analysis here does not apply to sources with memory, one of the major reasons for being interested in variable to fixed length coding is the potential application to sources with memory. For example, if one thinks of ordinary English text, one finds many common words and phrases, in the range of 5 to 30 characters, for which it would be desirable to provide code words. Providing code words for all strings of 30 characters would of course be computationally infeasible, whereas a variable to fixed length dictionary could easily include highly probable long strings.
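As a quick numerical sanity check on bound (5) (my own check, not from the notes), the sketch below rebuilds a Tunstall dictionary for the binary source of Figure 4, obtains E[L] through the identity H(V) = H(U)·E[L], and compares the achieved rate n/E[L] with the right-hand side of (5) and with H(U).

    import heapq, math

    def tunstall_leaves(probs, M):
        """Leaf strings and probabilities of the Tunstall dictionary (memoryless source)."""
        heap = [(-p, s) for s, p in probs.items()]
        heapq.heapify(heap)
        while len(heap) < M:
            neg_p, s = heapq.heappop(heap)
            for letter, p in probs.items():
                heapq.heappush(heap, (neg_p * p, s + letter))
        return [(s, -np) for np, s in heap]

    probs = {"a": 0.7, "b": 0.3}
    K, n = len(probs), 3
    M = 2 ** n                                               # K = 2, so every M has the form a(K-1)+1
    leaves = tunstall_leaves(probs, M)
    H_U = -sum(p * math.log2(p) for p in probs.values())     # source entropy, bits
    H_V = -sum(p * math.log2(p) for _, p in leaves)          # leaf-ensemble entropy, bits
    E_L = H_V / H_U                                          # H(V) = H(U) E[L]
    Pmin = min(probs.values())
    bound = H_U * math.log2(M + K - 2) / math.log2(M * Pmin) # right-hand side of (5)
    print(f"n/E[L] = {n/E_L:.3f}  bound (5) = {bound:.3f}  H(U) = {H_U:.3f}")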
Appendix: One additional topic of academic interest (i.e., read at your own peril) is to find the asymptotic performance of Tunstall codes for large M, as opposed to simply bounding their performance. The appropriate tool for this problem is to view the process of generating self information from the source as a renewal process. That is, let U1, U2, ... be the sequence of source letters, let I(Ui) be the self information (in nats) of the i-th letter, let Sj = I(U1) + I(U2) + ... + I(Uj), and let {N(t); t ≥ 0} be the renewal process defined, for each t ≥ 0, by specifying the random variable N(t) as the largest integer j for which Sj ≤ t < S(j+1).

By Blackwell's theorem (assuming that I(U) has a non-arithmetic distribution), the expected number of renewals in the interval (t, t+ε) approaches ε/H(U) as t → ∞ for any ε > 0, where H(U) is in nats. For ε sufficiently small, this is just the probability of a renewal in (t, t+ε). If a renewal occurs in (t, t+ε), then Sj lies in (t, t+ε) for some j, and the corresponding string u1, ..., uj has a probability between e^(-t) and e^(-(t+ε)). Thus the number of different strings u1, ..., uj for which Sj is in (t, t+ε) tends to ε·e^t/H(U) for small ε and large t.

Now let t = ln(1/Q) be the self information of the last intermediate node chosen in a Tunstall code. We can find the number of leaf nodes for which the last source letter is some given a_i by observing that each intermediate node of self information between t + ln P(a_i) and t has one such leaf. Since all nodes with self information in this range are intermediate nodes in the Tunstall code, we can integrate the density of nodes of self information τ from τ = t + ln P(a_i) to τ = t. Thus the total number of leaves ending in a_i is approximately e^t·(1 - P(a_i))/H(U). It follows by summing over i that the total number of leaf nodes is approximately

    M = e^t·(K-1)/H(U).    (6)

Next, the self information of a sample dictionary entry can be interpreted as the first renewal after t in the renewal process. Thus we have

    H(U)·E[L] = H(V) = t + [Σ_i P(a_i)·(-ln P(a_i))²] / [2 Σ_i P(a_i)·(-ln P(a_i))].    (7)

Finally, taking n = log2(M) (since for large M, the difference between M and M-K+2 is negligible), Eq. (6) yields

    n = t·log2(e) + log2[(K-1)/H(U)]
      = E[L]·H(U)·log2(e) - {[Σ_i P(a_i)·(-ln P(a_i))²] / [2 Σ_i P(a_i)·(-ln P(a_i))]}·log2(e) + log2[(K-1)/H(U)].    (8)
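For readers who want to see these asymptotics in action, the short sketch below (assumptions and variable names mine, not from the notes) inverts Eq. (6) to estimate t from M, applies Eq. (7) to predict E[L], and compares the prediction with the exact E[L] obtained from the Tunstall construction for the binary source used earlier.

    import heapq, math

    def exact_E_L(probs, M):
        """Exact E[L] of the Tunstall dictionary: sum of intermediate-node probabilities."""
        heap = [(-p, s) for s, p in probs.items()]
        heapq.heapify(heap)
        E_L = 1.0
        while len(heap) < M:
            neg_p, s = heapq.heappop(heap)
            E_L += -neg_p
            for letter, p in probs.items():
                heapq.heappush(heap, (neg_p * p, s + letter))
        return E_L

    probs = {"a": 0.7, "b": 0.3}
    K = len(probs)
    H = -sum(p * math.log(p) for p in probs.values())         # H(U) in nats
    m1 = sum(p * -math.log(p) for p in probs.values())        # E[I(U)]  (equals H)
    m2 = sum(p * math.log(p) ** 2 for p in probs.values())    # E[I(U)^2]
    for M in (2 ** 4, 2 ** 8, 2 ** 12):
        t = math.log(M * H / (K - 1))                         # invert Eq. (6)
        predicted = (t + m2 / (2 * m1)) / H                   # Eq. (7)
        print(M, round(exact_E_L(probs, M), 3), round(predicted, 3))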