Chapter 9: Algorithmic Strength
Reduction in Filters and
Transforms
Keshab K. Parhi
Outline
• Introduction
• Parallel FIR Filters
– Formulation of Parallel FIR Filter Using Polyphase
Decomposition
– Fast FIR Filter Algorithms
• Discrete Cosine Transform and Inverse DCT
– Algorithm-Architecture Transformation
– Decimation-in-Frequency Fast DCT for 2^M-point DCT
Introduction
• Strength reduction reduces hardware complexity by exploiting substructure sharing, leading to less silicon area or power consumption in a VLSI ASIC implementation, or a shorter iteration period in a programmable DSP implementation
• Strength reduction enables design of parallel FIR filters with a less-
than-linear increase in hardware
• The DCT is widely used in video compression. Algorithm-architecture transformations and the decimation-in-frequency approach are used to design fast DCT architectures with significantly fewer multiplication operations
Parallel FIR Filters
• An N-tap FIR filter can be expressed in the time domain as
  $y(n) = h(n) * x(n) = \sum_{i=0}^{N-1} h(i)\, x(n-i), \quad n = 0, 1, 2, \cdots, \infty$
  – where {x(n)} is an infinite-length input sequence and the sequence {h(n)} contains the FIR filter coefficients of length N
• In the Z-domain, it can be written as
  $Y(z) = H(z) \cdot X(z) = \left(\sum_{n=0}^{N-1} h(n) z^{-n}\right) \cdot \left(\sum_{n=0}^{\infty} x(n) z^{-n}\right)$
Formulation of Parallel FIR Filters Using
Polyphase Decomposition
• The Z-transform of the sequence x(n) can be expressed as:
  $X(z) = x(0) + x(1)z^{-1} + x(2)z^{-2} + x(3)z^{-3} + \cdots$
  $\qquad = \left[x(0) + x(2)z^{-2} + x(4)z^{-4} + \cdots\right] + z^{-1}\left[x(1) + x(3)z^{-2} + x(5)z^{-4} + \cdots\right] = X_0(z^2) + z^{-1}X_1(z^2)$
  – where $X_0(z^2)$ and $X_1(z^2)$, the two polyphase components, are the z-transforms of the even time-series {x(2k)} and the odd time-series {x(2k+1)}, for $0 \le k < \infty$, respectively
• Similarly, the length-N filter coefficients H(z) can be decomposed as:
  $H(z) = H_0(z^2) + z^{-1}H_1(z^2)$
  – where $H_0(z^2)$ and $H_1(z^2)$ are of length N/2 and are referred to as the even and odd sub-filters, respectively
• The even-numbered output sequence {y(2k)} and the odd-numbered output sequence {y(2k+1)}, for $0 \le k < \infty$, can be computed as
  $Y(z) = Y_0(z^2) + z^{-1}Y_1(z^2) = \left(X_0(z^2) + z^{-1}X_1(z^2)\right)\left(H_0(z^2) + z^{-1}H_1(z^2)\right)$
  $\qquad = X_0(z^2)H_0(z^2) + z^{-1}\left[X_0(z^2)H_1(z^2) + X_1(z^2)H_0(z^2)\right] + z^{-2}X_1(z^2)H_1(z^2)$
– i.e.,
  $Y_0(z^2) = X_0(z^2)H_0(z^2) + z^{-2}X_1(z^2)H_1(z^2)$
  $Y_1(z^2) = X_0(z^2)H_1(z^2) + X_1(z^2)H_0(z^2)$
– where $Y_0(z^2)$ and $Y_1(z^2)$ correspond to y(2k) and y(2k+1) in the time domain, respectively. This 2-parallel filter processes 2 inputs x(2k) and x(2k+1) and generates 2 outputs y(2k) and y(2k+1) every iteration. It can be written in matrix form as:
  $\begin{bmatrix} Y_0 \\ Y_1 \end{bmatrix} = \begin{bmatrix} H_0 & z^{-2}H_1 \\ H_1 & H_0 \end{bmatrix} \cdot \begin{bmatrix} X_0 \\ X_1 \end{bmatrix}$, or $Y = H \cdot X$   (9.1)
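A minimal numerical sketch of (9.1) is given below (Python/NumPy, illustrative names only, not part of the original text): the even and odd output streams are built from the four sub-filter products at the block rate, with $z^{-2}$ realized as a one-block delay, and compared against the serial filter.

```python
import numpy as np

# 2-parallel polyphase decomposition of (9.1), checked against the serial FIR filter.
rng = np.random.default_rng(0)
h = rng.standard_normal(8)                   # N = 8 taps
x = rng.standard_normal(64)                  # even-length input block
h0, h1 = h[0::2], h[1::2]                    # H0(z^2), H1(z^2)
x0, x1 = x[0::2], x[1::2]                    # X0(z^2), X1(z^2)

delay = lambda s: np.concatenate(([0.0], s[:-1]))            # one block delay = z^{-2}
y_even = np.convolve(h0, x0) + delay(np.convolve(h1, x1))    # Y0 = H0X0 + z^{-2} H1X1
y_odd = np.convolve(h1, x0) + np.convolve(h0, x1)            # Y1 = H1X0 + H0X1

y = np.empty(2 * len(y_even))
y[0::2], y[1::2] = y_even, y_odd             # interleave y(2k) and y(2k+1)
y_ref = np.convolve(h, x)
assert np.allclose(y, y_ref[:len(y)])
```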
– The following figure shows the traditional 2-parallel FIR filter structure,
which requires 2N multiplications and 2(N-1) additions
  [Figure: traditional 2-parallel FIR filter structure]
• For 3-phase poly-phase decomposition, the input sequence X(z) and the filter coefficients H(z) can be decomposed as follows
  $X(z) = X_0(z^3) + z^{-1}X_1(z^3) + z^{-2}X_2(z^3),$
  $H(z) = H_0(z^3) + z^{-1}H_1(z^3) + z^{-2}H_2(z^3)$
  – where $\{X_0(z^3), X_1(z^3), X_2(z^3)\}$ correspond to x(3k), x(3k+1) and x(3k+2) in the time domain, respectively; and $\{H_0(z^3), H_1(z^3), H_2(z^3)\}$ are the three sub-filters of H(z) with length N/3.
– The output can be computed as:
  $Y(z) = Y_0(z^3) + z^{-1}Y_1(z^3) + z^{-2}Y_2(z^3) = \left(X_0 + z^{-1}X_1 + z^{-2}X_2\right)\left(H_0 + z^{-1}H_1 + z^{-2}H_2\right)$
  $\qquad = \left[X_0H_0 + z^{-3}(X_1H_2 + X_2H_1)\right] + z^{-1}\left[X_0H_1 + X_1H_0 + z^{-3}X_2H_2\right] + z^{-2}\left[X_0H_2 + X_1H_1 + X_2H_0\right]$
– In every iteration, this 3-parallel FIR filter processes 3 input samples x(3k), x(3k+1) and x(3k+2), and generates 3 outputs y(3k), y(3k+1) and y(3k+2), and can be expressed in matrix form as:
  $\begin{bmatrix} Y_0 \\ Y_1 \\ Y_2 \end{bmatrix} = \begin{bmatrix} H_0 & z^{-3}H_2 & z^{-3}H_1 \\ H_1 & H_0 & z^{-3}H_2 \\ H_2 & H_1 & H_0 \end{bmatrix} \cdot \begin{bmatrix} X_0 \\ X_1 \\ X_2 \end{bmatrix}$   (9.2)
– The following figure shows the traditional 3-parallel FIR filter structure,
which requires 3N multiplications and 3(N-1) additions
  [Figure: traditional 3-parallel FIR filter structure]
• Generalization:
– The outputs of an L-Parallel FIR filter can be computed as:
  $Y_k = z^{-L}\sum_{i=k+1}^{L-1} H_i X_{L+k-i} + \sum_{i=0}^{k} H_i X_{k-i}, \quad 0 \le k \le L-2; \qquad Y_{L-1} = \sum_{i=0}^{L-1} H_i X_{L-1-i}$   (9.3)
– This can also be expressed in matrix form as
  $\begin{bmatrix} Y_0 \\ Y_1 \\ \vdots \\ Y_{L-1} \end{bmatrix} = \begin{bmatrix} H_0 & z^{-L}H_{L-1} & \cdots & z^{-L}H_1 \\ H_1 & H_0 & \cdots & z^{-L}H_2 \\ \vdots & \vdots & \ddots & \vdots \\ H_{L-1} & H_{L-2} & \cdots & H_0 \end{bmatrix} \cdot \begin{bmatrix} X_0 \\ X_1 \\ \vdots \\ X_{L-1} \end{bmatrix}$, or $Y = H \cdot X$   (9.4)
Note: H is a pseudo-circulant matrix
Two-parallel and Three-parallel Low-Complexity FIR Filters
• Two-parallel Fast FIR Filter
– The 2-parallel FIR filter can be rewritten as
  $Y_0 = H_0X_0 + z^{-2}H_1X_1$
  $Y_1 = (H_0 + H_1)(X_0 + X_1) - H_0X_0 - H_1X_1$   (9.5)
– This 2-parallel fast FIR filter contains 3 sub-filters. The 2 sub-filters $H_0X_0$ and $H_1X_1$ are shared for the computation of $Y_0$ and $Y_1$
  [Figure: reduced-complexity 2-parallel fast FIR filter structure]
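A sketch of the 2-parallel fast FIR algorithm (9.5) follows (Python/NumPy; the function name and the block-rate convolution treatment are illustrative assumptions): only the three sub-filters $H_0X_0$, $H_1X_1$ and $(H_0+H_1)(X_0+X_1)$ are computed, and the interleaved result is checked against direct convolution.

```python
import numpy as np

def fast_fir_2parallel(h, x):
    """2-parallel fast FIR of (9.5), with z^{-2} realized as a one-block delay."""
    h0, h1 = h[0::2], h[1::2]              # even/odd sub-filters of length N/2
    x0, x1 = x[0::2], x[1::2]              # even/odd polyphase input streams
    a = np.convolve(h0, x0)                # H0*X0
    b = np.convolve(h1, x1)                # H1*X1
    c = np.convolve(h0 + h1, x0 + x1)      # (H0+H1)*(X0+X1), the shared sub-filter
    y_odd = c - a - b                      # Y1 = (H0+H1)(X0+X1) - H0X0 - H1X1
    y_even = a + np.concatenate(([0.0], b[:-1]))   # Y0 = H0X0 + z^{-2} H1X1
    y = np.empty(2 * len(a))
    y[0::2], y[1::2] = y_even, y_odd
    return y

rng = np.random.default_rng(1)
h = rng.standard_normal(8)                 # N = 8 taps
x = rng.standard_normal(64)
y_ffa = fast_fir_2parallel(h, x)
y_ref = np.convolve(h, x)
assert np.allclose(y_ffa, y_ref[:len(y_ffa)])
```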
– This 2-parallel filter requires 3 distinct sub-filters of length N/2
and 4 pre/post-processing additions. It requires 3N/2 = 1.5N
multiplications and 3(N/2-1)+4=1.5N+1 additions. [The traditional
2-parallel filter requires 2N multiplications and 2(N-1) additions]
– Example-1: when N=8 and $H = \{h_0, h_1, \cdots, h_6, h_7\}$, the 3 sub-filters are
  $H_0 = \{h_0, h_2, h_4, h_6\}$
  $H_1 = \{h_1, h_3, h_5, h_7\}$
  $H_0 + H_1 = \{h_0+h_1,\; h_2+h_3,\; h_4+h_5,\; h_6+h_7\}$
– The sub-filter $H_0 + H_1$ can be precomputed
– The 2-parallel filter can also be written in matrix form as
  $Y_2 = Q_2 \cdot H_2 \cdot P_2 \cdot X_2$   (9.6)
Q2 is a post-processing matrix which determines the manner in which the filter outputs
are combined to correctly produce the parallel outputs and P2 is a pre-processing
matrix which determines the manner in which the inputs should be combined
– (matrix form)
  $\begin{bmatrix} Y_0 \\ Y_1 \end{bmatrix} = \begin{bmatrix} 1 & 0 & z^{-2} \\ -1 & 1 & -1 \end{bmatrix} \cdot \mathrm{diag}\begin{pmatrix} H_0 \\ H_0 + H_1 \\ H_1 \end{pmatrix} \cdot \begin{bmatrix} 1 & 0 \\ 1 & 1 \\ 0 & 1 \end{bmatrix} \cdot \begin{bmatrix} X_0 \\ X_1 \end{bmatrix}$   (9.7)
– where diag(h*) represents the diagonal matrix $H_2$ whose diagonal elements h* are the sub-filters
– Note: the application of the FFA diagonalizes the original pseudo-circulant matrix H. The entries on the diagonal of $H_2$ are the sub-filters required in this parallel FIR filter
– Many different equivalent parallel FIR filter structures can be obtained. For example, this 2-parallel filter can be implemented using sub-filters $\{H_0, H_0 - H_1, H_1\}$, which may be more attractive for narrow-band low-pass filters since the sub-filter $H_0 - H_1$ requires fewer non-zero bits than $H_0 + H_1$. The parallel structure containing $H_0 + H_1$ is more attractive for narrow-band high-pass filters.
Chapter 9 14
• 3-Parallel Fast FIR Filter
– A fast 3-parallel FIR algorithm can be derived by recursively
applying a 2-parallel fast FIR algorithm and is given by
  $Y_0 = H_0X_0 - z^{-3}H_2X_2 + z^{-3}\left[(H_1+H_2)(X_1+X_2) - H_1X_1\right]$
  $Y_1 = \left[(H_0+H_1)(X_0+X_1) - H_1X_1\right] - \left[H_0X_0 - z^{-3}H_2X_2\right]$
  $Y_2 = (H_0+H_1+H_2)(X_0+X_1+X_2) - \left[(H_0+H_1)(X_0+X_1) - H_1X_1\right] - \left[(H_1+H_2)(X_1+X_2) - H_1X_1\right]$   (9.8)
– The 3-parallel FIR filter is constructed using 6 sub-filters of length N/3, namely $H_0X_0$, $H_1X_1$, $H_2X_2$, $(H_0+H_1)(X_0+X_1)$, $(H_1+H_2)(X_1+X_2)$, and $(H_0+H_1+H_2)(X_0+X_1+X_2)$
– With 3 pre-processing and 7 post-processing additions, this filter requires 2N multiplications and 2N+4 additions, which is 33% less than the traditional 3-parallel filter
– The 3-parallel filter can be expressed in matrix form as
  $Y_3 = Q_3 \cdot H_3 \cdot P_3 \cdot X_3$   (9.9)
– where
  $Y_3 = \begin{bmatrix} Y_0 \\ Y_1 \\ Y_2 \end{bmatrix}, \qquad Q_3 = \begin{bmatrix} 1 & 0 & z^{-3} & 0 \\ -1 & 1 & 0 & 0 \\ 0 & -1 & -1 & 1 \end{bmatrix} \cdot \begin{bmatrix} 1 & 0 & -z^{-3} & 0 & 0 & 0 \\ 0 & -1 & 0 & 1 & 0 & 0 \\ 0 & -1 & 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 & 0 & 1 \end{bmatrix}$
  $H_3 = \mathrm{diag}\begin{pmatrix} H_0 \\ H_1 \\ H_2 \\ H_0+H_1 \\ H_1+H_2 \\ H_0+H_1+H_2 \end{pmatrix}, \qquad P_3 = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \\ 1 & 1 & 0 \\ 0 & 1 & 1 \\ 1 & 1 & 1 \end{bmatrix}, \qquad X_3 = \begin{bmatrix} X_0 \\ X_1 \\ X_2 \end{bmatrix}$
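The sketch below (Python/NumPy, illustrative names) evaluates the 3-parallel fast FIR algorithm (9.8) with its six length-N/3 sub-filters, realizing $z^{-3}$ as a one-block delay, and checks the result against the serial filter.

```python
import numpy as np

def fast_fir_3parallel(h, x):
    """3-parallel fast FIR of (9.8) built from its 6 sub-filters."""
    h0, h1, h2 = h[0::3], h[1::3], h[2::3]
    x0, x1, x2 = x[0::3], x[1::3], x[2::3]
    a, b, c = np.convolve(h0, x0), np.convolve(h1, x1), np.convolve(h2, x2)
    d = np.convolve(h0 + h1, x0 + x1)
    e = np.convolve(h1 + h2, x1 + x2)
    f = np.convolve(h0 + h1 + h2, x0 + x1 + x2)
    delay = lambda s: np.concatenate(([0.0], s[:-1]))   # one block delay = z^{-3}
    y0 = a - delay(c) + delay(e - b)                    # Y0 of (9.8)
    y1 = (d - b) - (a - delay(c))                       # Y1 of (9.8)
    y2 = f - (d - b) - (e - b)                          # Y2 of (9.8)
    y = np.empty(3 * len(a))
    y[0::3], y[1::3], y[2::3] = y0, y1, y2
    return y

rng = np.random.default_rng(2)
h, x = rng.standard_normal(9), rng.standard_normal(60)  # tap count and input length multiples of 3
y_ref = np.convolve(h, x)
y_ffa = fast_fir_3parallel(h, x)
assert np.allclose(y_ffa, y_ref[:len(y_ffa)])
```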
– Reduced-complexity 3-parallel FIR filter structure
  [Figure: the six sub-filters H_0, H_1, H_2, H_0+H_1, H_1+H_2, H_0+H_1+H_2 with pre-/post-processing adders and two block delays D]
Parallel FIR Filters (cont’d)
Parallel Filters by Transposition
• Any parallel FIR filter structure can be used to derive another parallel
equivalent structure by transpose operation (or transposition).
Generally, the transposed architecture has the same hardware
complexity, but different finite word-length performance
• Consider the L-parallel filter in matrix form $Y = H \cdot X$ (9.4), where H is an L×L matrix. An equivalent realization of this parallel filter can be generated by taking the transpose of the H matrix and flipping the vectors X and Y:
  $Y_F = H^T \cdot X_F$   (9.10)
– where
  $X_F = \begin{bmatrix} X_{L-1} & X_{L-2} & \cdots & X_0 \end{bmatrix}^T, \qquad Y_F = \begin{bmatrix} Y_{L-1} & Y_{L-2} & \cdots & Y_0 \end{bmatrix}^T$
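The small check below (Python/NumPy) illustrates why transposition works: evaluating the pseudo-circulant H of (9.4) at an arbitrary point $z_0$, flipping its rows and columns with the exchange matrix J recovers $H^T$, so $Y_F = H^T X_F$ computes the same outputs. The scalar sub-filter values used are arbitrary test numbers, an assumption for illustration only.

```python
import numpy as np

L, z0 = 3, 0.8 - 0.6j                       # block size and a test evaluation point
rng = np.random.default_rng(3)
Hs = rng.standard_normal(L)                 # sub-filter values H_i evaluated at z0 (scalars)

# pseudo-circulant matrix of (9.4): entry (r, c) = H_{(r-c) mod L}, times z0^{-L} above the diagonal
H = np.empty((L, L), dtype=complex)
for r in range(L):
    for c in range(L):
        H[r, c] = Hs[(r - c) % L] * (z0 ** (-L) if c > r else 1.0)

J = np.flipud(np.eye(L))                    # exchange (flip) matrix
assert np.allclose(J @ H @ J, H.T)          # hence H^T with flipped X and Y is equivalent
```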
• Examples:
– The 2-parallel FIR filter in (9.1) can be reformulated by using transposition as follows:
  $\begin{bmatrix} Y_1 \\ Y_0 \end{bmatrix} = \begin{bmatrix} H_0 & H_1 \\ z^{-2}H_1 & H_0 \end{bmatrix} \cdot \begin{bmatrix} X_1 \\ X_0 \end{bmatrix}$
– Transposition of the 2-parallel fast filter in (9.6), $Y_2 = Q_2 \cdot H_2 \cdot P_2 \cdot X_2$, leads to another equivalent structure:
  $Y_{2F} = (Q_2 \cdot H_2 \cdot P_2)^T \cdot X_{2F} = P_2^T \cdot H_2^T \cdot Q_2^T \cdot X_{2F}$
  $\begin{bmatrix} Y_1 \\ Y_0 \end{bmatrix} = \begin{bmatrix} 1 & 1 & 0 \\ 0 & 1 & 1 \end{bmatrix} \cdot \mathrm{diag}\begin{pmatrix} H_0 \\ H_0+H_1 \\ H_1 \end{pmatrix} \cdot \begin{bmatrix} 1 & -1 \\ 0 & 1 \\ z^{-2} & -1 \end{bmatrix} \cdot \begin{bmatrix} X_1 \\ X_0 \end{bmatrix}$   (9.11)
– The reduced-complexity 2-parallel FIR filter structure obtained by transposition is shown on the next page
• Signal-flow graph of the 2-parallel FIR filter (Fig. (a)) and its transposed signal-flow graph (Fig. (b))
  [Figures (a), (b): flow graphs built from the sub-filters H_0, H_1, H_0+H_1 and the delay z^{-2}]
• Fig. (c): Block diagram of the transposed reduced-complexity 2-parallel FIR filter
  [Figure (c): structure with sub-filters H_0, H_0+H_1, H_1 and one delay D]
Parallel FIR Filters (cont’d)
Parallel Filter Algorithms from Linear Convolutions
• Any L×L convolution algorithm can be used to derive an L-parallel fast filter structure
• Example: the transpose of the matrix in a 2×2 linear convolution algorithm (9.12) can be used to obtain the 2-parallel filter (9.13):
  $\begin{bmatrix} s_0 \\ s_1 \\ s_2 \end{bmatrix} = \begin{bmatrix} h_0 & 0 \\ h_1 & h_0 \\ 0 & h_1 \end{bmatrix} \cdot \begin{bmatrix} x_0 \\ x_1 \end{bmatrix}$   (9.12)
  $\begin{bmatrix} Y_1 \\ Y_0 \end{bmatrix} = \begin{bmatrix} H_0 & H_1 & 0 \\ 0 & H_0 & H_1 \end{bmatrix} \cdot \begin{bmatrix} X_1 \\ X_0 \\ z^{-2}X_1 \end{bmatrix}$   (9.13)
• Example: To generate a 2-parallel filter using 2×2 fast convolution, consider the following optimal 2×2 linear convolution $s = C \cdot H \cdot A \cdot x$:
  $\begin{bmatrix} s_0 \\ s_1 \\ s_2 \end{bmatrix} = \begin{bmatrix} 1 & 0 & 0 \\ -1 & 1 & -1 \\ 0 & 0 & 1 \end{bmatrix} \cdot \mathrm{diag}\begin{pmatrix} h_0 \\ h_0+h_1 \\ h_1 \end{pmatrix} \cdot \begin{bmatrix} 1 & 0 \\ 1 & 1 \\ 0 & 1 \end{bmatrix} \cdot \begin{bmatrix} x_0 \\ x_1 \end{bmatrix}$
– Note: Flipping the samples in the sequences {s}, {h}, and {x} preserves the convolution formulation (i.e., the same C and A matrices can be used with the flipped sequences)
– Taking the transpose of this algorithm, we can get the matrix form of the reduced-complexity 2-parallel filtering structure:
  $Y = (C \cdot H \cdot A)^T \cdot X = Q \cdot H \cdot P \cdot X$   (9.14)
– The matrix form of the reduced-complexity 2-parallel filtering structure:
  $\begin{bmatrix} Y_1 \\ Y_0 \end{bmatrix} = \begin{bmatrix} 1 & 1 & 0 \\ 0 & 1 & 1 \end{bmatrix} \cdot \mathrm{diag}\begin{pmatrix} H_0 \\ H_0+H_1 \\ H_1 \end{pmatrix} \cdot \begin{bmatrix} 1 & -1 & 0 \\ 0 & 1 & 0 \\ 0 & -1 & 1 \end{bmatrix} \cdot \begin{bmatrix} X_1 \\ X_0 \\ z^{-2}X_1 \end{bmatrix}$   (9.15)
– The 2-parallel architecture resulting from this matrix form is shown as follows
  [Figure: transposed reduced-complexity 2-parallel FIR filter with sub-filters H_0, H_0+H_1, H_1 and one delay D]
– Conclusion: this method leads to the same architecture that was obtained using the direct transposition of the 2-parallel FFA
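The following sample-level sketch (Python/NumPy, using the pre-/post-processing matrices as written in (9.15) above; names are illustrative) passes the combined input streams through the sub-filters $H_0$, $H_0+H_1$, $H_1$ and recombines them, reproducing the serial filter output on a safe prefix.

```python
import numpy as np

rng = np.random.default_rng(4)
h, x = rng.standard_normal(8), rng.standard_normal(64)
h0, h1 = h[0::2], h[1::2]
x0, x1 = x[0::2], x[1::2]
delay = lambda s: np.concatenate(([0.0], s[:-1]))     # one block delay = z^{-2}

s1, s2, s3 = x1 - x0, x0, delay(x1) - x0              # pre-processing rows of (9.15)
u1 = np.convolve(h0, s1)                              # sub-filter H0
u2 = np.convolve(h0 + h1, s2)                         # sub-filter H0+H1
u3 = np.convolve(h1, s3)                              # sub-filter H1
y_odd, y_even = u1 + u2, u2 + u3                      # post-processing rows of (9.15)

y = np.empty(2 * len(y_even))
y[0::2], y[1::2] = y_even, y_odd
y_ref = np.convolve(h, x)
assert np.allclose(y[:len(x)], y_ref[:len(x)])        # exact on the first len(x) samples
```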
Parallel FIR Filters (cont’d)
Fast Parallel FIR Algorithms for Large Block Sizes
• Parallel FIR filters with long block sizes can be designed by cascading
smaller length fast parallel filters
• Example: an m-parallel FFA can be cascaded with an n-parallel FFA to produce an (m×n)-parallel filtering structure. The set of FIR filters resulting from the application of the m-parallel FFA can be further decomposed, one at a time, by the application of the n-parallel FFA. The resulting set of filters will be of length N/(m×n).
• When cascading the FFAs, it is important to keep track of both the number of multiplications and the number of additions required for the filtering structure
– The number of required multiplications for an L-parallel filter with $L = L_1 L_2 \cdots L_r$ is given by:
  $M = \left(\prod_{i=1}^{r} M_i\right) \cdot \frac{N}{\prod_{i=1}^{r} L_i}$   (9.16)
  • where r is the number of levels of FFAs used, $L_i$ is the block size of the FFA at level-i, $M_i$ is the number of filters that result from the application of the i-th FFA, and N is the length of the filter
– The number of required additions can be calculated as follows:
  $A = A_1 \prod_{i=2}^{r} L_i + \sum_{i=2}^{r} \left[ A_i \left( \prod_{j=i+1}^{r} L_j \right) \left( \prod_{k=1}^{i-1} M_k \right) \right] + \left( \prod_{i=1}^{r} M_i \right) \left( \frac{N}{\prod_{i=1}^{r} L_i} - 1 \right)$   (9.17)
  • where $A_i$ is the number of pre/post-processing adders required by the i-th FFA
– For example: consider the case of cascading two 2-parallel reduced-complexity FFAs. The resulting 4-parallel filtering structure requires a total of 9N/4 multiplications and 20 + 9(N/4 - 1) additions, compared with the traditional 4-parallel filter, which requires 4N multiplications. This results in a 44% hardware (area) savings
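A small helper that evaluates the operation counts (9.16) and (9.17) is sketched below (Python, illustrative names). The per-level values (2-parallel FFA: M = 3, A = 4; 3-parallel FFA: M = 6, A = 10) are those of the reduced-complexity FFAs derived earlier, and the printed results match the cascaded 4-parallel claim above and Example 9.2.1, which follows.

```python
from math import prod

def ffa_mults_adds(N, levels):
    """levels = [(L_1, M_1, A_1), ..., (L_r, M_r, A_r)]; returns (M, A) of (9.16)-(9.17)."""
    Ls = [l for l, _, _ in levels]
    Ms = [m for _, m, _ in levels]
    As = [a for _, _, a in levels]
    r = len(levels)
    mults = prod(Ms) * N // prod(Ls)                                  # (9.16)
    adds = As[0] * prod(Ls[1:])                                       # first-level adders
    adds += sum(As[i] * prod(Ls[i + 1:]) * prod(Ms[:i]) for i in range(1, r))
    adds += prod(Ms) * (N // prod(Ls) - 1)                            # adders inside the sub-filters
    return mults, adds

# cascading two 2-parallel FFAs (4-parallel): 9N/4 multiplications, 20 + 9(N/4 - 1) additions
print(ffa_mults_adds(32, [(2, 3, 4), (2, 3, 4)]))   # -> (72, 83) = (9*32/4, 20 + 9*7)
# Example 9.2.1: 24-tap filter with block size L = 6
print(ffa_mults_adds(24, [(2, 3, 4), (3, 6, 10)]))  # -> (72, 96)
print(ffa_mults_adds(24, [(3, 6, 10), (2, 3, 4)]))  # -> (72, 98)
```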
• Example: (Example 9.2.1, p.268) Calculating the hardware complexity
– Calculate the number of multiplications and additions required to implement a 24-tap filter with a block size of L=6 for both the cases $\{L_1=2, L_2=3\}$ and $\{L_1=3, L_2=2\}$:
  • For the case $\{L_1=2, L_2=3\}$: $M_1=3,\ A_1=4,\ M_2=6,\ A_2=10$,
  $M = \frac{24}{2 \times 3}\,(3 \times 6) = 72, \qquad A = (4 \times 3) + (10 \times 3) + (3 \times 6)\left[\frac{24}{2 \times 3} - 1\right] = 96$
  • For the case $\{L_1=3, L_2=2\}$: $M_1=6,\ A_1=10,\ M_2=3,\ A_2=4$,
  $M = \frac{24}{3 \times 2}\,(6 \times 3) = 72, \qquad A = (10 \times 2) + (4 \times 6) + (3 \times 6)\left[\frac{24}{3 \times 2} - 1\right] = 98$
• How are the FFAs cascaded?
– Consider the design of a parallel FIR filter with a block size of 4; using (9.3), we have
  $Y = Y_0 + z^{-1}Y_1 + z^{-2}Y_2 + z^{-3}Y_3 = \left(X_0 + z^{-1}X_1 + z^{-2}X_2 + z^{-3}X_3\right) \cdot \left(H_0 + z^{-1}H_1 + z^{-2}H_2 + z^{-3}H_3\right)$   (9.18)
– The reduced-complexity 4-parallel filtering structure is obtained by first applying the 2-parallel FFA to (9.18), then applying the FFA a second time to each of the filtering operations that result from the first application of the FFA
– From (9.18), we have (see the next page):
– (cont'd)
  $Y = \left(X'_0 + z^{-1}X'_1\right) \cdot \left(H'_0 + z^{-1}H'_1\right)$
  • where
  $X'_0 = X_0 + z^{-2}X_2, \quad X'_1 = X_1 + z^{-2}X_3, \qquad H'_0 = H_0 + z^{-2}H_2, \quad H'_1 = H_1 + z^{-2}H_3$
– Application-1
  $Y = X'_0H'_0 + z^{-1}\left[(X'_0+X'_1)(H'_0+H'_1) - X'_0H'_0 - X'_1H'_1\right] + z^{-2}X'_1H'_1$   (9.19)
  • The 2-parallel FFA is then applied a second time to each of the filtering operations of (9.19): $\left\{X'_0H'_0,\; X'_1H'_1,\; (X'_0+X'_1)(H'_0+H'_1)\right\}$
– Application-2
  • Filtering operation $\{X'_0H'_0\}$:
  $X'_0H'_0 = \left(X_0 + z^{-2}X_2\right)\left(H_0 + z^{-2}H_2\right) = X_0H_0 + z^{-2}\left[(X_0+X_2)(H_0+H_2) - X_0H_0 - X_2H_2\right] + z^{-4}X_2H_2$
  • Filtering operation $\{X'_1H'_1\}$:
  $X'_1H'_1 = \left(X_1 + z^{-2}X_3\right)\left(H_1 + z^{-2}H_3\right) = X_1H_1 + z^{-2}\left[(X_1+X_3)(H_1+H_3) - X_1H_1 - X_3H_3\right] + z^{-4}X_3H_3$
  • Filtering operation $\{(X'_0+X'_1)(H'_0+H'_1)\}$:
  $(X'_0+X'_1)(H'_0+H'_1) = \left[(X_0+X_1) + z^{-2}(X_2+X_3)\right] \cdot \left[(H_0+H_1) + z^{-2}(H_2+H_3)\right]$
  $\quad = (X_0+X_1)(H_0+H_1) + z^{-2}\left[(X_0+X_1+X_2+X_3)(H_0+H_1+H_2+H_3) - (X_0+X_1)(H_0+H_1) - (X_2+X_3)(H_2+H_3)\right] + z^{-4}(X_2+X_3)(H_2+H_3)$
– The second application of the 2-parallel FFA leads to the 4-parallel filtering structure (shown on the next page), which requires 9 filtering operations of length N/4
Reduced-complexity 4-parallel FIR filter (cascaded 2 by 2)
Discrete Cosine Transform and
Inverse DCT
• The discrete cosine transform (DCT) is a frequency transform used in
still or moving video compression. We discuss the fast
implementations of DCT based on algorithm-architecture
transformations and the decimation-in-frequency approach
• Denote the DCT of the data sequence x(n), n = 0, 1, ..., N-1, by X(k), k = 0, 1, ..., N-1. The DCT and inverse DCT (IDCT) are described by the following equations:
– DCT:
  $X(k) = e(k) \sum_{n=0}^{N-1} x(n) \cos\left[\frac{(2n+1)k\pi}{2N}\right], \quad k = 0, 1, \cdots, N-1$   (9.20)
– IDCT:
  $x(n) = \frac{2}{N} \sum_{k=0}^{N-1} e(k)\, X(k) \cos\left[\frac{(2n+1)k\pi}{2N}\right], \quad n = 0, 1, \cdots, N-1$   (9.21)
• where
  $e(k) = \begin{cases} \frac{1}{\sqrt{2}}, & k = 0 \\ 1, & \text{otherwise} \end{cases}$
• Note: DCT is an orthogonal transform, i.e., the transformation matrix for the IDCT is a scaled version of the transpose of that for the DCT and vice versa. Therefore, the DCT architecture can be obtained by "transposing" the IDCT, i.e., reversing the direction of the arrows in the flow graph of the IDCT, and the IDCT can be obtained by "transposing" the DCT
• Direct implementation of the DCT or IDCT requires N(N-1) multiplication operations, i.e., O(N²), which is hardware expensive
• Strength reduction can reduce the multiplication complexity of an 8-point DCT from 56 to 13
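A direct matrix implementation of (9.20)-(9.21) is sketched below (Python/NumPy, illustrative names). It is the O(N²) reference that the fast structures improve upon, and the final assertion exercises the orthogonality noted above: the IDCT matrix is a scaled transpose of the DCT matrix.

```python
import numpy as np

def dct_matrix(N):
    """C[k, n] = e(k) * cos((2n+1)k*pi/(2N)), so that X = C @ x per (9.20)."""
    n = np.arange(N)
    k = n[:, None]
    C = np.cos((2 * n + 1) * k * np.pi / (2 * N))
    C[0, :] *= 1 / np.sqrt(2)              # e(0) = 1/sqrt(2), e(k) = 1 otherwise
    return C

N = 8
C = dct_matrix(N)
x = np.random.default_rng(5).standard_normal(N)
X = C @ x                                  # DCT (9.20)
x_rec = (2.0 / N) * (C.T @ X)              # IDCT (9.21): scaled transpose of the DCT matrix
assert np.allclose(x_rec, x)
```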
• Example (Example 9.3.1, p.277) Consider the 8-point DCT:
  $X(k) = e(k) \sum_{n=0}^{7} x(n) \cos\left[\frac{(2n+1)k\pi}{16}\right], \quad k = 0, 1, \cdots, 7, \qquad \text{where } e(k) = \begin{cases} \frac{1}{\sqrt{2}}, & k = 0 \\ 1, & \text{otherwise} \end{cases}$
– It can be written in matrix form as follows (where $c_i = \cos(i\pi/16)$):
  $\begin{bmatrix} X(0) \\ X(1) \\ X(2) \\ X(3) \\ X(4) \\ X(5) \\ X(6) \\ X(7) \end{bmatrix} = \begin{bmatrix} c_4 & c_4 & c_4 & c_4 & c_4 & c_4 & c_4 & c_4 \\ c_1 & c_3 & c_5 & c_7 & c_9 & c_{11} & c_{13} & c_{15} \\ c_2 & c_6 & c_{10} & c_{14} & c_{18} & c_{22} & c_{26} & c_{30} \\ c_3 & c_9 & c_{15} & c_{21} & c_{27} & c_1 & c_7 & c_{13} \\ c_4 & c_{12} & c_{20} & c_{28} & c_4 & c_{12} & c_{20} & c_{28} \\ c_5 & c_{15} & c_{25} & c_3 & c_{13} & c_{23} & c_1 & c_{11} \\ c_6 & c_{18} & c_{30} & c_{10} & c_{22} & c_2 & c_{14} & c_{26} \\ c_7 & c_{21} & c_3 & c_{17} & c_{31} & c_{13} & c_{27} & c_9 \end{bmatrix} \cdot \begin{bmatrix} x(0) \\ x(1) \\ x(2) \\ x(3) \\ x(4) \\ x(5) \\ x(6) \\ x(7) \end{bmatrix}$
– The algorithm-architecture mapping for the 8-point DCT can be carried out in three steps
• First Step: Using trigonometric properties, the 8-point DCT can be rewritten as
  $\begin{bmatrix} X(0) \\ X(1) \\ X(2) \\ X(3) \\ X(4) \\ X(5) \\ X(6) \\ X(7) \end{bmatrix} = \begin{bmatrix} c_4 & c_4 & c_4 & c_4 & c_4 & c_4 & c_4 & c_4 \\ c_1 & c_3 & c_5 & c_7 & -c_7 & -c_5 & -c_3 & -c_1 \\ c_2 & c_6 & -c_6 & -c_2 & -c_2 & -c_6 & c_6 & c_2 \\ c_3 & -c_7 & -c_1 & -c_5 & c_5 & c_1 & c_7 & -c_3 \\ c_4 & -c_4 & -c_4 & c_4 & c_4 & -c_4 & -c_4 & c_4 \\ c_5 & -c_1 & c_7 & c_3 & -c_3 & -c_7 & c_1 & -c_5 \\ c_6 & -c_2 & c_2 & -c_6 & -c_6 & c_2 & -c_2 & c_6 \\ c_7 & -c_5 & c_3 & -c_1 & c_1 & -c_3 & c_5 & -c_7 \end{bmatrix} \cdot \begin{bmatrix} x(0) \\ x(1) \\ x(2) \\ x(3) \\ x(4) \\ x(5) \\ x(6) \\ x(7) \end{bmatrix}$   (9.22)
– (continued)
  $X(1) = c_1M_0 + c_7M_1 + c_3M_2 + c_5M_3, \qquad X(2) = c_2M_{10} + c_6M_{11}$
  $X(7) = c_7M_0 - c_1M_1 - c_5M_2 + c_3M_3, \qquad X(6) = c_6M_{10} - c_2M_{11}$
  $X(3) = c_3M_0 - c_5M_1 - c_7M_2 - c_1M_3, \qquad X(4) = c_4 \cdot M_{100}$
  $X(5) = c_5M_0 + c_3M_1 - c_1M_2 + c_7M_3, \qquad X(0) = c_4 \cdot P_{100}$   (9.23)
– where
  $M_0 = x(0) - x(7), \quad M_1 = x(3) - x(4), \quad M_2 = x(1) - x(6), \quad M_3 = x(2) - x(5)$
  $P_0 = x(0) + x(7), \quad P_1 = x(3) + x(4), \quad P_2 = x(1) + x(6), \quad P_3 = x(2) + x(5)$
  $M_{10} = P_0 - P_1, \quad M_{11} = P_2 - P_3, \quad P_{10} = P_0 + P_1, \quad P_{11} = P_2 + P_3$
  $M_{100} = P_{10} - P_{11}, \quad P_{100} = P_{10} + P_{11}$   (9.24)
– The following figure (on the next page) shows the DCT architecture according to (9.23) and (9.24), which requires 22 multiplications
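The sketch below (Python/NumPy, illustrative names) implements the first-step strength-reduced 8-point DCT of (9.23)-(9.24), which uses 22 multiplications, and checks it against the direct matrix of Example 9.3.1.

```python
import numpy as np

c = np.cos(np.arange(8) * np.pi / 16)            # c_i = cos(i*pi/16)

def dct8_step1(x):
    """8-point DCT via the pre-additions (9.24) and products of (9.23): 22 multiplications."""
    M0, M1, M2, M3 = x[0] - x[7], x[3] - x[4], x[1] - x[6], x[2] - x[5]
    P0, P1, P2, P3 = x[0] + x[7], x[3] + x[4], x[1] + x[6], x[2] + x[5]
    M10, M11, P10, P11 = P0 - P1, P2 - P3, P0 + P1, P2 + P3
    M100, P100 = P10 - P11, P10 + P11
    X = np.empty(8)
    X[0] = c[4] * P100
    X[4] = c[4] * M100
    X[2] = c[2] * M10 + c[6] * M11
    X[6] = c[6] * M10 - c[2] * M11
    X[1] = c[1] * M0 + c[7] * M1 + c[3] * M2 + c[5] * M3
    X[7] = c[7] * M0 - c[1] * M1 - c[5] * M2 + c[3] * M3
    X[3] = c[3] * M0 - c[5] * M1 - c[7] * M2 - c[1] * M3
    X[5] = c[5] * M0 + c[3] * M1 - c[1] * M2 + c[7] * M3
    return X

# reference: X(k) = e(k) * sum_n x(n) cos((2n+1)k*pi/16)
n = np.arange(8)
C = np.cos((2 * n + 1) * n[:, None] * np.pi / 16)
C[0] *= 1 / np.sqrt(2)
x = np.random.default_rng(6).standard_normal(8)
assert np.allclose(dct8_step1(x), C @ x)
```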
Figure: The implementation of 8-point DCT structure
in the first step (also see Fig. 9.10, p.279)
• Second step: the DCT structure (see Fig. 9.10, p.279) is grouped into different functional units represented by blocks, and then the whole DCT structure is transformed into a block diagram
– Two major blocks are defined as shown in the following figure
– The transformed block diagram for an 8-point DCT is shown on the next page (also see Fig. 9.12 on p.280 of the textbook)
  [Figure: two basic blocks. The X± block takes inputs x(0), x(1) and produces x(0)+x(1) and x(0)-x(1); the XC± block with coefficients a, b produces ax(0)+bx(1) and bx(0)-ax(1)]
Figure: The implementation of 8-point DCT structure
in the second step (also see Fig. 9.12, p.280)
• Third step: Reduced-complexity implementations of various blocks
are exploited (see Fig. 9.13, p.281)
– The block XC± can be realized using 3 multiplications and 3 additions instead of 4 multiplications and 2 additions, as shown below
  [Figure: 3-multiplier realization of XC± computing ax+by and bx-ay with multiplier coefficients (a-b), b, (a+b)]
– Define the XC± block with $\{a = \sin\theta,\; b = \cos\theta\}$ and reversed outputs as a rotator block $rot\,\theta$ that performs the following computation:
  $\begin{bmatrix} x' \\ y' \end{bmatrix} = \begin{bmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{bmatrix} \cdot \begin{bmatrix} x \\ y \end{bmatrix}$
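A minimal numerical check of the 3-multiplication realization follows (Python/NumPy): ax+by and bx-ay are obtained from the shared product b(x+y) plus one multiplication each by (a-b) and (a+b), i.e., 3 multiplications and 3 additions. The exact wiring is an assumption consistent with the multiplier coefficients shown in the figure.

```python
import numpy as np

rng = np.random.default_rng(7)
a, b, x, y = rng.standard_normal(4)

t = b * (x + y)                 # shared product
u = t + (a - b) * x             # = a*x + b*y
v = t - (a + b) * y             # = b*x - a*y
assert np.isclose(u, a * x + b * y)
assert np.isclose(v, b * x - a * y)
```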
– Note: The angles of cascaded rotators can simply be added, i.e., $rot\,\theta_1$ followed by $rot\,\theta_2$ is equivalent to $rot(\theta_1 + \theta_2)$
  [Figure: the XC± block with a = sinθ, b = cosθ redrawn as the rotator rot θ; cascaded rotators rot θ1 and rot θ2 merge into rot(θ1+θ2)]
– Note: Based on the fact that a rotator with $\{\theta = \pi/4\}$ is just like the block X±, we modify it into the following structure: the X± block followed by scaling both outputs by $c_4 = \cos(\pi/4)$
– From the three steps, we obtain the final structure where only 13
multiplications are required (also see Fig. 9.14, p.282)
  [Figure: final 8-point DCT structure composed of X± blocks, the rotators rot(3π/16), rot(π/16), and rot(3π/8), and four c_4 multipliers, with inputs x(0) to x(7) and outputs X(0) to X(7)]
Discrete Cosine Transform and Inverse DCT (cont'd)
Decimation-in-Frequency Fast DCT for 2^M-Point DCT
• The fast 2^M-point DCT/IDCT structures can be derived by the decimation-in-frequency approach, which is commonly used to derive the FFT structure for computing the discrete Fourier transform (DFT). By power-of-2 decomposition, this algorithm reduces the number of multiplications to about $(N/2)\log_2 N$
• We only derive the fast IDCT computation (the fast DCT structure can be obtained from the IDCT by "transposition" according to their computation symmetry). For simplicity, the 2/N scaling factor in (9.21) is ignored in the derivation.
– Define $\hat{X}(k) = e(k) \cdot X(k)$ and decompose x(n) into even and odd indexes of k as follows
  $x(n) = \sum_{k=0}^{N-1} \hat{X}(k) \cos\left[\frac{(2n+1)k\pi}{2N}\right]$
  $\quad = \sum_{k=0}^{N/2-1} \hat{X}(2k) \cos\left[\frac{(2n+1)2k\pi}{2N}\right] + \sum_{k=0}^{N/2-1} \hat{X}(2k+1) \cos\left[\frac{(2n+1)(2k+1)\pi}{2N}\right]$
  $\quad = \sum_{k=0}^{N/2-1} \hat{X}(2k) \cos\left[\frac{(2n+1)k\pi}{N}\right] + \frac{1}{2\cos\left[\frac{(2n+1)\pi}{2N}\right]} \cdot \sum_{k=0}^{N/2-1} 2\hat{X}(2k+1) \cos\left[\frac{(2n+1)(2k+1)\pi}{2N}\right] \cos\left[\frac{(2n+1)\pi}{2N}\right]$
– Notice
  $2\cos\left[\frac{(2n+1)(2k+1)\pi}{2N}\right] \cdot \cos\left[\frac{(2n+1)\pi}{2N}\right] = \cos\left[\frac{(2n+1)(k+1)\pi}{N}\right] + \cos\left[\frac{(2n+1)k\pi}{N}\right]$
– Therefore (since $\cos\left[\frac{(2n+1)((N/2-1)+1)\pi}{N}\right] = \cos\left[\frac{(2n+1)\pi}{2}\right] = 0$),
  $\sum_{k=0}^{N/2-1} 2\hat{X}(2k+1) \cos\left[\frac{(2n+1)(2k+1)\pi}{2N}\right] \cos\left[\frac{(2n+1)\pi}{2N}\right] = \sum_{k=0}^{N/2-1} \hat{X}(2k+1) \cos\left[\frac{(2n+1)(k+1)\pi}{N}\right] + \sum_{k=0}^{N/2-1} \hat{X}(2k+1) \cos\left[\frac{(2n+1)k\pi}{N}\right]$
  $\quad = \sum_{k=0}^{N/2-2} \hat{X}(2k+1) \cos\left[\frac{(2n+1)(k+1)\pi}{N}\right] + \sum_{k=0}^{N/2-1} \hat{X}(2k+1) \cos\left[\frac{(2n+1)k\pi}{N}\right]$
– Substituting k' = k+1 into the first term, we obtain
  $\sum_{k=0}^{N/2-2} \hat{X}(2k+1) \cos\left[\frac{(2n+1)(k+1)\pi}{N}\right] = \sum_{k'=1}^{N/2-1} \hat{X}(2k'-1) \cos\left[\frac{(2n+1)k'\pi}{N}\right] = \sum_{k'=0}^{N/2-1} \hat{X}(2k'-1) \cos\left[\frac{(2n+1)k'\pi}{N}\right]$
  • where $\hat{X}(-1) = 0$
– Then, the IDCT can be rewritten as
  $x(n) = \sum_{k=0}^{N/2-1} \hat{X}(2k) \cos\left[\frac{(2n+1)k\pi}{2(N/2)}\right] + \frac{1}{2\cos\left[\frac{(2n+1)\pi}{2N}\right]} \cdot \sum_{k=0}^{N/2-1} \left[\hat{X}(2k+1) + \hat{X}(2k-1)\right] \cos\left[\frac{(2n+1)k\pi}{2(N/2)}\right]$
– Define
  $G(k) \equiv \hat{X}(2k), \qquad H(k) \equiv \hat{X}(2k+1) + \hat{X}(2k-1), \qquad k = 0, 1, \cdots, N/2 - 1$   (9.25)
– and
  $g(n) \equiv \sum_{k=0}^{N/2-1} \hat{X}(2k) \cos\left[\frac{(2n+1)k\pi}{2(N/2)}\right], \qquad h(n) \equiv \sum_{k=0}^{N/2-1} \left[\hat{X}(2k+1) + \hat{X}(2k-1)\right] \cos\left[\frac{(2n+1)k\pi}{2(N/2)}\right], \qquad n = 0, 1, \cdots, N/2 - 1$   (9.26)
– Clearly, G(k) & H(k) are the DCTs of g(n) & h(n), respectively.
– Since
  $\cos\left[\frac{(2(N-1-n)+1)k\pi}{N}\right] = \cos\left[\frac{(2n+1)k\pi}{N}\right], \qquad \cos\left[\frac{(2(N-1-n)+1)\pi}{2N}\right] = -\cos\left[\frac{(2n+1)\pi}{2N}\right]$
– Finally, we can get
  $x(n) = g(n) + \frac{1}{2\cos\left[\frac{(2n+1)\pi}{2N}\right]}\, h(n), \qquad x(N-1-n) = g(n) - \frac{1}{2\cos\left[\frac{(2n+1)\pi}{2N}\right]}\, h(n), \qquad n = 0, 1, \cdots, N/2 - 1$   (9.27)
– Therefore, the N-point IDCT in (9.21) has been expressed in terms of two N/2-point IDCTs in (9.26). By repeating this process, the IDCT can be decomposed further until it can be expressed in terms of 2-point IDCTs. (The DCT algorithm can also be decomposed similarly. Alternatively, it can be obtained by transposing the IDCT.)
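A recursive sketch of the decimation-in-frequency fast IDCT based on (9.25)-(9.27) follows (Python/NumPy, illustrative function names), with the 2/N factor of (9.21) restored at the end and a check against the direct IDCT for N = 8.

```python
import numpy as np

def idct_hat(Xhat):
    """Evaluate x(n) = sum_k Xhat(k) cos((2n+1)k*pi/(2N)) by the DIF recursion."""
    N = len(Xhat)
    if N == 1:
        return Xhat.copy()
    G = Xhat[0::2]                                   # G(k) = Xhat(2k)                  (9.25)
    H = Xhat[1::2].copy()
    H[1:] += Xhat[1::2][:-1]                         # H(k) = Xhat(2k+1)+Xhat(2k-1), Xhat(-1)=0
    g, h = idct_hat(G), idct_hat(H)                  # two N/2-point IDCTs              (9.26)
    n = np.arange(N // 2)
    t = h / (2.0 * np.cos((2 * n + 1) * np.pi / (2 * N)))
    x = np.empty(N)
    x[: N // 2] = g + t                              # x(n)                             (9.27)
    x[N - 1 : N // 2 - 1 : -1] = g - t               # x(N-1-n)
    return x

def fast_idct(X):
    Xhat = X.astype(float)
    Xhat[0] /= np.sqrt(2)                            # e(0) = 1/sqrt(2)
    return (2.0 / len(X)) * idct_hat(Xhat)

# check against the direct IDCT of (9.21) for N = 8
N = 8
k = np.arange(N)
C = np.cos((2 * k[:, None] + 1) * k * np.pi / (2 * N))   # C[n, k] = cos((2n+1)k*pi/(2N))
e = np.where(k == 0, 1 / np.sqrt(2), 1.0)
X = np.random.default_rng(8).standard_normal(N)
x_direct = (2.0 / N) * (C @ (e * X))
assert np.allclose(fast_idct(X), x_direct)
```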
• Example (see Example 9.3.2, p.284) Construct the 2-point IDCT butterfly
architecture.
– The 2-point IDCT can be computed as
  $x(0) = \hat{X}(0) + \hat{X}(1)\cos(\pi/4), \qquad x(1) = \hat{X}(0) - \hat{X}(1)\cos(\pi/4)$
– The 2-point IDCT can be computed using the following butterfly architecture
  [Figure: 2-point IDCT butterfly with inputs X̂(0), X̂(1), a C4 = cos(π/4) multiplier, and a -1 branch producing x(0) and x(1)]
• Example (Example 9.3.3, p.284) Construct the 8-point fast DCT
architecture using 2-point IDCT butterfly architecture.
– With N=8, the 8-point fast IDCT algorithm can be rewritten as:
  $G(k) \equiv \hat{X}(2k), \qquad H(k) \equiv \hat{X}(2k+1) + \hat{X}(2k-1), \qquad k = 0, 1, 2, 3$
– and
  $g(n) = \sum_{k=0}^{3} G(k) \cos\left[\frac{(2n+1)k\pi}{8}\right], \qquad h(n) = \sum_{k=0}^{3} H(k) \cos\left[\frac{(2n+1)k\pi}{8}\right], \qquad n = 0, 1, 2, 3$
  $x(n) = g(n) + \frac{1}{2\cos\left[\frac{(2n+1)\pi}{16}\right]}\, h(n), \qquad x(7-n) = g(n) - \frac{1}{2\cos\left[\frac{(2n+1)\pi}{16}\right]}\, h(n), \qquad n = 0, 1, 2, 3$
– The 8-point fast IDCT is shown below (also see Fig. 9.16, p.285), where only 13 multiplications are needed. This structure can be transposed to get the fast 8-point DCT architecture shown on the next page (also see Fig. 9.17, p.286). (Note: for N=8, $C_4 = \frac{1}{2\cos(4\pi/16)} = \cos(\pi/4)$ in both figures)
  [Figure: 8-point fast IDCT flow graph; the inputs X̂(0), X̂(4), X̂(2), X̂(6), X̂(1), X̂(5), X̂(3), X̂(7) are combined into G(0)-G(3) and H(0)-H(3) and processed by 2-point butterflies with multipliers C4, C2, C6, C1, C3, C5, C7 and -1 branches]
Fast 8-point DCT Architecture
  [Figure: fast 8-point DCT architecture obtained by transposing the fast IDCT flow graph, using the multipliers C1, C3, C5, C7, C2, C6, C4 and -1 branches]