EEG Signal Compression using Generalized Regression Neural Network Estimator Model

N. Sriraam†, C. Eswaran†, R. Kannan†

†Center for Multimedia Computing,

Faculty of Information Technology, Multimedia University,

63100 Cyberjaya, Malaysia

Abstract

A general regression neural network (GRNN) is essentially a function approximator. This paper investigates this characteristic for two-stage lossless compression of electroencephalogram (EEG) signals. The first stage consists of a GRNN used as a predictor-estimator model, which reduces the amplitude range of the input data and thereby generates a residue sequence that is approximately white Gaussian. The second stage uses a lossless encoding scheme, such as arithmetic coding, to further compress the residue sequence. Two different training schemes, namely single block and block adaptive methods, are used to train the network in the first stage. The performance of this network model, when used along with an arithmetic encoder in a two-stage compression scheme, is evaluated in terms of compression ratios. The results show that this network model performs better than other predictor-based neural network models such as multilayer perceptrons and recurrent networks, as well as classical predictors such as FIR and NLMS.

Key-words: Neural Networks, EEG signal, GRNN, two-stage compression, lossless compression, arithmetic encoder.

1. Introduction

Medical signal processing is a fast-growing field of research that is producing increasingly sophisticated applications in today's high-tech medicine. In the field of neurology, the common signals of interest are the electrical potentials caused by the firings of millions of neurons in the human brain during various mental activities. The recordings of these signals are obtained by electrodes placed on the head and are called electroencephalogram (EEG) signals. Recently, recording high-resolution EEGs from large electrode arrays has become a clear trend in both brain research and clinical diagnosis. However, EEG data are currently stored without compression, causing problems in database management and internet transmission. Lossless compression techniques assume importance in EEG signal compression due to the necessity for exact recovery of the data for diagnostic purposes. A well-known lossless compression scheme is the two-stage predictor-encoder combination shown in Fig. 1 [1]-[5].

Fig. 1: The two-stage predictor-encoder lossless compression scheme

Neural networks have been used for the first stage due to their characteristics such as massive parallelism, robustness, adaptive learning, self-organization, fault tolerance and generalization, which enhance the performance of a predictor. It has been shown that this scheme can be implemented using neural network predictors such as single-layer and multilayer feedforward networks and recurrent networks [4], [5], [6], [7]. EEG signal compression using a GRNN predictor in the first stage is reported in [8]. In this work, another model of GRNN, namely the general regression neural network estimator (GRNNE), is investigated for EEG signal compression. The performance of this model in the first stage, in conjunction with an arithmetic encoder in the second stage, is evaluated in terms of compression ratio for the two-stage lossless compression scheme.
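For illustration, the data flow of this two-stage scheme can be sketched as follows. The function names predict and arithmetic_encode are hypothetical placeholders for the first-stage predictor and the second-stage entropy coder, and the rounding convention is an assumption rather than a detail given in the paper.

```python
import numpy as np

def two_stage_compress(x, predict, arithmetic_encode):
    """Sketch of the two-stage lossless scheme of Fig. 1: stage 1 replaces each
    sample by a small-amplitude prediction residue, stage 2 entropy-codes it."""
    # Predictions are rounded before subtraction so that a decoder, which can
    # reproduce the same rounded predictions, recovers the signal exactly.
    residues = np.asarray(x, dtype=np.int64) - np.round(predict(x)).astype(np.int64)
    return arithmetic_encode(residues)   # stage 2: lossless entropy coding
```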

2. The GRNN

The GRNN [9] is a two-layer variant of the radial basis network and is generally used for function approximation. It consists of an input layer, a hidden radial basis layer, and a special linear layer, as shown in Fig. 2. The activation functions are given in Fig. 3. The GRNN has a radial basis first layer which, when given a value n, returns exp(-n²). The weights of this layer are set to detect the difference between a sample value and its training input.


Fig.2: The two-layer GRNN


Fig.3: Activation functions f(•) for GRNN

The GRNN has found many applications in regression and function estimation [6]. It is known [10] that, given a sufficient number of neurons in the hidden layer, a GRNN can approximate a continuous function to an arbitrary precision. Although the GRNN tends to require more neurons than standard feedforward backpropagation networks, it has the advantage that the time needed to design the network is only a fraction of that required to train standard feedforward networks [11]. Training, configuration, and implementation details of the GRNNE are discussed below.
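As an illustration of this function-approximation behaviour, a minimal sketch of Specht's GRNN estimate is given below (Python/NumPy). The variable names and the spread argument, which corresponds to the SPREAD constant discussed in Section 4, are chosen here for illustration only.

```python
import numpy as np

def grnn_predict(x_train, y_train, x_query, spread=1.0):
    """Minimal GRNN (Specht [9]): each query output is the Gaussian-weighted
    average of the training targets, with the kernel width set by `spread`."""
    x_train = np.atleast_2d(x_train)               # shape (N, p)
    x_query = np.atleast_2d(x_query)               # shape (M, p)
    y_train = np.asarray(y_train, dtype=float)     # shape (N,)
    # Squared Euclidean distances between every query and every training sample
    d2 = ((x_query[:, None, :] - x_train[None, :, :]) ** 2).sum(axis=2)
    w = np.exp(-d2 / (2.0 * spread ** 2))          # radial basis layer: exp(-n^2)
    # Special linear layer: normalized weighted average of the training targets
    return (w @ y_train) / w.sum(axis=1)
```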

3. Training

The training process involves two different schemes: single block (SB) training and block adaptive (BA) training [5], [7]. In SB training, the entire input data set is treated as a single block. The first 20% of the samples are buffered and used to train the network, and the remaining samples are then predicted. In BA training, the samples are divided into blocks and the neural network is trained afresh for each block, using only a portion (say 20%) of the samples in that block.
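The index bookkeeping of the two schemes can be sketched as follows; the 20% training fraction and the block sizes follow the values used in this paper, while the exact rounding of the training window is an assumption.

```python
def sb_split(n_samples, train_fraction=0.2):
    # Single block (SB): train once on the first 20% of the whole signal,
    # then predict the remaining samples.
    n_train = int(train_fraction * n_samples)
    return [(range(0, n_train), range(n_train, n_samples))]

def ba_split(n_samples, block_size=50, train_fraction=0.2):
    # Block adaptive (BA): the network is retrained afresh for every block,
    # using the first 20% of that block as its training window.
    splits = []
    for start in range(0, n_samples, block_size):
        end = min(start + block_size, n_samples)
        n_train = int(train_fraction * (end - start))
        splits.append((range(start, start + n_train), range(start + n_train, end)))
    return splits
```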

3.1 Training the GRNNE

In the BA scheme, the input data are divided into blocks and the entire training window is buffered at the transmitting end. The buffered values are sampled at regular intervals determined by the size of the training window (0.2 S_Block, where S_Block is the number of samples per block) and the number of input nodes (p), as shown in (1).

I_GRNNE = 0.2 S_Block / p        (1)


Fig.4: A p-input GRNNE generating residues

The sampled values are fed to the GRNN, which approximates a function (say, F) of the training data. The plot of F is then stretched across the block to be predicted. This stretching is done by preparing the p input data values to be fed to the GRNNE at intervals of I_stretch, as shown in (2), by substituting the values sampled at I_GRNNE intervals.

I_stretch = S_Block / p        (2)

Prediction of successive input values is then done by taking the values generated by F, as shown in (3).

X̂_n = F(n)        (3)
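Putting (1)-(3) together, one possible reading of the GRNNE prediction step is sketched below, building on the grnn_predict helper given earlier; the buffering and index conventions are assumptions, since the paper describes them only in prose.

```python
import numpy as np

def grnne_block_residues(block, p=2, train_fraction=0.2, spread=1.0):
    """Residues for one block (Fig. 4): fit F on p samples taken at I_GRNNE
    intervals from the training window, stretch F across the block at
    I_stretch intervals, and output e(n) = x(n) - F(n)."""
    s_block = len(block)
    n_train = int(train_fraction * s_block)   # training window: 0.2 * S_Block
    i_grnne = max(1, n_train // p)            # Eq. (1)
    i_stretch = max(1, s_block // p)          # Eq. (2)

    # p training points sampled from the buffered window at I_GRNNE intervals
    idx_train = np.arange(p) * i_grnne
    x_train = idx_train.reshape(-1, 1).astype(float)
    y_train = block[idx_train]

    # Stretch F across the block: rescale each block index n so that the p
    # training points line up with positions spaced I_stretch apart.
    n = np.arange(s_block, dtype=float).reshape(-1, 1) * (i_grnne / i_stretch)
    x_hat = grnn_predict(x_train, y_train, n, spread=spread)   # Eq. (3)
    return block - x_hat
```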

4. Configuring the GRNNE

In order to configure the GRNNE for optimum performance, the parameters that should be adjusted are the number of neurons in the input layer and the value of the constant SPREAD. Empirical testing of the values SPREAD = 0.1, 0.5, 1.0 (default), 2.0 and 5.0 reveals no noticeable change in compression performance. To choose the number of input neurons, a trial-and-error procedure is adopted. The performance of the GRNNE model in the first stage is evaluated in terms of the overall compression ratio (CR), which is defined as

CR = size of the original file (bytes) / size of the compressed file (bytes)        (4)
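In practice, (4) can be computed directly from the file sizes, as in the small sketch below; the file names are placeholders.

```python
import os

def compression_ratio(original_path, compressed_path):
    # Eq. (4): CR = original size (bytes) / compressed size (bytes)
    return os.path.getsize(original_path) / os.path.getsize(compressed_path)

# Example: a 39,936-byte EEG file compressed to roughly 7,940 bytes
# gives a CR of about 5.03.
print(compression_ratio("eegfp1.dat", "eegfp1.cmp"))   # hypothetical file names
```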

The performance of the GRNNE in terms of CR with the single block (SB) scheme in the first stage, for the test files eegfp1, eegfp2 and eegf3, is shown in Table 1.

Table 1: Compression ratios (CR) achieved by the GRNNE with different network sizes for the SB scheme

|Test data file        |           Number of input nodes         |
|                      |  1   |  2   |  3   |  4   |  5   |
|eegfp1 (39,936 bytes) | 4.96 | 5.03 | 5.00 | 4.97 | 4.94 |
|eegfp2 (39,936 bytes) | 4.88 | 5.10 | 4.98 | 4.85 | 4.86 |
|eegf3 (39,424 bytes)  | 4.72 | 5.12 | 4.92 | 4.81 | 4.77 |
|Average               | 4.85 | 5.08 | 4.96 | 4.87 | 4.85 |

From Table 1, we can see that the implementation with two input neurons (predictor order = 2) yields the maximum compression ratios for the SB scheme.

The performance of the GRNNE (with predictor order = 2) with the BA scheme for different block sizes is also evaluated in terms of CR for the same test files, and the results obtained are shown in Table 2.

Table 2: CR achieved by the GRNNE with different block sizes for the BA scheme

|Test data file | Block size (GRNNE in the first stage) |
|               |  50   |  100  |  200  |  SB   |
|eegfp1         | 6.20  | 5.80  | 5.70  | 5.03  |
|eegfp2         | 6.24  | 5.86  | 5.42  | 5.10  |
|eegf3          | 6.12  | 5.80  | 5.72  | 5.12  |
|Average        | 6.18  | 5.82  | 5.72  | 5.08  |

From Table 2, we can see that for the GRNNE model using the BA scheme, a block size of 50 gives the best compression results. It is understood from the results shown in Tables 1 and 2 that the BA scheme performs better than the SB scheme for the GRNNE. It can also be concluded from the results that the performance of this network is comparable to that obtained with other neural network models such as multilayer perceptrons and recurrent networks [7].

5. Two-stage Compression

The performance of the two-stage lossless compression scheme, using the GRNNE model in the first stage with an arithmetic encoder in the second stage, is evaluated in terms of the CR. The results are compared with those of other known classical predictors and neural network predictors. For this purpose, the performance of two classical linear predictors, namely a finite impulse response (FIR) filter described by equation (5) [2] and a 5th-order adaptive FIR using the normalised least mean squares (NLMS) algorithm [12], is evaluated using the same test files.

e(n) = x(n) - 4x(n-1) + 7x(n-2) - 7x(n-3) + 4x(n-4) - x(n-5)        (5)
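A minimal sketch of the residue computation implied by (5) is given below; treating samples before n = 0 as zero is an assumption.

```python
import numpy as np

def fir_residues(x):
    # Eq. (5): e(n) = x(n) - 4x(n-1) + 7x(n-2) - 7x(n-3) + 4x(n-4) - x(n-5)
    h = np.array([1.0, -4.0, 7.0, -7.0, 4.0, -1.0])
    return np.convolve(x, h, mode="full")[: len(x)]
```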

For the neural network models, a Rosenblatt perceptron (RP), a recurrent Elman network (EN) and a multilayer feedforward network (MLFN) [4], [5], [6] are also compared against the GRNNE model. The results of the comparisons are given in Table 3.

Table 3 CR for Two-stage compression scheme using classical and neural network predictors in the first stage and arithmetic coding in the second stage.


From the results shown in Table 3, the GRNNE performs better than the other classical linear predictors and neural network predictors. The GRNNE in the first stage, in conjunction with the arithmetic encoder in the second stage, yields the highest compression ratio for the two-stage compression scheme. It is also concluded from the results shown in Tables 1-3 that the two-stage compression scheme performs better than the single-stage compression scheme.

6. Conclusions

This paper has discussed the application of the general regression neural network for lossless EEG data compression. It has been shown that this network can be used as a predictor in the first stage of a two-stage compression scheme. The training method and the selection of optimum parameters for the GRNNE configuration have been discussed. The performance of this network in a two-stage compression scheme, along with an arithmetic encoder, has been evaluated using different EEG test data files. It is found that this network performs better than other neural network models such as multilayer perceptrons and recurrent networks, and also better than classical predictors such as FIR and NLMS.

7. References

[1] S.D. Stearns, L.Z. Tan and N. Magotra, "Lossless compression of waveform data for efficient storage and transmission", IEEE Transactions on Geoscience and Remote Sensing, Vol. 31, No. 4, 1993, pp. 645-654.

[2] J.W. McCoy, N. Magotra and S. Stearns, "Lossless predictive coding", IEEE Midwest Symposium on Circuits and Systems, Los Angeles, 1994, pp. 927-930.

[3] S.D. Stearns, "Arithmetic coding in lossless waveform compression", IEEE Transactions on Signal Processing, Vol. 43, No. 8, 1995, pp. 1874-1879.

[4] G. Antoniol and P. Tonella, "EEG data compression techniques", IEEE Transactions on Biomedical Engineering, Vol. 44, 1997, pp. 105-114.

[5] R. Logeswaran and C. Eswaran, "Neural network based lossless coding schemes for telemetry data", IEEE International Geoscience and Remote Sensing Symposium 1999, June 1999, Vol. 4, pp. 2057-2059.

[6] R. Logeswaran and C. Eswaran, "Radial basis neural networks for lossless data compression", International Journal of Computers and Applications, Vol. 24, No. 1, 2002, pp. 14-19.

[7] N. Sriraam, R. Kannan and C. Eswaran, "Lossless compression of EEG data using neural network predictors", Proc. 9th International Conference on Neural Information Processing, Vol. 4, 2002, pp. 2046-2048.

[8] N. Sriraam and C. Eswaran, "A generalized regression neural network for EEG signal compression", International Conference on Information Technology in Asia, 2003, pp. 53-55.

[9] D.F. Specht, "A general regression neural network", IEEE Transactions on Neural Networks, Vol. 2, No. 6, 1991, pp. 568-576.

[10] L. Marquez and T. Hill, "Function approximation using backpropagation and general regression neural networks", Proceedings of the Twenty-Sixth Hawaii International Conference on System Sciences, Vol. 4, 1993, pp. 607-615.

[11] S. Chen, C.F.N. Cowan and P.M. Grant, "Orthogonal least squares learning algorithm for radial basis function networks", IEEE Transactions on Neural Networks, Vol. 2, No. 2, 1991, pp. 302-309.

[12] S. Haykin, Adaptive Filter Theory, Prentice Hall, New Jersey, 1986, pp. 683-687.
