Published in IET Science, Measurement and Technology
Received on 15th January 2012
Revised on 19th June 2012
doi: 10.1049/iet-smt.2012.0004

ISSN 1751-8822

Electrocardiogram compression technique for global system of mobile-based offline telecardiology application for rural clinics in India

M. Mitra, J.N. Bera, R. Gupta
Department of Applied Physics, University of Calcutta, 92, APC Road, Kolkata 700009, WB, India
E-mail: [email protected]

Abstract: Compression of electrocardiogram (ECG) data is an important requirement for developing an efficient telecardiology application. This study describes an offline compression technique, implemented for ECG transmission over a global system of mobile (GSM) network for preliminary-level evaluation of a patient's cardiac condition in non-critical situations. Short-duration (5–6 beats) ECG data from the Massachusetts Institute of Technology–Beth Israel Hospital (MIT–BIH) arrhythmia database are used for the trial. The compression algorithm is based on direct processing of the ECG samples: down-sampling of the dataset, normalising inter-sample differences, grouping for sign and magnitude encoding, zero element compression and, finally, conversion of the bytes into corresponding 8 bit American standard code for information interchange (ASCII) characters. The software developed for the patient-side computer also converts the compressed data file into a formatted sequence of short text messages (SMSs). Using a dedicated GSM module, these messages are delivered to the mobile phone of the remote cardiologist. The received SMSs are downloaded at the cardiologist's computer for concatenation and decompression to obtain back the original ECG for visual or automated investigation. Average compression ratio and percentage root-mean-squared difference values of 43.54 and 1.73, respectively, are obtained with MIT–BIH arrhythmia data. The proposed technique is useful for rural clinics in India for preliminary-level cardiac investigation.

1 Introduction

With the advancement of information and communication technology, healthcare services are now extended to the remotest patient at any geographical location. Bio-telemetry involves the principles of transmission of biological information from one place to another. It finds application in areas such as intensive care units, emergency telemedical services, telemedicine, home care, space programmes, sports, the military and so on. There is an increasing trend in the use of public networks for transmission of patients' pathological information for clinical use. Compression of data is essential not only for optimal usage of computer memory for data archiving, but also for increasing the spectral efficiency of the communication link in bio-telemetry applications. Data compression techniques are classified into lossless (such as Huffman, adaptive Huffman, Lempel–Ziv 77, LZ–Welch, run-length encoding etc.) and lossy (such as wavelet, discrete polynomial) methods. The main objective of bio-signal compression is to minimise the redundancy of information and achieve a high volume reduction in data. Decompression of the signal should faithfully extract clinically significant information [1]. The evaluation criteria for bio-signal compression mainly refer to error figure estimation and determination of transmission errors. The most common error figures used for compression evaluation are: percentage root-mean-squared difference (PRD), maximum absolute error, local absolute error and signal-to-noise ratio.

Electrocardiogram (ECG) compression in particular finds wide application in prolonged recordings (Holter records) and tele-ECG practice. ECG compression schemes broadly utilise direct time-domain techniques and transform techniques. Direct compression schemes use amplitude zone time epoch coding, delta pulse code modulation (DPCM), the coordinate time-encoding reduction system, entropy coding, Fan and scan-along polygonal approximations etc. [2–5]. Transformation techniques involve pre-processing of the data samples and encoding of the transformed output; for reconstruction, an inverse transform is performed to regenerate the original data. Many types of orthogonal transforms are in use for ECG compression, such as the Karhunen–Loève, Fourier, cosine, Haar and Walsh transforms [6–8]. In the last two decades, wavelet-based ECG compression techniques have become popular for tele-ECG applications [9–11]. Tele-ECG is a specialised form of bio-telemetry where the patient's data can be remotely collected for analysis and record [12]. In a typical tele-ECG set-up the ECG is acquired from the patient's body by a hardware acquisition module consisting of electrodes, amplifiers, filters and a DSP controller. This device is connected to a high-end portable gadget such as a cell phone, personal digital assistant (PDA) or laptop for data compression and local analysis, before the data are transmitted over a wired or wireless medium to a remote place. The choice of communication media depends upon the application area.

In telemedicine applications used in primary health care centres and hospitals, internet or satellite links are normally used. Wireless public networks such as the global system of mobiles (GSM) and code division multiple access (CDMA) play a vital role in data communication for remote monitoring applications [13, 14]. Web-based telemedicine services are being deployed to transmit patient data to a remote place [15]. At the receiving point, a physician can visually inspect the ECG wave shape to determine the abnormality, if any. A wearable compact wireless monitoring system is described in [16], where the patient's ECG and respiration are transmitted through CDMA and the internet to a medical service centre. Computerised ECG processing employs complex algorithms and numerical methods to analyse digitised samples of the patient ECG. PC-based automated analysis and classification software are already available to assist physicians with more accurate diagnosis [17]. In [18], a real-time wireless telecardiology application is described where the diagnosis is performed by analysing the compressed data, based on a rule-based data-mining technique. This reduces the time delay caused by decompression of data in a real-time monitoring application. In this application, the patient's mobile phone is used for continuous data analysis and, whenever an abnormal beat is detected, messages are generated for medical action. The same authors proposed a method of encryption along with delta modulation-based encoding for securing patients' privacy [19] in a real-time monitoring application. A compression ratio (CR) of up to 20.06 is achieved in the described scheme. In an earlier publication [20], a mobile phone platform is used for lossless ECG compression and diagnosis; the diagnostic information is extracted from the compressed data itself, without the need for decompression on the doctor's mobile. In most of the reported works, the emphasis is on real-time telecardiology solutions using high-speed data networks and advanced gadgets. This demands high cost, state-of-the-art infrastructure and handling of high-end gadgets by the concerned medical personnel. In the Indian healthcare context, especially in rural clinics, lack of the requisite infrastructure and poor maintenance are serious issues of concern. With more than two-thirds of the Indian population distributed in rural villages, many of the rural clinics suffer from an acute shortage of cardiologists. As a result, even preliminary-level investigation of cardiac patients is sometimes difficult. Telemedicine systems have been established in selected city-based hospitals to cater to the need for healthcare service in peripheral districts. This is insufficient to serve the vast rural population of India. Moreover, the existing systems rely on real-time (video and audio) conversation between city-based specialists and their rural non-specialist counterparts, supplemented by medical data and image transmission over a dedicated satellite link.

However, it is observed that, because of the busy schedule of specialist doctors in city hospitals, these telemedicine systems are highly underutilised. The chief motivation of this work is to offer a cost-effective solution with which rural clinics can 'connect' to a city-based cardiologist for routine check-up applications. Hence, the proposed compression technique is for intermittent use and is suitable neither for a critical condition of the patient nor for a real-time tele-monitoring scenario. A semi-skilled technician would handle the entire system at a rural healthcare centre. Many of the new ECG machines provide an option to directly collect the patient's data in a text-formatted file when connected to a computer. The objective is to acquire a short duration (5–6 beats) of patient ECG in the computer. The compressed ECG is transmitted to the remote cardiologist in message format for visual or automated analysis on his computer. The compression algorithm is tested with both normal and abnormal ECG data available in the Physikalisch-Technische Bundesanstalt (PTB) diagnostic ECG database (ptb-db), the MIT–BIH arrhythmia database (mit-db) and the MIT–BIH ECG compression test database (c-db) from Physionet [21]. The reconstructed signal is clinically validated.

2 Methods

The proposed telecardiology system is intended for the preliminary-level analysis of the patient ECG. In view of this, short-duration (5–6 beats) ECG data are used in the compression technique. This also keeps the number of SMSs for transmitting the compressed ECG within a manageable limit. A block diagram of the transmitting point (patient-end) system is shown in Fig. 1a. In the rural healthcare unit, the patient ECG is acquired by an ECG machine with a digital readout facility, or by a hardware acquisition module that captures the single-lead ECG from the patient and stores it in a time-stamped data file. The compression algorithm operates on this data file to convert it into an ASCII character file, which is ready to be transmitted. So, the entire task at the transmitting point is divided into two parts, viz., compressing the acquired ECG and generating the text message(s) from the compressed ECG characters for transmission. In the following subsections these two tasks are discussed in detail.

Fig. 1 Block diagram of the developed system: (a) transmit point; (b) receive point

2.1 Compression and decompression algorithm using character encoding

The proposed compression scheme is based on encoding of the successive sample difference (SSD), or first difference, of the ECG data. A typical ECG wave shape contains equipotential segments (TP, ST, PR etc.) connected by high-frequency (QRS) and low-frequency (P and T) regions.

In the SSD array of a typical ECG beat, about 85% of the data elements have either very small values (P and T waves) or zero values (flat portions of the TP and ST segments). In the proposed technique, compression is applied to the non-QRS regions, that is, low-frequency regions such as the P and T waves and almost zero-frequency regions such as PR, ST and TP. The following two encoding rules are applied: (a) in the region of the P and T waves the SSDs, after normalisation and rounding off, assume a small magnitude that can be represented by a nibble (i.e. 4 bits) instead of a byte; two consecutive such SSD elements can be combined into a single byte by a suitable encoding scheme, so a 2:1 compression is achieved; and (b) in most cases, the normalised and rounded-off SSD elements in the zero-frequency regions (TP, ST) of the ECG wave consist of many consecutive zeros, so a suitable encoding scheme devised to represent these zero sequences offers another scope for compression. Thus, in effect, the proposed compression scheme is an adaptation of delta encoding, customised to suit the present application. A high compression ratio is achieved through the following stages, viz., down-sampling of the data, grouping for sign and magnitude encoding, zero element compression and, finally, integer to 8 bit ASCII (i.e. in the range 0–255) conversion, as shown in Fig. 2. In the proposed compression scheme, the coded data range of 0–255 is judiciously distributed over the different stages to facilitate easy decoding at the receive point.

Fig. 2 Compression stages

2.1.1 Pre-processing of samples: At first, the lead samples are smoothed to eliminate high-frequency noise using a spline smoothing function. The smoothing factor is empirically selected as a 0.001 fraction of the amplitude span of the raw dataset. A point-to-point error estimation before and after the smoothing process yields a maximum fractional error of 2.3%, computed over 20 leads from ptb-db data. Ten such records are clinically validated by doctors before and after smoothing. The compression procedure is summarised in the following steps.

2.1.2 Down-sampling: The smoothed data array, say x[n], is down-sampled until the extrema (maximum and minimum) of the down-sampled array match the corresponding ones in the original array. This ensures the preservation of clinical features in the ECG. The down-sampled array z[n] is clinically validated by cardiologists using ten normal and abnormal records from ptb-db.

2.1.3 SSD array generation and normalisation: In the first step, the SSDs in array z[n] are computed to generate array a[n] using

a(1) = z(1)    (1)

a(i) = z(i) − z(i − 1),   for i = 2 to N    (2)

where N is the total number of samples in array z[n]. A suitably chosen index h is taken as the starting element in array a[n], and the elements h to N are copied to a separate array, which is the reference array for evaluation of compression performance. The elements from index h onwards in array a[n] are normalised to a scale of 0–99 to form a new array b[n]. So, the normalisation constant k is derived as

k = 99 / max|a(i)|,   for i > h    (3)
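As an illustration of (1)–(3), a minimal Python sketch of the SSD and normalisation step is given below. This is not the authors' code; the function and variable names (ssd_and_normalisation, z, a, h, k) are chosen here only to mirror the notation of the text.

```python
import numpy as np

def ssd_and_normalisation(z, h):
    """Illustrative sketch of (1)-(3): successive sample differences of the
    down-sampled ECG z[n], and the constant k that scales |a(i)|, i > h,
    into the 0-99 range. Index h is taken as 0-based here."""
    z = np.asarray(z, dtype=float)
    a = np.empty_like(z)
    a[0] = z[0]                        # eq. (1): the first sample is kept as-is
    a[1:] = z[1:] - z[:-1]             # eq. (2): first differences for the rest
    k = 99.0 / np.max(np.abs(a[h:]))   # eq. (3): normalisation constant
    return a, k
```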

In the next step, the normalised SSDs are rounded off to the nearest integer. This quantisation operation truncates the fractional part of the SSDs, resulting in a loss of information. During the reconstruction stage this error can become cumulative and, to counteract this, the original normalised sample is incorporated after a fixed interval N′ to generate a new array b[n]. The value of N′, the refreshing interval, depends on the value of the down-sampling factor f: the larger the down-sampling factor, the greater the chance of a cumulative quantisation error and, therefore, the smaller the value of N′. For example, if f = 2, N′ = 1000 and if f = 4, N′ = 250, etc. The value of N′ is kept equal to an integer multiple of 250. So, array b[n] is formed as

b(i) = ka(i),   for i = N′, 2N′, 3N′, …    (4)

b(i) = round{ka(i)},   for i = 1 to (N′ − 1), (N′ + 1) to (2N′ − 1), …    (5)

where the round operator converts to the nearest integer.
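A short Python sketch of the quantisation with the periodic refresh of (4)–(5) follows; it maps the 1-based refreshing interval onto 0-based array positions and is only an illustration, not the published implementation.

```python
import numpy as np

def quantise_with_refresh(a, k, n_refresh):
    """Sketch of (4)-(5): normalised SSDs rounded to the nearest integer,
    except at every refreshing interval N' (n_refresh), where the unrounded
    normalised value is kept so that rounding errors cannot accumulate
    indefinitely during reconstruction."""
    ka = k * np.asarray(a, dtype=float)
    b = np.round(ka)                                           # eq. (5)
    refresh = np.arange(n_refresh, len(b) + 1, n_refresh) - 1  # i = N', 2N', ... (0-based)
    b[refresh] = ka[refresh]                                   # eq. (4): keep the exact value
    return b
```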

Again, b(i) = ka(i) can exceed 99 for i = N′, 2N′, 3N′, …. Hence, a new array c[n] is generated from b[n], where each of these N′, 2N′, … values is split and placed in three consecutive places in array c[n]. So, each set of N′ elements in array b[n] occupies N′ + 2 places in array c[n]. For example, if b(1) = 405.6, then c(1) = 04, c(2) = 05 and c(3) = 60. The elements b(2) to b(N′ − 1) are placed in c(4) to c(N′ + 2). The generalised format of array b[n] is shown in Fig. 3 for N′ = 1000.

Fig. 3 Normalisation and difference array generation (considering N′ = 1000)

2.1.4 Grouping of elements, magnitude and sign encoding: As the elements in array c[n] may be positive, negative or even zero, a suitable sign encoding is necessary to proceed further with compression. Therefore, the elements of array c[n] are grouped into sets of eight consecutive elements, with 'zero padding' applied towards the end of the array: the number of elements in c[n] is divided by eight and zeros are appended according to the remainder. Then, sign and magnitude encoding of each such group is performed to generate a new array d[n]. In sign encoding, the sign of an individual element in array c[n] is coded as a bit (si), which assumes the value '0' ('1') if the element is positive (negative). Thus, a single byte (s8 s7 … s1) can be used to represent the combined sign information of a group, with s8 (s1) representing the sign of the eighth (first) element of the group. The sign encoding rule is given as

if c(i) > 0, then s(8−i) = 0; else if c(i) < 0, then s(8−i) = 1
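The sign byte for one group of eight c[n] elements can be sketched in Python as below. The bit ordering here follows the worked example given later in this subsection (the group 04, 05, 06, 00, −12, −8, −2, 00 encodes to (00001110)2 = 14, i.e. the first element of the group maps to the most significant bit); this is an illustrative reading, not the authors' code.

```python
def sign_byte(group):
    """Sketch of the sign-encoding rule: one byte holds the signs of eight
    consecutive c[n] elements, bit value 0 for a positive (or zero) element
    and 1 for a negative one, first element in the most significant bit."""
    byte = 0
    for i, value in enumerate(group):    # i = 0..7 for the 1st..8th element
        if value < 0:
            byte |= 1 << (7 - i)
    return byte

# Worked example from the text: sign_byte([4, 5, 6, 0, -12, -8, -2, 0]) == 14
```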

For magnitude encoding of the group, compression is performed by rule (a), that is, nibble combination. In the P- and T-wave regions, the normalised elements become small enough that the upper nibble of the respective equivalent byte is zero. Compression is performed for two successive such elements having values less than ten; otherwise, the values are encoded with an offset of 100 in array d[n]. The magnitude encoding logic is given as

if abs[c(i)] < 10 and abs[c(i + 1)] < 10, then d(i) = 10 abs[c(i)] + abs[c(i + 1)];
else d(i) = 100 + abs[c(i)] and d(i + 1) = 100 + abs[c(i + 1)]

The magnitude-encoded elements of the same group are sequentially placed after the encoded sign byte. An illustrative example of magnitude and sign encoding is shown below. For consecutive elements c1, …, c8: 04, 05, 06, 00, −12, −8, −2 and 00 in array c[n], the encoded sign byte is generated as (00001110)2 = (14)10. The magnitude-encoded elements are 45, 60, 112, 108 and 20. So, the corresponding encoded elements in array d[n] are d1, …, d6: 14, 45, 60, 112, 108 and 20. Thus, a group of eight elements in array c[n] generates one encoded sign byte and 4–8 encoded magnitude elements. For the next group c9–c16, the encoded sign byte is placed in d7, followed by its magnitude-encoded elements. Thus, in array d[n], 0–255 represents the sign byte of a group, 100–199 the encoded uncombined elements and 0–99 the encoded combined elements. The generalised format of magnitude and sign encoding is represented in Fig. 4.

Fig. 4 Generalised format of array d[n] after magnitude and sign encoding

2.1.5 Zero element sequence compression: The objective of this stage is to compress the zero sequences in the down-sampled SSD array, following rule (b). These zones correspond to the zero-frequency or equipotential segments PR, ST and TP in the ECG. A first-hand 2:1 compression is already achieved in the corresponding flat wave segments of array d[n] generated from array c[n]. The excess zero elements in c[n], in the form of zero sign bytes and consecutive combined zero elements, are taken care of in the present stage. Each queue of consecutive zero elements is encoded by two bytes. The first byte is fixed as 255, indicating the start of a zero sequence. The second byte is equal to 200 plus the number of consecutive zeros in that queue. If the number of zeros exceeds 54, then the number of zeros plus 200 becomes equal to or greater than 255. In such a case, the successive zeros in excess of 54 are represented by another two-byte combination. For example, 70 consecutive zero elements in array d are encoded as 255, 254 and 255, 216. A new array e[n] is formed where the non-zero elements of array d[n] are copied and the zero-encoded bytes corresponding to the zero queues are placed in order. The encoding rule is

if d(i) = 0 to d(i + n) = 0 (a run of n successive zeros), then e(j) = 255 and e(j + 1) = 200 + n; else e(j) = d(j)

where n ≥ 1.
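A Python sketch of the magnitude encoding (rule (a)) and the zero-sequence encoding (rule (b)) is given below. It follows the rules and worked examples of Sections 2.1.4 and 2.1.5 but is only an illustrative reading; the function names are chosen here and do not come from the paper.

```python
def encode_magnitudes(group):
    """Rule (a): the group (assumed zero-padded to an even length, eight in
    the paper) is examined in pairs; two consecutive elements with absolute
    value below ten are packed into one byte (0-99), otherwise both members
    of the pair are stored with an offset of 100."""
    out = []
    for i in range(0, len(group), 2):
        a, b = abs(int(group[i])), abs(int(group[i + 1]))
        if a < 10 and b < 10:
            out.append(10 * a + b)       # combined pair, coded in 0-99
        else:
            out.append(100 + a)          # uncombined elements, coded in 100-199
            out.append(100 + b)
    return out

def encode_zero_runs(d):
    """Rule (b): each run of zeros in d[n] becomes the marker byte 255
    followed by 200 + run length; runs longer than 54 are split so the
    length byte stays below 255 (70 zeros -> 255, 254, 255, 216)."""
    e, i = [], 0
    while i < len(d):
        if d[i] == 0:
            run = 0
            while i < len(d) and d[i] == 0:
                run, i = run + 1, i + 1
            while run > 0:
                chunk = min(run, 54)
                e.extend([255, 200 + chunk])
                run -= chunk
        else:
            e.append(d[i])
            i += 1
    return e

# Worked example: encode_magnitudes([4, 5, 6, 0, -12, -8, -2, 0]) == [45, 60, 112, 108, 20]
```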

The encoding rules from Sections 2.1.3–2.1.5 are summarised in Table 1. For proper reconstruction at the receive point, the normalisation factor k (in split form), the sampling interval and the down-sampling factor f are prefixed as the first six bytes of array e[n]. The elements of this final array are converted to the corresponding 8 bit ASCII characters in the 0–255 range and stored in a data file for transmission. It is observed from Table 1 that the third decimal place (i.e. the hundreds digit) of an encoded element indicates whether it is a sign byte, a combined or uncombined encoded element, or part of a zero sequence.

Table 1  Summary of encoding rules

Encoded element                                                   Interpretation for decoding
255 followed by a number in the range 200–254                     zero sequence
any value in the range 0–255 followed by a number less than 200   sign byte
any value in the range 0–99                                        combined elements
any value in the range 100–199                                     uncombined elements

Decompression of the file (at the receiving point) involves the inverse of the sequence adopted in the compression stage. Decompression starts with ASCII to 8 bit integer conversion, followed by the steps mentioned below:
1. Zero sequence finding and decompression: whenever a combination of 255 followed by a number between 200 and 255 is obtained, the zero strings are extracted; otherwise, the non-zero elements are copied into the derived array.
2. Ungrouping and sign bit generation: the sign bits of individual elements are extracted and the combined elements are unfolded in a single step. To distinguish between the magnitude- and sign-encoded elements, a data counter is used for each group of eight elements.
3. Difference array generation: the difference array, along with the normalised original samples, is determined as per the refreshing interval.
4. De-normalisation and original sample reconstruction: the difference array elements are divided by the normalisation factor and the original samples are reconstructed by cumulative summation of consecutive samples with the first element.
5. Interpolation and generation of the original ECG samples: using the down-sampling factor, interpolation is done to generate the intermediate samples.
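A partial Python sketch of this decompression sequence is given below, covering steps 1, 4 and 5 only (the ungrouping of sign and magnitude bytes in steps 2–3 is omitted for brevity). The helper names are chosen here, linear interpolation is assumed since the paper does not state the interpolation method, and the sketch is not the authors' implementation.

```python
import numpy as np

def expand_zero_runs(e):
    """Step 1: replace each (255, 200 + n) pair by n zeros; all other bytes
    are copied through unchanged."""
    d, i = [], 0
    while i < len(e):
        if e[i] == 255 and i + 1 < len(e) and 200 <= e[i + 1] <= 254:
            d.extend([0] * (e[i + 1] - 200))
            i += 2
        else:
            d.append(e[i])
            i += 1
    return d

def rebuild_samples(a_hat, k, f):
    """Steps 4-5: divide the recovered difference array by the normalisation
    constant k, rebuild the down-sampled samples by cumulative summation
    (the first element is the first sample itself), then interpolate by the
    down-sampling factor f to restore the intermediate samples."""
    z = np.cumsum(np.asarray(a_hat, dtype=float) / k)
    coarse = np.arange(len(z))
    fine = np.linspace(0, len(z) - 1, f * (len(z) - 1) + 1)
    return np.interp(fine, coarse, z)
```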

2.2 Communication of compressed data files in text message format

To transmit the compressed ECG data as an 8 bit ASCII character stream, a MOD 9001 GSM module is connected through a serial port to the transmitting-point PC. Application software is developed that provides a GUI for the semi-skilled technician to select the compressed file and transmit it using the GSM module as per its protocol. The packet structure (frame) for sending a single message through the GSM modem is shown in Fig. 5.

Fig. 5 Protocol for sending text message in GSM modem

As the standard SMS protocol supports only 7 bit ASCII, the developed application first splits each 8 bit ASCII character of the compressed file into two 7 bit ASCII characters. So, a packet accommodates a maximum of 80 original characters of the compressed file. The total character stream to be transmitted is segregated into packets and these are transmitted serially as short messages. A packet serial number is inserted as the last (i.e. 160th) data element of each message for easy concatenation of the messages at the receiving point. The target mobile number, referring to the cardiologist in the city hospital, needs to be entered from the PC keyboard by the operator. A typical message stream format generated from the GSM modem is shown in Fig. 6.

Fig. 6 Typical message stream sequence generated at transmitting point

At the receiving point, from the cardiologist's mobile phone these messages are downloaded to a desktop PC/laptop through a USB or Bluetooth link. The receiving-point hardware connection schematic is shown in Fig. 1b. The received SMS(s) are downloaded to his computer, where software performs the concatenation of the messages and then decompresses them to reconstruct the original ECG samples. Each of the received messages, when downloaded from a mobile phone in the form of a text file, is prefixed and suffixed by additional fixed character blocks representing the sender's SIM number, the receiver's SIM number, the time of reception etc. These fixed characters are specific to the phone set and normally depend on its manufacturer. These additional characters are utilised to find out whether the message stream from a particular generating point (i.e. rural healthcare centre) is properly received. The concatenation of the received messages is performed by checking the packet serial number and discarding both the prefix and suffix blocks. In the case of proper reception of all messages, an acknowledgment is sent to the transmitting point. A time-plane plot of the ECG from the reconstructed samples facilitates visual inspection and further analysis. In the case of missed SMS(s), the software at the receiving-point computer generates an error message, requiring a complete retransmission of the entire data from the transmit point. The application software on the cardiologist's laptop can also be used for sending feedback comments after analysis.
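The framing into SMS packets can be sketched as follows. The exact 8-bit-to-two-7-bit split is not specified in the paper, so the high/low-nibble split used here is an assumption, as is the choice of 159 payload characters per 160-character message so that the serial number fits in the last position.

```python
def bytes_to_sms_packets(data, chars_per_sms=160):
    """Sketch of the text-message framing: each 8 bit value of the compressed
    file is split into two 7 bit characters (assumed here to be its high and
    low nibbles), the stream is cut into SMS-sized packets, and the last
    character of every packet carries the packet serial number used for
    re-assembly at the receiving point."""
    sevenbit = []
    for byte in data:
        sevenbit.append(byte >> 4)       # assumed split: high nibble ...
        sevenbit.append(byte & 0x0F)     # ... and low nibble, both valid 7 bit values
    payload = chars_per_sms - 1          # last position reserved for the serial number
    packets = []
    for serial, start in enumerate(range(0, len(sevenbit), payload), start=1):
        packets.append(sevenbit[start:start + payload] + [serial])
    return packets
```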

3 Results

The first stage of testing for any real-time system is performed on a simulation platform for performance evaluation [22, 23]. The proposed algorithms for compression/decompression and for GSM module transmission are tested to ensure reliable operation using short-duration (5 s) ptb-db, mit-db and c-db data from Physionet. The quality of reconstruction of the original ECG data is assessed by the PRD, percentage root-mean-square difference normalised (PRDN), CR, quality score (QS) and maximum error (Emax), given as

PRD = 100 × sqrt( Σ_{n=1}^{N} (x[n] − x′[n])² / Σ_{n=1}^{N} (x[n])² )    (6)

PRDN = 100 × sqrt( Σ_{n=1}^{N} (x[n] − x′[n])² / Σ_{n=1}^{N} (x[n] − x̄)² )    (7)

where N is the total number of samples in the dataset, x[n] is the actual value, x′[n] is the reconstructed value and x̄ is the mean of the original data array,

CR = input data file size / output data file size    (8)

QS = CR / PRD    (9)

Emax = max(x[n] − x′[n])    (10)

where the max operator extracts the maximum element.
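The five evaluation figures of (6)–(10) translate directly into a few lines of Python; the sketch below is provided for illustration only and uses hypothetical argument names (x, x_rec, in_size, out_size) rather than anything from the paper.

```python
import numpy as np

def quality_metrics(x, x_rec, in_size, out_size):
    """Sketch of equations (6)-(10): PRD, PRDN, CR, QS and maximum error."""
    x, x_rec = np.asarray(x, dtype=float), np.asarray(x_rec, dtype=float)
    err = x - x_rec
    prd = 100.0 * np.sqrt(np.sum(err ** 2) / np.sum(x ** 2))                  # eq. (6)
    prdn = 100.0 * np.sqrt(np.sum(err ** 2) / np.sum((x - x.mean()) ** 2))    # eq. (7)
    cr = in_size / out_size                                                   # eq. (8)
    qs = cr / prd                                                             # eq. (9)
    e_max = np.max(err)             # eq. (10), maximum of the signed difference
    return prd, prdn, cr, qs, e_max
```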

The compression/decompression algorithm is tested with 240 leads from ptb-db data. This database contains 12 standard-lead and 3 Frank-lead ECG records from 290 normal and abnormal subjects at 1 kHz sampling. Average CR and PRD values of 52.04 and 2.12, respectively, are obtained. Table 2 shows some results obtained with c-db data. The c-db database contains 10 s ECG data at 4 ms sampling. The average CR, PRD, PRDN and Emax obtained with 50 c-db leads taken at random are 39.12, 4.54, 7.42 and 0.07, respectively. The lower CR and higher PRD and PRDN are attributable to the larger sampling interval (4 ms) of the c-db data.

Table 2  PRD and CR values for compression using c-db data

Patient file ID     Lead 1                                           Lead 2
in Physionet        CR      PRD, %   PRDN, %   QS      Max. error    CR      PRD, %   PRDN, %   QS      Max. error
08730_03            26.97   1.63     1.75      16.54   0.009         25.83   1.66     1.671     15.56   0.014
08730_04            25.33   1.42     1.46      17.83   0.043         25.96   1.31     1.525     19.81   0.019
11950_03            41.12   3.48     7.99      11.81   0.076         39.21   11.98    12.87     3.27    0.117
11247_01            38.11   9.83     9.96      3.87    0.155         33.49   3.42     5.15      9.79    0.040
11247_03            40.16   7.98     8.21      5.03    0.205         40.27   9.73     9.812     4.13    0.030

The proposed algorithms are also tested with 55 s mit-db data. This database contains two-channel ambulatory ECG recordings from 48 subjects at 360 Hz sampling. The raw mit-db data are up-sampled to 1 kHz to make the sampling interval uniform. Some test results with mit-db data are given in Table 3. The average CR, PRD, PRDN, QS and Emax obtained are 43.54, 1.73, 3.14, 25.16 and 0.052, respectively, when tested with 30 leads from mit-db data.

Table 3  PRD and CR values for compression using mit-db data

Patient file ID in Physionet with lead no.   CR      PRD, %   PRDN, %   QS      Max. error
101_v1                                       31.82   1.231    8.359     25.84   0.009
102_v2                                       32.14   0.435    0.445     73.88   0.043
104_v2                                       58.64   2.49     2.97      23.55   0.040
105_v1                                       37.90   2.15     3.228     17.62   0.008
106_v1                                       33.50   2.36     4.32      14.19   0.010
115_v1                                       42.07   0.938    3.93      44.85   0.003
117_v2                                       41.59   1.130    2.007     36.80   0.038

An experimental trial is performed with 55 short-duration ECG records, arbitrarily chosen from c-db, mit-db and ptb-db. To ensure the preservation of clinical information in the compressed data, a set of signatures from the ECG wave is identified in consultation with cardiologists. These parameters are computed before compression and after reconstruction at the receive point. They are: RRi = RR interval; QRSdur = QRS duration; QTint = QT interval; Pdur = P width; Ramp = R amplitude; Pamp = P amplitude; Tamp = T amplitude; and Samp = S amplitude. A new parameter, named the diagnostic distortion factor (DDF), is introduced and defined as

DDFf = ( (1/n) Σ_{i=1}^{n} fi − fr ) × 100    (11)

where n is the total number of complete beats transmitted, fi is the value of the signature before compression and fr is the value of the signature after reconstruction. For a number of the ptb-db and mit-db data files, the DDF parameters are estimated using separate algorithms [30]. An average DDF of 2.012 is obtained for 30 single leads. Fig. 7 shows the plot of one c-db and one mit-db lead before compression and after reconstruction. It shows that the reconstructed data have a close morphological match with the corresponding original sample plots.

Fig. 7 Plot of ECG before compression and after reconstruction: (a) c-db data; (b) mit-db data

For clinical validation of the reconstructed data, two cardiologists are consulted with printed records of six different single-lead data before compression and after reconstruction. They confirmed that significant clinical information is retained in the compression process. For performance comparison with other works on compression, 55 s mit-db data are used. The comparison is shown in Table 4, where the other researchers used long strips of mit-db data; the objective of this study and its application, however, is limited to short-duration ECG. In the proposed algorithm, mit-db and c-db data are used.

Table 4  Comparison with a few studies on compression

Algorithm              Database used               PRD, %   PRDN, %   CR      QS
proposed               MIT–BIH compression test    4.54     7.42      39.12   8.61
proposed               MIT–BIH arrhythmia          1.73     3.14      43.54   25.16
Hilton [24]            MIT–BIH arrhythmia          2.6      –         8       3.07
Blanco et al. [25]     MIT–BIH arrhythmia          1.1760   3.5877    8.24    7.00
Benzid et al. [26]     MIT–BIH arrhythmia          2.5518   –         16.24   6.36
Fira and Garos [27]    MIT–BIH arrhythmia          1.17     –         18.27   15.61
Kim et al. [28]        MIT–BIH arrhythmia          0.641    –         16.9    29.36
Ku et al. [29]         MIT–BIH arrhythmia          4.02     –         21.6    5.37

4 Conclusion

This paper describes a telecardiology system for offline transmission of a patient's ECG from a rural clinic to a remote cardiologist. In the experimental trial, single-lead data are used. The compression algorithm involves simple numerical operations, yet it achieves a good compression ratio compared with some reported works.

The time complexity of the compression and decompression algorithms is computed on a desktop computer with an Intel Pentium Dual Core E5200 processor at 2.50 GHz and 2 GB RAM. For compression using c-db data, the average computational time was found to be 5.09 ms per ECG sample; for decompression, the same figure is found to be 42.4 ms. No denoising for baseline modulation, power-line interference or EMG noise is performed in the pre-processing stage of compression. Clinical validation of the ECG records and the low value of PRD obtained with the algorithm establish the usability of the proposed technique for telecardiology applications. This technique is not recommended for prolonged ECG recording and is only suitable for intermittent use. One of the salient features of the work is its low infrastructural cost. The rural clinic (transmitting point) set-up demands only a desktop PC connected to an ECG machine (or a hardware acquisition module) for digital recording of the ECG, and a GSM modem. The remote-end cardiologist, with a low-end mobile phone and a desktop/laptop computer, can reconstruct the ECG for visual or automated diagnosis. The proposed system can provide cardiac care service to outpatients in remote rural clinics, even in the absence of specialist doctors. Also, it will not hamper the busy schedule of cardiologists in city hospitals, as the receiving-point specialist can perform the decompression and visual diagnosis at a later time. The remote cardiologist can send his feedback after visual inspection of the ECG plot. Apart from that, further computerised processing can be performed on the reconstructed data, if needed. The internet and Wi-Fi are not chosen as the communication platform since this would increase the cost of the entire system. In India, the proposed system can contribute significantly to rural healthcare service.

5 Acknowledgments

The authors express their deepest gratitude to Dr. S. Bhattacharya and Dr. R.C. Saha, cardiologists, Kolkata, India, for their valuable clinical advice. The authors convey their sincere thanks for the technical support received from the University Grants Commission (UGC) Special Assistance Programme (SAP) Departmental Research Support (DRS) I project at the Department of Applied Physics, University of Calcutta, during the work.

6 References

1 Jalaleddine, S.M.S., Hutchens, C.G., Strattan, R.D., Coberly, W.A.: 'ECG data compression techniques – a unified approach', IEEE Trans. Biomed. Eng., 1990, 37, (4), pp. 329–343

2 Abenstein, J.P., Tompkins, W.J.: 'A new data-reduction algorithm for real-time ECG analysis', IEEE Trans. Biomed. Eng., 1982, 29, (1), pp. 43–48
3 Cox, J.R., Nolle, F.M., Fozzard, H.A., Oliver, G.C.: 'AZTEC, a preprocessing program for real time rhythm analysis', IEEE Trans. Biomed. Eng., 1968, BME-15, (2), pp. 128–129
4 Furht, B., Perez, A.: 'An adaptive real time ECG compression algorithm with variable threshold', IEEE Trans. Biomed. Eng., 1998, 35, (6), pp. 489–484
5 Pollard, A.E., Barr, R.C.: 'Adaptive sampling of intracellular and extracellular cardiac potentials with the fan method', Med. Biol. Eng. Comput., 1987, 25, (3), pp. 261–268
6 Ahmed, N., Milne, P.J., Harris, S.G.: 'Electrocardiographic data compression via orthogonal transforms', IEEE Trans. Biomed. Eng., 1975, 22, (6), pp. 484–487
7 Al-Nashash, H.A.M.: 'A dynamic Fourier series for the compression of ECG using FFT and adaptive coefficient', Med. Eng. Phys., 1995, 17, (3), pp. 197–203
8 Cetin, A.E., Koymen, H., Aydin, M.C.: 'Multichannel ECG data compression by multirate signal processing and transform domain coding technique', IEEE Trans. Biomed. Eng., 1993, 40, (5), pp. 495–499
9 Chan, H.L., Siao, Y.C., Chen, S.W., Yu, S.F.: 'Wavelet-based ECG compression by bit-field preserving and running length encoding', Comput. Methods Prog. Biomed., 2008, 90, (1), pp. 1–8
10 Istepenian, R.S., Petrosian, A.: 'Optimal zonal wavelet-based ECG data compression for a mobile telecardiology system', IEEE Trans. Biomed. Eng., 2000, 4, (3), pp. 200–211
11 Manikandan, M.S., Dandapat, S.: 'Wavelet threshold based ECG compression using USZZQ and Huffman coding of DSM', Biomed. Sig. Proc. Control, 2006, 1, (4), pp. 261–270
12 Guler, N.F., Fidan, U.: 'Wireless transmission of ECG signal', J. Med. Syst., 2006, 30, (3), pp. 231–235
13 Kim, B.S., Yoo, S.K.: 'Performance evaluation of wavelet-based ECG compression algorithms for telecardiology application over CDMA network', Med. Inform. Internet Med., 2007, 32, (3), pp. 179–189
14 Wen, C., Yeh, M.F., Chang, K.C., Lee, R.G.: 'Real-time ECG telemonitoring system design with mobile phone platform', Measurement, 2008, 41, (4), pp. 463–470
15 Capua, C.D., Meduri, A., Morello, R.: 'A smart ECG measurement system based on web-service-oriented architecture for telemedicine applications', IEEE Trans. Instrum. Meas., 2010, 59, (10), pp. 2530–2538


16 Zheng, J.W., Zhang, Z.B., Wu, T.H., Zhang, Y.: 'A wearable mobihealth care system supporting real-time diagnosis and alarm', Med. Biol. Eng. Comput., 2007, 45, (9), pp. 877–885
17 Mitra, S., Mitra, M., Chaudhuri, B.B.: 'A rough-set-based inference engine for ECG classification', IEEE Trans. Instrum. Meas., 2006, 55, (6), pp. 2198–2206
18 Sufi, F., Khalil, I.: 'Diagnosis of cardiovascular abnormalities from compressed ECG: a data mining-based approach', IEEE Trans. Inf. Technol. Biomed., 2011, 15, (1), pp. 33–39
19 Sufi, F., Khalil, I.: 'Enforcing secured ECG transmission for realtime telemonitoring: a joint encoding, compression, encryption mechanism', Sec. Commun. Netw., 2008, 1, (5), pp. 389–405
20 Sufi, F., Fang, Q., Khalil, I., Mahmoud, S.S.: 'Novel methods of faster cardiovascular diagnosis in wireless telecardiology', IEEE J. Sel. Areas Commun., 2009, 27, (4), pp. 537–552
21 Physionet. Available at http://www.physionet.org, accessed 4 April 2011
22 Josko, A., Rak, R.J.: 'Effective simulation of signals for testing ECG analyzer', IEEE Trans. Instrum. Meas., 2005, 54, (3), pp. 1019–1024
23 Lamarque, G., Ravier, P., Dumez-Viou, C.: 'A new concept of virtual patient for real-time ECG analyzers', IEEE Trans. Instrum. Meas., 2011, 60, (3), pp. 939–946
24 Hilton, M.L.: 'Wavelet and wavelet packet compression of electrocardiograms', IEEE Trans. Biomed. Eng., 1997, 44, (5), pp. 394–402
25 Blanco, M.V., Cruz, R.F., Llorente, J.I.G., Barner, K.E.: 'ECG compression with retrieved quality guaranteed', IET Electron. Lett., 2004, 40, (23), pp. 1466–1467
26 Benzid, M., Marir, F., Boussaad, A., Benyoucef, M., Arar, D.: 'Fixed percentage of wavelet coefficients to be zeroed for ECG compression', IET Electron. Lett., 2003, 39, (1), pp. 830–831
27 Fira, C.M., Garos, L.: 'An ECG compression method and its validation using NNs', IEEE Trans. Biomed. Eng., 2008, 55, (4), pp. 1319–1326
28 Kim, H., Yazicioglu, R.F., Merken, P., Hoof, C.V., Yoo, H.J.: 'ECG signal compression and classification algorithm with quad level vector for ECG Holter system', IEEE Trans. Biomed. Eng., 2010, 14, (1), pp. 93–100
29 Ku, C.T., Hung, K.C., Wang, H.S., Hung, Y.S.: 'High efficient ECG compression based on reversible round-off non-recursive 1-D discrete periodized wavelet transform', Med. Eng. Phys., 2007, 29, (10), pp. 1149–1166
30 Chatterjee, H.K., Gupta, R., Mitra, M.: 'A statistical approach for determination of time plane features from digitized ECG', Comput. Biol. Med., 2011, 41, (5), pp. 278–284
