Using Information Technology and Artificial Intelligence to Build an E-Glass for the Blinds

KHALED AL-SARAYREH1, RAFA E. AL-QUTAISH1
1 Faculty of Information Technology, Applied Science University, Amman, JORDAN
[email protected], [email protected]

Key-Words: Embedded System, Neural Networks, Ultrasonic Waves, Artificial Intelligence

1. Abstract
Nowadays, IT and communication systems are used in a wide range of areas of our lives; we cannot imagine our lives without them. From this point we got the idea to build an e-glass which makes use of an embedded system and neural networks. This e-glass could be used by the blind to assist them on their way without any assistance from other persons. It is important to note that the hardware and software components of the e-glass are not expensive. When this glass is used by the blind, it will make them self-confident, let them walk independently, and raise their morale.

2. Introduction
The idea of building an e-glass will be treated as a first step in using technology as an integrated or replaceable part of a damaged human neural system; this will help to make life easier for the blind.

The proposed e-glass works through an electronic circuit that scans and collects information about all the objects that could be found in front of the blind person, then analyses these objects in order to give a voice command to the blind person to keep away from these obstacles (objects).
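For illustration only, the short Python sketch below shows one possible way to represent a detected object and to turn it into the kind of voice command described above. The Obstacle structure, its field names, and the 15-degree threshold are assumptions made for this sketch and are not taken from the e-glass design.

    from dataclasses import dataclass

    @dataclass
    class Obstacle:
        """A single object reported by the scanning circuit (hypothetical fields)."""
        distance_m: float    # distance from the user, in meters
        bearing_deg: float   # angle relative to straight ahead; negative means left

    def voice_command(obstacle: Obstacle) -> str:
        """Turn a detected obstacle into a short spoken instruction."""
        if obstacle.bearing_deg < -15:
            side = "on your left"
        elif obstacle.bearing_deg > 15:
            side = "on your right"
        else:
            side = "straight ahead"
        return f"Obstacle {side}, about {obstacle.distance_m:.0f} meters away."

    # Example: an object 3 m away, slightly to the right of the user.
    print(voice_command(Obstacle(distance_m=3.0, bearing_deg=25.0)))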

3. General View of the E-Glass
This e-glass works through ultrasonic waves by sending radar-like signals over distances from 2 to 50 meters, with a 120-degree vertical and horizontal coverage angle along with 60 degrees to cover the right and the left; as a result, the total coverage angle is 270 degrees. After the information is received from the radar system, it is analysed by the artificial intelligence software, which produces a warning about any obstacle (object) through a headphone.

One of the characteristics of this e-glass is that it can identify any small object with an area of 2 cm² or more from a distance of 2 meters.

As future work, this e-glass could be embedded into the electronic devices within cars. In addition, we could develop the embedded electronic circuit programs to receive the neural flows from the human brain and use pattern recognition to analyse the objects as images.

4. How the E-Glass Works?
4.1 First phase: the receiver (radar) is mounted on the glass and all the data collected from the radar are analysed. This analysis is done by the programs of the first electronic circuit; an electronic signal is then sent to the electronic warning and alarming device (the headphone).

4.2 Second phase: the electronic signal is sent from the first electronic circuit to the second electronic circuit, which hosts the artificial intelligence system and gives a scaled warning based on the degree of danger.
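As a rough illustration of the two phases described above, the following Python sketch converts a round-trip ultrasonic echo time into a distance using the standard relation d = c*t/2 (with c ≈ 343 m/s for sound in air), keeps only echoes inside the stated 2-50 m range and 270-degree coverage, and then assigns a scaled warning in a second stage. The function names, the three warning levels, and the distance thresholds are assumptions made for illustration; the paper does not specify them.

    # Hypothetical sketch of the two-phase processing described in Sections 3 and 4.
    # Phase 1: interpret raw ultrasonic echoes and filter them by the stated
    #          coverage (2-50 m range, 270 degrees in total).
    # Phase 2: grade each remaining obstacle by its degree of danger and
    #          produce a message for the headphone.

    SPEED_OF_SOUND_M_S = 343.0   # approximate speed of sound in air at 20 C
    MIN_RANGE_M, MAX_RANGE_M = 2.0, 50.0
    HALF_COVERAGE_DEG = 135.0    # 270-degree total coverage claimed in Section 3

    def echo_to_distance(echo_time_s: float) -> float:
        """Round-trip echo time to one-way distance: d = c * t / 2."""
        return SPEED_OF_SOUND_M_S * echo_time_s / 2.0

    def phase_one(echoes):
        """Phase 1: turn (echo_time_s, bearing_deg) pairs into in-range obstacles."""
        obstacles = []
        for echo_time_s, bearing_deg in echoes:
            distance_m = echo_to_distance(echo_time_s)
            if MIN_RANGE_M <= distance_m <= MAX_RANGE_M and abs(bearing_deg) <= HALF_COVERAGE_DEG:
                obstacles.append((distance_m, bearing_deg))
        return obstacles

    def phase_two(obstacles):
        """Phase 2: assign a scaled warning level to each obstacle (assumed thresholds)."""
        warnings = []
        for distance_m, bearing_deg in obstacles:
            if distance_m < 5.0:
                level = "danger"    # very close: immediate warning
            elif distance_m < 15.0:
                level = "caution"   # approaching: moderate warning
            else:
                level = "notice"    # far away: informational only
            warnings.append(f"{level}: obstacle at {distance_m:.1f} m, bearing {bearing_deg:+.0f} degrees")
        return warnings

    if __name__ == "__main__":
        # Two echoes: one about 3.4 m straight ahead, one about 17 m off to the left.
        sample_echoes = [(0.02, 0.0), (0.10, -60.0)]
        for message in phase_two(phase_one(sample_echoes)):
            print(message)   # in the real device this would be spoken via the headphone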

5. E-Glass Electronic Circuit
Figure 1: The E-Glass Embedded System.
Figure 2: The Electronic Scanning System to Tackle the Objects.
Figure 3: Sending and Receiving Signals Timing System.

Figure 4: The Completed E-Glass Electronic Circuit.

6. Conclusion
Research in the field of using IT to assist persons with special needs is limited, which makes this a distinctive work from the social point of view. Therefore, with this research we have just started a large effort to make the lives of persons with special needs easier.
