A neural network based model for urban noise prediction

N. Genaro, I. Requena
Department of Computer Science and Artificial Intelligence, University of Granada, 18071 Granada, Spain

Abstract. Noise is a global problem. Since 1972, when the WHO classified noise as a pollutant, most industrialized countries have decided to regulate noise through laws or local regulations. Scientists in many countries have tried to model urban noise with different techniques, but the results have not been as good as expected. Our approach tackles urban noise prediction with neural networks. After gathering data from a variety of streets in the city of Granada, Spain, we worked on records containing the variables that experts consider influential. The results obtained improve on those of the mathematical models, and the new system is able to predict noise with high accuracy: the percentage of test examples exceeding 5% error is always below 2%. To refine the model, an input-reduction technique, principal component analysis, has been applied. The number of inputs was reduced and, although the goodness of fit declined slightly, the values obtained remain quite acceptable.

1 Introduction

Man has lived with noise since the beginning of time. As early as the fifteenth century, Switzerland banned the circulation of carts in poor condition in order to avoid disturbing pedestrians and neighbors. Until recently, noise was regarded merely as an accidental by-product of human activity that people simply had to live with, but for a little more than thirty years the authorities have been responding to this serious problem. The first international declaration dealing with the consequences of noise came in 1972, when the World Health Organization (WHO) decided to catalogue noise generically as a form of pollution. Seven years later, the Stockholm Conference classified noise as a specific pollutant. Those first official resolutions were subsequently ratified by the then EEC, which required member countries to make an effort to regulate noise pollution legally. Later, a report published in 1990 ranked Spain as the country with the second highest noise level in the world after Japan, and estimated that 74% of the population was subjected to levels higher than tolerable.

In recent decades there has been a change in the acoustic conditions of cities (growth of the vehicle fleet, mechanization of activities, changes in the use of public roads, etc.), which has resulted in a strong increase in environmental noise levels. Faced with the evidence of a global problem, the majority of countries in the world have put, or are putting, in place measures to protect their citizens. Still, the problem persists even in places that already devote resources, laws and regulations to regulating, evaluating and damping noise sources or to building noise barriers.

1.1 Noise

Noise is unwanted sound. It can come from many sources; the ones of concern in this article are those related to urban noise. Urban noise is mostly produced by light vehicles, heavy vehicles and motorcycles. It is also caused by other sources, such as commercial or leisure areas (people, music bars, etc.) or events like ambulance or fire engine sirens. To measure noise we use the parameter Leq, universally accepted as the essential parameter for measuring noise. It represents the equivalent continuous noise level, measured in decibels with a sound level meter, and corresponds to the average energy of a time-varying sound level. Given the characteristics of the data that researchers handle and the results we seek, the application of soft computing techniques is not only desirable but also interesting, and it is in this context that this article is framed. The aim is to address two complementary aspects: firstly, to review the existing literature relating urban noise to soft computing techniques and, secondly, to design and implement a system that predicts noise levels with the help of artificial neural networks.
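For illustration only (this sketch is ours, not from the paper), Leq can be computed from a series of equally spaced short-term sound pressure level readings by energy-averaging them:

```python
import numpy as np

def leq(spl_samples_db):
    """Equivalent continuous sound level of equally spaced SPL readings:
    Leq = 10 * log10( (1/N) * sum_i 10**(L_i / 10) )."""
    levels = np.asarray(spl_samples_db, dtype=float)
    return 10.0 * np.log10(np.mean(10.0 ** (levels / 10.0)))

# Example: a few 1-second readings from a sound level meter
print(f"Leq = {leq([62.3, 65.1, 70.4, 68.2, 61.7, 66.0]):.1f} dB")
```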

2 Review of publications

We intend to make a detailed review of the international literature that links urban noise with soft computing methods. We organize this review, on the one hand, by the soft computing technique used and, on the other, into three types according to the treatment of environmental noise: prediction of noise level, prediction of noise annoyance and classification of noise, for which we found 24 articles. We also include traffic flow prediction, with 5 additional works. Most publications use fuzzy techniques for predicting noise annoyance and noise level. For classification, however, hidden Markov models have been the most widely used. There are few references to genetic algorithms and, as for artificial neural networks, they are applied in various fields, especially classification and traffic flow prediction. A taxonomy of the articles found is shown in Table 1.

Soft Computing Technique   | Noise level prediction | Noise annoyance prediction | Noise classification | Traffic flow prediction
Artificial Neural Networks | [2] [17]               | [35]                       | [3] [5] [20]         | [22] [23] [24] [27] [34]
Fuzzy Techniques           | [1] [18]               | [6] [7] [8] [9] [10] [11] [12] [13] [14] [4] [15] [16] [35] | - | [24] [34]
Hidden Markov Models       | -                      | -                          | [5] [19] [20] [25] [28] [29] | -
Genetic Algorithms         | [18]                   | [11] [12] [15]             | -                    | -

Table 1. Taxonomy of the publications

3 A neural network based model for urban noise prediction

For the study of urban noise in the city of Granada, we proposed creating a system that, by means of neural networks, would help us obtain more precise results. To that end, after a data collection campaign in the city of Granada covering a total of 12 streets with different characteristics, we designed an artificial neural network structure suited to the problem. The results are compared with data from the existing urban noise prediction models and confirm the goodness of our proposal.

3.1 Data collection process

The measurements were taken with a sound level meter since, although there is a wide range of acoustic instruments designed for long and short measurements, it is the only device that directly provides the value Leq. Specifically, we used the integrating sound level meter 2260 Observer, equipped with the BZ7219 software. The number and location of the measurement points needed to characterize the sound of an environmental area depend on the type of measurements to be made. Following the experts' criteria, data gathering was carried out in 12 streets of the city of Granada with disparate characteristics, seeking diversity in the values of the 25 measured variables. These variables are shown in Table 2.

1  Time of day                                   | 13 Emergence of abnormal events related to traffic
2  Commercial or leisure environment             | 14 Emergence of abnormal events not related to traffic
3  Construction works in the area                | 15 Average speed of vehicles
4  Stabilization time                            | 16 Road slope
5  Traffic flow type                             | 17 Number of lanes upward
6  Ascendant light vehicles flow                 | 18 Number of lanes downward
7  Descendant light vehicles flow                | 19 Type of the pavement
8  Ascendant motorcycle flow                     | 20 State of the pavement
9  Descendant motorcycle flow                    | 21 Type of the street
10 Ascendant heavy vehicles flow                 | 22 Width of the street
11 Descendant heavy vehicles flow                | 23 Average height of buildings in the road
12 Number of vehicles with siren                 | 24 Width of the road
25 Distance from the noise source to the receiver

Table 2. Input variables considered by the network

3.2 Artificial Neural Network for noise level prediction

Because of its ability to generalize and its suitability for supervised learning, we use a backpropagation neural network. The gradient method used was Levenberg-Marquardt. The neural network was built with MATLAB, which offers multiple functions for data processing with neural networks. We opted for 7 neurons in the hidden layer. The structure of this neural network in MATLAB is shown in Figure 1.

Fig. 1. Structure of the proposed neural network
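As an illustration only (the authors built the network in MATLAB with TRAINLM; the function names, initialization and toy data below are our own assumptions), the same 25-7-1 feed-forward structure can be sketched in Python and fitted with Levenberg-Marquardt:

```python
import numpy as np
from scipy.optimize import least_squares

N_IN, N_HID = 25, 7          # 25 normalized inputs, 7 hidden neurons, 1 output

def unpack(theta):
    """Split the flat parameter vector into layer weights and biases."""
    i = 0
    W1 = theta[i:i + N_HID * N_IN].reshape(N_HID, N_IN); i += N_HID * N_IN
    b1 = theta[i:i + N_HID]; i += N_HID
    W2 = theta[i:i + N_HID]; i += N_HID
    b2 = theta[i]
    return W1, b1, W2, b2

def forward(theta, X):
    """Two-layer feed-forward net: tanh hidden layer, linear output."""
    W1, b1, W2, b2 = unpack(theta)
    return np.tanh(X @ W1.T + b1) @ W2 + b2

def train(X, y, seed=0):
    """Fit the weights with Levenberg-Marquardt on the output residuals."""
    rng = np.random.default_rng(seed)
    theta0 = rng.normal(scale=0.1, size=N_HID * N_IN + 2 * N_HID + 1)
    return least_squares(lambda t: forward(t, X) - y, theta0, method="lm").x

# Toy usage shaped like one 200-record training set
X = np.random.rand(200, N_IN)   # inputs already scaled to [0, 1]
y = np.random.rand(200)         # normalized Leq target
theta = train(X, y)
```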

3.3 Data

After the data collection described above, we had a set of 289 data records, each represented by 26 features. One of these 26 characteristics is the equivalent noise level LAeq, our target, which serves as the output of the network, while the other 25 take the characteristics and factors that influence the noise level, i.e. the inputs of the neural network. The 25 input variables from Table 2 undergo a normalization process; some of them are transformed linearly into the range [0, 1]. In turn, it was necessary to normalize the output. Because the hearing range of a person spans roughly from 20 dB to 120 dB, we take these as the minimum and maximum values with which to normalize the output.
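A minimal sketch of this normalization (ours; the paper does not give the exact per-variable transformation, so plain min-max scaling is assumed for the inputs):

```python
import numpy as np

def minmax_scale(col):
    """Linearly map one input variable into [0, 1] (assumed min-max scaling)."""
    lo, hi = col.min(), col.max()
    return (col - lo) / (hi - lo) if hi > lo else np.zeros_like(col)

def normalize_output(leq_db, low=20.0, high=120.0):
    """Map Leq in dB to [0, 1] using the 20-120 dB hearing range."""
    return (leq_db - low) / (high - low)

def denormalize_output(y, low=20.0, high=120.0):
    """Invert the output normalization to recover dB values."""
    return y * (high - low) + low

# records: 289 x 26 array whose last column is the measured LAeq in dB (toy data)
records = np.random.rand(289, 26) * 100.0
X = np.column_stack([minmax_scale(records[:, j]) for j in range(25)])
y = normalize_output(records[:, 25])
```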

3.4 Results of the proposed neural network

Once the structure of the neural network had been selected (shown in Figure 1), we built 5 different data sets. The 5 data sets were constructed randomly from the 289 input records. We therefore obtained five sets of data, each containing a training set and a test set formed by different records: the training sets contain 200 records and the test sets the remaining 89 records. The network structure was run five times with each of the five data sets. In other words, five different sets of initial weights were obtained at random, and the network was trained once with each of these sets of weights for each training set. This gives us 25 trials and 25 results for the prediction of the noise level. The behavior of the network was assessed with the error with respect to the expected output, as well as with statistical measures: the mean and the standard deviation of the error. Table 3 contains the results for the third of the five data sets used. The table shows, for each of the 5 runs, the network parameters, such as the number of epochs during training, and the results. It also presents the results for the average output of the 5 runs, which is of interest in view of the theory of physical errors, given the error that the measuring apparatus itself can commit. To assess the results we consider the instances that exceed 5% error, although the experts accept errors of up to 9 dB, which represents approximately 12%. We also measure the mean and the standard deviation of the error, and we report the maximum error committed over the entire test set.

Dataset 3                                    | Run 1 | Run 2 | Run 3 | Run 4 | Run 5 | Average
Training / test set                          | 200 records / 89 records
Inputs / Neurons on hidden layer / Outputs   | 25 / 7 / 1
Training / Learning function                 | TRAINLM / LEARNGD
Goal / Learning rate                         | 10^-5 / Variable
Epochs                                       | 35    | 8     | 16    | 8     | 16    | -
Instances that exceed 5% error (Train/Test)  | 1/5   | 0/0   | 0/1   | 0/0   | 0/2   | 0/0
Test set maximum error (dB)                  | 5.27  | 3.20  | 4.08  | 2.72  | 3.92  | 3.10
Error mean (dB)                              | 0.79  | 0.68  | 0.59  | 0.69  | 0.59  | 0.67
Error standard deviation                     | 0.81  | 0.57  | 0.56  | 0.55  | 0.59  | 0.50

Table 3. Results of the five runs of the network, dataset 3
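A sketch of this evaluation protocol (ours), reusing the train, forward, normalize_output and denormalize_output helpers from the earlier sketches; the 5% criterion is interpreted here as 5% of the measured level in dB:

```python
import numpy as np

def evaluate(X, y_db, n_splits=5, n_runs=5, train_size=200, seed=0):
    """Random train/test splits, several runs per split, error statistics in dB."""
    rng = np.random.default_rng(seed)
    for split in range(n_splits):
        idx = rng.permutation(len(X))
        tr, te = idx[:train_size], idx[train_size:]
        for run in range(n_runs):
            theta = train(X[tr], normalize_output(y_db[tr]), seed=run)
            pred_db = denormalize_output(forward(theta, X[te]))
            err = np.abs(pred_db - y_db[te])
            n_fail = int(np.sum(err > 0.05 * y_db[te]))   # instances over 5% error
            print(f"split {split + 1} run {run + 1}: fail={n_fail} "
                  f"max={err.max():.2f} mean={err.mean():.2f} std={err.std():.2f}")
```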

As can be seen from Table 3, in all runs the number of epochs has been very low, especially in sets 2 and 5. This means that training is very fast, which makes it easy to use an already trained ANN in virtually real time.

In all sets, the majority of runs learn correctly and, in the test set, at most 2 instances (3 in set 5) exceed 5% error with respect to the expected output. Only in one case (the first run of set 3) does the number of such instances reach 5, due to a poor random initialization of the weights. Except for sets 1 and 5, there are runs in which the whole test set stays below 5% output error. Only one case exceeds 12%, the error accepted by the experts in noise measurement. Furthermore, the average number of instances that exceed 5% error does not reach 2 (the average over the 25 runs is 1.92). The maximum error is 14.47 dB, which is the only error greater than 9 dB in the 25 executions. The average error is kept very low (below 0.79 dB), which indicates very good global learning.

The standard deviation remains steady between 0.55 and 0.7 in most of the runs, with two exceptions. In set 4, run 4, it is 1.05, because of the instance whose error is 14.47 dB. In set 3, run 1, it is 0.81, since the maximum error is somewhat high (5.27 dB) and the average error rises to 0.79 dB, a figure somewhat high compared with the rest of the runs. This data set also has the particularity of containing the only training instance with an error over 5% of the expected value. This confirms that the global learning achieved by the ANN is very good.

The outcome of the average output (sixth column of the table) deserves special consideration. As already indicated, the use of these values is clearly justified by the theory of physical errors. In all data sets learning is correct and, in the test set, only for set 4 does one instance have an error over 5%. The average error is always less than 0.71 dB and the standard deviation does not exceed 0.83, indicating excellent global learning. The fact that the training of the proposed ANNs is very fast, and that they can be trained at the same time, supports using the outputs of several ANNs to compute the output of the proposed system.

In view of these results, where we obtain a very low average error over the 25 executions (0.716 dB), and taking into account that, as just outlined, experts accept errors of up to 9 dB, we are faced with excellent results, since they amount to reducing the admissible error by 92%. The average of the standard deviations over the 25 runs was 0.786, also a very good value given that the mean error is low.

Figure 2 shows the chart comparing the measured noise level with the one obtained for the test set; we illustrate the results for data set 3. The green line indicates 5% of 60 dB (3 dB) and the red one 5% of 80 dB (4 dB). Note that no data point goes beyond these limits.

3.5 Comparison between the neural network proposal and the existing predictive models

The aim of this neural network is to improve the performance of the existing predictive models. These models calculate the noise level from the characteristics of the environment by means of mathematical calculations.

Fig. 2. Comparative graph of test set 3, run 4

To verify that our approach is able to obtain a noise level closer to the real one, we compare, in a graph, each of the models with the results obtained by the network. Charts were obtained for each group of data; here we show only the chart for the first data set. This chart, which compares the measured output of the test set with the output of the neural network and of each model, is shown in Figure 3. As can be seen in Figure 3, the models closest to the actual noise levels are the linear model Granada I [33] and that of E. Gaja [26], although the neural network is at all times the one that best fits the expected level. One of the strengths of the network shows up in examples where the flow of vehicles is zero: the network approximates these very well, while the predictive models give a very low value. This is because they rely primarily on the vehicle flow as the basis of their calculation and, when the flow is zero, the output noise level comes only from the correction terms.
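As a purely generic illustration (the paper does not reproduce the Granada I or E. Gaja equations), flow-based predictors share the shape Leq ~ a + b*log10(Q) + corrections, so with zero vehicle flow only the correction terms remain:

```python
import numpy as np

def generic_flow_model(q_veh_per_hour, a, b, corrections=0.0):
    """Generic shape of a flow-based road traffic noise predictor
    (illustrative only, with arbitrary coefficients a and b)."""
    q = np.asarray(q_veh_per_hour, dtype=float)
    flow_term = np.where(q > 0, a + b * np.log10(np.maximum(q, 1e-9)), 0.0)
    return flow_term + corrections   # Q = 0 leaves only the corrections
```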

3.6 Input reduction through principal component analysis

The ANN receives as input data records consisting of 25 variables, whereas the mathematical models use only a limited number of inputs. This suggests that there may be variables that are irrelevant for predicting the noise level. It is therefore worth considering a study of data reduction and/or feature selection.

Fig. 3. Comparative chart of the results of the classic models and the neural network

As a first approach to feature selection, we applied principal component analysis, looking for relationships between the data in order to reduce the number of inputs as much as possible. MATLAB offers functions for this kind of input preprocessing.
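An illustrative sketch of this reduction (the authors used MATLAB; here scikit-learn's PCA is used instead, and keeping 11 components mirrors the reduction reported below):

```python
import numpy as np
from sklearn.decomposition import PCA

def reduce_inputs(X, n_components=11):
    """Project the 25 normalized inputs onto the first principal components."""
    pca = PCA(n_components=n_components)
    X_reduced = pca.fit_transform(X)       # shape: (n_records, n_components)
    print("explained variance ratios:", pca.explained_variance_ratio_.round(3))
    return X_reduced, pca

# X: 289 x 25 matrix of normalized inputs (toy data here)
X11, pca = reduce_inputs(np.random.rand(289, 25))
```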

Applying the analysis to our set of 25 variables with 289 records each (a 25x289 matrix), we obtain a new array of 11 rows and 289 columns. In other words, the inputs have been reduced from 25 to 11. These inputs do not correspond to any of the original variables but are linear combinations of them, which suggests that reducing the inputs through feature selection is not immediate and requires further study to be addressed in the future.

Although the results, as expected, are not as spectacular as those in the previous subsection, we consider them relevant, since the data set has been reduced considerably. They are presented in Table 4 as a summary of the results of 5 executions of the ANN for data set 4. From Table 4, we see that the training time is reduced compared with that of the data without preprocessing. As with the data without preprocessing, sets 1, 3 and 4 are the ones that require more time in training. Likewise, we obtain a low average error in data sets 1 and 4, which coincide with two of the three sets with the highest number of epochs in training. On the other hand, the five runs behave differently according to the data, which is consistent with the random construction of the training and test sets.

Dataset 4                                    | Run 1 | Run 2 | Run 3 | Run 4 | Run 5 | Average
Training / test set                          | 200 records / 89 records
Inputs / Neurons on hidden layer / Outputs   | 11 / 7 / 1
Training / Learning function                 | TRAINLM / LEARNGD
Goal / Learning rate                         | 10^-5 / Variable
Epochs                                       | 13    | 14    | 14    | 14    | 15    | -
Instances that exceed 5% error (Train/Test)  | 0/5   | 1/3   | 0/3   | 2/4   | 0/5   | 0/3
Test set maximum error (dB)                  | 8.12  | 6.71  | 7.08  | 9.40  | 6.80  | 6.70
Error mean (dB)                              | 0.88  | 0.91  | 0.82  | 0.99  | 0.83  | 0.89
Error standard deviation                     | 0.93  | 0.85  | 0.80  | 1.04  | 0.82  | 0.78

Table 4. Results of the five runs of the network, dataset 4 (Principal Components Analysis)

The average number of instances that exceed 5% error is higher, with an average of 6 per set. We find a total of 48 instances in which the 5% error was exceeded, with a maximum error of 15 dB. This is not the only error greater than 9 dB, the error allowed by the experts: there are 17 others. Given that the tests cover a total of 7225 data records (25 data sets of 289 records each), 17 such errors do not, however, represent a large percentage. If we look at the average error over all the data, we get 0.92 dB, which is more than acceptable.

Fig. 4. Comparative chart of test set 5, run 1

We often find the same example being mispredicted by the network in all of its executions, which is visible in data sets 1 and 4. For set 4 we find three examples in which all runs produced an error greater than 5%; the same happens twice for set 1. These are neither instances whose output falls within a particular range, nor do their input values give any hint from which we could infer why the network has not learned them properly. Moreover, in both sets the erroneous instance coincides three times; that is, in data sets 1 and 4 the error rises above 5% in examples 104, 126 and 186. In the remaining sets, the results for these 3 records were inspected and they are also high. Even when these instances belong to the training set, it is clear that the network tends to be confused by them.

The standard deviation stays around 1, although in set 1 it falls a little and in set 5 it rises noticeably. The average of the standard deviations of the 25 executions was 1.03, an acceptable value given that the average error is low. The analysis of the average output of the five runs improves the outcomes of each individual ANN, which again justifies its use.

To conclude this section, we consider the use of principal component analysis interesting for our problem because, although we did not obtain results as good as those without input preprocessing, we achieved a significant reduction of the input set. Figure 4 shows the chart comparing the measured noise level with the one obtained for the test set of data set 5. The green line indicates 5% of 60 dB (3 dB) and the red one 5% of 80 dB (4 dB). Only one instance exceeds the allowable limits.

4 Conclusions and future work

This work dealt with two main objectives: firstly, a review of the current literature that brings together urban noise and soft computing techniques; secondly, the creation of a neural network based system for the prediction of urban noise.

Regarding the first objective, we made an intensive search in different bibliographic databases, obtaining a total of 30 articles related to the topic. We classified them according to the soft computing technique used and to the treatment of noise adopted, and concluded that there is little and not very varied material on the subject, with a few authors producing most of the articles. In addition, the cross-references between articles on the issue are very scarce. These facts encouraged us to continue, and we confirm that the contribution made and the proposed lines of work are innovative.

As for the second objective, we designed and implemented an artificial neural network that predicts the level of urban noise given some characteristics of the environment. We opted for a backpropagation network structure with 7 neurons in the hidden layer, and the training algorithm was Levenberg-Marquardt. From this structure, 5 different data sets were created and 5 runs were made for each of them. We obtained excellent results from these 25 tests, with an average error below 1 dB and surpassing the error allowed by the experts on only one occasion. The results given by the network were compared with those of the existing predictive models of urban noise, based on mathematical calculations. The tests confirmed that the results obtained by the network are better for all data records.

Since the network receives a high number of inputs, we finally implemented a method for reducing them, based on principal component analysis. The inputs were reduced from 25 to 11 and, although the goodness of fit declined slightly, the results were acceptable. We consider this a very positive solution, given the undoubted advantage of the reduced data set. Using the mean of the results of the 5 runs with the same training set (and thus the same test set) as the computed output improves the results.

As future work, several proposals have emerged from this research:
– New data gathering is currently being carried out in the city of Granada. We intend to continue our research with these data, as they give us the opportunity to test the network with new examples.
– The application of the input-reduction method was positive, always bearing in mind that this was a first approximation to a future application of feature selection techniques. These techniques will be applied to the input data with the goal of eliminating variables that are not particularly influential.

References

1. Aguilera de Maya, J.L. "Método de predicción de ruido urbano basado en Teoría Fuzzy". XXVIII Jornadas Nacionales de Acústica y Encuentro Ibérico de Acústica. Oviedo. Noviembre 1997.
2. Avsar, Y.; Saral, A.; Gnll, M.T.; Arslankaya, E.; Kurt, U. "Neural Network Modelling of Outdoor Noise Levels in a Pilot Area". Turkish J. Eng. Env. Sci. 28, pp. 149-155, 2004.
3. Berg, T. "Classification of environmental noise by means of neural networks". Trondheim, Norway. Forum Acusticum, Sevilla. Septiembre 2002.
4. Beritelli, F.; Casale, S.; Ruggeri, G. "New Results in Fuzzy Pattern Classification of Background Noise". Proceedings of ICSP2000, pp. 1483-1486. 2000.
5. Betkowsa, A.; Shinoda, K.; Furui, S. "Model Optimization for Noise Discrimination in Home Environment". Symposium on Large-Scale Knowledge Resources (LKR2005), Tokyo, Japan, pp. 167-170. 2005.
6. Botteldooren, D.; Verkeyn, A.; De Cock, M.; Kerre. "Generating Membership Functions for a Noise Annoyance Model from Experimental Data". Soft computing in measurement and information acquisition (L. Reznik, V. Kreinovich, eds.), Studies in Fuzziness and Soft Computing 127, Springer-Verlag, pp. 51-67, 2003.
7. Botteldooren, D.; Lercher, P. "Soft-Computing base analysis of the relationship between annoyance and coping with noise and odor". Journal of the Acoustical Society of America, vol. 115 (6), pp. 2974-2985. June 2004.
8. Botteldooren, D.; Verkeyn, A. "Fuzzy models for accumulation of reported community noise annoyance from combined sources". Journal of the Acoustical Society of America, vol. 112, pp. 1496-1508. July 2002.
9. Botteldooren, D.; Verkeyn, A. "An iterative fuzzy model for cognitive processes involved in environment quality judgement". Proceedings of FUZZ-IEEE 2002, Hawaii, USA, 2002.
10. Botteldooren, D.; Verkeyn, A.; Cornelis, C.; De Cock, M. "On the meaning of noise annoyance modifiers: A Fuzzy Set Theoretical Approach". Acta Acustica united with Acustica, vol. 88, pp. 239-251. 2002.
11. Botteldooren, D.; Verkeyn, A.; Lercher, P. "A fuzzy rule based framework for noise annoyance modelling". Journal of the Acoustical Society of America, vol. 114, pp. 1487-1498. 2003.
12. Botteldooren, D.; Verkeyn, A.; Lercher, P. "Noise annoyance modelling using fuzzy rule based systems". Noise Health, vol. 4, pp. 27-44. 2002.
13. Botteldooren, D.; Verkeyn, A. "Aggregation of specific noise annoyance to a general noise annoyance rating: a fuzzy model". Proceedings of the 9th International Congress on Sound and Vibration, Orlando, Florida, USA, July 2002.
14. Botteldooren, D.; Verkeyn, A. "Annoyance Prediction with Fuzzy Rule Bases". In: Da Ruan et al. (eds.), Computational Intelligent Systems for Applied Research, Proceedings of the 5th International FLINS Conference, World Scientific, Singapore. 2002.
15. Botteldooren, D.; Verkeyn, A. "Fuzzy modelling of traffic noise annoyance". Joint 9th IFSA World Congress and 20th NAFIPS International Conference, vol. 2, pp. 1176-1181. 2001.
16. Botteldooren, D.; Verkeyn, A. "The effect of Land-Use variables in a Fuzzy Rule Based Model for Noise Annoyance". The 2002 International Congress and Exposition on Noise Control Engineering. Dearborn, MI, USA, August 2002.
17. Cammarata, G.; Cavalieri, S.; Fichera, A. "A Neural Network Architecture for Noise Prediction". Neural Networks, vol. 8, no. 6, pp. 963-973, 1995.
18. Caponetto, R.; Lavorgna, M.; Martinez, A.; Occhipinti, L. "GAs for fuzzy modeling of noise pollution". First International Conference on Knowledge-Based Intelligent Electronic Systems, pp. 219-223. May 1997.
19. Couvreur, C.; Fontaine, V.; Gaunard, P.; Mubikangiey, C.G. "Automatic Classification of Environmental Noise Events by Hidden Markov Models". Applied Acoustics, vol. 54, no. 3, pp. 187-206, 1998.
20. Couvreur, L.; Laniray, M. "Automatic Noise Recognition in Urban Environments Based on Artificial Neural Networks and Hidden Markov Models". The 33rd International Congress and Exposition on Noise Control Engineering (Inter-noise), Prague. 2004.
21. Detailed French Model. Guide du Bruit (1980). "Préliminaires aux études de bruit: Partie II". 1991.
22. Dia, H. "An object-oriented neural network approach to short-term traffic forecasting". European Journal of Operational Research, vol. 131, pp. 253-261. 2001.
23. Dougherty, M.S.; Cobbett, M.R. "Short-term inter-urban traffic forecasts using neural networks". International Journal of Forecasting, vol. 13, pp. 21-31. 1997.
24. Fortuna, L.; Occhipinti, L.; Vinci, C.; Xibilia, M.G. "A Neuro-Fuzzy model of Urban Traffic". Proceedings of the 37th Midwest Symposium on Circuits and Systems, vol. 1, pp. 603-606. Aug 1994.
25. Gaunard, P.; Mubikangiey, C.G.; Couvreur, C.; Fontaine, V. "Automatic classification of environmental noise events by Hidden Markov models". Proceedings of the 1998 IEEE International Conference on Acoustics, Speech and Signal Processing, vol. 6, pp. 3609-3612. May 1998.
26. González, A.E.; Gaja Díaz, E.; Jorysz, A.; Torres, G. "Desarrollo de un modelo predictivo de ruido urbano adaptado a la realidad de la ciudad de Montevideo, Uruguay". Tecnoacústica Madrid. Ref. PACS: 43.50.Ba. 2000.
27. Ledoux, C. "An urban traffic flow model integrating neural network". Transportation Research Part C, vol. 5, no. 5, pp. 287-300. Elsevier Science. 1997.
28. Ma, L.; Smith, D.J.; Miller, B.P. "Environmental Noise Classification for Context-Aware Applications". Proc. DEXA (LNCS 2736), pp. 360-370, 2003.
29. Ma, L.; Smith, D.J.; Miller, B.P. "Context Awareness using Environmental Noise Classification". Proc. Eurospeech 2003, Geneva, Switzerland, pp. 2237-2240, 2003.
30. Ministerio de Obras Públicas, Transportes y Medio Ambiente. "Guías metodológicas para la elaboración de estudios de impacto ambiental: Carreteras y ferrocarriles". Secretaría de Estado para las Políticas del Agua y el Medio Ambiente, 1995.
31. Nordic Prediction Method for Road Traffic Noise (Statens Planverk 96). Nordic Countries. 1996.
32. Richtlinien für den Lärmschutz an Straßen (RLS 90). Germany. 1990.
33. Torija, A.J.; Ruiz, D.P.; Ramos, A. "Modelo Lineal Multivariante de predicción de descriptores de ruido en la ciudad de Granada. Uso del L50 para la descripción del ruido de tráfico rodado". Universidad de Granada. Tecnoacústica Gandía. Ref. PACS: 43.50.Ba. 2006.
34. Yin, H.; Wong, S.C.; Xu, J.; Wong, C.K. "Urban traffic flow prediction using a fuzzy-neural approach". Transportation Research Part C: Emerging Technologies, vol. 10, no. 2, pp. 85-98. Elsevier. April 2002.
35. Zaheeruddin; Garima. "A neuro-fuzzy approach for prediction of human work efficiency in noisy environment". Applied Soft Computing, vol. 6, pp. 283-294, 2006.