Journal of Computer Science and Control Systems

JCSCS - Journal of Computer Science and Control Systems, Vol. 8, Nr. 1, May 2015
http://electroinf.uoradea.ro/index.php/reviste/jcscs.html

University of Oradea, Faculty of Electrical Engineering and Information Technology
Academy of Romanian Scientists
University of Oradea Publisher

ISSN: 1844-6043


EDITOR-IN-CHIEF
Eugen GERGELY - University of Oradea, Romania

EXECUTIVE EDITORS
Gianina GABOR - University of Oradea, Romania
Daniela E. POPESCU - University of Oradea, Romania
Helga SILAGHI - University of Oradea, Romania
Viorica SPOIALA - University of Oradea, Romania

ASSOCIATE EDITORS
Mihail ABRUDEAN - Technical University of Cluj-Napoca, Romania
Angelica BACIVAROV - University Politehnica of Bucharest, Romania
Valentina BALAS - "Aurel Vlaicu" University of Arad, Romania
Eugen BOBAŞU - University of Craiova, Romania
Dumitru Dan BURDESCU - University of Craiova, Romania
Toma Leonida DRAGOMIR - "Politehnica" University of Timisoara, Romania
János FODOR - Szent Istvan University, Budapest, Hungary
Voicu GROZA - University of Ottawa, Canada
Štefan HUDÁK - Technical University of Kosice, Slovakia
Geza HUSI - University of Debrecen, Hungary
Ferenc KALMAR - University of Debrecen, Hungary
Jan KOLLAR - Technical University of Kosice, Slovakia
Mohamed Najeh LAKHOUA - University of Carthage, Tunisia
Anatolij MAHNITKO - Riga Technical University, Latvia
Ioan Z. MIHU - "Lucian Blaga" University of Sibiu, Romania
Emilia PECHEANU - "Dunărea de Jos" University of Galaţi, Romania
Constantin POPESCU - University of Oradea, Romania
Dumitru POPESCU - University Politehnica of Bucharest, Romania
Luminiţa POPESCU - "Constantin Brâncuşi" University of Tg. Jiu, Romania
Alin Dan POTORAC - "Stefan cel Mare" University of Suceava, Romania
Ioan ROXIN - Universite de Franche-Comte, France
Ioan SILEA - "Politehnica" University of Timisoara, Romania
Lacramioara STOICU-TIVADAR - "Politehnica" University of Timisoara, Romania
Lorand SZABO - Technical University of Cluj-Napoca, Romania
Janos SZTRIK - University of Debrecen, Hungary
Honoriu VĂLEAN - Technical University of Cluj-Napoca, Romania

This volume includes papers on the following topics: Automation, manufacturing control and robotics; Cyber-physical systems; Databases and information systems; Dependable computing; Data security and cryptology; Medical electronics; Multi-agent systems; Photovoltaic systems; Printed circuit boards; Renewable energy; Structured analysis; Surface-mount devices; System modeling and simulation; Wireless sensor networks.


CONTENTS

BEN JOUIDA Haithem, LAKHOUA Mohamed Najeh, GHARBI Rached (University of Tunis, Tunisia; University of Carthage, Tunisia)
System Analysis and Quality Operation of a Photovoltaic System

DRĂGHICIU Nicolae, KUN Simona (University of Oradea, Romania; Connect Group Belgium, Connectronics Romania, Control Department)
Reducing the Cost of the Production Process. Selective Soldering - Continuous Improvement

KOVENDI Zoltan, RADA Ioan Constantin, MAGDOIU Liliana, CORHA Alin, BONDICI Cristian (University of Oradea, Romania; Technical University of Cluj-Napoca, Romania)
Checking Algorithms on Differential Equations with Known Analytical Solution

MARCU Florin, DRĂGHICIU Nicolae (University of Oradea, Romania)
Mechanical Vibration Producing Device for the Use of the Sportsmen Training

SANISLAV Teodora, MICLEA Liviu (Technical University of Cluj-Napoca, Romania)
A Dependability Modeling Approach for Cyber-Physical Systems

SITAULA Chiranjibi (Tribhuvan University, Kathmandu, Nepal)
A Comparative Study of Data Mining Algorithms for Classification

VEGH Laura, MICLEA Liviu (Technical University of Cluj-Napoca, Romania)
Authenticity, Integrity and Secure Communication in Cyber-Physical Systems



System Analysis and Quality Operation of a Photovoltaic System

BEN JOUIDA Haithem¹, LAKHOUA Mohamed Najeh², GHARBI Rached¹
¹ University of Tunis, Tunisia, ESSTT, 5 Street Taha Hussein, Tunis 1008, U.R: LSDE_C3S, E-Mail: [email protected]; [email protected]
² University of Carthage, Tunisia, ENICarthage, 45 Rue des Entrepreneurs, Charguia II 2035, U.R: SMS, E-Mail: [email protected]

Abstract - Due to the complexity of the modeling of production systems, it is necessary to adopt a structured analysis and development methodology. Indeed, the majority of analysis and design methods focus on the processing of information in production systems. The purpose of this paper is to propose an approach based on the use of the OOPP (Oriented Objectives Project Planning) system analysis method for the representation of renewable energy (RE) systems. This approach is characterized by the combined use of analytical methods and quality tools applied to photovoltaic systems.

Keywords: photovoltaic system; system analysis; quality tools; renewable energy.

I. INTRODUCTION

Energy has always been viewed as a vital issue for humans and society. Indeed, human behavior is strongly shaped by its availability or non-availability, as well as by its abundance or scarcity. Such behaviors can generate new issues, especially in environmental and socio-economic balances [1]. Given the importance of these issues (global warming, resource depletion, increased costs...), a more rational use of energy as well as an optimization of energetic processes become important tasks.

As the energy production system is complex, because of the interdependence of its various functions, its analysis and design are usually realized using global approaches. Hence, the approach proposed to conduct such an action is a systemic approach. The purpose of this paper is to present an application of system analysis and quality tools in order to represent a renewable energy (RE) system. A case study of a photovoltaic system is presented.

II. NEED FOR SYSTEM ANALYSIS

Scientific and technical developments of the systemic approach have generated several names for it [2]: systems analysis, structural analysis... The main purpose of these reflections is to find a multi-disciplinary approach which organizes knowledge, creates a universal design language and controls complex sets. Moreover, based on the existing literature, the search for a more synthetic approach is driven by the three following obstacles [3]:
• The first is caused by the variety of outputs generated by scientific development for the same studied object.
• The second reflects the need for an effective method for designing and managing systems that join humans and machines. Such a concern first emerged for technological systems and was then enlarged to cover the decision-making level and the organization of business and socio-economic systems.
• The third corresponds to the need for a multidisciplinary approach that allows the restructuring of knowledge. It results in the search for a common model for a wide variety of systems.

Besides, the definition of the systemic concept is given through two features [4]:
• The analytical feature, which consists in describing, understanding, explaining or predicting socio-organizational phenomena.
• The dynamic feature, which leads to finding the detailed rules accompanying change and to deciding about the actions to be implemented to drive it.

Hence, a comprehensive analysis approach is needed to understand the organizational complexity of a given process. The systemic analysis is structured sequentially in steps: the observation of the system by various observers under various aspects, the analysis of interactions and regulatory chains, the modeling that takes into account the evolution of the system, and the simulation and real tests carried out in order to reach consensus. Thus the system approach allows two types of analysis, according to the structure and to the function of the system [5] [6].


Various analysis methods are proposed in the literature to carry out global analysis from different points of view, such as the SADT method (Structured Analysis and Design Technique), Petri nets, and the OOPP method (Oriented Objectives Project Planning).

III. PRESENTATION OF THE QUALITY CONCEPT

In today's competitive environment, quality is viewed as a main goal in order to respond in an optimal way to customers' needs [7]. In the same context, ISO 9000:2000 defines quality as the "fitness of a set of inherent characteristics to fulfill requirements" [8]. In addition, NF X 50-120 defines quality as "the set of features of an entity that bear on its ability to satisfy stated and implied needs" [9]. Similarly, Dessimoz [10] considers quality as a business process integrated with other processes, such as the production and maintenance processes.

Quality includes several aspects according to the needs and requirements of the various stakeholders [11]. For a client or a user, quality refers to the ability of a given product or service to satisfy the needs of its users. For the production activity, the quality of a production system lies in its ability to produce at lower cost and in a short time while satisfying consumers' needs. For the company or organization, quality consists in the implementation of a policy which tends towards the permanent mobilization of all staff for possible improvements of:
• the quality of products and services;
• the effectiveness of the functionality of the production process;
• the relevance and coherence of objectives.

Indeed, the various aspects of quality are summarized in two forms:
• The external quality, which corresponds to customer satisfaction. This means providing products or services that meet customers' expectations and improve the company's participation in the market [12].
• The internal quality, which corresponds to the improvement of the internal operation of the company. The purpose of internal quality is to implement efficient means that describe the organization and to identify and minimize possible dysfunctions. The main objective is to control and improve the products' quality. Such internal quality generally involves a step of identification and formalization of internal processes, carried out through a participatory approach of systems analysis [12].

Thus, the term "quality" cannot be defined from one aspect only, as it is a point of view that covers all activities concerning the product, the production processes, the organizational structures and the customers. In fact, a reflection on the quality aspect must take into account

the relevant parameters of the different interacting processes in a productive system.

Quality tools are instruments designed to achieve, efficiently, the sustainable development of a production system; they allow identifying and supporting improvement and change activities. Such tools are numerous and diverse. Here we outline some basic quality tools retained by the literature [13]: the cause-and-effect (Ishikawa) diagram; the control chart; the Pareto diagram; graphics; the data record sheet; the histogram; the correlation diagram; the defect concentration diagram; the flow diagram.

IV. GENERALITIES OF RENEWABLE ENERGY

Renewable energy (RE) is an attractive alternative to current energy sources. According to [14], researchers have shown the importance of RE for the durability of the global energy portfolio. Consequently, studies should be based on a systems approach that responds effectively to the demand for change. The systemic approach seeks to identify the specific configuration of renewable energy to be considered in order to accompany these changes [14].

In the last few years, some types of RE have been strongly developed, and the production of this energy has also increased. Nevertheless, production is still lower than the required energy, and renewable energy fills a small place compared with other energy resources. Renewable energies are provided by the sun; a well-studied approach based on mathematical models leads to the analysis of the characteristics of solar energy in Tunisia [15]. A study of the optimization of, and the comparison between, renewable and non-renewable energy systems is presented by El Badawe et al. [16]: an adequate and effective solution combining wind and solar energy with a diesel generator was adopted, and the results obtained using the RE system are efficient and profitable [16].

Renewable energies cause no or little waste and polluting emissions. They participate in the fight against the greenhouse effect and CO2 emissions, and they also facilitate the rational management of local resources. Today, renewable energy falls within the idea of sustainable development and environmental preservation. According to Atwa [17], a planning model that focuses on energy infrastructure and RE enables the development of RE for analysts, investors and policy makers in order to plan a long-term energy infrastructure; the results of the planning model demonstrate the minimization of the transmission investment cost [17].

All the world's countries are engaged in this field. In addition, many individuals around the world are investing in systems that produce "clean" energy.


Numerous companies operate in this area and generate more and more jobs. Renewable energies have become an economic market.

V. CASE STUDY OF A PHOTOVOLTAIC SYSTEM

Photovoltaic technology is booming. Around the world, many operating possibilities are examined and tested in the hope of future commercialization [18]. However, the price-reduction forecasts for photovoltaic modules were overly optimistic, and the photovoltaic industry is in a difficult situation. Indeed, the complexity of the manufacturing process of photovoltaic modules, together with low production yields, leads to higher costs that affect the sales volume. One can hope that in the coming years photovoltaic technology will reach "maturity" (simplified processes, good yields), increasing the production volume and thus reducing the cost of the modules. According to an applied simulation, the optimization of a photovoltaic system involves the cost factor, the efficiency and the price [19] [20]. Despite these difficulties, the evolution of the technology and of the photovoltaic market is generally positive. A design and control study of a photovoltaic system was carried out by Pigazo et al. [21], focusing on an e-learning-based technology platform, its content and services, in order to obtain good performance and long-term developments. According to Bialasiewicz et al. [22], a simulation was made to justify an energy mix consisting of photovoltaic energy and generator processes, which play a very important role in balancing the energy level and ensuring good energy quality.

A. Need for the exploitation of renewable energy in Tunisia

Tunisia is a very sunny country; as a result, the exploitation of energy sources such as solar energy becomes profitable, especially for stand-alone sites [23]. Moreover, a considerable energy deficit has been identified in Tunisia in recent years. The curve presented in Fig. 1 shows the importance of developing a strategy for exploiting RE sources.

Fig. 1. Energy balance [23].

According to the forecast increase in the cost of the conventional kWh (Fig. 2), the Tunisian government foresees support plans to encourage investors to develop the RE sector [23].

Fig. 2. Evolution of the purchase price of the PV kWh and of the cost of the conventional kWh [23].

B. Case of a stand-alone installation

A stand-alone installation is used to supply a household through the PV modules while keeping the connection to the conventional power grid. The photovoltaic installation covers the electrical energy needs of the home and can also inject the excess production of electrical energy into the conventional network. To this effect, the facility is bidirectional: it receives electricity from the network in case of a lack of energy, and it feeds power into the grid in case of excess energy. Moreover, the rational exploitation of this facility and the control of consumption minimize costs and let the user benefit from free energy. The installation is illustrated in the following diagram (Fig. 3).

Fig. 3. Diagram of a home supply network (sun, PV panel, control system, DC/AC converter, energy counter and network, with switches I1, I2, I3 and contactors C1, C2).


C. Result of the OOPP analysis

The OOPP method is considered a powerful tool for communication, analysis and planning of any project, whatever its nature and location [24] [25] [26]. It consists of three basic steps: a problem analysis step, a goal analysis step and an activities planning step. The overall goal (OG) of the OOPP analysis is "the operational quality analysis of an insured photovoltaic source"; it is refined into five specific objectives (SO), detailed in 10 results (R), which are in turn divided into one hundred and three activities (A). From the analysis, we can identify three states to be managed for the stand-alone system:
• The state of production and consumption with injection of the excess energy into the network (E1).
• The state of production and consumption with total disconnection from the network (E2).
• The state of production and consumption with drawing the missing energy from the network (E3).

TABLE 1. OOPP analysis.

N°   Code      Designation
1    OG        The operational quality analysis of a photovoltaic source assured
2    OS1       The operational quality analysis of the system for capturing solar rays provided
3    R1.1      Orientation analysis of the solar panels provided
4    A1.1.1    Choose the location of the panels
5    A1.1.2    Choose the orientation direction of the panels
6    A1.1.3    Select the panels' tilt angle
7    R1.2      Connection analysis of the solar panels provided
8    A1.2.1    Connect the panels in series
9    A1.2.2    Ensure the quality of the series connection
10   A1.2.3    Connect the panels in parallel
11   A1.2.4    Ensure the quality of the parallel connection
12   A1.2.5    Connect the panels in mixed mode
13   A1.2.6    Ensure the quality of the mixed connection
14   OS2       The operating quality analysis of the conversion system carried out
15   R2.1      Continuous improvement of the quality of the insured conversion
16   A2.1.1    Identify problems with the QQOQCP method
17   A2.1.2    Rank the issues in order of priority
18   A2.1.3    Find relevant data by the "Ishikawa diagram" quality method
19   A2.1.4    Assess relevant data by the "Ishikawa diagram" quality method
20   A2.1.5    Research and evaluate the relevant data by the "6M" quality method
21   A2.1.6    Assess relevant data by the "6M" quality method
22   A2.1.7    Find relevant data by the "Pareto diagram" quality method
23   A2.1.8    Assess relevant data by the "Pareto diagram" quality method
24   A2.1.9    Collect the relevant data
25   A2.1.10   Measure the existing results
26   A2.1.11   Identify the causes of problems using FMEA
27   A2.1.12   Identify the causes of problems with the Brainstorming quality tool
28   A2.1.13   Identify the causes of the problems using the cause-effect mastering method
29   A2.1.14   Bridge the gap between the present situation and the clients' insured goals
30   A2.1.15   Improve solutions using a GANTT chart
31   A2.1.16   Improve solutions using histograms
32   A2.1.17   Improve solutions to avoid problems using graphs
33   A2.1.18   Test the most appropriate solutions
34   A2.1.19   Validate the most appropriate solutions
35   A2.1.20   Monitor the performance of the photovoltaic system
36   A2.1.21   Monitor the performance of the photovoltaic system
37   A2.1.22   Check the gap between the original data and the results obtained (quality, cost, time, ...)
38   R2.2      Minimizing the maintenance costs of the insured conversion
39   A2.2.1    Systematically control the operation of the photovoltaic system
40   A2.2.2    Systematically monitor the operation of the photovoltaic system
41   A2.2.3    Implement a strategic maintenance plan
42   A2.2.4    Intervene through preventive actions on the photovoltaic system
43   OS3       The operating quality analysis of the switching system provided
72   OS4       The operating quality analysis of the internal metering system provided
101  OS5       The operating quality analysis of the STEG counting system provided


The transitions between these states require conditions that are evaluated according to the operating quality criteria. By exploiting the Markov concept, we can represent the system by the state graph of Fig. 4, in which aij denotes the transition from state Ei to state Ej.

Fig. 4. Graph of the states E1, E2 and E3, with the transitions a11, a12, ..., a33.

Thus, we can define a transition matrix from one state to another, denoted A, allowing the rational exploitation of the photovoltaic system:

A = \begin{pmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{pmatrix}   (1)

The passage between the three states follows a quality criterion, denoted Cq, which is defined as:

C_q = \prod_{k=1}^{n} R_k   (2)

where:
• R1 is the result of the analysis by quality tools;
• R2 is the rate of sunshine;
• R3 is the level of consumption;
• R4 is the cost per kWh.

Thus, depending on the season, the Rk are assessed (Table 2).

TABLE 2. Rk level for each season (Summer, Autumn, Winter, Spring), rated F, A or H.

where:
• F: low level (value 1 affected to this level);
• A: acceptable level (value 2);
• H: high level (value 3).

By manually simulating the different possibilities of Cq, the following table (Table 3) shows the results obtained according to the observed real cases.

TABLE 3. Simulation table.

R1  R2  R3  R4   Cq   %Cq    Group
3   3   3   3    81   100%   G1
2   3   3   3    54   67%    G1
2   2   3   3    36   44%    G1
1   3   3   3    27   33%    G1
2   2   2   3    24   30%    G1
1   2   3   3    18   22%    G2
2   2   2   2    16   20%    G2
1   2   2   3    12   15%    G2
1   1   3   3    9    11%    G2
1   2   2   2    8    10%    G2
1   1   2   3    6    7%     G3
1   2   2   1    4    5%     G3
1   1   1   3    3    4%     G3
1   1   1   2    2    2%     G3
1   1   1   1    1    1%     G3

From the table, three groups (G1, G2 and G3) were identified; remaining in the same group maintains the current state, while a change of group is a decision to pass to another state. The process of moving the system from one state to another is illustrated by the flowchart of Fig. 5.

Fig. 5. Flowchart describing the transition from one state to another: the real Cq is identified and, according to the group it belongs to, the state flags are set to (E1=1, E2=0, E3=0), (E1=0, E2=1, E3=0) or (E1=0, E2=0, E3=1).


However:

if C_q \in G_1, then

E_1 = (I_1 \times I_2 \times I_3) + (I_1 \times I_2 \times I_3)   (3)

with I_1 = \begin{pmatrix} a_{11} \\ a_{21} \\ a_{31} \end{pmatrix};

if C_q \in G_2, then

E_2 = (I_1 \times I_2 \times I_3)   (4)

with I_2 = \begin{pmatrix} a_{12} \\ a_{22} \\ a_{32} \end{pmatrix};

if C_q \in G_3, then

E_3 = (I_1 \times I_2 \times I_3) + (I_1 \times I_2 \times I_3)   (5)

with I_3 = \begin{pmatrix} a_{13} \\ a_{23} \\ a_{33} \end{pmatrix}.
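To make the decision logic concrete, the following sketch (hypothetical code, not the authors' implementation; the group boundaries are read off Table 3 and the flag assignment follows the flowchart of Fig. 5) evaluates Cq and sets the state flags:

```python
# Illustrative sketch (not the authors' code): evaluate the quality
# criterion Cq = R1*R2*R3*R4 (each rating in {1, 2, 3}) and set the
# state flags (E1, E2, E3) according to the groups G1/G2/G3 of Table 3.
from math import prod

# Group boundaries read off Table 3: G1 = {24..81}, G2 = {8..18}, G3 = {1..6}.
def classify(cq: int) -> str:
    if cq >= 24:
        return "G1"
    if cq >= 8:
        return "G2"
    return "G3"

def state_flags(r: list[int]) -> tuple[int, int, int]:
    cq = prod(r)                      # Cq = product of R1..R4, eq. (2)
    group = classify(cq)
    if group == "G1":                 # inject excess energy into the grid
        return (1, 0, 0)
    if group == "G2":                 # run disconnected from the grid
        return (0, 1, 0)
    return (0, 0, 1)                  # draw the missing energy from the grid

# Example: a day with all ratings high -> Cq = 81 -> group G1 -> state E1.
print(state_flags([3, 3, 3, 3]))      # (1, 0, 0)
print(state_flags([1, 2, 2, 2]))      # Cq = 8 -> G2 -> (0, 1, 0)
```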

VI. CONCLUSIONS

In this article, we presented an application of system analysis and quality tools to a renewable energy system. The systems analysis of the case study of a photovoltaic system exploited the OOPP method. The analysis results of this work showed that the operation and management of a renewable energy production unit require the optimization of energy consumption. The operation of photovoltaic units can help solve the energy deficit in Tunisia.

REFERENCES
[1] Bekkouche M.A., Thèse, Modélisation du Comportement Thermique de Quelques Dispositifs Solaires, Algérie, 2008.
[2] Lauras M., Méthodes de diagnostic et d'évaluation de performance pour la gestion de chaînes logistiques, Thèse, Toulouse, 2004.
[3] Landry M. et Banville C., Caractéristiques et balises d'évaluation de la recherche systémique, Revue Tunisienne des Sciences de Gestion, Vol.2, N°1, 2000.
[4] Bériot D., Manager by the systemic approach, Editions d'Organisation, Paris, 2006.
[5] Ben Jouida T., Lakhoua M.N. and Annabi M., Application of the system approach for the automation of a printing system, IREACO, Vol.3, N°1, pp. 83-87, January, Italy, 2010.
[6] Donnadieu G. and Karsky M., Systemic thinking and acting in the complexity, Liaisons, Paris, 2002.
[7] Pillet M., Contribution à la maîtrise statistique des procédés : cas particulier des petites séries, Thèse, Université de Savoie, 1993.
[8] ISO 9000:2000, Quality management systems - Fundamentals and vocabulary, AFNOR, France, 2000.
[9] NF X 50, International Standard Organisation ISO 9000, Compendium, Genève, Suisse, 1992.
[10] Dessimoz J-D., Beyond Information: Cognition and Cognitics for Managing Complexity, the Case of 'Enterprise' from a Holistic Perspective, ICIMS-NOE, France, 2000.
[11] Dominique, Contribution of methodologies of systemic analysis in preparation for ISO 9001:2000 certification of PMI, Thèse, University of Lausanne, 2003.
[12] ISO 8402:1995, Norme NF EN ISO 8402, Quality Management and Quality Assurance, Edition 1995.
[13] Husi G., Szasz C., Chindris V., Artificial Immune System Implementation upon Embryonic Machine for Hardware Fault-tolerant Industrial Control Applications, Journal of Computer Science and Technology, 10 (4), 2011, pp. 60-66.
[14] Hosseini S.H., Seyed Farid Ghaderi and Hamed Shakouri G., An Investigation on the Main Influencing Dynamics in Renewable Energy Development: A Systems Approach, Iran, 2012.
[15] Jeddi N. and El Amraoui L., Design of a photovoltaic system for constant output voltage and current, IREC'2015, March 24-26, 2015, Sousse, Tunisia.
[16] El Badawe M., Iqbal T. and George K. Mann, Optimization and a comparison between renewable and non-renewable energy systems for a telecommunication site, Montreal, QC, 2012.
[17] Atwa Y. M., Optimal Renewable Resources Mix for Distribution System Energy Loss Minimization, 2010.
[18] Abid H., Tadeo F., Toumi A. and Chaabane M., MPPT of a Photovoltaic Panel Based on Takagi-Sugeno and Fractional Algorithms, IREACO, Vol.7, N°3, May 2014.
[19] Afrouzi S., Vahabi M., Mohammad A. and Tavalaei J., Economic Sizing of Solar Array for a Photovoltaic Building in Malaysia with Matlab, Bandung, 2011.
[20] Setiawan B., Mauridhi H. and Mochamad A., A High Precision Angle Compensation Controller for Dish Solar Tracker Installed on a Moving Large Ship, IREACO, Vol.6, N°6, January 2013.
[21] Pigazo A., Víctor M. Moreno and Emilio J. Estébanez, An Experience on E-learning in Renewable Energy, Porto, 2009.
[22] Bialasiewicz J.T. and Muljadi E., RPM-Sim-Based Modeling of Photovoltaic Panels as Energy Sources in Renewable Energy Systems, 2002.
[23] Development action plan for renewable energy in Tunisia, September 2013.
[24] AGCD, Manual for the implementation of planned interventions by objectives, 2nd Edition, Brussels, 1991.
[25] Lakhoua M.N., Systemic Analysis of a Wind Power Station in Tunisia, Journal of Electrical and Electronics Engineering, Vol.4, N°1, 2011.
[26] Lakhoua M.N., Using Systemic Methods for Designing Automated Systems, International Journal of Applied Systemic Studies, Vol.2, N°4, 2009, pp. 305-318.


Reducing the Cost of the Production Process. Selective Soldering - Continuous Improvement

DRĂGHICIU Nicolae¹, KUN Simona²
¹ University of Oradea, Romania, Department of Electronics and Telecommunications, Faculty of Electrical Engineering and Information Technology, University Str. 1, 410087 Oradea, Romania, E-Mail: [email protected]
² Connect Group Belgium, Connectronics Romania, Control Department, Sos. Borsului nr. 40, 410605 Oradea, Romania, E-Mail: [email protected]

Abstract - The increasing diversity of production influences the production process in organizational and technological terms, since it requires more frequent changeovers from the realization of one product type to another. This leads to the need for a production process that is flexible in terms of design and technology. We talk about improvement because it is necessary in the manufacturing process. A close connection between quality and the cost of repairs has developed over time; this has led to the need for technical, automatic and computerized solutions, the human operator being slowly replaced. It can be said that the trend is clearly to shift production to a single wave/reflow process on both sides of the board. Thus, few PTH (Pin Through Hole) components will remain to be soldered by hand, by a wave soldering machine or, more recently, by the selective soldering process.

Keywords: PCB, Selective soldering, SMD, PTH.

I. INTRODUCTION

Often there are thousands of interconnection hardware parts, displays or other electronic components that do not withstand the high temperatures of the wave soldering process, while manual soldering can be considered a slow and inconsistent process in which the quality of the results depends entirely on operator qualification and can vary greatly from day to day, even from hour to hour. Selective soldering is accurate, easy to program and consistent, being an asset and a benefit in factories manufacturing electronic products: it improves product quality by reducing defects, repair times, process steps and costs, all of which add up to ensure continuity and progress.

II. TECHNICAL DATA OF THE SELECTIVE SOLDERING MACHINE

All the inputs required for the process are entered via an operating terminal. A standard selective soldering machine consists of three areas. In the fluxing area, a sprayer applies a fine mist of flux on the bottom of the PCB through an ultrasonic head or a series of nozzles.

Flux quantities are accurately controlled and enforced, because too little flux causes weak (sometimes fissured) bonding, while too much flux causes cosmetic problems leading to non-compliance in terms of quality [1]. The preheat zone uses convection or infrared heaters to raise the temperature of the board, avoiding the thermal shock to which it would otherwise be subjected in the soldering area. The preheat also activates the flux, removing all flux carrier solvents. Finally, the soldering area is equipped with a tank containing a large amount of molten alloy and a pump to create a precise wave height controlled by the program. When the board passes above the tank, contact occurs between its bottom side and the alloy wave, wetting the component pads and creating a mechanical and electrical bond through a process of rapid heat transfer.

III. ABOUT THE ERSA VERSAFLOW SOLDERING MACHINE

For exemplification, we will consider the ERSA VERSAFLOW selective soldering machine shown in Fig. 1, and to highlight the reduced costs and the improved efficiency and quality, we will take the example of an assembly module used as an electronic on-board computer.

Fig. 1. Selective Soldering Machine [3].

VERSAFLOW 3/45 is a machine built on a new modular platform. Supplementary solder vessels, flux modules and/or preheat modules can be added to the machine after the initial installation. Depending on the specific requirements, the preheat configuration and the total length of the machine may vary to ensure optimum efficiency and maximum flexibility of the workflow.


Due to the multi-jet soldering units and the convective preheating modules, extensive process flexibility is ensured, as represented in Fig. 2 [3].

Fig. 2. Running the soldering process [3].

The machine provides traceability data acquisition pursuant to the ZVEI standard (included), a solder protocol, a process recorder and alarm management (included), comfortable touch-screen operation that is easy and intuitive, downwards compatibility, and possible linking to a manufacturing execution system (MES). Since a high goal is pursued in building these machines, several advantages should be listed: reduced costs and consumables (nitrogen, flux, etc.); a soldering robot whose arm moves along three axes (X, Y and Z); fast cycles and an optimal transfer level due to true parallel processing, with fluxing, preheating and soldering performed simultaneously; consistent, reliable, high-quality lead-free soldering with a defect rate approaching "zero defects"; optimal soldering of multilayer boards and of components on the top side, thanks to convective warming during the selective soldering of the bottom side; the board does not move during the soldering process; a perfect "No Clean" process with low flux consumption.

The efficiency of this process is much higher, as evidenced by the fact that the final outcome is superior in terms of quality and production cost. Besides reproducibility (an increasingly common requirement among customers), we can mention the importance of the specially dedicated masks covering the SMD (Surface Mount Device) components already bonded through the reflow process, while exposing the areas populated with PTH components that are to be selectively soldered. As a disadvantage of this machine, we can mention the high cost of the solder masks, which are designed and produced at a high price; however, all these additional costs pay off in a relatively short time through production with a low defect rate, invariance and high reliability. As a response to these challenges, selective soldering technology provides conclusive solutions, keeping the costs and the DPM (defects per million) to a minimum. "The best, cheapest, cleanest and fastest" is what the manufacturers of these machines mean by the selective soldering process. ERSASOFT stands for easy operation and optimized equipment availability, as we can see in Fig. 3.

Fig. 3. The Versaflow Soft [5].

Fig. 4. Comparison of defect rates (DPM) [1].

The table in Fig. 4 shows the advantages of this machine in terms of defects in the production process, comparing the human operator, the robot and the wave soldering machine.

IV. PRACTICAL EXAMPLE PERFORMED USING SELECTIVE SOLDERING

For exemplification, we take a module of an assembly that uses the on-board computer "PBXXXX". We want to streamline production by reducing the cost of the required repairs and their time. The top side carries a variety of SMD components mounted on the board by the reflow process and some PTH components to be selectively soldered using a mask. The bottom side carries SMD components and two USB connectors to be soldered manually. All SMD components and holes will be protected by a mask (a special frame) specifically designed for the project. Its cost is quite high but necessary for achieving the final product, and the expense pays off through increased efficiency, reduced production time and compliance with delivery dates. Likewise, we will see the difference between the test yields of this product before and after the implementation, as shown in Fig. 5 and Fig. 6.


Production optimization is tied to performance in design and statistical design: designers choose a circuit configuration to achieve a desired performance, after which they test "the yield" of the circuit [4]. The board goes through the phases materialized here by images taken directly from the monitor of the machine.

Fig. 5. Yield before test automation.

The yield target is determined by the company management; it is reviewed annually and sets the goal to be achieved.
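For orientation, the yield and defect-rate figures discussed here reduce to elementary arithmetic; the sketch below uses made-up unit counts, not data from the PBXXXX line:

```python
# Illustrative yield arithmetic (hypothetical counts, not factory data).
def first_pass_yield(good_units: int, tested_units: int) -> float:
    """Fraction of boards that pass testing on the first attempt."""
    return good_units / tested_units

def defects_per_million(defects: int, opportunities: int) -> float:
    """Defect rate normalized to one million solder joints (DPM)."""
    return 1e6 * defects / opportunities

boards = 1200            # boards tested (hypothetical)
passed = 1176            # boards passing first test (hypothetical)
joints_per_board = 350   # solder joints per board (hypothetical)
defective_joints = 42    # defective joints found (hypothetical)

print(f"first-pass yield: {first_pass_yield(passed, boards):.1%}")  # 98.0%
print(f"DPM: {defects_per_million(defective_joints, boards * joints_per_board):.0f}")  # 100
```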

Fig. 6. Yield after test automation.

If the target is not reached, the data analysis of the processes involved is checked. Note the high efficiency due to the absence of defects and to automatic repairs. Increasing test coverage will have a good impact on quality at low cost [2].

Fig. 7. The transition through the machine [5].

Fig. 7 illustrates the process flow of the machine and the program settings for the PBXXXX project. Fig. 8 shows the program itself with the soldering and fluxing coordinates. An advantage is the presence of two soldering units, which provides a narrower head for the areas near SMD components and a larger one used for soldering components with multiple terminals and multiple rows, as shown in Fig. 8. Several panels in the frame can be set in the soldering program, but then the same soldering head will be used for both soldering units, as we can see in Fig. 9.

Fig. 8. Soldering software [5].


Fig. 9. Soldering program editor - Panel Data [5].

Process efficiency is considerably higher than when a human operator solders the project manually. The machine cycle reaches a cumulative total of 236 seconds, as follows: 36 seconds fluxing + 40 seconds preheating + 160 seconds soldering. To this is added the time for inserting the PTH components, given as 6 s/component, and the visual control of the soldering quality that inevitably follows, also performed by the operator; a short calculation is sketched below. Even so, the efficiency increased by approximately 50% after implementing this process, visible in the daily production efficiency. This demonstrates the improvement of the production process through automation: eliminating the human operator from as many production stages as possible reduces the time and cost of making a product of consistent quality that meets the reliability requirements of the client.
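A minimal sketch of this calculation, with the per-board times taken from the text and a hypothetical PTH component count:

```python
# Cycle-time arithmetic for the selective soldering example in the text.
FLUX_S, PREHEAT_S, SOLDER_S = 36, 40, 160   # seconds, as quoted for PBXXXX
INSERTION_S_PER_COMP = 6                     # manual PTH insertion, s/component

def cycle_time(n_pth_components: int) -> int:
    """Total seconds per board: machine cycle + manual PTH insertion."""
    machine = FLUX_S + PREHEAT_S + SOLDER_S  # 236 s cumulative machine cycle
    return machine + INSERTION_S_PER_COMP * n_pth_components

n_components = 12                            # hypothetical PTH count per board
t = cycle_time(n_components)
print(f"seconds per board: {t}")             # 236 + 72 = 308
print(f"boards per hour:   {3600 // t}")     # 11
```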

V. CONCLUSIONS

Improved process quality, fewer mistakes, fewer escalations, less "hand carrying", higher yields, higher throughput and more efficiency mean that more work can be accomplished [6]. Continuous improvement is part of quality management, and production control is about simplifying the workflow and making it more reliable, so that complex computer-based tools are not required to manage it effectively. The main idea is to make the workflow easy and fast at low cost. The technology brings companies closer to their customers. This fast development has caused profound qualitative changes in the structure of industrial production through the renewal and diversification of raw materials and, mainly, through the increasing complexity and technological performance of the equipment and facilities of the enterprises [7]. Industrial enterprises have to face increasingly strong national and international competition, which generates continuous pressure on them. Many of these companies therefore treat the problem of price reduction, and hence of competitive cost, as a priority. The objective of production process optimization is to minimize the difference between predicted and actual yields for different patterns of boards or assemblies [8]. The great advantage of this soldering system is the ability to calibrate it relatively easily and quickly: a special calibration tool allows precise adjustment of the jet in correspondence with the passage of the assembly on the conveyor system, and the "On the fly" option developed by Ersa enables continuous optimization [1]. Therefore, selective soldering is the perfect choice in factories producing electronic products of medium and large capacity.

REFERENCES
[1] http://kr.ersa.com/media/pdf/prospekte_kataloge/loetmaschinen/selektiv_prospekt_e_251007_web.pdf
[2] http://repository.cmu.edu/cgi/viewcontent.cgi?article=1056&context=ece
[3] http://www.ersa.com/art-versaflow-3-45-3441457.html
[4] M. Lightner, Yield maximization for use in multiple criterion optimization of electronic circuits, Carnegie Mellon University, 2006.
[5] Versaflow 3, Selective soldering system, Translation of the original operating instructions: 163458.
[6] L. Crăciun, Managementul producţiei, Ed. PrintExpert, Craiova, 2008.
[7] C. Dobrin, Flexibilitatea în cadrul organizaţiei, aspecte tactice şi operaţionale, Bucureşti, Editura ASE, 2005.
[8] K. Lenaburg, S. Valocchi, J. Campbell, "Yield Enhancement Using Final Outgoing Automated Inspection", 2003 GaAs MANTECH Technical Digest, pp. 213-217, April 2003.


Checking Algorithms on Differential Equations with Known Analytical Solution

KOVENDI Zoltan¹, RADA Ioan Constantin¹, MAGDOIU Liliana¹, CORHA Alin², BONDICI Cristian²
¹ University of Oradea, Romania, Department of Control Systems Engineering and Management, Faculty of Electrical Engineering and Information Technology, Universitatii str. no. 1, 410087 Oradea, Romania, E-mail: [email protected]
² Technical University of Cluj-Napoca, Romania, Department of Automation, Faculty of Automation and Computer Science, Memorandumului str. no. 28, 400114 Cluj-Napoca, Romania, E-mail: [email protected]

Abstract - The issues of control of isotope separation processes are extremely complex. Isotope separation plants may have many devices connected in a so-called separation cascade. The fundamental equations of isotopic separation can be successfully used in control problems. This paper presents the results obtained from the integration of first-order partial differential equations with known analytical solution, respectively the results of the numerical integration programs along two axes, and the cumulative relative error in percentage depending on the integration step.

Keywords: isotopic exchange, differential equations, cumulative error.

I. INTRODUCTION

The separation columns within isotopic exchange processes for obtaining a particular concentration of 13C by the CO2 - carbamate isotopic exchange represent complex automation objects characterized by a large number of variables [1, 2]. The input-output dependency in dynamic and stationary regimes is mathematically represented through nonlinear differential or algebraic equations [3, 4, 5]. Differential equations have probably the widest applicability in modeling such systems [6]. A thorough knowledge of the real phenomena is required to obtain and study models that are as complete and accurate as possible [7, 8]. The purpose of the mathematical modeling of such processes is not only a more complete analysis and understanding of the phenomena that occur, but also the calculation of specific parameters of great interest in the design and management of the plants [9].

The purpose of the paper is to present the results obtained from the integration of partial differential equations with known analytical solution which appear in the separation columns within isotopic exchange processes for obtaining a particular concentration of 13C by the CO2 - carbamate isotopic exchange [1]. The paper is organized as follows: in Section II the differential equations of first order are simulated; finally, conclusions are drawn.

II. SIMULATION OF DIFFERENTIAL EQUATIONS OF FIRST ORDER

The first issue addressed consists in the simulation of the following differential equation of first order:

P_{00} \cdot y + P_{10} \cdot \frac{\partial y}{\partial a} + P_{01} \cdot \frac{\partial y}{\partial b} = \varphi(a,b)   (1)

where:

P_{00} = 1;  P_{10} = 3;  P_{01} = 5   (2)

to which corresponds the analytical solution:

y = C_0 + C_1 \cdot a^3 + C_2 \cdot b^3   (3)

where:

C_0 = 10;  C_1 = 4;  C_2 = 2   (4)

In order for y to be the solution of the differential equation:

\varphi(a,b) = P_{00} \cdot (C_0 + C_1 a^3 + C_2 b^3) + P_{10} \cdot 3 C_1 a^2 + P_{01} \cdot 3 C_2 b^2   (5)

By using the numerical simulation programs given in [2] for integration along axis a and along axis b, the following results have been obtained:

A. Cumulative relative error in percentage depending on the integration step Δa, for:
- a0 = 0 - the initial value on axis a;
- af = 10 - the final value on axis a;
- b = 1 - the value on axis b;
- θ = 0 - the first value from which the results are extracted;
- Δθ = 1 - the extraction range of the results.

The results obtained are shown in Table 1.


TABLE 1. Cumulative relative error depending on the integration step Δa.

Δa         0.01        0.05        0.1
Ercp [%]   1.50·10⁻⁶   2.44·10⁻³   1.50·10⁻³
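The numerical integration programs of [2] are not listed in the paper; the following minimal Python sketch integrates eq. (1) along axis a with a simple first-order (Euler) scheme, using the analytical solution (3) for the initial value and the error measure. It illustrates the checking procedure only; the error magnitudes depend on the scheme used and will not match Table 1 exactly:

```python
# Minimal sketch (not the programs of [2]): Euler integration of eq. (1)
# along axis a at fixed b, checked against the analytical solution (3).
# P00*y + P10*dy/da + P01*dy/db = phi(a, b); the separable test solution
# lets us supply dy/db analytically while stepping in a.
P00, P10, P01 = 1.0, 3.0, 5.0
C0, C1, C2 = 10.0, 4.0, 2.0

def y_exact(a, b):            # analytical solution (3)
    return C0 + C1 * a**3 + C2 * b**3

def phi(a, b):                # right-hand side (5)
    return P00 * y_exact(a, b) + P10 * 3 * C1 * a**2 + P01 * 3 * C2 * b**2

def integrate_along_a(a0=0.0, af=10.0, b=1.0, da=0.01):
    """Euler steps in a; returns the cumulative relative error in percent."""
    a, y = a0, y_exact(a0, b)          # start from the exact initial value
    err_sum = 0.0
    while a < af - 1e-12:
        dyda = (phi(a, b) - P00 * y - P01 * 3 * C2 * b**2) / P10
        y += da * dyda                 # first-order explicit step
        a += da
        err_sum += abs(y - y_exact(a, b)) / abs(y_exact(a, b))
    return 100.0 * err_sum

for da in (0.01, 0.05, 0.1):
    print(f"da = {da:4}: cumulative relative error = {integrate_along_a(da=da):.2e} %")
```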

Fig. 3 presents the value of y obtained by integrating along axis a for a=1, and Fig. 4 presents the area obtained by integrating along axis a for a = 0, 1, ... .

Fig. 3. The value of y obtained by integrating along axis a for a=1.

Fig. 4. The area obtained by integrating along axis a for a = 0, 1, ... .

B. Cumulative relative error in percentage depending on the value on axis b, for:
- a0 = 0 - the initial value on axis a;
- af = 10 - the final value on axis a;
- Δa = 0.01 - the value of the integration step;
- θ = 0 - the first value from which the results are extracted;
- Δθ = 1 - the extraction range of the results.

The results obtained for this dataset are given in Table 2.

TABLE 2. Cumulative relative error in percentage depending on the value on axis b.

b     ercp [%]
0     0.151·10⁻⁵
2     0.148·10⁻⁵
4     0.135·10⁻⁵
6     0.108·10⁻⁵
8     0.079·10⁻⁵
10    0.053·10⁻⁵

Integrating along axis b gave similar results. The graphical results obtained after integrating along axis b for b=1 are depicted in Fig. 1; similarly, Fig. 2 shows the area obtained by integrating along axis b for b = 0, 1, ... .

Fig. 1. The value of y obtained by integrating along axis b for b=1.

Fig. 2. The area obtained by integrating along axis b for b = 0, 1, ... .

The second issue addressed here consists in the simulation of the following differential equation of first order:

P_{00} \cdot y + P_{10} \cdot \frac{\partial y}{\partial a} + P_{01} \cdot \frac{\partial y}{\partial b} = \varphi(a,b)   (6)

where:

P_{00} = 1;  P_{10} = 3;  P_{01} = 5   (7)

for which the following analytical solution is considered:

y = C_3 + C_4 \cdot e^{C_5 a} + C_6 \cdot e^{C_7 b}   (8)

where:

C_3 = 10;  C_4 = 1;  C_5 = 1.7 \cdot 10^{-2};  C_6 = 3;  C_7 = 2.7 \cdot 10^{-2}   (9)

In order for y to be the solution of the differential equation:

\varphi(a,b) = P_{00} \cdot (C_3 + C_4 e^{C_5 a} + C_6 e^{C_7 b}) + P_{10} \cdot C_4 C_5 e^{C_5 a} + P_{01} \cdot C_6 C_7 e^{C_7 b}   (10)
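Under the same assumptions as the previous sketch (Euler steps in a, with the b-derivative supplied analytically), the second test function (8)-(10) can be checked in the same way; this is illustrative code, not the programs of [2]:

```python
# Second test function of the paper: exponential analytical solution (8)
# with the constants (9), integrated along axis a by Euler steps.
import math

P00, P10, P01 = 1.0, 3.0, 5.0
C3, C4, C5, C6, C7 = 10.0, 1.0, 1.7e-2, 3.0, 2.7e-2

def y_exact(a, b):            # analytical solution (8)
    return C3 + C4 * math.exp(C5 * a) + C6 * math.exp(C7 * b)

def dy_db(a, b):              # analytic b-derivative of (8)
    return C6 * C7 * math.exp(C7 * b)

def phi(a, b):                # right-hand side (10)
    return P00 * y_exact(a, b) + P10 * C4 * C5 * math.exp(C5 * a) + P01 * dy_db(a, b)

def cumulative_error(a0=0.0, af=10.0, b=1.0, da=0.01):
    a, y = a0, y_exact(a0, b)
    err = 0.0
    while a < af - 1e-12:
        y += da * (phi(a, b) - P00 * y - P01 * dy_db(a, b)) / P10  # Euler step
        a += da
        err += abs(y - y_exact(a, b)) / y_exact(a, b)
    return 100.0 * err

print(f"{cumulative_error():.2e} %")   # small cumulative relative error
```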


After using the numerical simulation programs given in [2] for integration along axis a and along axis b, the following results have been obtained:

C. Cumulative relative error in percentage depending on the integration step Δa, for:
- a0 = 0 - the initial value on axis a;
- af = 10 - the final value on axis a;
- b = 1 - the value on axis b;
- θ = 0 - the first value from which the results are extracted;
- Δθ = 1 - the extraction range of the results.

The results obtained are shown in Table 3.

TABLE 3. Cumulative relative error depending on the integration step Δa.

Δa         0.01        0.05        0.1
Ercp [%]   3.74·10⁻⁴   2.00·10⁻³   3.70·10⁻³

D. Cumulative relative error in percentage depending on the value on axis b, for:
- a0 = 0 - the initial value on axis a;
- af = 10 - the final value on axis a;
- Δa = 0.01 - the value of the integration step;
- θ = 0 - the first value from which the results are extracted;
- Δθ = 1 - the extraction range of the results.

The results obtained this time for the above dataset are shown in Table 4.

TABLE 4. Cumulative relative error in percentage depending on the value on axis b.

b     ercp [%]
0     0.376·10⁻³
2     0.372·10⁻³
4     0.367·10⁻³
6     0.362·10⁻³
8     0.358·10⁻³
10    0.353·10⁻³

Similar results have been obtained by integrating along axis b. The graphical interpretation of the results obtained after integrating along axis b for b=1 is given in Fig. 5; Fig. 6 gives the area obtained by integrating along axis b for b = 0, 1, ... .

Fig. 5. The value of y obtained after integrating along axis b for b=1.

Fig. 6. The area obtained by integrating along axis b for b = 0, 1, ... .

In Fig. 7 is depicted the value of y obtained by integrating along axis a for a=1, while Fig. 8 provides the area obtained after integrating along axis a for a = 0, 1, ... .

Fig. 7. The value of y obtained by integrating along axis a for a=1.

Fig. 8. The area obtained by integrating along axis a for a = 0, 1, ... .

III. CONCLUSIONS

This paper presented the results obtained from the integration of some first-order partial differential equations with known analytical solution, which appear in the modeling of the separation columns within isotopic exchange processes for obtaining a particular concentration of 13C by the CO2 - carbamate isotopic exchange. All the simulations were made in MATLAB/SIMULINK. The results of the numerical integration along the two axes obtained with the programs given in [2], together with the cumulative relative error in percentage as a function of the integration step, proved to be very accurate. These results enable the use of this framework for simulating differential equations of second order.


ACKNOWLEDGMENT

The research activity that helped the authors to elaborate the paper is financed by the European POSDRU project entitled "PARTING", no. (ID): 137516.

REFERENCES
[1] T. Coloşi, S. Codreanu, I. Naşcu, S. Darie, "Modelling and Simulation of Dynamical Systems", Casa Cărţii de Ştiinţă, Cluj-Napoca, 1995.
[2] T. Coloşi, I. Naşcu, P. Raica, "Introduction in Numerical Modelling and Simulation of Second-Order Partial Differential Equations Through Local Iterative Linearization", Automation, Computers, Applied Mathematics Science Journal, T.U. Cluj-Napoca, 1996.
[3] C.R. Costea, H. Silaghi, U.L. Rohde, A.M. Silaghi, "Grinding Circuit Control Using Programmable Logic Controllers", Recent Advances in Signal Processing, Computational Geometry and Systems Theory, Proceedings of the 11th WSEAS International Conference on Systems Theory and Scientific Computation, ISTASC 2011, Florence, Italy, August 23-25, 2011, pp. 48-52.
[4] C.R. Costea, H. Silaghi, E.I. Gergely, G. Husi, L. Coroiu, Z. Nagy, "Approach of PID Controller Tuning for Ball Mill", Fundamentals of Electrical Engineering (ISFEE), 2014 International Symposium on, November 28-29, 2014, Bucharest, Romania, pp. 1-4, 2014.
[5] M.N. Lakhoua, "Systemic analysis of an industrial system: case study of a grain silo", Arabian Journal for Science and Engineering, Vol. 38, 2013, pp. 1243-1254.
[6] L.M. Matica, "Non-linear or Unbalanced Electric Consumers and Distributed Power Factor", Journal of Computer Science and Control Systems, Vol. 7, No. 1, May 2014, pp. 43-46.
[7] E.I. Gergely, D.C. Spoială, V. Spoială, H.M. Silaghi, Z.T. Nagy, "Design framework for risk mitigation in industrial PLC control", Proceedings of the 2008 IEEE International Conference on Automation, Quality and Testing, Robotics (AQTR 2008), THETA 16th edition, Cluj-Napoca, Romania, 2008, Tome II, pp. 198-202.
[8] E.I. Gergely, H. Madsen, Fl. Popenţiu-Vlădicescu, V. Spoială, Z.T. Nagy, "Dependability Analysis of PLC I/O Modules", Proceedings of the 3rd International Workshop on Soft Computing Applications SOFA 2009, July 29 - August 1, 2009, Szeged, Hungary - Arad, Romania, pp. 175-180.
[9] G. Husi, P.T. Szemes, E. Dávid, T.I. Erdei, G. Pető, "Reconfigurable Simulation and Research Toolset for Building Mechatronics", Proceedings of CERiS'13 Workshop on Cognitive and Eto-Robotics in iSpace, pp. 26-31.


Mechanical Vibration Producing Device for the Use of the Sportsmen Training

MARCU Florin¹, DRĂGHICIU Nicolae²
¹ University of Oradea, Romania, Department of Psycho-Neurosciences, Faculty of Medicine and Pharmacy, University Str. 1, 410087 Oradea, Romania, E-Mail: [email protected]
² University of Oradea, Romania, Department of Electronics and Telecommunications, Faculty of Electrical Engineering and Information Technology, University Str. 1, 410087 Oradea, Romania, E-Mail: [email protected]

Abstract - Medical electronics is an interdisciplinary field that develops and promotes, in collaboration with physicians, computer scientists and biologists, engineering innovations and the production of medical devices. Medical engineering is not only about producing highly proficient bio-medical equipment, but also about using this kind of equipment under conditions that preserve the quality of the medical act for the patient and the medical staff. The study of the effects that mechanical vibrations exert on the human body is not a new domain; research in this respect goes back to the beginning of the 20th century. Today, the benefits of electronically generated mechanical vibrations on the human organism are used both for increasing sports performance and for the recovery of sportsmen.

Keywords: vibration generator system, set frequencies, case study.

I. INTRODUCTION

Vibrations are periodic movements performed by a mechanical system with respect to its initial state, generated by forces whose magnitudes, directions or application points change. In order to point out as clearly as possible the effect of mechanical vibrations on the sportsman, it is necessary to analyze the human organism from a biomechanical point of view, using particular engineering means as a research technique, namely the modeling technique [1]. The organs of the human body have different resonance frequencies, so the body does not vibrate as a single mass. Consequently, the sportsman's body, made up of hierarchically structured subsystems, is a heterogeneous elastic medium which can amplify or attenuate the vibrations it undergoes, according to the laws of mechanics [2].

Several specialist studies plead for using mechanical vibrations in the preparation and recovery of sportsmen, due to the obvious benefits that this training method generates. In sportsmen who perform their training with vibrations, the following stood out: the mobilization of 95% to 97% of all the muscle fibers that undergo vibrations [3], so that the muscles can be engaged faster and generate more power.

Other benefits include the amelioration of the superficial local blood circulation of the stimulated muscular mass; the increased secretion of somatotropin (the growth hormone), a very important element in tissue recovery; and the improvement of articular flexibility in the anatomical areas subjected to vibration. A rise in testosterone and endorphin secretion is also observed, which decreases pain and consequently energizes the mood, together with a lowering of the mechanical pressure at the level of tendons and ligaments, making vibration training useful in preventing accidents and in the recovery and rehabilitation phases of sportsmen [4].

II. THE TECHNICAL CHARACTERISTICS OF THE VIBRATION GENERATING SYSTEM IN SPORTSMEN TRAINING

The system or device generating the vibrations used in sportsmen training has the following features and properties. The amplitudes of the mechanical vibrations can be propagated over the human body unidirectionally, bidirectionally or tridirectionally, along the X, Y or Z axis, at choice; the system (the device) is tightened with adjustment screws and locked in one direction at a time. The frequency domain of the fundamental amplitude generated by the stand is variable from 0 to 400 Hz, obtained by modifying the rotation speed through frequency converters. Another feature of the system is that the frequency domains on which the data are measured, analyzed and interpreted depend on the performance of the acquisition device or board. The stand also has vibration sensors: triaxial and uniaxial accelerometers and vibration velocity sensors. At a given frequency, the induced vibration amplitudes (displacement, velocity and acceleration) can be varied widely; commonly, vibration amplitudes at certain frequencies are induced according to certain standards.

The stand, shown in Fig. 1, can generate vibration amplitudes up to the following parameters: max 11 mm/s RMS (velocity), max 40 m/s² (4g) RMS (acceleration), max 1000 μm peak-to-peak (displacement). In addition, the stand may also have the following endowments: non-contact displacement transducers to measure the amplitudes of the relative vibration movements of the entire vibrating assembly, and vibration velocity sensors.

20 Volume 8, Number 1, May 2015 __________________________________________________________________________________________________________

also be endowed with alarming and safety systems in case of the vibrations amplitudes growing out of control as well as a Expert Vibro vibrations measurement, analysis and interpretation system – Delphin Technology Germania, to perform mathematical operations for the measured sizes, automatic regulation of the rotation frequency, alarming & safety option, on-line visualizing and measurement of the measurement process.

overload, overheating, voltage auto adjustment, output by modifying frequency at optimum current. It may as well screen: frequency, revolution (rpm), output current, output tension, alarming, system parameters. Block diagram of the device made and used for training athletes with mechanical vibration is shown in Fig. 2: Monitorizare funcții vitale corp uman

Monitorizare vibrații corp uman

PACIENT

PAT VIBRANT

Sistem de generare vibrații triaxial

Măsurare și monitorizare vibrații mecanice absolute

Fig. 2. Graphics for mechanical vibration generator device.

III. CASE STUDY

Fig. 1. Vibrations generating system – metal support and frequency convertor and the command system.

The technical features and properties for the vibrations generator group presented in Fig. 1 are as follow: vibrations generating electric motors – vibrations in special running, control mode: linear control v/f; optimized vectorial control; PID (positive, negative); PWM. Another characteristics of the vibrations generator group is the fundamental frequency up to 400 Hz – 12 000 rpm and the frequency resolution of minimum 0.01 Hz, adjustable from operating interface. The possibility of modifying the time table to raise the electric-engine revolution up to the fundamental frequency (set frequency) is between 0.1–3000 s. the vibration generating group has also protection functions: in/out phase, tension overload, sub-tension, over-current,

In this study we intended to underline the impact of the general mechanic vibration – Whole Body Vibration, WBV generated by a vibrating bed on the sportsmen before performing the regular training. In this respect four sportsmen have been evaluated. They developed a regular sports activity and expressed in writing the desire to take part in the study. In Table 1 we have the precise values of the three vibration parameters: displacement, speed, vibration thrust according to the 2 used sensors which are inserted into the bed and on the hind thigh of the evaluated subject: - exact values of vibration parameters: movement, velocity, acceleration measured by the accelerometer on the bed; - exact values of the vibration parameters: movement, velocity, acceleration measured by the accelerometer on the anterior face of the thigh;

Table 1. Vibration parameter values (first row: accelerometer on the bed; second row: accelerometer on the thigh).

                 Vertically (X)                        Horizontally (Y)                      Pivotally (Z)
Sensor    Acc       Velocity    Movement     Acc       Velocity    Movement     Acc       Velocity    Movement
          [g RMS]   [mm/s RMS]  [μm P-P]     [g RMS]   [mm/s RMS]  [μm P-P]     [g RMS]   [mm/s RMS]  [μm P-P]
Bed       0.17      11          240          0.2       14          310          0.06      4           88
Thigh     0.06      3.7         150          0.05      7           200          0.02      1.6         56

The meanings of the other data listed in the table are: 1 g = gravitational acceleration = 9.81 m/s2; RMS = root mean square (mean) vibration value; Peak (P) = 1.414 × RMS; Peak to Peak (P-P) = 2 × P = 2 × 1.414 × RMS.
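Since these conversions recur when interpreting Table 1, the following is a minimal sketch of the RMS/peak arithmetic defined above; the crest factor 1.414 ≈ √2 holds for sinusoidal vibration, and the class and method names are ours, for illustration only:

public class VibrationUnits {
    static final double CREST = Math.sqrt(2.0); // ≈ 1.414 for a sinusoidal signal

    // Peak (P) = 1.414 x RMS
    static double peakFromRms(double rms) { return CREST * rms; }

    // Peak to Peak (P-P) = 2 x P = 2 x 1.414 x RMS
    static double peakToPeakFromRms(double rms) { return 2.0 * CREST * rms; }

    // Acceleration in g (1 g = 9.81 m/s^2) converted to m/s^2
    static double gToMs2(double g) { return 9.81 * g; }

    public static void main(String[] args) {
        // Bed accelerometer, vertical (X) axis from Table 1: 0.17 g RMS
        System.out.printf("0.17 g RMS = %.2f m/s2 RMS, peak %.2f m/s2%n",
                gToMs2(0.17), peakFromRms(gToMs2(0.17)));
    }
}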

In order to evaluate the sportsmen in the study, we took into consideration the initial values, recorded before the study began, and the final values of the standing long jump and standing high jump, these representing the objective parameters. Within this study we also evaluated the subjects with the help of a subjective parameter, a Likert scale for measuring perceived quality, which comprises a set of assertions either in favor of or against the object of the research.

The comparative analysis of the evolution of the results obtained in the long jump revealed that the number of training sessions has a directly proportional impact on the obtained results. Thus, from the data in Table 2, we can notice that the growth percentage of the performance drops from 3.36% for the 48-session subject to only 0.45% for the 16-session subject. From the evaluation of the four results, which reflects the benefits of the research, we concluded that there is a significant difference between the success of the sportsmen with 48-40 training sessions and that of those with 28-16 training sessions (p < 0.001).

Table 2. Values and growth percentage obtained at the long jump according to the performed training sessions.

Number of sessions    Long jump initial (cm)    Long jump final (cm)    % growth
48 sessions                    238                      246             +3.36%
40 sessions                    236                      243             +2.97%
28 sessions                    238                      241             +1.26%
16 sessions                    222                      223             +0.45%

The graphic representation of the four growth percentages of the long jump obtained in the study is given in Fig. 3 (bar chart: 3.36%, 2.97%, 1.26% and 0.45% for 48, 40, 28 and 16 sessions; axis label "Saritura in lungime (% crestere)" – long jump, % growth).

Fig. 3. Growth percentage graphics for the long jump after the vibrational training sessions, according to the number of training sessions.

The number of vibration training sessions has an obvious impact on the high jump, too, the growth percentage of the obtained results being directly proportional to the number of sessions. When comparing the initial results with the final ones, gathered in Table 3, an increase in jump height from 55 cm to 62 cm is noticed for the 48-session sportsman. The growth percentage of the obtained results, relative to the session frequency, is clearly higher for the 48- and 40-session subjects, while for the 16-session one no benefits turn up.

Table 3. Values and growth percentage obtained for the high jump, according to the number of performed sessions.

Number of performed sessions    High jump initial (cm)    High jump final (cm)    % growth
48 sessions                              55                        62             +12.73%
40 sessions                              43                        50             +16.28%
28 sessions                              50                        52             +4.0%
16 sessions                              44                        44             0.0%

The graphic representation of the high jump results of the four evaluated sportsmen, comparing the values after finishing the study with the initial ones, is given in Fig. 4 (bar chart: 12.73%, 16.28%, 4.00% and 0.00% for 48, 40, 28 and 16 sessions; axis label "Saritura in inaltime (% crestere)" – high jump, % growth).

Fig. 4. Growth percentage graphics for the high jump after the vibration training sessions, according to the number of performed sessions.

From the evaluation of the results obtained on the Likert scale, it turns out that the subjects who performed 48 or 40 training sessions considered the study highly efficient, while the sportsman who performed 28 sessions perceived it only as efficient. The perception of the 16-session sportsman was indifferent to the mechanical vibration training sessions. Fig. 5 gives the graphics of the obtained results (bar chart: Likert scores 6, 6, 5 and 3 for 48, 40, 28 and 16 sessions; axis label "Scala Likert").

Fig. 5. Graphics of the results of the Likert scale evaluation after the vibration training sessions, according to their number.
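The growth percentages in Tables 2 and 3 follow directly from the initial and final values; a minimal check of that arithmetic, with the Table 2 values hard-coded here for illustration only:

public class GrowthCheck {
    // % growth = (final - initial) / initial * 100
    static double growth(double initial, double fin) {
        return (fin - initial) / initial * 100.0;
    }

    public static void main(String[] args) {
        int[][] longJump = {{238, 246}, {236, 243}, {238, 241}, {222, 223}}; // Table 2 rows
        for (int[] row : longJump) {
            System.out.printf("%d -> %d cm: %+.2f%%%n", row[0], row[1], growth(row[0], row[1]));
        }
        // prints +3.36%, +2.97%, +1.26%, +0.45%
    }
}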


IV. CONCLUSIONS

Through this vibration generating system we intended the design and development of modern training techniques based on mechanically generated vibrations, which do not expose either the subject or the investigator to any health- or life-threatening risks. The results of the present study confirm the specialized research carried out on vibration training of sportsmen, which proved an enhancement of the physical condition and a faster recovery [5]. The benefits of using this training mode also extend to the comfort and psychological well-being and balance of the sportsmen [6]. This aspect is also revealed by the results obtained with the Likert scale for measuring the perceived quality of the vibration training. Obviously, mechanical vibration training is an efficient instrument in sportsmen preparation only when accompanied by classic physical training. This scientific work represents only a step within a larger study, to be continued and studied thoroughly in order to establish as accurately as possible the way and the extent to which mechanical vibrations lead to enhanced sport performance.

REFERENCES

[1] W. Amonette, A. Abercromby, A. Hinman and W.H. Paloski, "Neuromuscular responses to two whole-body vibration modalities during dynamic squats", abstract presented at the NSCA National Conference, USA, 2005.
[2] F. Marcu and L. Lazar, "Studying the implementation of mechanical vibration in sportsmen training", GeoSport for Society, vol. 1, no. 1-2, 2014.
[3] M. Cardinale and C. Bosco, "The use of vibration as an exercise intervention", Exercise and Sport Sciences Reviews, vol. 31, pp. 3-7, January 2003.
[4] W. Hopkins, "Measures of reliability in sports medicine and science", Sports Medicine, vol. 30, pp. 1-15, July 2000.
[5] M. Roelants et al., "Effects of 24 weeks of whole body vibration training on body composition and muscle strength in untrained females", International Journal of Sports Medicine, no. 25 (1), 2004.
[6] Vibrații mecanice. Ghid pentru efectul vibrațiilor asupra sănătății corpului uman (Mechanical vibrations. Guide to the effects of vibrations on human body health), SR ISO 12349/2000.


A Dependability Modeling Approach for Cyber-Physical Systems SANISLAV Teodora, MICLEA Liviu Technical University of Cluj-Napoca, Romania Department of Automation, Faculty of Automation and Computer Science, 26-28 G. Barițiu Street, 400027 Cluj-Napoca, Romania, [email protected], [email protected]

Abstract – Cyber-Physical Systems (CPSs) represent a fusion between embedded systems and distributed systems, which implies, among other things, the development of new formalisms for dependability assurance within this type of system. In this context, the present paper introduces a dependability assurance methodology for creating a dependability model (DAM) of CPSs in a dynamic and evolving context. The methodology combines ontology development techniques with a well-known dependability analysis technique. The usefulness of the proposed methodology is demonstrated by the implementation of a dependability model of a case study CPS for environmental monitoring. The model plays a central role in the CPS architecture and drives the aspects related to system diagnosis. The exploitation of the model through queries highlights its flexibility and its integration capability with CPS software modules.

Keywords: ontology; dependability modeling; cyber-physical systems; wireless sensor networks.

I. INTRODUCTION

The concept of Cyber-Physical Systems (CPSs) resulted from a combination of two concepts: embedded systems and distributed systems. CPSs are complex systems and have a System-of-Systems (SoS) aspect. The main disadvantage of this complexity is that many of the formalisms and mathematical tools developed for current systems are not suitable for this new category. The interactions between the physical, cyber and human worlds have to be formalized and modeled, so that they can be integrated into new CPS-specific models. In this context, the CPSs research challenges cover the following areas: (a) new methods and tools for analysis, design and verification; (b) new comprehensive and formal models to integrate the physical models with the digital ones, taking into account the overall behavior, the real-time performance of the system, the semantic interoperability between its components, and the networking models; (c) new developments in the field of sensors and actuators in terms of virtualization, fusion, energy management, QoS management; (d) new dependability models, which include new methods for threat identification, new fault-tolerant mechanisms, new test methods, and certification and validation metrics; (e) design and

development standards [1, 2, 3, 4, 5], all aimed at obtaining functional, efficient, and dependable CPSs, which can be used in various application domains (e.g. environmental monitoring, building automation, critical infrastructure, smart manufacturing, health care and medicine, intelligent transportation and service robots). The paper addresses the research agenda for developing theoretical foundations concerning the modeling of CPSs dependability while dealing with reaction to change and to unexpected events in an evolving and adaptable context, and it fits the aforementioned area (d). Dependability represents the ability to deliver service that can justifiably be trusted [6]. Accordingly, a dependable CPS should operate properly without interruptions and should deliver all the requested services. Assuring CPSs dependability is a difficult task, taking into account that these are adaptable and reconfigurable systems, whose structures change dynamically depending on context. Proper dependability assurance methodologies and models, which consider the modification of CPSs behaviors at run-time, have to be defined. The dependability assurance methodologies have to involve carrying out qualitative and quantitative CPSs dependability analyses from the early phases of the design flow, in order to provide feedback for system refinement and to reduce the risk of late discovery of the consequences of dependability flaws. The dependability assurance models (DAMs) have to follow an adequate methodology and to represent the knowledge related to CPSs errors, faults and failures in a formal manner. This paper presents a dependability assurance methodology for creating a dependability model of CPSs in a dynamic and evolving context. The methodology combines a dependability analysis technique (DAT) with ontology development techniques in order to define a DAM, which plays a central role in the CPS architecture and drives the aspects of the system related to its diagnosis. Also, the current paper presents the DAM of a case study CPS for environmental monitoring and some modalities of DAM exploitation by different software modules, using a specific ontology query language. The DAM is defined as an ontology dedicated to analyzing the CPS behavior in the presence of faults and failures. The rest of the paper is structured as follows. Section II presents the most common techniques for


dependability analysis, since one of them represents the core of the proposed dependability assurance methodology. Section III presents the proposed methodology for CPSs dependability assurance and the resulting DAM of a case study CPS. Section IV highlights several DAM querying modalities to test it before its use by the CPS software modules. The final section gives the conclusion and directions for future work.

II. DEPENDABILITY ANALYSIS TECHNIQUES

The dependability analysis techniques (DATs) are used for the prediction, verification and improvement of dependability attributes, especially reliability, availability, maintainability and safety. The following IEC 60300-3-1 techniques are mainly used in different system life-cycle phases: Event Tree Analysis (ETA), Failure Mode and Effect Analysis (FMEA), Failure Mode, Effect, and Criticality Analysis (FMECA), Fault Tree Analysis (FTA), Functional Failure Analysis (FFA), Markov analysis, Petri Net analysis (PN), Preliminary Hazard Analysis (PHA), and Reliability Block Diagram analysis (RBD) [7].

A. DATs Characteristics

DATs have qualitative and/or quantitative aims. The main objectives of qualitative analysis consist in the identification of component failure modes, their system-wide consequences, the failure causes and their effects, and the determination of the possible repair/recovery strategies [7]. FMEA, FFA, and PHA are the most used qualitative techniques. The quantitative analysis aims at defining the numerical reference data that will be used as input parameters for reliability and availability models, and at estimating the reliability and availability metrics using probabilistic or stochastic assumptions [7]. FMECA and Markov analysis are the most used quantitative techniques. Several techniques, such as ETA, FTA, PN, and RBD, can provide both types of analysis.

DATs also support the exploration of cause-effect relationships. From this point of view, the techniques are categorized as follows: deductive techniques, which start from known effects and find unknown causes (e.g. FTA, FFA), and inductive techniques, which start from known causes and forecast possible effects (e.g. ETA, FMEA, FMECA) [7].

The dependability analysis of a system can be achieved using either a bottom-up or a top-down approach. The bottom-up analysis involves the identification of faults/failures at the component level. Each fault/failure of a component is considered separately and its effects on the next higher level of the system are studied. This technique is applied repeatedly in order to reveal the effects of the faults/failures at all functional levels of the system [7]. ETA, FMEA, FMECA, FFA, and PHA are bottom-up techniques. The top-down analysis involves the investigation of the effects of multiple fault/failure occurrences. It starts with the identification of failure modes at the top level of the system, and then proceeds backward to identify the causes of the failures [7]. FTA, Markov analysis, PN, and RBD are top-down techniques. In practice, combinations of these types of analysis are used.

B. Failure Mode and Effect Analysis (FMEA)

FMEA, FTA, Markov analysis and PN are the most used DATs. FMEA can be transposed easily into a knowledge representation language, and therefore it was chosen for the modeling of CPSs dependability. FMEA is an inductive analysis technique used to identify potential failure modes of a system, to quantify the risk level associated with each failure, and to identify and implement corrective actions in order to address the most serious issues. FMEA involves the identification of system components, functions, failures, effects of failures, failure causes, and corresponding control actions. FMEA has the following steps:
1) Identification of the failure modes (FMij; j=1..m) for all the system components (Si; i=1..n);
2) Description of the effects of each failure mode and the severity assessment for each effect (SEVij; i=1..n; j=1..m);
3) Identification of the possible causes of each failure mode (PCijk; i=1..n; j=1..m; k=1..p);
4) Quantification of the probability of occurrence of the causes of each failure mode (OCCijk; i=1..n; j=1..m; k=1..p);
5) Identification of all existing control rules that contribute to the prevention of the occurrence of the cause that corresponds to each failure mode;
6) Determination of the ability of each control to prevent or detect the failure mode or its cause (DETijk; i=1..n; j=1..m; k=1..p);
7) Calculation of the Risk Priority Numbers (RPNijk = SEVij * OCCijk * DETijk).
All this information is centralized in tabular form and has to be updated every time the design or process changes or when new actions/data cause the modification of the SEV, OCC, or DET values.
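A minimal sketch of step 7, computing and ranking RPN values from the SEV, OCC and DET scores of an FMEA table; the class and field names are ours, not from the paper, and the sample line anticipates the case study of Section III:

import java.util.List;

public class FmeaRpn {
    // One (failure mode, cause) line of the FMEA table
    record FmeaLine(String component, String failureMode, String cause,
                    int sev, int occ, int det) {
        // Step 7: RPN = SEV * OCC * DET
        int rpn() { return sev * occ * det; }
    }

    public static void main(String[] args) {
        List<FmeaLine> table = List.of(
                new FmeaLine("Wi-Fi device", "Wi-Fi-Sensor-Out-of-Work",
                        "Wi-Fi-Sensor-Battery-Depleted", 5, 3, 2));
        // Rank lines by decreasing risk, so the most serious issues are addressed first
        table.stream()
             .sorted((a, b) -> Integer.compare(b.rpn(), a.rpn()))
             .forEach(l -> System.out.println(l.failureMode() + " RPN = " + l.rpn()));
    }
}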

III. CPSs DEPENDABILITY MODELING APPROACH

The DATs presented in the previous section model the CPSs dependability in an off-line manner. For run-time modeling, these techniques must be adapted. Ontologies are suitable formalizations, capable of responding to this challenge. Such DAMs have to be developed based on a reusable and scalable methodology.

A. CPSs Dependability Methodology

The proposed CPSs dependability assurance methodology combines FMEA with a well-known ontology development methodology, Onto-Agent [8]. The proposed methodology follows the five phases of the software development process (Fig. 1):
Phase 1 - Generalization involves the identification of the ontology purpose and requirements. Its main


purpose is the modeling of CPSs dependability in a dynamic and evolving context.
Phase 2 - Conceptualization involves the determination of the main concepts and terms for the ontology, and of their inter-concept relationships as well. The main dependability concepts and terms are presented in paper [6].
Phase 3 - Formalization involves the definition of the ontology concepts and their relationships, organized into a hierarchy, and the definition of the ontology rules and axioms. The methodology proposes the definition of two main dependability concepts, fault and failure, starting from the FMEA concepts. A fault is characterized by four elements, namely the fault cause, the fault result, the system location where the fault occurred, and the fault occurrence time. A failure is defined through the following attributes: the failure mode, the effect of the failure on the entire system, the cause of the failure, the severity of the failure, the occurrence of the failure within some time interval, the possibility to detect the failure, and the failure occurrence time. The relationship between the two concepts is causal: a fault always causes a failure. The definition of the rules and axioms is consistent with the criteria for evaluating the SEV, OCC and DET values of failure modes, based on which the RPN is calculated.
Phase 4 - Implementation and testing aims to transpose the conceptual and formalized DAM into a specific ontology representation language, and to test the ontology using a query language. The ontology classes and their hierarchy, as well as their properties and instances, are defined. The ontology exploitation consists in running queries which show various ways in which the DAM can be used to diagnose the system.
Phase 5 - Evaluation involves the verification of the ontology in order to see whether it satisfies the requirements in terms of clarity, comprehensiveness and popularity.

B. CPS Dependability Assurance Model – Case Study

The scalability and usability of the proposed methodology is validated on a case study CPS for environmental monitoring. This type of CPS was chosen because it has to operate without human intervention for long periods of time and with minimal energy consumption, and therefore its dependability assurance is very important. The case study CPS is composed of three layers, as shown in Fig. 2. The CPS bottom layer consists of Wi-Fi devices developed around a low power wireless LAN module, allowing battery lifetimes of several years [9]. The Wi-Fi devices comprise several types of sensors to measure temperature, relative humidity, pressure, carbon dioxide, and light intensity levels. These sensors can be encountered in any combination on each Wi-Fi device. The Wi-Fi devices can be set up to periodically connect to a wireless LAN and to send packets to an IP address

within the network or to one that is visible on the Internet via UDP.

Fig. 1. CPSs dependability assurance methodology.

At the CPS middle layer, dedicated software modules interpret the transmitted packets and save them into a measurement database. This layer includes the DAM and dedicated software modules which diagnose the CPS based on the ontology axioms, and which interrogate and update the ontology. The CPS top layer contains web services and client applications. This subsection presents the DAM of the case study CPS, following the proposed methodology. The first three phases are generic for all CPSs. Phase 4 - Implementation and testing involves the following steps:
• Definition of the ontology classes and of their hierarchy;
• Definition of the classes' properties according to the formalization phase;
• Definition of the restrictions for the classes' properties;
• Creation of the class instances.

Fig. 2. Case study CPS architecture.


The DAM of the case study CPS follows the first three steps, as presented in paper [10]. The classes of the DAM follow the dependability taxonomy [6]. New classes have been defined according to the FMEA (e.g. Failure-Effects and Failure-Modes) and to the CPS architecture (e.g. Locations). Two categories of classes' properties, 'Object' and 'Data', characterize the main concepts of the model. 'Equivalent To' type restrictions indicate that the classes of the model must have all their properties expressed as instances of other classes, in the case of an 'Object Property', or as predefined specific values, in the case of a 'Data Property'. Protégé [11] facilitated the creation of the DAM in OWL (Ontology Web Language). Fig. 3 highlights the obtained result. WebProtégé [12] represents an alternative to Protégé. The exploitation of the ontologies implemented in WebProtégé is difficult due to the URIs, but such ontologies have the advantage of being evaluated by the scientific community, since WebProtégé offers the possibility of sharing the ontologies with others and the opportunity to receive feedback. Paper [13] describes the DAM implemented in WebProtégé.

Fig. 3. DAM classes and their properties in Protégé.

The fourth step of Phase 4 - Implementation and testing involves carrying out the FMEA of the case study CPS. Paper [13] presents the FMEA of the Wi-Fi devices. The classes' instances correspond to the lines of the FMEA table, expressed according to the definition of the two concepts Fault and Failure, and the relation between them. As an example, an instance of the Detectable-Failures class, called Wi-Fi-Sensor-Out-of-Work, is presented. This instance has to be defined through the seven properties that characterize a failure. The hasMode, hasEffect, and hasCause properties lead to the earlier definition of the following instances: Incorrect-Function-Achievement of the Function-Achievement-Mode class, Environment-Data-Loss of the No-Data-Achieving class, and Wi-Fi-Sensor-Battery-Depleted of the Hardware-Faults class. The hasSeverity, hasOccurence, and hasDetection properties take the values 5, 3 and 2, respectively. The hasDetection property value (= 2) characterizes a detectable failure of the case study CPS. The hasTime property takes the timestamp value indicating when the failure was first detected. Table 1 highlights the mapping of the first line of the FMEA into instances of the dependability ontology. Fig. 4 shows the Wi-Fi-Sensor-Out-of-Work instance in Protégé. The testing of the DAM is an important stage and it involves the ontology exploitation using dedicated languages. The presentation of several test cases is the subject of the following section.

TABLE 1. First line of the FMEA table for the case study CPS.

FAILURE (Failures subclass instance, Detectable-Failures): Wi-Fi-Sensor-Out-of-Work
FAILURE MODE (Failure-Modes subclass instance, Function-Achievement-Mode; hasMode): Incorrect-Function-Achievement
FAILURE EFFECT (Failure-Effects subclass instance, No-Data-Achieving; hasEffect): Environment-Data-Loss
SEVERITY (integer; hasSeverity): 5
FAILURE CAUSE (Faults subclass instance, Hardware-Faults; hasCause): Wi-Fi-Sensor-Battery-Depleted
OCCURRENCE (integer; hasOccurence): 3
DETECTION (integer; hasDetection): 2
TIME (dateTimeStamp; hasTime): 2014-11-22

Fig. 4. Wi-Fi-Sensor-Out-of-Work instance in Protégé.
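For illustration, a minimal sketch of how such an instance could be created programmatically with the Apache Jena toolkit mentioned in [17], rather than through the Protégé GUI; the namespace IRI and file name are placeholders, and only the class and property names from Table 1 are taken from the paper (the hasTime typed literal is omitted for brevity):

import org.apache.jena.rdf.model.Model;
import org.apache.jena.rdf.model.ModelFactory;
import org.apache.jena.rdf.model.Resource;
import org.apache.jena.vocabulary.RDF;

public class AddFailureInstance {
    public static void main(String[] args) {
        String ns = "http://example.org/dam#"; // placeholder IRI of the DAM ontology
        Model model = ModelFactory.createDefaultModel();
        model.read("dam.owl"); // placeholder file holding the DAM in OWL

        Resource failure = model.createResource(ns + "Wi-Fi-Sensor-Out-of-Work")
                .addProperty(RDF.type, model.createResource(ns + "Detectable-Failures"))
                .addProperty(model.createProperty(ns, "hasMode"),
                        model.createResource(ns + "Incorrect-Function-Achievement"))
                .addProperty(model.createProperty(ns, "hasEffect"),
                        model.createResource(ns + "Environment-Data-Loss"))
                .addProperty(model.createProperty(ns, "hasCause"),
                        model.createResource(ns + "Wi-Fi-Sensor-Battery-Depleted"))
                .addLiteral(model.createProperty(ns, "hasSeverity"), 5)
                .addLiteral(model.createProperty(ns, "hasOccurence"), 3)
                .addLiteral(model.createProperty(ns, "hasDetection"), 2);

        model.write(System.out, "TURTLE"); // inspect the updated model
    }
}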


IV. TESTING OF THE CPSs DEPENDABILITY MODELING APPROACH

The proposed DAM has to be interpreted, queried, and updated at run-time by software modules, referred to in the CPS architecture as the Diagnostic software module and the Ontology management software module. To make this possible, the dependability model needs to be tested first. Querying the DAM and interpreting the results provide several test cases of the model. SPARQL (SPARQL Protocol and RDF Query Language) [14] is the most commonly used ontology query language. SPARQL was standardized in 2008 by the World Wide Web Consortium (W3C) and it has reached version 1.1 [15]. It became the main language for the semantic web through its adoption by most implementation environments for ontologies in RDF and/or OWL formats. The mechanism used to evaluate SPARQL queries is based on the matching of ontology sub-graphs [16]. Protégé provides an editor for creating SPARQL queries. The following section of code has to be declared in order to query the DAM. It contains the definition of the implicit prefixes which indicate the OWL, RDF, XSD and RDFS schemas. The ONT prefix makes the connection to the DAM, called untitled-ontology-13.

Fig. 5. Instances of Failures class and its subclasses.

Fig. 6. Number of instances with DET = 2 and SEV > 4.

PREFIX rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
PREFIX owl: <http://www.w3.org/2002/07/owl#>
PREFIX xsd: <http://www.w3.org/2001/XMLSchema#>
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
PREFIX ont: <…>   (the IRI of the DAM ontology, untitled-ontology-13)

The SPARQL queries of the ontology use these prefixes. The following queries will be used by the Diagnostic software module.
a) Display all the instances of the Failures class and its subclasses, and calculate their corresponding RPN (Fig. 5 – result in Protégé):

SELECT ?instances ?MODE ?EFFECT ?SEV ?CAUSE ?OCC ?DET ?TIME
       (?SEV * ?DET * ?OCC AS ?RPN)
WHERE {
  ?instances ont:hasMode ?MODE .
  ?instances ont:hasEffect ?EFFECT .
  ?instances ont:hasSeverity ?SEV .
  ?instances ont:hasCause ?CAUSE .
  ?instances ont:hasOccurence ?OCC .
  ?instances ont:hasDetection ?DET .
  ?instances ont:hasTime ?TIME .
}

b) Display the number of instances of detectable failures (DET = 2) with high and very high severity levels (SEV > 4) (Fig. 6 – result in Protégé):

SELECT (COUNT(?detectable_instances) AS ?no_instances)
WHERE {
  ?detectable_instances ont:hasSeverity ?sev .
  ?detectable_instances ont:hasDetection ?det .
  FILTER ((?sev > 4) && (?det = 2))
}
GROUP BY ?no_instances
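As a sketch of how the Diagnostic software module could run query b) outside Protégé, again using Apache Jena [17]; the OWL file name and the DAM IRI are placeholders:

import org.apache.jena.query.QueryExecution;
import org.apache.jena.query.QueryExecutionFactory;
import org.apache.jena.query.ResultSet;
import org.apache.jena.rdf.model.Model;
import org.apache.jena.rdf.model.ModelFactory;

public class DiagnosticQuery {
    public static void main(String[] args) {
        Model model = ModelFactory.createDefaultModel();
        model.read("dam.owl"); // placeholder file holding the DAM in OWL

        String q = "PREFIX ont: <http://example.org/dam#> " // placeholder DAM IRI
                + "SELECT (COUNT(?i) AS ?no_instances) WHERE { "
                + "  ?i ont:hasSeverity ?sev . ?i ont:hasDetection ?det . "
                + "  FILTER ((?sev > 4) && (?det = 2)) }";

        try (QueryExecution qe = QueryExecutionFactory.create(q, model)) {
            ResultSet rs = qe.execSelect();
            while (rs.hasNext()) {
                // Each solution binds ?no_instances to the count of matching failures
                System.out.println(rs.next().getLiteral("no_instances").getInt());
            }
        }
    }
}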

The screenshots in Fig. 5 and Fig. 6 highlight the functionality of the SPARQL queries on the proposed DAM. The two test cases demonstrate that the DAM of the case study CPS is valid in terms of queries. All that remains to be shown is that the model can easily be integrated with the software modules of the CPS by means of toolkits or APIs, such as the Jena Semantic Web Toolkit [17] and OwlDotNetApi [18], in order to update its content according to the evolving CPS context.

V. CONCLUSIONS AND FUTURE WORK

This paper addresses the CPSs scientific challenges related to the development of new dependability models, which include new methods for the identification of threats (faults, failures and errors). Furthermore, the paper tries to establish theoretical foundations concerning the modeling of CPSs dependability. In this direction the paper proposes:
• A dependability assurance methodology of the CPSs for creating a corresponding run-time model, capable of adapting itself in a dynamic and evolving context. The methodology uses the FMEA dependability analysis technique and an ontology development method in five phases.
• A dependability assurance model of a CPS for environmental monitoring, developed based on the proposed methodology. The model is implemented in Protégé and all the steps required for its generation were emphasized.


• Several test cases of the case study CPS dependability assurance model. The testing through SPARQL queries demonstrates that the model can be used by CPS software modules, the obtained results matching the expected ones.
The achieved model can be used by other CPSs, because its construction was completed based on a methodology which imposes that only the creation of classes' instances depends on the system application domains. This highlights the scalability and usability of the proposed dependability assurance methodology. The integration of the model with CPS software modules and the demonstration of its usefulness at the entire system level are subjects for future work.

ACKNOWLEDGMENT

This paper was supported by the Post-Doctoral Programme POSDRU/159/1.5/S/137516 - PARTING, project co-funded from European Social Fund through the Human Resources Sectorial Operational Program 2007-2013.

REFERENCES

[1] R. Baheti and H. Gill, "Cyber-physical systems", in Impact and Control Technology, T. Samad and A.M. Annaswamy Eds., IEEE Control Systems Society, 2011, available at www.ieeecss.org
[2] J. Wan, M. Chen, F. Xia, D. Li and K. Zhou, "From machine-to-machine communications towards cyber-physical systems", in Computer Science and Information Systems, vol. 10, no. 3, pp. 1105-1128, 2013
[3] M. Broy, M. V. Cengarle and E. Geisberger, "Cyber-physical systems: Imminent challenges", in Large-Scale Complex IT Systems. Development, Operation and Management, Lecture Notes in Computer Science Series, Ed. Springer Berlin Heidelberg, pp. 1-28, 2012
[4] L. Zhang, J. He and W. Yu, Challenges and solutions of cyber-physical systems, 2014, available at http://onlinepresent.org/proceedings/vol4\s\do5(2)012/8.pdf
[5] Steering Committee for Foundations in Innovation for Cyber-Physical Systems, Foundations for innovation:

Strategic R&D opportunities for 21st century Cyber-Physical Systems, Workshop Report, 2013
[6] A. Avizienis, J. Laprie, et al., "Basic concepts and taxonomy of dependable and secure computing", in IEEE Transactions on Dependable and Secure Computing, vol. 1, pp. 11-33, 2004
[7] S. Bernardi, J. Merseguer and D. Petriu, chapter 6 "Dependability analysis techniques", in Model-Driven Dependability Assessment of Software Systems, Ed. Springer, ISBN: 978-3-642-39511-6, pp. 73-90, 2013
[8] M. Hadzic, P. Wongthongtham, T. Dillon and E. Chang, "Ontology-based multi-agent systems", in Studies in Computational Intelligence, Ed. Springer-Verlag Berlin Heidelberg, 2009
[9] S.C. Folea and G. Mois, "A low-power wireless sensor for online ambient monitoring", in IEEE Sensors Journal, vol. 15, no. 2, pp. 742-749, 2015
[10] T. Sanislav and L. Miclea, "An ontology-driven dependable water treatment plant CPS", in Journal of Computer Science and Control Systems, vol. 6, no. 1, pp. 99-104, ISSN: 1844-6043, 2013
[11] Protégé documentation, available at http://protege.stanford.edu/
[12] T. Tudorache, C. Nyulas, N. F. Noy and M. A. Musen, "WebProtégé: A collaborative ontology editor and knowledge acquisition tool for the web", in Semantic Web, vol. 4, no. 1, pp. 89-99, 2013
[13] T. Sanislav, G. Mois and L. Miclea, "A new approach towards increasing cyber-physical systems dependability", in Proceedings of the 16th International Carpathian Control Conference (ICCC 2015), in press
[14] World Wide Web Consortium (W3C), SPARQL 1.1 Federated Query, 2013, available at http://www.w3.org/TR/2013/REC-sparql11-query-20130321/
[15] S. Harris and A. Seaborne, SPARQL 1.1 Query Language. W3C Recommendation, 2013, available at http://www.w3.org/TR/sparql11-query/
[16] I. Kollia and B. Glimm, "Optimizing SPARQL query answering over OWL ontologies", in Journal of Artificial Intelligence Research, vol. 48, pp. 253-303, 2013
[17] P. McCarthy, Introduction to Jena, 2004, available at http://www.ibm.com/developerworks/library/j-jena/j-jenapdf.pdf
[18] OwlDotNetApi Project Documentation, 2010, available at https://code.google.com/p/owldotnetapi


A Comparative Study of Data Mining Algorithms for Classification SITAULA Chiranjibi Tribhuvan University, Kathmandu, Nepal Central Department of Computer Science and Information Technology TU Rd, Kirtipur 44618, Nepal, Email: [email protected] Abstract – In this paper, different data mining

algorithms are analyzed using relational databases available for machine learning purposes. The algorithms are selected from different categories: rule based, neural network based and statistical. Large relational databases are used for the work. Although many data mining algorithms are available today, they are suitable for different conditions: some algorithms work nicely on numeric data, some work nicely on categorical data, and others may work more efficiently on textual data. In this research, the analysis is made using numerical data. The numerical classification is made using different algorithms: Naïve Bayes, Bayes Network, Logistic, Decision Table, MultiLayerPerceptron, REPTree, ZeroR, and AdaBoost. The analysis is made using their precision, recall and f-score.

Keywords: relational database; data mining; clustering.

I. INTRODUCTION

Data mining means extracting hidden information from a database. A database may be of numerical, categorical or textual type, and depending on the database we have different terms to define the field. If we extract hidden information from a textual database, the activity is termed text mining. Similarly, if we simply talk about mining information from a simple table consisting of rows and columns, it can be termed plain data mining, although the term data mining is broader, because it covers not only relations but also masses of data beyond such tables. For data mining purposes, many algorithms have been developed. These algorithms are basically not equally workable on heterogeneous types of data; homogeneous data may be preferable for a given algorithm, and the philosophy behind the development of each algorithm also differs between scenarios. The operations performed for data mining purposes are of different types: classification, clustering, association, ranking, et cetera. Classification means categorizing the instances from different perspectives, i.e., distance based, probabilistic or rule based, whereas clustering means grouping the data into clusters so that intra-cluster similarity is high in comparison to inter-cluster similarity, and association means finding likelihood relationships between items. The data mining algorithms and their functions are illustrated in Fig. 1.


Fig. 1. Data Mining Algorithms.

In Fig. 1, the database is drawn as a big drum-like diagram. The pattern of the database instances is the user's question, and she tries to find it by applying data mining algorithms. To find a pattern, the data can be grouped, which we term clustering.

II. LITERATURE REVIEW

For the study of the performance evaluation of data mining tools, different researchers performed analyses using open source software. Details about the research in the related field are given below. [1] employed the intrusion detection task using different data mining algorithms; they compared decision trees, naïve Bayes etc. for the analysis, and advised applying a combined data mining approach as their conclusion. Similarly, [2] applied stream data mining algorithms for performance evaluation, where a tree based approach performed well in their case. In the same way, [3] compared two algorithms, C5.0 and CART, for customer card classification. [4] performed data mining algorithm analysis on medical databases, using algorithms like Artificial Neural Networks and decision tree algorithms. In [5], a practical comparison study of data mining query languages is made, using six existing query languages for the comparison. Similarly, [6] used the spam filtering task for analyzing different data mining algorithms; email spam


filtering was chosen as the job to analyze. [7] used the outlier detection task in order to analyze data mining algorithms. [8] used the classification task for the analysis of data mining algorithms, working on social media data. Finally, [9] used the open source software WEKA [10] as a tool to analyze data mining algorithms. Around eight algorithms, from different categories of classification, were used in this work.

III. ALGORITHMS

The algorithms used in this research are Naïve Bayes, BayesNet, Logistic, Multilayer Perceptron, ZeroR, AdaBoost, Decision Table and REPTree.

A. Naïve Bayes: Classification based on prior probabilities is termed Naïve Bayes classification in computer science. According to the naïve Bayes approach, the individual data are initially given certain probabilistic values, with the help of which they are classified even in a changing environment [11].

P(A|B) = P(A ∩ B) / P(B)      (1)

P(B|A) = P(A ∩ B) / P(A)      (2)

P(A|B) = P(B|A) · P(A) / P(B)   (3)

Here, Equation (1) gives the conditional probability of A when B is given, and Equation (2) the conditional probability of B when A is given. Combining the values calculated from the previous equations, Equation (3) computes the probability of A with the help of B, which is called Bayes' theorem.

B. Bayes Net: It is a graphical representation of probabilistic relationships between random variables. The items are classified with the help of the joint probability values of those variables, using the network's own calculation of joint probabilities [12].

C. Simple Logistic: It is a classifier for building linear regression models. It uses an exponential function to find its values; depending on these values, data with similar regression values are classified together [13].

D. MultiLayer Perceptron: It is a nonlinear classifier based on the concept of simulating the human brain. In this algorithm, dendrite-like input signals are fed to an inner portion, and the outputs are achieved by processing in that inner portion. It is based on neural network technology for classification, and it has multiple layers that calculate values successively so as to produce the output [14].

E. ZeroR: It is the simplest classification method, because it relies only on the target and discards all predictors. It simply predicts the majority class. It has no predictive power [15].

F. AdaBoost: It is very difficult to predict the orientation of data having both qualitative and quantitative attributes. In order to address such a scenario, the AdaBoost approach was proposed. According to this algorithm, the data are initially classified with a weak learner, which is then boosted using the parameters of the algorithm. Its use has both pros and cons, because a very weak learner may cause the problem of overfitting [16].

G. Decision Table: Like the decision tree, the decision table is a classification algorithm, induced with the help of machine learning algorithms. It consists of a hierarchical table in which each higher level gets divided into additional child tables of a distinct nature [17].

H. REP Tree: The decision tree algorithm is a popular classification approach. As the amount of data grows, the tree becomes much more complex, which may raise errors and degrade performance. With this in mind, the Reduced Error Pruning Tree approach was adopted, which prunes the unnecessary branches, making the tree shorter and more suitable without degrading its accuracy [18].

IV. EVALUATION AND OUTPUT

For the evaluation, the tool used was WEKA, and the datasets were obtained from a machine learning database [19, 20]. The experiments were performed using the 10-fold cross-validation approach.
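A minimal sketch of such an evaluation through the WEKA Java API; the ARFF file name follows the datasets referenced in [19, 20], while the class attribute position is our assumption:

import weka.classifiers.Evaluation;
import weka.classifiers.functions.Logistic;
import weka.core.Instances;
import weka.core.converters.ConverterUtils.DataSource;
import java.util.Random;

public class CrossValidate {
    public static void main(String[] args) throws Exception {
        Instances data = DataSource.read("bmw-browsers.arff"); // dataset from [19]
        data.setClassIndex(data.numAttributes() - 1); // assumes the class is the last attribute

        Logistic classifier = new Logistic();
        Evaluation eval = new Evaluation(data);
        // 10-fold cross-validation, as used for Tables 2 and 3
        eval.crossValidateModel(classifier, data, 10, new Random(1));

        System.out.printf("Precision: %.3f  Recall: %.3f  F-score: %.3f%n",
                eval.weightedPrecision(), eval.weightedRecall(), eval.weightedFMeasure());
    }
}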

The database shown in Table 1 has been used.

Table 1. Data set information.

Database         Attributes    Tuples
Car browsers          9           100
Bmw responses         4          3000

The final results obtained from the research are tabulated in Table 2. While analyzing Table 2, it is observed that, among these algorithms, Logistic is the best in the case of 100 instances, having high weighted precision and recall as well. Similarly, Table 3 presents the results obtained for the car browsers dataset.


According to Table 2, the results show that all algorithms achieved roughly equal f-scores on around 100 records, though some variation still existed. The Logistic regression based classification algorithm, which is a statistical algorithm, performed best, whereas the ZeroR algorithm remained at the bottom with the lowest f-score. This covers the small number of records; the results for a large number of records were also tested with these algorithms and are tabulated in Table 3.

Table 2. Accuracy of the research.

Algorithms              Precision    Recall    F-Score
Bayes Naive               0.548      0.548      0.548
Bayes Net                 0.546      0.545      0.56
Logistic                  0.551      0.551      0.572
Multilayer Perceptron     0.54       0.53       0.53
ZeroR                     0.258      0.508      0.343
AdaBoost                  0.543      0.543      0.541
Decision Table            0.549      0.549      0.549
REPTree                   0.556      0.556      0.555

Table 3. Evaluation of data mining algorithms for car browsers.

Algorithms              Precision    Recall    F-Score
Bayes Naive               0.75       0.74       0.743
Bayes Net                 0.76       0.75       0.753
Logistic                  0.808      0.79       0.814
Multilayer Perceptron     0.789      0.79       0.789
ZeroR                     0.372      0.61       0.462
AdaBoost                  0.769      0.73       0.733
Decision Table            0.708      0.69       0.694
REPTree                   0.789      0.75       0.753

While observing the results for around 3000 tuples and a smaller number of attributes, Logistic regression outperformed the other algorithms; however, ZeroR again stood in the same position as in Table 2.

V. CONCLUSION AND LIMITATIONS

From the experiment it is seen that the best algorithm in terms of precision, recall and F-score is the Logistic algorithm, which gave higher values than the others. It worked well for a higher number of attributes and a higher

I would like to thank to all my dear students whose supports were really incredible for me to drive my career ahead. Furthermore, my teachers, my family and my dear friends are also my source of motivation. So, I would like to say that I am grateful with you all. REFERENCES [1] A. Ajayi, S.A Idowu and A. Anyaehie.” Comparative Study of Selected Data Mining Algorithms Used For Intrusion Detection”, International Journal of Soft Computing and Engineering, Vol. 3(3), pp. 237-241, 2013. [2] T. Tusharkumar, B. Praveen.,” A Comparative study of Stream Data mining Algorithms”, International Journal of Engineering and Innovative Technology, Vol. 2(3), pp. 149-154, 2012. [3] R. Nilima, L. Rekha and C. Vidya,” Customer Card Classification Based on c5.0 and CART Algorithms”, International Journal of Engineering Research and Applications, Vol. 2(4), pp. 164-167, 2012. [4] F. Olaiya, “ Comparative Study of Different Data mining Techniques Performance in knowledge Discovery from Medical Database” International Journal of Advanced Research in Computer Science and Software Engineering, Vol. 3(3), pp. 11-15, 2013. [5] B. Hendrik, C. Toon , F Elisa, G. Bart, P. Adriana and R. Celine, “ A Practical Comparative Study of Data Mining Query Language”, unpublished. [6] K. R. Kishor, G. Poonkuzhali. P. Sudhkar, “ Comparative Study of Email Spam Classifier using Data Mining Techniques”, Proceeding of the International MultiConference on Engineers and Computer Scientists, Vol. 1, 2012. [7] B. Zuriana, M. Rosmayati, A. Akbar, D. Mustafa Mat, “A Comparative Study for Outlier Detection Techniques in Data Mining”, IEEE, 2006. [8] P. Nancy and R. Geetha Ramani, “ A Comparison on Performance of Data Mining Algorithms in Classification of Social Network Data”, International Journal of Computer Applications, Vol. 32(8), pp. 47-53, 2011. [9] N. Salvithal and R.B. Kulkarni, “ Evaluating Performance on Data Mining Classification Algorithm in Weka”, Vol. 2(10), pp. 273-281, 2013. [10] http://download.informer.com/win-11931209226e208a75-5cd9f206/weka-3-7-4jre.exes. [11] R. Jiangtao, L. Sau Dan, C. Xianlu, K. Ben, C. Reynold and C. David, “Naïve Bayes Classification of Uncertain Data”, unpublished. [12] C. Jie and G. Russell, “ Learning Bayesian Belief Network Classifiers Algorithms and System”, unpublished.


[13] C. Weiwei and H. Eyke, "Combining Instance-Based Learning and Logistic Regression for Multilabel Classification", tutorial.
[14] J.A.K. Suykens and J. Vandewalle, "Training Multilayer Perceptron Classifiers Based on a Modified Support Vector Method", IEEE Transactions on Neural Networks, Vol. 10, No. 4, pp. 907-911, 1999.
[15] http://www.saedsayad.com/zeror.htm
[16] V. Rodrigo, J. Ruiz-del-Solar and C. Mauricio, "Gender Classification of Faces Using Adaboost", Springer, pp. 68-78, 2006.

[17] L. Hongjun and L. Hongyan, "Decision Tables: Scalable Classification Exploring RDBMS Capabilities", Proceedings of the 26th International Conference on Very Large Data Bases, Egypt, pp. 373-384, 2000.
[18] Z. Yongheng and Z. Yanxia, "Comparison of Decision Tree Methods for Finding Active Objects", Beijing, China, accepted for publication.
[19] http://www.informatics.buu.ac.th/~ureerat/321641/Weka/Data%20Sets/BMW/bmw-browsers.arff
[20] http://www.informatics.buu.ac.th/~ureerat/321641/Weka/Data%20Sets/BMW/bmw-training.arff


Authenticity, Integrity and Secure Communication in Cyber-Physical Systems VEGH Laura, MICLEA Liviu Technical University of Cluj-Napoca, Romania Department of Automation, Faculty of Automation and Computer Science Baritiu st, no. 26-28, 400027 Cluj-Napoca, Romania E-Mail: [email protected], [email protected]

Abstract – We live in a world that relies on technology in more and more areas of life. As such, security has become of crucial importance. New ideas in both software and hardware development emerge every day. A new paradigm that has proven its usefulness over the past few years is cyber-physical systems. A remarkable aspect of these systems is that they are not standalone devices; they are networks of interacting elements with physical input and output. Their complexity raises challenges in the area of their security. The present paper introduces a new approach for ensuring the security of cyber-physical systems. We designed a hierarchical system in which access rights are established based on a public-key encryption algorithm. Digital signatures are used in addition to encryption, for authentication purposes but also to help verify the integrity of the system.

Keywords: cyber-physical systems, digital signature, steganography, hierarchical access, multi-agent systems.

I. INTRODUCTION

A. Overview

In a world dependent on technology in more and more areas, security has become of crucial importance. New technologies appear in the blink of an eye, and security needs to improve just as fast. Cryptography, steganography and digital signatures are the main areas in security research. Many algorithms have been developed and many more are being tested. Which algorithms to use, and how to use them, depends on many factors, such as what type of data has to be secured, what type of system we are applying them to, and what types of attacks are to be expected. However, no matter what type of system we discuss, security is an aspect that cannot be overlooked; we cannot have an insecure system.

There are several ways to secure a system. Cryptography is probably the most used at the moment; it means altering the form of a message in such a way that it becomes unreadable to any third party. There are several types of cryptographic algorithms, such as symmetric-key cryptography, where a single key is used for both encryption and decryption; asymmetric cryptography, also called public-key cryptography, where a public key is used for the encryption of the message and another key, called the private key, is used to decrypt it; identity-based cryptography, which is a type of public-key cryptography; and so on. Well-known encryption algorithms include the symmetric Data Encryption Standard (DES) and Advanced Encryption Standard (AES), and the public-key ElGamal scheme. In the present paper, we will use a public-key type of algorithm, derived from the original ElGamal algorithm [9].

A rather different type of approach to security is the digital signature. Much like its "on paper" counterpart, a digital signature is used to demonstrate the authenticity of a document or message. These signatures are mathematical schemes where, in most cases, a set of public and private keys is used to sign a message and to verify the authenticity of the signature, respectively. Other applications of digital signatures include integrity, using for example the fact that any modification of a digitally signed message will alter the signature, and non-repudiation of origin, meaning that a signature cannot be denied at a later time.
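As a concrete illustration of the sign/verify flow described above, here is a minimal sketch using the standard Java Cryptography Architecture with RSA; this is generic and not the ElGamal-based signature scheme used later in this paper:

import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.security.Signature;

public class SignatureDemo {
    public static void main(String[] args) throws Exception {
        // Key generation: the private key signs, the public key verifies
        KeyPairGenerator kpg = KeyPairGenerator.getInstance("RSA");
        kpg.initialize(2048);
        KeyPair kp = kpg.generateKeyPair();

        byte[] message = "sensor reading #42".getBytes();

        // Signing with the private key
        Signature signer = Signature.getInstance("SHA256withRSA");
        signer.initSign(kp.getPrivate());
        signer.update(message);
        byte[] signature = signer.sign();

        // Verification with the public key; any change to 'message' breaks it
        Signature verifier = Signature.getInstance("SHA256withRSA");
        verifier.initVerify(kp.getPublic());
        verifier.update(message);
        System.out.println("valid = " + verifier.verify(signature));
    }
}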

B. Cyber-physical systems

Cyber-physical systems (in short, CPS) represent a new paradigm that is being used in more and more areas, such as transportation systems, gas and water networks, or national disaster control systems. They are systems which integrate both computational and physical processes. The complexity of such systems requires a complex security architecture. Additionally, the critical nature of the applications in which they are used once more requires a high level of security [1, 3]. Research in the area of CPS security is an ongoing challenge; certain solutions have been developed, but more are needed. This research can be done from a general point of view, or it can vary according to the area in which the CPS is being used. The need for security is more prominent in areas such as data interpretation, control and distribution of information, confidentiality, availability and so on [2, 4, 6]. An aspect that can often be found when researching security in CPS is that of communication: ensuring secure communication channels so that information is always exact, complete, available and of known origin.


There are several ways to simulate cyber-physical systems. One such way, and the one used in the present paper, is agent-based modeling. Agents are autonomous components with decision-making capabilities. A multi-agent system has many characteristics, the most important being that every agent has incomplete data for solving its tasks and that there is no global control system, meaning that every agent has only the data necessary to complete its own task; no single agent in the system has all the information [11]. All these aspects make multi-agent systems ideal for modeling cyber-physical systems. In terms of security, there are many tools used to secure multi-agent systems, such as the Java sandbox or code signing. A special interest lies in finding a way to secure both the agent itself and its medium, MagicNET (Mobile Agent Intelligent Community Network) [10] representing a good example of such an architecture.

II. HIERARCHICAL CRYPTOGRAPHIC SYSTEM

Hierarchical systems are a different type of system, in which access to information is restricted according to the position each user occupies in the hierarchy. Such systems are particularly useful when designing security architectures. They are also appropriate in many real-life applications, such as medical facilities, where, for example, a file could be available to a surgeon but not to a nurse, and so on. Hierarchy represents an additional layer of security, since information can only be viewed by those who have the access right. There are several ways to design a hierarchical system, such as [7]. In the current paper we propose the usage of an encryption algorithm with a divided private key. The hierarchy is established based on this key; every user can decrypt only certain messages using its private key.

The system is based on an encryption algorithm with a divided private key, namely ElGamal with (k+1) degrees of access [5]. It is an algorithm derived from the original ElGamal cryptosystem [9], being a public-key type of algorithm in which data is encrypted using the public key and decrypted using a secret, divided key. Like the original ElGamal system, the algorithm with (k+1) degrees of access has three main phases: key generation, encryption and decryption. We will not go into detail regarding the mathematical aspects of the algorithm, as they are not the main focus of the present paper; all the formulas needed to generate keys, encrypt and decrypt can be found in [5].

Regarding the implementation, we designed our hierarchical CPS as having a tree structure. We used the Java programming language, as it can be used on most platforms. The encryption algorithm was implemented separately from the system itself, so that it could be used independently of the current cyber-physical system. The CPS was modeled as a multi-agent system. Each user, each agent in this case, is assigned a private key with which it will be able to decrypt certain messages. More agents can have the same type of key, with the same degree of access, but not the same key. Fig. 1

Fig. 1 illustrates an example of a tree-structured system for k=2, i.e. three access levels.

Fig. 1. Tree structured system with k=2.
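As a sketch of how the structure in Fig. 1 could be represented in Java, consider the following fragment. The class is hypothetical and deliberately minimal, with the key material reduced to a placeholder field, since the actual keys come from the algorithm in [5]:

import java.math.BigInteger;
import java.util.ArrayList;
import java.util.List;

// Hypothetical node of the hierarchy: level 0 is the root with the highest
// degree of access; the deepest level holds the leaves that encrypt data.
class HierarchyNode {
    final String name;              // naming follows Fig. 1, e.g. "f1", "f11"
    final int level;                // 0 = highest degree of access
    final HierarchyNode parent;     // null for the root
    final List<HierarchyNode> children = new ArrayList<>();
    BigInteger privateKey;          // distinct per agent, assigned at startup

    HierarchyNode(String name, int level, HierarchyNode parent) {
        this.name = name;
        this.level = level;
        this.parent = parent;
        if (parent != null) parent.children.add(this);
    }

    boolean isLeaf() { return children.isEmpty(); }
}

For k=2 one would build a root on level 0, its children on level 1 and the encrypting leaves on level 2, mirroring the three access levels of Fig. 1.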

Note that each user is named using "f" followed by a set of indices. The indices indicate the position in the tree structure: the first indices are copied from the ascendant (parent) user, while the last digit is the order number on the level. Thus, the third descendant of user f11, in order on its level, will be named f113, and so on. Level 0 is the highest level, and the user on this level has the highest degree of access, meaning that it can decrypt any data. The agents on the middle levels have access only to restricted information. Finally, on the last level there are the leaves of the system. They have a special status, as they are the users who encrypt messages. Access-wise they have the lowest degree of access, but decryption is not their main goal, since they perform the encryption.

An important aspect to note regarding the algorithm is that information (unencrypted data) is viewed as a set of messages. When such data is received, it is divided into a set of messages with a number of elements equal to the number of leaves existing in the system. Upon division, each leaf receives one message from the set, encrypts it and sends it to the intended level. The receiver level is chosen by the sender of the message. Each leaf sends its encrypted data towards its ascendant. Of course, if the receiver level is not immediately above the leaf level, the users in between could have access to the message, according to their rights. However, we built the program in such a way that the message goes directly to the intended receiver, to avoid unnecessary operations.

The generation of the system's structure (starting all the agents, then computing and distributing the public and private keys, respectively) is performed by an entity outside of the tree structure. This agent, tentatively named "the system manager", is shut down once the tree structure is generated and the system is functional, in order to increase the level of security: we do not want any one user in the system to hold all the keys. In short, the functionality of the system at this stage is as follows:
• The system manager gathers data regarding the structure of the system: how many users, how many levels, how many users on each level and, most importantly, how many leaves.
• Using this data, it starts the corresponding agents, generates the keys and distributes them to their rightful owners.


• Once the key distribution is completed, the system manager agent is shut down. The system is now functional and messages can be received. Note that receiving messages is a cyclic behavior for the agents, an action repeated for as long as the agent is alive.
• The outside user sends data to the system, addressed to a specific level.
• The data is received and separated into as many messages as there are leaves in the system.
• Each leaf encrypts its message and sends it towards the ascendant situated on the previously designated level.
• Once an encrypted message is received, a verification of the link to the leaf is performed and, if such a link exists, the message is decrypted.

These are the basic operations of the system. Of course, what is described here was used for simulation purposes. The proposed architecture can be integrated into a larger CPS; there is no limit to the number of users that can be part of the system, as long as the tree structure is respected. In the following section we discuss ways to enhance the functionality of the system by adding different options for securing a message.

III. DIGITAL SIGNATURE

Digital signatures can be used for authentication, integrity and non-repudiation purposes. Cyber-physical systems can benefit from the use of digital signatures in practically all these areas. Even in a system protected by encryption, digitally signing the messages can prove useful. In most scenarios, encrypted data is considered safe as long as the private key needed to decrypt it has not been intercepted. However, it is possible to modify data even without understanding it. In such a case, the receiver would decrypt a message containing altered data without having any possibility to verify the integrity of the message. In this scenario a digital signature is useful, as modifying a signed message will automatically alter the signature.

Due to their complexity, CPS can benefit from signatures in various ways. In our system, we use them both for authentication purposes and for integrity. However, not every message should be signed. Requesting a signature is performed at random time intervals, with the role of authenticating certain users in the system, of ensuring the integrity of certain messages, or even when a suspicion of intrusion arises.

Most of the time a document is signed by a single user. There are, however, cases when a message should be signed by several users together. Such an algorithm can be found in [5]: the signature generated by the ElGamal system with divided private key. This scheme allows several users to sign a message together. Each user has its own private key, which it uses to sign the message; in the end, only one signature is placed on the document. This final signature is verified using a single, common public key.

The signing algorithm contains the three phases most digital signature algorithms contain: key generation, signing and verifying the signature. The algorithm specifically requires working with large numbers, and when possible the keys should be prime numbers. The algorithm was originally designed to work for an odd number of users; in order to accommodate an even number of users, one user should hold two private keys.

A. Theoretical Aspects

i. Key generation phase. The first phase of the digital signature algorithm is key generation. We need a set of private keys with as many keys as there are users wanting to sign a document. The algorithm begins by randomly choosing a prime number q for which the discrete logarithm problem is difficult, and another number g, the generator. The two numbers have to be distinct. From the set {1, …, q-1} the set of private keys is chosen. As previously stated, each user has its own private key, therefore the generated set is {x_1, …, x_(2n+1)}, where 2n+1 is the number of users. The keys are again chosen randomly; they are, if possible, prime numbers, and they are distinct. Once these numbers are generated, the next step is computing the public key, which is composed of three elements: q, g and a third element h, computed using all the numbers previously generated, including the private keys. As stated in [5], we compute:

h_i = g^(x_i) mod q                                                  (1)

Once all h_i are computed, h is:

h = (h_1 · h_3 · … · h_(2n+1)) · (h_2 · h_4 · … · h_(2n))^(-1) (mod q)      (2)
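Putting (1) and (2) together, a minimal Java sketch of this key generation phase for 2n+1 = 3 signers might look as follows. The toy parameter sizes and all identifiers are ours, not from [5], and details such as enforcing distinct keys are omitted:

import java.math.BigInteger;
import java.security.SecureRandom;

// Sketch of the key generation phase following equations (1) and (2),
// for 2n+1 = 3 signers (n = 1); toy sizes, not a production implementation.
public class DividedKeyGeneration {
    public static void main(String[] args) {
        SecureRandom rng = new SecureRandom();
        BigInteger q = BigInteger.probablePrime(64, rng); // prime modulus q
        BigInteger g = BigInteger.probablePrime(32, rng); // generator, distinct from q

        int users = 3;                                    // number of signers, 2n+1
        BigInteger[] x = new BigInteger[users];           // private keys x_1 .. x_3
        BigInteger[] h = new BigInteger[users];
        for (int i = 0; i < users; i++) {
            x[i] = BigInteger.probablePrime(32, rng);     // random prime key (distinctness not checked here)
            h[i] = g.modPow(x[i], q);                     // equation (1): h_i = g^(x_i) mod q
        }

        // Equation (2) for n = 1: h = (h_1 * h_3) * (h_2)^(-1) mod q
        // (arrays are 0-based, so h[0] and h[2] are the odd-indexed factors)
        BigInteger hPub = h[0].multiply(h[2]).mod(q)
                              .multiply(h[1].modInverse(q)).mod(q);

        System.out.println("shared public key (g, q, h): " + g + ", " + q + ", " + hPub);
    }
}

The printed triple corresponds to the shared public key described next, while each x_i would be handed privately to its signer.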

The shared public key is (g, q, h), while the private keys {x_1, …, x_(2n+1)} are not shared: each user knows only its own key.

ii. Signing phase. The second phase of the algorithm is signing messages/documents. In order for the signing phase to begin, the users need the hash, H(m), of the message they are about to sign. In the next step, the users signing the message choose together a random number y, 0