ADVANCED INFORMATION AND COMMUNICATION


2015 1ST INTERNATIONAL CONFERENCE ON

ADVANCED INFORMATION AND COMMUNICATION TECHNOLOGIES-2015

AICT’2015 Conference Proceedings

October 29 – November 1 Lviv, Ukraine 2015

УДК 621.3, 004. These proceedings describe new directions in the development of information and telecommunication systems, networks and technologies, principles of optical transport network construction, signal processing methods, and methods of data protection in telecommunication networks. The proceedings are recommended for scientists and engineers in the field of information and telecommunication networks and systems.

CONFERENCE PARTNERS:

Proceedings of 2015 1st International Conference on Advanced Information and Communication Technologies-2015 (AICT'2015), Lviv, Ukraine, October 29 – November 1, 2015, 188 p.

Papers are presented in authors’ edition.

© Lviv Polytechnic National University, 2015

CONFERENCE ORGANIZERS: Lviv Polytechnic National University; Lviv Regional Scientific and Technical Society of Radio Engineering, Electronics and Communications

ORGANIZING COMMITTEE CHAIRMAN: Prof. Mykhailo Klymash - Head of the Telecommunications department, Lviv Polytechnic National University CHAIRMAN’S DEPUTY: Dr. Bogdan Strykhalyuk - Deputy head of the Telecommunications department, Lviv Polytechnic National University CONFERENCE SECRETARY: Dr. Bohdan Buhyl - Assistant lecturer, Lviv Polytechnic National University MEMBERS OF THE COMMITTEE: Marian Kyryk, LLC "Inform-consulting"; Marian Selyuchenko, Lviv Polytechnic National University; Mykhailo Andriychuk, Institute of Applied problems of Mechanics and Mathematics of NAS of Ukraine; Olena Krasko, Lviv Polytechnic National University; Olga Shpur, Lviv Polytechnic National University; Orest Kostiv, Lviv Polytechnic National University; Orest Lavriv, Lviv Polytechnic National University;

Pavlo Huskov, Lviv Polytechnic National University; Roman Korzh, Lviv Polytechnic National University; Taras Maksymyuk, Lviv Polytechnic National University; Volodymyr Fast, Lviv Polytechnic National University; Volodymyr Fedyna, JSC "MTS-Ukraine"; Volodymyr Pelishok, Lviv Polytechnic National University; Zenoviy Kharkhalis, Lviv Polytechnic National University.

On behalf of the Organizing Committee, it is with great pleasure that we welcome you to the 2015 International Conference on Advanced Information and Communication Technologies (AICT’2015), held in the magnificent city of Lviv, the “culture capital” of Ukraine. AICT 2015 will be an excellent presentation event, offering the opportunity for researchers, engineers and business people to exchange ideas and discuss state-of-the-art solutions in the ICT field. In addition to the traditional sessions, we are offering an industry event – Lviv Cisco Day. Here, the present and the future of Information and Communications Technologies will be discussed with professionals from Cisco, Ukrtelecom, SoftServe and Lohika. Lviv is one of the most exciting cities in Europe and is increasingly recognized as one of the leading scientific and educational centers, with its 12 universities, 8 academies and more than forty research institutes. The city is known as a center of art, literature, music and theatre. Nowadays, indisputable evidence of the city's cultural richness is its large number of theatres, concert halls, creative unions, and its many artistic activities. Lviv's historic center has been on the United Nations Educational, Scientific and Cultural Organization (UNESCO) World Heritage list since 1998. We hope that everyone will find something to enjoy and have a great and memorable stay in Lviv. We would like to thank all the members of the Organizing and Technical Program Committees, as well as the numerous reviewers, paper authors, presenters, speakers and volunteers, who have worked diligently to make this conference a great success. Last, but not least, the support of the Lviv Regional Scientific and Technical Society of Radio Engineering, Electronics and Communications, and all our patrons, exhibitors, and supporters is greatly appreciated. AICT’2015 Organizing Committee

TECHNICAL PROGRAM COMMITTEE CO-CHAIRS:
Minho Jo, Korea University, Sejong Metropolitan City, S. Korea
Andriy Luntovskyy, University of Cooperative Education, Dresden, Germany
Oleksandr Lemeshko, Kharkiv National University of Radioelectronics, Kharkiv, Ukraine

MEMBERS OF THE COMMITTEE: André Lima Férrer de Almeida Aziz Mohaisen Chin-Chen Chang Chin-Feng Lai

Federal University of Ceara, Fortaleza, Brazil University at Buffalo, NY, USA Feng Chia University, Taichung, Taiwan

Qiang Li Reza Malekian Sasitharan Balasubramaniam

Dmytro Fedasyuk Haiyang Ding

Chia Nan University of Pharmacy & Science, Tainan,Taiwan Lviv Polytechnic National University, Lviv, Ukraine Xidian University, Xi'an, China

Tae Woong Jeon

Hyong-Woo Lee

Korea University, Seoul, S. Korea

Taras Andrukhiv

Odessa National Academy of Telecommunications n.a. O.S. Popov, Odessa, Ukraine Lviv Polytechnic National University, Lviv, Ukraine Winston-Salem State University, Winston-Salem, USA

Tarcisio Ferreira Maciel

Ivan Lisovyy Ivan Prudyus Jinsuk Baek Kouichi Sakurai Larysa Globa Leo van Moergestel Longzhe Han Lubov Berkman Markian Pavlykevych Mohammad Upal Mahfuz Natalia Kryvinska Nguyen Dinh Han

Kyushu University, Fukuoka, Japan National Technical University of Ukraine «Kyiv Polytechnic Institute», Kyiv, Ukraine Utrecht University of Applied Sciences, Utrecht, Netherlands Nanchang Institute of Technology, Nanchang, China State University of Telecommunications, Kyiv, Ukraine Lviv Polytechnic National University, Lviv, Ukraine University of Wisconsin-Green Bay, Wisconsin, USA University of Vienna, Vienna, Austria Hung Yen University of Technology and Education, Hung Yen Province, Vietnam

Seung-Wan Ryu

Tao Han

Vasilis Friderikos Vasyl Kychak Volodymyr Mosorov Volodymyr Popovskyy Wun-Cheol Jeong Xiaohu Ge Yaroslav Matviychuk Yevgen Yashchyshyn Yulong Zou Zujun Hou

Huazhong University of Science and Technology, Huazhong, China University of Pretoria, Pretoria, South Africa Tampere University of Technology, Tampere, Finland Chung-Ang University, Seoul, S. Korea Korea University, Sejong Metropolitan City, S. Korea Huazhong University of Science and Technology, Huazhong, China ALP JSC "Ukrtelecom", Lviv, Ukraine Federal University of Ceara, Fortaleza, Brazil King's College London, London, UK National Technical University of Vinnytsia Vinnitsa, Ukraine Lodz University of Technology, Lodz, Poland Kharkiv National University of Radioelectronics, Kharkiv, Ukraine Korea University, Sejong Metropolitan City, S. Korea Huazhong University of Science and Technology, Huazhong, China Lviv Polytechnic National University, Lviv, Ukraine Warsaw University of Technology, Warsaw, Poland The University of Western Ontario, London, Canada Institute for Infocomm Research, Singapore

CONTENTS

Distributed systems and cloud computing
Luntovskyy A., Klymash M. Performance and energy efficiency in parallel computing .... 10
Globa L., Stepurin O. Experimental research of power consumption in distributed data center .... 14
Shestopal Ye., Semenko A. Signal detection algorithms for parallel data transmission systems .... 16
Klymash M., Huskov P. A survey on cloud radio access networks deployment strategies .... 19
Tkachova O., Yevsieieva O. Software-Defined Network controllers in the cloud: performance and fault tolerance evaluation .... 23
Chaikovsky I., Chaban K., Nykonchuk P. Research architecture and features of network processors .... 27

Wireless and mobile communication
Lemeshko O., Haider Dheyaa Kamil Al-Janabi, Hailan Ahmad, Hojayev Oraz. Research of factors influencing the subchannel allocation to subscriber stations in WiMAX .... 31
Bubnov N.S., Svetsinskaya E.S., Yaschuk A.S., Sunduchkov K.S., Golik A.L., Sunduchkov A.K. The problem of processing digital signals format 5G in mobile communication and its solution .... 33
Kyryk M., Yanyshyn V. The spectrum sensing techniques efficiency analysis in cognitive radio networks .... 35
Globa L., Volvach Ye. Recovery process in the mobile networks .... 38
Longzhe Han, Seung-Seok Kang. Information-centric based communication scheme for wearable health care services .... 40
Globa L., Kurdecha V., Ananina D. Structure of the quality server in Wi-Fi offload technology .... 42
Popov V., Skudnov V., Vasiliev A. Modern heterogeneous telecommunications network .... 44
Haider Abbas Al-Zayadi, Koval B. QoE-based monitoring of LTE networks .... 47
Ageyev D., Al-Ansari Ali. LTE RAN and services multi-period planning method analysis .... 49
Maksymyuk T., Minho Jo, Gavronskiy V. Deployment of Massive MIMO systems for 5G heterogeneous networks .... 52

Traffic engineering and data processing
Barannik V.V., Sidchenko S.A., Tarnopolov R.V., Tupitsya I.M. The process of forming layers of bit zones in the method of crypto-semantic presentation of images on the basis of the floating scheme .... 59
Komolov D., Zhurbynskyy D., Kulitsa O. Selective method for hiding of video information resource in telecommunication systems based on encryption of energy-significant blocks of reference i-frame .... 62
Buhil B., Shpur O., Melnyk T. Improving the effectiveness of data transfer in IP/MPLS network .... 65
Barannik V.V., Tverdokhleb V.V., Kharchenko N., Sergey Stasev. Improved method of dynamic bit rate control of video stream .... 69
Larin V., Krasnikov P., Gavrilov D. The analysis of template method of video processing .... 72
Barannik V.V., Bekirov Ali, Podlesny S.A., Yalivets K. Classification of methods of error control and concealment in video communications .... 75
Veres O., Shakhovska N. Feature application big data information technology .... 78
Barannik V.V., Ryabukha Yu.N., Krasnorutskyi A., Hrinivetska A. Model for assessment of the semantically meaningful segments of the video frame in encrypted coding systems .... 81
Aleksieiev S., Romanchuk V. Evolution of the modern cyber threats in the age of big data .... 83
Lavriv O., Kahalo I., Kolodiy R. Application of NoSQL approach in data-centered network architectures: big data case .... 85
Urikova O. Informational resource management as a facility of corporate enterprises strategic managerial efficiency increasing .... 88
Rozdimakha E.A., Omelchenko A.V., Fedorov O.V. Network traffic shaping with the aid of linear prediction models .... 90
Lebedenko T., Ievdokymenko M., Ali Salem Ali. Research of influence flow characteristics to network routers queues utilization .... 93
Yeremenko O., Tariki N., Mohanad Najm Abdulwahd. Improvement of multipath routing flow-based models for different paths classes .... 95

Telecommunications, Security and E-learning
Denisov A. Study of dual-frequency absorbing materials based on frequency selective surfaces .... 99
Barannik V.V., Sergii Shulgin, Musienko A.P. Methodological base for representation transformants in equilibrium uneven-diagonal area .... 104
Semenko A., Bokla N. Modified Gold's pseudorandom sequence forming with the length of 15 + 1 impulses .... 106
Tolupa S., Prus R. Method of energy-efficient conversion of PCM signals in secure information transmission systems .... 108
Pilipenko G.V., Vrublevsky A.R., Lesovoy I.P. Routing based on fuzzy logic for RIP protocol .... 111
Rubel A.S., Fedorov O.V., Omelchenko A.V. Detection of hidden data embedded by the Koch and Zhao method .... 113
Duravkin Ye., Yevsieieva O., Carlsson Anders. Model of low-rate attack detection on the OpenStack .... 115
Strykhaliuk B., Koval B., Kharkhalis Z. Applying the backup overlay concept to security purposes .... 117
Koval V.V., Kalian D.O. Highly efficient synchronization systems for infocommunication networks .... 119
Rodyhin M., Fedorov O. Quality estimation of a compressed JPEG image .... 121

Distributed systems and cloud computing

Performance and Energy Efficiency in Parallel Computing Luntovskyy Andriy

Klymash Mykhailo

BA Dresden Univ. of Coop. Education Dresden, Germany e-mail: [email protected]

Department of Telecommunication Lviv Polytechnic National University Lviv, Ukraine e-mail: [email protected]

Abstract—Performance-to-energy tradeoffs in parallel computing are discussed and performance models are examined. Energy optimization can be reached by using advanced technologies like IoT. Keywords — parallel computing and clustering; performance models; on-board controllers; IoT, Internet of Things; Fog computing

I. ENERGY EFFICIENCY IN NETWORKS

After 2011, the development of cluster and cloud computing methods was commonly triggered by the trend of "green IT" with increasing energy demand and prices. Computing centers were built more often in colder regions of the Earth. For example, Google data centers achieve a PUE (Power Usage Effectiveness) of 1.12 due to further optimization of hardware, waste-heat recycling systems and building construction features like improved air circulation, reuse of waste heat etc. This means that only 12% of the energy required for computing was used not by the servers but by other services like conditioning, energy distribution, lighting, surveillance systems etc. However, according to the Uptime Institute 2012 Data Center Survey, the average PUE in the domain was about 1.89, which means Google achieved a significant improvement compared with others. In general, the goal of optimization in this domain is to decrease the consumed energy and costs while an acceptable quality of service (QoS) is offered, i.e.:

Min(PUE) ∧ [QoS ≥ QoS_min] ∧ [Costs ≤ Costs_max]     (1)

where Costs_max and QoS_min are the cost and quality-of-service constraints. A new trend towards low-cost and low-energy computing nodes based on cheap on-board microprocessors (RISC/ARM) is nowadays to be considered as a serious alternative to expensive state-of-the-art nodes within the up-to-date IoS (Internet of Services) [1][2]. The deployment of low-cost and low-energy computing nodes such as Arduino, Raspberry Pi and Intel Edison means significant energy savings as well as a technologically important new step towards the IoT (Internet of Things) [3][4]. Scenarios for the so-called Fog Computing within the IoT will steadily gain in importance in the mid-term. Instead of using the IoS (Internet of Services) with heavy-weight processors and VMs, agile and energy-efficient on-board microprocessors should be operated, in view of the future transfer from Clouds/IoS to Fog Computing/IoT. Surely the deployment of low-cost and low-energy computing nodes based on on-board microprocessors should be considered in the frame of a given math-log problem from the aspects of importance and priority, and is strongly dependent on appropriate resource use, which is in the scope of this paper.
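To make the selection rule of criterion (1) concrete, here is a minimal Python sketch; the candidate configurations and their PUE, QoS and cost figures are purely hypothetical:

```python
# Pick the configuration with the lowest PUE, subject to QoS >= QoS_min and
# Costs <= Costs_max, as in criterion (1). The candidate list is hypothetical.
candidates = [
    # (name, PUE, QoS score, annual costs in arbitrary units)
    ("classic air-cooled DC", 1.89, 0.99, 120),
    ("optimized DC (waste-heat reuse)", 1.12, 0.99, 135),
    ("on-board microcontroller cluster", 1.05, 0.90, 40),
]
QOS_MIN, COSTS_MAX = 0.95, 140

def select_configuration(candidates, qos_min, costs_max):
    feasible = [c for c in candidates if c[2] >= qos_min and c[3] <= costs_max]
    if not feasible:
        raise ValueError("no configuration satisfies the QoS/cost constraints")
    return min(feasible, key=lambda c: c[1])   # minimal PUE among feasible candidates

print(select_configuration(candidates, QOS_MIN, COSTS_MAX))
# -> the optimized DC: lowest PUE among the candidates that meet both constraints
```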

II. PERFORMANCE MODELS

First let us define the most important performance factors. The performance parameters of modern computers are as follows: tact (clock) frequency f; MIPS (Million Instructions per Second, Mega IPS); FLOPS (Floating Point Operations per Second). The system clock signal synchronizes the operation of multiple functional blocks within a CPU. The system tact is a periodic function based on the Peirce function (NOR). Some examples of the performance of certain recent CPU models are given below (Table I).

TABLE I. PERFORMANCE OF CERTAIN SELECTED CPU MODELS

# | CPU model            | Performance, MIPS | Tact (clock) frequency, GHz
1 | Athlon FX60          | 18.938            | 2,6
2 | Xeon Harpertown      | 9.368             | 3
3 | ARM Cortex-A15       | 35.000            | 2,5
4 | AMD FX-8150          | 108.890           | 3,6
5 | Intel Core i7 2600K  | 128.300           | 3,4

The following performance formula can be used [1][2][5]:

P = f · n1 · I · n2     (2)

where P is the performance in GFLOPS, f the CPU tact frequency in GHz, n1 the number of cores within a CPU, I the CPU instructions per tact, and n2 the number of CPUs per computing node. Let us consider the integral performance criterion FLOPS. Example 1. Let us consider a 2-socket server with CPU Intel X5675 (3,06 GHz, 6 cores, 4 instructions/tact): Performance = 3,06 × 6 × 4 × 2 = 146,88 GFLOPS. Example 2. We have a 2-socket server with CPU Intel E5-2670 (2,6 GHz, 8 cores, 8 instructions/tact): Performance = 2,6 × 8 × 8 × 2 = 332,8 GFLOPS.
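As a quick cross-check of formula (2), a minimal sketch that reproduces Examples 1 and 2 (the function name is introduced only for illustration):

```python
def peak_gflops(f_ghz: float, cores: int, instr_per_tact: int, cpus: int) -> float:
    """Peak performance P = f * n1 * I * n2 from formula (2), in GFLOPS."""
    return f_ghz * cores * instr_per_tact * cpus

# Example 1: 2-socket server, Intel X5675 (3.06 GHz, 6 cores, 4 instructions/tact)
print(peak_gflops(3.06, 6, 4, 2))   # 146.88 GFLOPS
# Example 2: 2-socket server, Intel E5-2670 (2.6 GHz, 8 cores, 8 instructions/tact)
print(peak_gflops(2.6, 8, 8, 2))    # 332.8 GFLOPS
```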


A. Speedup and effectiveness of the computing process

The factors of speedup and effectiveness are computed as follows:

A_n = T_1 / T_n,     E_n = (A_n / n) · 100%     (3)

where T_1 is the computing time for a math-log problem using only one CPU, T_n the computing time of the solution parallelized on n processors or threads, A_n the speedup factor, and E_n the effectiveness of the speedup on n CPUs. An exemplary distribution of sections under parallelization of a task and the influence of cluster communication exchanges by Message Passing between the processors or threads is depicted in Fig. 1. A computing-time gain is possible only due to a higher (p/s) ratio within the parallelized task (math-log problem). The time estimations are as follows, refer to (4):

T = T_s + T_p,   T_s = s · T,   T_p = p · T,   T_n = s · T + (p/n) · T + k · n,   e = 1 - p     (4)

Fig. 1. Sections distribution under parallelization of a math-log problem and the influence of cluster communication (exchanges) by Message Passing

where T is the overall computing time, s the sequential part of the task (percentage), p the potentially parallelized part of the task (math-log problem), i.e. executed on n threads or CPUs, e the part of sequential computing time, and k the negative influence of communication by message passing between CPUs/threads (this component can also be neglected, k = 0).

B. Amdahl's Law

One of the most appropriate and useful approximations for the speedup factor is G. M. Amdahl's (1967):

A_n = 1 / ((1 - p) + p/n),   with communication losses: A_n = 1 / ((1 - p) + p/n + k · n),   A_max = lim(n→∞) A_n = 1 / (1 - p)     (5)

where p is the potentially parallelized part of the math-log problem, n the number of available CPUs/threads, and k the negative influence of communication by message passing between CPUs/threads (this component can also be neglected, k = 0). Example 3. Let us consider a math-log problem with T_overall = 20 h; T_seq = 1 h (i.e. 5%), T_par = 19 h (i.e. 95%); Speedup_MAX = 20. Then for n = 10 processors (threads): p = 0,95, Speedup = 1/((1 - 0,95) + 0,95/10) = 1/(0,05 + 0,095) = 6,9 < Speedup_MAX. On the other hand, for n = 95 processors (threads) the speedup is only 16,7.
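A minimal sketch of Amdahl's approximation (5) that reproduces the numbers of Example 3; the communication term k is neglected (k = 0), as allowed above:

```python
def amdahl_speedup(p: float, n: int, k: float = 0.0) -> float:
    """Speedup A_n = 1 / ((1 - p) + p/n + k*n) from formula (5); k = 0 neglects communication."""
    return 1.0 / ((1.0 - p) + p / n + k * n)

p = 0.95                                  # 19 h of 20 h are parallelizable
print(round(amdahl_speedup(p, 10), 1))    # ~6.9  (Example 3, n = 10)
print(round(amdahl_speedup(p, 95), 1))    # ~16.7 (Example 3, n = 95)
print(round(1 / (1 - p), 1))              # A_max = 20
```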

So we can obtain the following graduated depiction of the speedup factor (Fig. 2). There are some critical points regarding this realistic model: it gives a rather pessimistic representation of the parallel computing status. But other models also point to the saturation effects, especially due to communication processes within a cluster between the processors (threads) and energy losses (in the form of redundant warmth, i.e. waste heat).

Fig. 2. Pessimistic Amdahl's model for the speedup factor depending on p = {0,5 … 0,95}: saturation effect, no further profit from increasing the number of threads n. (a) Speedup vs. effectiveness; (b) Amdahl's speedup for different p-values

C. Speedup Model Overview

Table II summarizes the integrated models and approximations of the speedup factor that are typically used for distributed (parallel) computing. The approximations of the speedup factor [6][7] are given in dependence on the chosen criteria. Among them are the most widely used models and laws, including Amdahl's, Barsis-Gustafson's, Moore's law (exponential model) and some further models. Example 4. We would like to determine herewith the value e (refer to formula (4)), i.e. the normally unknown part of sequential computing time for a math-log problem, on the basis of the Karp-Flatt metric (1990), refer to Table II (Pos. 9): Number of CPUs n = 100, measured speedup A = 10, 1/A = 0,1: e = (0,1 - 0,01) / (1 - 0,01) = 0,09/0,99 = 0,0909; e = 9,1%; the problem can be parallelized for p = 91%. Number of CPUs n = 100, measured speedup A = 25, 1/A = 0,04: e = (0,04 - 0,01) / (1 - 0,01) = 0,03/0,99 = 0,0303; e = 3,03%; the problem can be parallelized for p = 97%. Number of CPUs n = 100, speedup A = 66, 1/A = 0,0151:


e = (0,0151 - 0,01) / (1 - 0,01) = 0,0051/0,99 = 0,0052; e = 0,52%; the problem can be parallelized for p = 99,5%.

Considering (3) and (4), we can obtain the following useful formula (6) for this criterion:

e(A_n, n) = (1/A_n - 1/n) / (1 - 1/n),     p = 1 - e = (A_n - 1) / (A_n - A_n/n),     E_n = (A_n / n) · 100%     (6)

Example 5. Let us consider a number of CPUs n = 100, speedup A_n = 66, effectiveness E_n = 66%; then the math-log problem can be parallelized for the ratio p (cf. Example 4):

p = (66 - 1) / (66 - 0,66) = 65/65,34 = 0,995
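A minimal sketch of the Karp-Flatt criterion (6), reproducing Examples 4 and 5:

```python
def karp_flatt_e(speedup: float, n: int) -> float:
    """Experimentally determined sequential share e = (1/A_n - 1/n) / (1 - 1/n), formula (6)."""
    return (1.0 / speedup - 1.0 / n) / (1.0 - 1.0 / n)

def parallel_share(speedup: float, n: int) -> float:
    """Parallelizable share p = 1 - e = (A_n - 1) / (A_n - A_n/n)."""
    return (speedup - 1.0) / (speedup - speedup / n)

for a in (10, 25, 66):                 # measured speedups from Example 4, n = 100 CPUs
    print(a, round(karp_flatt_e(a, 100), 4), round(parallel_share(a, 100), 3))
# 10 -> e ~ 0.0909, p ~ 0.909;  25 -> e ~ 0.0303, p ~ 0.970;  66 -> e ~ 0.0052, p ~ 0.995
```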

D. Own experiments

For the hardware basis (Fig. 3a) offered at Dresden University of Technology, the following results on speedup have been obtained (Table II). It was a voluminous experiment in November 2006 aimed at simulating the signal power propagation of WLAN/WiMAX networks in complex 2D environments (maps of obstacles with given material features) [2][8][9][10][11]. The simulation was realized with the use of the CANDY software and Web Services for SSL access to MARS. The obtained results (Fig. 4, refer to Table II) can be approximated as follows, cf. Grosch's law:

A_n = T_1 / T_n ≈ n^α,   T_1 = 8021 sec,   α = 0,95     (7)

Example 6. The new hardware basis from the same institution is called the TAURUS Bull HPC-Cluster [2]. This cluster is more powerful than the formerly leading MARS and nowadays has the following features (Fig. 3b):
- 4320 cores Intel E5-2690 (Sandy Bridge) 2.90 GHz;
- 704 cores Intel E5-2450 (Sandy Bridge) 2.10 GHz as well as 88 NVidia Tesla K20x GPUs;
- 2160 cores Intel X5660 (Westmere) 2.80 GHz;
- SMP (Symmetric Multiprocessing) nodes with 1 TB RAM;
- 1 PB SAN disk storage, Bullx Linux 6.3;
- 137 TFLOPS as peak performance (without GPUs).

Fig. 4. Computing time and speedup factor depending on the number of threads, obtained on the multi-core high-performance computer MARS @ TU Dresden (basis: CANDY Framework, 2006)
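The power-law approximation (7) can be cross-checked against the measured values listed in Table II below; a minimal sketch:

```python
# Measured computing times from Table II (MARS, CANDY WLAN/WiMAX simulation, 2006)
threads = [1, 2, 5, 10, 20, 30, 55, 70]
times_s = [8021, 4163, 1749, 908, 471, 321, 181, 144]

T1, alpha = times_s[0], 0.95
for n, t in zip(threads, times_s):
    measured = T1 / t                  # A_n = T_1 / T_n
    approx = n ** alpha                # approximation (7): A_n ~ n^0.95
    print(f"n={n:3d}  measured={measured:5.1f}  n^0.95={approx:5.1f}")
# e.g. n = 70: measured speedup ~ 55.7, approximation ~ 56.7
```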

TABLE II. COMPUTING TIME FOR A COMPLEX SIMULATION TASK OF WLAN/WIMAX PROPAGATION

Number of threads | Computing time, s | Speedup factor A_n = T_1/T_n
1   | 8021 | 1
2   | 4163 | 1,9
5   | 1749 | 4,6
10  | 908  | 8,8
20  | 471  | 17,0
30  | 321  | 25,0
55  | 181  | 44,3
70  | 144  | 55,7

Fig. 3. Hardware basis: High Performance Computing at TUD. (a) High Performance Computing Cluster MARS SGI Altix 4700 @ TUD with 1024 cores possesses the performance of 13,1 TFLOPS; (b) up-to-date hardware basis: TAURUS Bull HPC-Cluster with 137 TFLOPS

The so-called SMP (Symmetric Multiprocessing) with large RAM capacities nowadays gains more sympathizers in its deployment than NUMA (Non-Uniform Memory Access) with its unified address spaces, as well as, correspondingly, the cache-coherent NUMAs. A performance comparison is given in Table III. Herewith some worldwide-known clusters and grids from the Top500 list are referred to in correspondence with the above-mentioned performance of the MARS and TAURUS systems. MARS performance is given as "a unit".

TABLE III. PEAK PERFORMANCE COMPARISON (STATUS 2015)

Cluster or grid | Peak performance, PFLOPS | Multiplicity factor (regarding "MARS units")
Tianhe-3 (specification not yet confirmed) | 100 | 7692
Tianhe-2 (a supercomputer from the People's Republic of China) | 33,86 | 2605
Titan (USA supercomputer) | 17,59 | 1353
BOINC (grid system hosted at the University of California, Berkeley, USA) | 9 | 692
Juqueen (hosted at Research Centre FZ Juelich, supported via IBM) | 4,1 | 315
SuperMuc (hosted at the Leibniz Institute in Munich) | 2,8 | 93
TAURUS (hosted at TU Dresden) | 0,137 | 11
MARS (TU Dresden, used by the authors for our own WLAN/WiMAX signal-power-propagation simulation experiments in 2006) | 0,013 | 1

III. ENERGY EFFICIENCY WITH PARALLELIZED MICROCONTROLLERS

But none of the above-mentioned computing systems is sufficiently energy-efficient: their electricity consumption lies in the MWh range. Energy-efficient solutions can be provided via small, low-cost and low-energy on-board processors, whose electricity consumption lies at most in the kWh range. Low-energy home intelligent nodes (3-10 W) for private cloud solutions, file servers, web servers, multimedia home centres etc. can be placed on the low-cost, energy-efficient on-board microcontrollers like Arduino, Raspberry Pi or Intel Edison as a trade-off solution. They offer a cheap alternative and symbolize a step-by-step shift to the IoT. An appropriate solution is the Raspberry Pi on-board microcontroller (first deployed in 2011 in Cambridge, UK), with credit-card dimensions and a matchbox-sized case, with the following characteristics [12][13]:
- ARM/RISC CPU with RAM = 256 MB - 1 GByte;
- F_tact = 700-900 MHz;
- external storage on an SD card; IO ports: USB 3.0, HDMI, LAN 10/100 Ethernet;
- low cost; the price depends on the model (Pi A/B or Pi 2 B);
- OS: Linux/BSD, RISC OS or Raspbian OS, as well as a diversity of GPL software, also under Win 10;
- SSH/Samba protocol support;
- USB drive for about 0,5 - 3 TByte;
- low power, depending on models A/B: ca. 3,5-5 W.

Naturally there are a lot of scenarios for economical network nodes. The newest Raspberry Pi 2 Model B supports Windows 10 and acts as a mini-PC with 6 times the CPU performance due to its 900 MHz tact frequency, RAM = 1 GByte and quad-core architecture, being oriented to the Windows Developer Program for IoT. Thus the most significant criterion for the choice of an appropriate computing system is not only performance but the combination of further criteria as follows: reliability and QoS, data security, anonymity and privacy, energy minimum and tiny OPEX (operational expenditures).

Example 7. Herewith a small example addressing the discussed trade-offs. A "supercomputer" with 64 cheap Raspberry Pi's and two Lego racks is depicted in Fig. 5. This low-energy cluster (64 × 3,5 W, max. 0,25 kW) is built with the use of low-cost and energy-efficient on-board microcontrollers. The small but smart Raspberry Pi cluster for parallel computing offers the following features [12][13]:
- DC supply per USB, 3,5 W / CPU 700 MHz;
- energy-efficient Cloud IaaS;
- SD card as external disk drive;
- low-power data transfer and exchange via LAN Ethernet;
- OS Raspbian.

Fig. 5. Energy-efficient Raspberry Pi cluster (source: http://www.pro-linux.de)

IV. CONCLUSIONS

Performance-to-energy tradeoffs in parallel computing have been discussed and performance models examined. Energy optimization can be reached by using advanced technologies like IoT. The most significant criterion for the choice of an appropriate computing system is not only performance but the combination of criteria, inter alia the energy minimum [5][6][7].

REFERENCES
[1] BOINC Grid (Online): http://boinc.berkeley.edu/.
[2] HPC Clusters in Dresden, ZIH@TUD (Online): http://zih.tu-dresden.de.
[3] F. Bonomi, R. Milito, J. Zhu, S. Addepalli. Fog Computing and Its Role in the Internet of Things, 2007, CISCO Corp., USA, CA, 15 p.
[4] Z. Shelby, C. Bormann. 6LoWPAN: The Wireless Embedded Internet, in EE Times, 2011.
[5] Performance of Grids and Clouds (Online): http://cidse.engineering.asu.edu/seminar-performance-of-grids-and-clouds-may-20/.
[6] J. L. Gustafson. Re-evaluating Amdahl's Law. Communications of the ACM 31(5), 1988, pp. 532-533.
[7] A. H. Karp, H. P. Flatt. Measuring Parallel Processor Performance. Communications of the ACM 33(5), 1990, pp. 539-543, doi: 10.1145/78607.7861.
[8] Luntovskyy A.O., Guetter D.G., Melnyk I.V. Planung und Optimierung von Rechnernetzen: Methoden, Modelle, Tools für Entwurf, Diagnose und Management im Lebenszyklus von drahtgebundenen und drahtlosen Rechnernetzen. Handbuch. – Springer, Vieweg + Teubner Verlag, Wiesbaden, 2011. – 411 p. (in German).
[9] Luntovskyy A.O., Klymash M.M., Semenko A.I. Distributed Services for Telecommunication Networks: Ubiquitous Computing and Cloud Technologies. Monograph. – Lviv, Lvivska Politechnika, 2012. – 368 p. (in Ukrainian).
[10] Luntovskyy A.O., Klymash M.M. Data Security in Distributed Systems. Monograph. – Lviv, Lvivska Politechnika, 2014. – 464 p. (in Ukrainian).
[11] CANDY Framework and Online Platform (Online): http://candy.inf.tu-dresden.de.
[12] M. Richardson, S. Wallace. Getting Started with Raspberry Pi. O'Reilly Media, 2012. – 175 p. (ISBN: 978-1-449-34421-4).
[13] L. Orsini. How To Host a Website with Raspberry Pi (Online): http://www.readwrite.com/


Experimental Research of Power Consumption in Distributed Data Center Globa Larysa

Stepurin Oleksandr

Institute of Telecommunication Systems National Technical University of Ukraine “KPI” Kyiv, Ukraine e-mail: [email protected]

Institute of Telecommunication Systems National Technical University of Ukraine “KPI” Kyiv, Ukraine e-mail: [email protected]

Abstract – This paper reviews several existing investigations into minimizing the power consumption of data centers. An approach to task distribution in computing networks based on a power-consumption function is proposed, and the basic principle of task routing is described. Keywords — Energy-efficient computing; distributed computing; data center power consumption; virtual machine; experiment planning; live migration of multiple VMs; VM migration costs; VM migration time.

I. INTRODUCTION

Telecommunication systems and service providers are moving towards the provisioning of 5G technologies, which are based on the use of small cells and distributed data centers. On the other hand, increased ecological standards require lower power consumption from 5G networks. Savings in power consumption can be reached by minimizing losses in antennas and by minimizing the power consumption of data centers. The current research presents an approach to power-consumption minimization in data centers under consistently increasing load. Distributed computing systems can play the role of distributed data centers in 5G networks. They allow performing a large number of high-precision calculations without using supercomputers. The basic principle of such networks is to break a task into subtasks and distribute them between network nodes. This approach allows obtaining highly accurate results without additional costs, as it allows the use of existing infrastructure. However, all these networks have a significant drawback - high power consumption. In this paper, we propose a way to reduce the energy consumption of such networks through effective distribution of tasks (parallel execution) between computing devices based on the criterion of minimizing energy consumption.

II. REVIEW OF EXISTING INVESTIGATIONS

The most significant research in this area has been done in developed countries of Western Europe, where green and power-saving technologies are at a high level of development. Researchers from Dresden Technical University investigated the power consumption of servers during virtual machine migration [3]. It has been shown in practice and confirmed by experiments that the movement of VMs and shutting down unused physical servers help to minimize the power consumption of the system.

Virtual machine migration time was investigated in [1]. The researchers conducted quite a number of experiments and built a dependency which shows the influence of the virtual machine load on the migration time. In addition, they discovered that the summary time of migration of several virtual machines can be reduced if the migration is done one by one in an order which depends on the VMs' load type. This is an important investigation which helps to understand the influence of VM migration on service availability.

III. FORMULATION OF THE PROBLEM

The researches described in this part of the article give a good background for further investigation, but they do not contain methods for the evaluation of power consumption or analytical approaches. They just describe the practical value of task redistribution and its influence on power saving. In the current research an analytical way to solve this problem is proposed. To describe the task in a formalized way, the computation network can be depicted as in Fig. 1. It consists of M computation devices (servers) and a load balancer (managing node) which receives a flow of incoming tasks.

Fig. 1. Structure of the computer network.

Each task in the queue is characterized by the required processor time (C) and the required amount of RAM (V) for its solution. The Task Scheduler controls the distribution of the request flow based on the requirements of the current task, the workload of each node and the energy characteristics of each machine. The flow of applications is characterized by intensity λ. The development of a power-efficient task redistribution mechanism can be performed in the following 3 steps:


1. Determination of the function of power consumption for each machine in the network. This function shows the dependency of power consumption on CPU and RAM load. The functions are obtained using experimental data and methods of experiment planning.
2. Using the functions from step 1 and the known information about the task-flow characteristics, a minimax equation system can be built.
3. The Task Scheduler solves the minimax problem formulated for each task from the queue, using information about the current state of the network obtained via feedback connectivity (a minimal selection sketch is given after this list).
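To illustrate steps 1-3, the sketch announced above follows; the per-node power functions and the task parameters are hypothetical placeholders for the experimentally obtained functions of step 1:

```python
# Hypothetical per-node power functions P(x1, x2): x1 = CPU load, x2 = RAM load (both 0..1).
# In the paper such functions are obtained from measurements via experiment planning (step 1).
nodes = {
    "server-A": {"cpu": 0.40, "ram": 0.30, "power": lambda x1, x2: 90 + 120 * x1 + 25 * x2},
    "server-B": {"cpu": 0.70, "ram": 0.50, "power": lambda x1, x2: 60 + 180 * x1 + 40 * x2},
}

def assign_task(nodes, d_cpu, d_ram):
    """Pick the node whose power consumption grows least when the task (d_cpu, d_ram) is added."""
    best, best_delta = None, float("inf")
    for name, s in nodes.items():
        new_cpu, new_ram = s["cpu"] + d_cpu, s["ram"] + d_ram
        if new_cpu > 1.0 or new_ram > 1.0:          # node would be overloaded
            continue
        delta = s["power"](new_cpu, new_ram) - s["power"](s["cpu"], s["ram"])
        if delta < best_delta:
            best, best_delta = name, delta
    return best, best_delta

name, delta = assign_task(nodes, d_cpu=0.20, d_ram=0.10)
print(name, round(delta, 1))   # server-A: +26.5 W, cheaper than server-B (+40.0 W)
```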

IV. EXPERIMENT AND ANALYTICAL RESULTS

To determine the function of power consumption of each network node, an experiment was performed. The power consumption depends on the CPU load and the memory volume. To simplify the task of finding the dependence of energy consumption, the influence of hard-disk load can be ignored. This simplification is possible because the computers of the network can be loaded using a network interface and do not use the hard disk. The experiment was held using an uninterruptible power supply unit which can measure output power, and benchmark software which allows loading the processor and RAM to a certain value. The received values were processed using methods of experiment planning [4], and the function of energy consumption depending on the CPU and memory load is as follows:

(1)

where x1 is the CPU load and x2 the RAM load. On the basis of these dependencies, for each node in the network the first equation of the minimax problem can be written:

(2)

The second part of the minimax equation and the resolution of the system of equations will be performed in the third stage. On the basis of the solution, decisions will be taken about redirecting requests to a specific host of the computer network. Correction and refinement of the results will be made using the feedback system.
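Because the fitted function (1) is specific to the measured hardware, the sketch below only illustrates the fitting step under an assumed first-order model P(x1, x2) = b0 + b1·x1 + b2·x2; the measurement data are hypothetical:

```python
import numpy as np

# Hypothetical measurements: (CPU load x1, RAM load x2, measured power, W)
data = np.array([
    [0.1, 0.1,  95.0],
    [0.5, 0.2, 148.0],
    [0.9, 0.3, 199.0],
    [0.3, 0.8, 138.0],
    [0.7, 0.9, 192.0],
])
X = np.column_stack([np.ones(len(data)), data[:, 0], data[:, 1]])  # regressors [1, x1, x2]
y = data[:, 2]

# Least-squares estimate of the coefficients of the assumed model P = b0 + b1*x1 + b2*x2
b, *_ = np.linalg.lstsq(X, y, rcond=None)
print("b0=%.1f  b1=%.1f  b2=%.1f" % tuple(b))

def power(x1, x2, b=b):
    """Predicted node power for a given CPU/RAM load (assumed first-order model)."""
    return b[0] + b[1] * x1 + b[2] * x2

print(round(power(0.6, 0.5), 1))   # predicted power at 60% CPU and 50% RAM load
```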

V. CONCLUSIONS

In the current research, approaches to handling calculation streams in distributed data centers have been described. Further work in this area will be connected with the creation of program mechanisms for minimax equation resolution and of the task redistribution mechanism. It is planned to build a practical model of the described system and to perform tests which will show its practical value in power saving compared to other methods of task redistribution in distributed calculation networks.

The usage of such an approach could provide significant cost and energy savings, which is quite important nowadays. It also gives the opportunity to redistribute tasks between nodes to reach optimal utilization from the power-efficiency point of view. Such mechanisms could be used in educational centers, institutes, data centers used for high-performance calculations, and virtualized data centers. The system could be quite useful in case of a power outage in a data center, when the data center is switched to batteries. In such cases the most critical task is to keep the applications with significant impact running as long as possible. The system will allow services to be redistributed between nodes in the most optimal way from the power-saving point of view, unused machines to be turned off, and non-critical services to be terminated. Such an approach will help to increase the vitality of the data center.

REFERENCES
[1] Kateryna Rybina, Abhinandan Patni and Alexander Schill, "Analysing the Migration Time of Live Migration of Multiple Virtual Machines", Technical University Dresden, 01062 Dresden, Germany.
[2] Waltenegus Dargie, Anja Strunk, and Alexander Schill, "Energy-Aware Service Execution", Technical University Dresden, 01062 Dresden, Germany.
[3] Kateryna Rybina, Waltenegus Dargie, Anja Strunk, and Alexander Schill, "Investigation into the Energy Cost of Live Migration of Virtual Machines", Technical University Dresden, 01062 Dresden, Germany.
[4] Grachev Y.P., Plaksin Y.M., "Mathematical Methods of Experiment Planning", Moscow, DeLi print, 2005. – 296 p.
[5] Christoph Möbius, Waltenegus Dargie and Alexander Schill, "Power Consumption Estimation Models for Processors, Virtual Machines, and Servers", Technical University Dresden, 01062 Dresden, Germany.
[6] https://communities.intel.com/community/itpeernetwork/datastack/blog/2008/02/20/datacenter-powermanagement-power-consumption-trend
[7] L. Minas and B. Ellison, "The Problem of Power Consumption in Servers", Dr. Dobb's Journal, May 2009.
[8] A. Strunk and W. Dargie, "Does live migration of virtual machines cost energy?" in The 27th IEEE International Conference on Advanced Information Networking and Applications (AINA-2013), 2013.
[9] W. Dargie, "Analysis of the power consumption of a multimedia server under different DVFS policies," 2012 IEEE Fifth International Conference on Cloud Computing, pp. 779-785, 2012.
[10] A. Strunk, "Costs of virtual machine live migration: A survey," in Services (SERVICES), 2012 IEEE Eighth World Congress on, June 2012, pp. 323-329.
[11] S. Akoush, R. Sohan, A. Rice, A. Moore, and A. Hopper, "Predicting the performance of virtual machine migration," in Modeling, Analysis & Simulation of Computer and Telecommunication Systems (MASCOTS), 2010 IEEE International Symposium on, Aug. 2010, pp. 37-46.
[12] T. Imada, M. Sato, and H. Kimura, "Power and QoS performance characteristics of virtualized servers," in Grid Computing, 2009 10th IEEE/ACM International Conference on, Oct. 2009, pp. 232-240.
[13] Y. Wu and M. Zhao, "Performance modeling of virtual machine live migration," in Cloud Computing (CLOUD), 2011 IEEE International Conference on, July 2011, pp. 492-499.


Signal Detection Algorithms for Parallel Data Transmission Systems Shestopal Yevgenii

Semenko Anatoliy

State University of Telecommunications Kyiv, Ukraine

State University of Telecommunications Kyiv, Ukraine

Abstract — In this document, we consider signal detection algorithms for the receiving part of parallel data transmission systems, especially for Multiple Input Multiple Output (MIMO) or Multiple Input Single Output (MISO) technology. This paper includes the main features and advantages of fixed sphere decoding and of the detection algorithms used in the adaptive multi-mode MIMO system (simplified MAP algorithm, modified bit-flipping, modified fixed sphere decoding for space division multiple access). Comparisons to other detection methods are also provided. Keywords — Signal detection; Fixed sphere decoding; spatial diversity; Spatial multiplexing; Space division multiple access; MIMO; MISO.

I. INTRODUCTION

It is obvious that nowadays wireless and mobile communication systems have improved significantly, offer fast data rates and are relatively reliable. However, over the years users' demands keep growing, whereas each system has its own limits and boundaries of performance. Take LTE (Long Term Evolution) networks as an example, which widely apply MIMO and MISO systems to use the bandwidth more effectively. On the other hand, each method at a stage of such a system (e.g., coding or signal detection) has its drawbacks. Therefore, to guarantee the Quality of Service (QoS) and to meet future users' demands, each method must be improved or another approach applied. While the fixed transmitting part of a MIMO system has more capacity to perform complex operations, a user's portable device is more restricted in terms of computational ability. Thus, detection and decoding techniques must be effective, reliable and, simultaneously, power- and hardware-efficient. As far as detection methods are concerned, there is a vast range of them for systems with multiple antennas. They can be divided into two main groups: hard-decision MIMO detection and soft MIMO detection. The former includes methods such as Zero Forcing (ZF), Sphere Decoding (SD), Fixed Complexity SD, Maximum Likelihood (ML), and Reduced-Dimension ML Search. Techniques like Soft ZF, Max-Log Detection and Soft Minimum Mean Square Error (MMSE) can be referred to the latter group. Although there are other detection methods, we cite the most efficient and practical ones.

II. FIXED SPHERE DECODING

The computational complexity of a MIMO detection algorithm grows exponentially with the number of used spatial streams. To meet new requirements and maintain minimal latency and a high constant rate, effective MIMO detection methods should achieve quasi-ML performance while requiring lower computational complexity. A reasonable solution is the detection algorithm called Fixed Sphere Decoding. Unlike Sphere Decoding (SD), it performs only a fixed number of operations during the detection and achieves quasi-ML performance, thereby overcoming the variable complexity of other methods. The algorithm combines ordering of the channel matrix with a search over a small subset of the complete transmit constellation which is independent of the noise level and the channel conditions. It is important to determine the subset of the complete transmit constellation that needs to be searched. Fixed Sphere Decoding employs the Fixed Sphere Decoder (FSD). The FSD algorithm tackles the problem of high computational complexity and of throughput varying with the noise level. This approach is used to detect the transmitted signal with low complexity. FSD is based on the Breadth-First Search (BFS) tree technique. With the help of this method, the throughput is increased by performing a parallel search. The detection order of FSD is determined in an iterative manner. When the algorithm proceeds from one level to the next, the symbol with the smallest post-detection SNR among those yet to be searched is chosen for detection if the level belongs to the FE stage; otherwise, the one with the largest post-detection SNR is chosen [1]. After the detection order is obtained, reordering of the columns of H is performed, giving y = Hs + n, where the QR decomposition of H is denoted as H = QR. The steps of the algorithm are as follows:
1. The inputs to the FSD detector are the received soft bits (Y) from the soft slicer and the QAM bits.
2. The detected signal is QR-decomposed to get the received signal; Q is a matrix with orthonormal columns, and R is a square upper triangular matrix. It is given by:

Y = Hs + n     (1)

where H is the channel matrix, s the transmitted signal vector, and n the white circularly symmetric Gaussian noise vector;

H = QR     (2)

The QR-decomposed received signal is then:

Ŷ = Q^H · Y = R·s + ñ     (3)

3. The metric distance is calculated between the QR-decomposed received signal and the constellation points:

M_p(s_i) = M_c(s_i) + M_a(s_i)     (4)

where M_c(s_i) is the channel-based metric increment and M_a(s_i) is the a-priori-based metric increment.
4. The signal with the minimum Euclidean distance (minimum metric) is then selected as the transmitted signal.
In addition to FSD, architectures of a soft slicer and a turbo decoder can be applied. A soft slicer generates quantized data and associated soft data. A decoder with error recovery generates decoded quantized data and a soft sequence and is capable of correcting one bit of the quantized data. Soft bit values are the probability that a given data symbol resides at a particular point of the coordinate system. A turbo decoder consists of two soft-input soft-output decoders and one interleaver/deinterleaver. The decoding process in a turbo decoder is performed iteratively through the two soft-input soft-output decoders via the interleaver and the deinterleaver by feeding the extrinsic information of one to the other. LLR values are computed based on the input symbols. The computation of LLR values uses the Max-Log MAP algorithm. Further, to reduce the memory size, the block of input symbols is subdivided into several sub-blocks and decoding is done using the sliding-window method. The Forward State Metric Unit, Backward State Metric Unit, Dummy Backward State Metric Unit and likelihood-ratio computation unit are the main components of the SISO decoder. In theory, MIMO systems have been shown to raise spectral efficiency. Since channel coding was not employed in the BLAST system, a high wireless SNR was required to achieve the target bit-error rates. Consequently, the system performance was far from the channel capacity limit. Turbo decoding techniques used in a MIMO system achieve even higher spectral efficiency.
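A toy numerical sketch of the fixed-complexity search described above (not the referenced VLSI design): QR preprocessing as in (1)-(3), full expansion of the constellation on the top layer, a single best branch on the remaining layers, and selection of the candidate with the minimum Euclidean metric. A 2x2 real-valued system with a 4-PAM alphabet is chosen only for clarity:

```python
import itertools
import numpy as np

def fsd_detect(y, H, constellation, full_layers=1):
    """Fixed-complexity sphere detection sketch: QR-decompose H, expand every
    constellation point on the first `full_layers` levels and keep only the single
    best branch on the remaining levels; return the candidate with the smallest
    Euclidean metric ||y - H s||^2."""
    nt = H.shape[1]
    Q, R = np.linalg.qr(H)
    z = Q.T @ y                                   # rotated received vector, cf. (3)
    best, best_metric = None, np.inf
    # full expansion of the top `full_layers` symbols (detected from the last layer downwards)
    for top in itertools.product(constellation, repeat=full_layers):
        s = np.zeros(nt)
        s[nt - full_layers:] = top
        # single-branch expansion for the remaining layers (successive cancellation)
        for i in range(nt - full_layers - 1, -1, -1):
            resid = z[i] - R[i, i + 1:] @ s[i + 1:]
            s[i] = min(constellation, key=lambda c: abs(resid - R[i, i] * c))
        metric = np.sum((y - H @ s) ** 2)
        if metric < best_metric:
            best, best_metric = s.copy(), metric
    return best, best_metric

constellation = [-3.0, -1.0, 1.0, 3.0]            # real 4-PAM "subset" for illustration
H = np.array([[1.0, 0.4], [0.3, 0.9]])
s_true = np.array([3.0, -1.0])
y = H @ s_true + 0.05 * np.random.randn(2)        # lightly noised observation
print(fsd_detect(y, H, constellation))            # should recover s_true at this SNR
```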

III. DETECTION ALGORITHMS APPLIED IN THE ADAPTIVE MULTI-MODE MULTIPLE INPUT SINGLE OUTPUT SYSTEM

Since mobile communication systems have variable SNR and fading properties within massive ranges, and each MIMO mode performs differently under specified conditions, it would be more effective to apply a multi-mode MIMO system which can be adapted according to the channel conditions. The MIMO transmission techniques to be switched between include spatial multiplexing (SM), space-division multiple access (SDMA), and spatial diversity (SD). For such an adaptive system, multiple signal detectors are needed at the receiving side, each one corresponding to the respective mode. An efficient implementation would be an integration of multiple MIMO detectors into a single module, which can be reconfigured for the respective mode at run-time. Then, a detector should be capable of providing not only the binary estimation of each bit but also a reliable measure of it, to achieve further performance enhancement. Such a device is a soft-output signal detector that supports 64-QAM modulated SM/SDMA/SD triple-mode signals for up to 4x4 MIMO transmission. The unification of multi-mode processing is mainly realized by algorithm-level exploitation, where the algorithms for each mode consist of similar mathematical operations to enable substantial hardware reuse [2]. For the SD mode an extensively simplified MAP algorithm is used, which leverages the orthogonality of Alamouti's signals and the matrix-decomposition operations. For decoding Alamouti's signals, it processes the signals in a per-antenna manner instead of the per-subcarrier vector detection in the SM and SDMA modes. With the assumption that the channel is frequency flat, at least over two subcarriers, the received signal at the j-th antenna is formulated as:

[ r_1j    ]   [ H_j,1    H_j,2  ] [ x1 ]
[ (r_2j)* ] = [ H*_j,2  -H*_j,1 ] [ x2 ]     (5)

where r_nj (n = 1, 2) is the signal received at the j-th antenna on subcarrier n and H_j,i is the channel between the i-th transmitting antenna and the j-th receiving antenna. In the low-complexity log-likelihood ratio (LLR) generation based on FSD, FSD divides the real-valued search tree into two unique parts using a parameter D. A full search is performed in the first D layers, exhaustively expanding all branches per node, while in the remaining (2N - D) layers a single search is adopted, expanding only the one best branch per node. The original FSD was extended to provide accurate soft values while maintaining its low computational complexity. With this purpose, a symbol-level bit-flipping scheme is used for performance improvement, together with a polygon-shaped constraint technique to reduce unnecessary node extensions. Then, LLR accuracy improvement by modified bit-flipping is employed. Basically, there are two major reasons why FSD tends to generate poor-quality LLRs. One is the occurrence of vacant bits in the candidate list L, which exists in most tree-search algorithms. Even for the existing bits, FSD cannot guarantee the minimization. This is because FSD simply extends all nodes at the first D layers while only one is retained afterwards. Such a tree-traversal scheme does not guarantee the inclusion of the best vectors in the candidate list. To tackle these two issues with a reasonable complexity overhead, there is a modified bit-flipping scheme which replaces the whole-vector re-calculation with a per-symbol re-calculation scheme.
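To illustrate the per-antenna processing behind (5), a small numerical sketch (not the cited detector implementation) that stacks the two received subcarriers of one antenna, exploits the orthogonality of the Alamouti channel matrix and slices the result to the constellation; the channel and symbol values are hypothetical:

```python
import numpy as np

def alamouti_detect(r1, r2, h1, h2, constellation):
    """Per-antenna Alamouti detection as in (5): stack [r1, r2*], apply the conjugate
    channel matrix (matched filter), normalize by ||h||^2 and slice to the constellation."""
    r = np.array([r1, np.conj(r2)])
    H = np.array([[h1, h2],
                  [np.conj(h2), -np.conj(h1)]])
    z = H.conj().T @ r / (abs(h1) ** 2 + abs(h2) ** 2)   # orthogonality: H^H H = ||h||^2 I
    return [min(constellation, key=lambda c: abs(zi - c)) for zi in z]

qpsk = [(a + 1j * b) / np.sqrt(2) for a in (-1, 1) for b in (-1, 1)]
h1, h2 = 0.8 + 0.3j, -0.4 + 0.9j                 # flat channel over the two subcarriers
x1, x2 = qpsk[0], qpsk[3]
r1 = h1 * x1 + h2 * x2                           # received on subcarrier 1
r2 = -h1 * np.conj(x2) + h2 * np.conj(x1)        # received on subcarrier 2
print(alamouti_detect(r1, r2, h1, h2, qpsk), (x1, x2))   # detected pair matches (x1, x2)
```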


FSD detection was originally developed for the SM signal. In the following, the algorithm is modified to be adopted for the SDMA mode. Detecting downlink SDMA signals is unique in that the signals dedicated to the user are retained, while the signals intended for other users are discarded after detection [3]. To make full profit of this feature, a layer-reordering step was added to the preprocessing stage of FSD such that the desired signal is moved to the top layer of the FSD search tree, where multiple candidates are extended. Briefly, the soft-output detection algorithms for the SM and SDMA modes are based on an efficient extension and modification of the hard-output fixed-complexity sphere decoder (FSD) [4]. Then a symbol-level bit-flipping scheme generates accurate LLR values with a marginal hardware increment. Additionally, a polygon-shaped constraint technique can be adopted to facilitate the reduction of unnecessary node extensions in the tree-search procedure. A low-complexity MAP algorithm with a unified detection procedure that is independent of the antenna number is employed for SD signal detection, for instance for Alamouti's space-frequency block codes (SFBC). It allows for parallel detection of the real and imaginary parts of each transmitted symbol with the help of QR decomposition of the orthogonal real-valued channel matrix. Taking advantage of these implementation-oriented algorithms, a unified VLSI architecture is capable of being reconfigured to support different MIMO modes at run-time. This is a beneficial feature, which allows switching the system between the 3 modes for better performance under variable channel conditions.

IV. CONCLUSION

Each detection method has its advantages and drawbacks, for example great performance but huge complexity. There is no universal method yet which meets all requirements and demands. However, we can cope with channel conditions, correlation, interference, latency and so on by applying a multi-mode adaptive MIMO system that uses combined and modified detection techniques. A multi-mode soft-output MIMO detector supports the detection of spatial multiplexing, space division multiple access and spatial diversity signals. It includes the most effective and reliable algorithms for signal detection: linear-complexity MAP detection for the SD mode, low-complexity LLR generation based on FSD, modified bit-flipping and its application to the SDMA mode. Another solution is Fixed Sphere Decoding. The FSD algorithm minimizes the high computational complexity and overcomes the throughput variation caused by the noise level. Unlike Sphere Decoding, it performs only a fixed number of operations during the detection and achieves quasi-ML performance. Although it does not support MIMO multi-mode operation, it can be applied to simpler systems with lower requirements, thus using less hardware capacity.

REFERENCES
[1] J. M. Mathana, P. Rekha, V. Sai Anitha, B. Suchitra, K. B. Bavithra, "VLSI architecture of MIMO detector using Fixed Sphere Decoding," Review of Information Engineering and Applications, 2014, pp. 11-23.
[2] L. Liu, J. Lofgren, P. Nilsson, V. Owall, "VLSI implementation of a soft-output signal detector for multi-mode adaptive MIMO systems," IEEE Transactions on Very Large Scale Integration (VLSI) Systems, vol. 21, no. 12, Dec. 2013.
[3] S. Dragan, L. Angel, and B. P. Constantinos, "Design and experimental validation of MIMO multiuser detection for downlink packet data," EURASIP J. on Applied Signal Processing, vol. 11, pp. 1769-1777, 2005.
[4] L. G. Barbero, J. S. Thompson, "Fixing the complexity of the sphere decoder for MIMO detection," IEEE Trans. on Wireless Communications, vol. 7, no. 6, pp. 2131-2142, Jun. 2008.


A Survey on Cloud Radio Access Networks Deployment Strategies Pavlo Huskov

Mykhailo Klymash

Department of Telecommunication Lviv Polytechnic National University Lviv, Ukraine e-mail: [email protected]

Department of Telecommunication Lviv Polytechnic National University Lviv, Ukraine e-mail: [email protected]

Abstract — The Cloud Radio Access Network (C-RAN) concept has drawn considerable attention from MNOs (mobile network operators). The synthesis of a novel RAN architecture with a functional split between different RAN entities is one possible solution to effectively address a number of network challenges driven by growing end users' needs. In this paper, we provide an overview of Cloud RAN deployment strategies and investigate different approaches to energy and cost savings, fronthaul development and RAN virtualization. The crucial challenges and potential solutions for C-RAN, including fronthaul and RAN virtualization, are presented.

Keywords — Cloud RAN, fronthaul, virtualization, 5G.

I. INTRODUCTION

Traditional RAN architecture is facing various challenges: exponential growth of the number of user devices; new mobile services; cloud computing and IoT; complicated maintenance due to the integration of 2G, 3G and 4G network architectures; RAN densification; interference problems; insufficient performance gain and inefficient spectrum utilization. Also, many current mobile operators face the problem of large CAPEX and OPEX expenditures for RAN development along with more and more data-hungry users. Therefore, the development of new energy- and cost-optimized architectures is essential for extensive RAN evolution. One of the most promising options to meet the 5G mobile network requirements is to improve the overall efficiency of mobile networks by means of C-RAN technology.

In classical C-RAN, the cell array is connected to a BBU (baseband unit) or DU (data unit), where the signal is processed. From the BBU, radio signals are transmitted through dedicated lines to the access point, the RRH (Remote Radio Head). C-RAN enables a cost- and energy-efficient deployment of the mobile network, as well as adaptability to non-uniform traffic, scalability, increased throughput, decreased delays, and ease of network upgrades and maintenance.

II. ARCHITECTURE EVOLUTION

In the traditional architecture (Fig. 1) the main BS functions (baseband processing and radio transmission) are executed inside the base station. A BS connects with other BSs via the X2 interface and with the mobile core network via the S1 interface. This approach causes high power consumption and resource underutilization.

Fig. 1. RAN architecture evolution: traditional RAN (each BS contains synchronization, control, transport, baseband, RF and PA functions and connects over S1/X2); RAN with RRHs (RF in the RRH, the remaining functions in the BBU); C-RAN with RRHs (RF in the RRHs, connected over the Ir/fronthaul interface to a centralized BBU Pool, which connects to the mobile core network over the S1/X2 backhaul)

In the architecture with RRH, the BS is divided into two units: a radio unit (RRH) and a signal processing unit (BBU or DU). The RRH is placed together with the antenna at the remote site. It creates the optical interface to the BBU with CPRI (Common Public Radio Interface), and provides digital signal processing, signal conversion and power amplification. One BBU can serve multiple RRHs, and neighboring RRHs can easily be connected with each other. This architecture enables lower power consumption and allows the BBU to be placed more conveniently.

To dynamically optimize BBU utilization between different base stations, BBUs are centralized. A centralized BBU array is called a BBU Pool. The BBU Pool is the main entity of the C-RAN concept, which was first proposed in [2]. C-RAN further minimizes power consumption and cost, as a lower number of BBUs is needed. There are three fundamental stages for C-RAN: centralization of the current baseband equipment, splitting of processing between commercial equipment and specialized chips, and total network virtualization. Functional splits that keep some of the baseband processing within the cell site make it possible to reduce fronthaul capacity and latency requirements. Nevertheless, it requires considerable transport resources between RRH and BBU. Currently C-RAN deployments are based on BBU clustering with control/processing centralization (Fig. 2). Simultaneously,


Fig. 2. C-RAN deployment scenario. The figure summarizes four stages: BBU clustering (simple grouping/stacking of BBUs, installation cost savings, fast site acquisition; deployed in South Korea); control and processing centralization (more efficient signal processing for capacity improvement, CoMP improvement, small cell implementation; deployed worldwide); resource pooling (resource pooling across BBUs, use of specialized chipsets); and virtualization (BBU Pool virtualization, NFV and SDN usage in C-RAN, CPUs instead of specialized chipsets). The last two stages are in the preparation-for-deployment and research phase.

C-RAN implementation in an existing mobile network has to solve a number of problems for the effective functioning of the proposed architecture. In particular, an important aspect is shifting base station functions (Fig. 1) toward the core network. This will simplify base station equipment and reduce capital expenditures for RAN deployment.

III. DEPLOYMENT STRATEGIES
Nowadays the C-RAN concept is of great interest for both industry and academia in the research towards 5G mobile technologies. Leading companies such as Alcatel-Lucent (France), Huawei (China), Ericsson (Sweden), Ceragon (Israel), Qualcomm (US), Foxconn (Taiwan), Nokia (Finland), ZTE (China) and others are actively investing in C-RAN deployment. Existing solutions for C-RAN deployment strategies are classified below.

A. Increase of throughput
The authors of [8] showed that coordination of cells in wide-area systems is not only beneficial for average spectral efficiency and cell-edge data rates, but can also be implemented in practice. CoMP was demonstrated for the uplink and downlink in two testbeds in urban areas. CoMP schemes for the UL range from joint multi-cell scheduling to more complex joint detection, and can be centralized or decentralized. In the DL the schemes range from less complex coordinated scheduling to more challenging joint processing approaches. In [9] the authors focused on uplink CoMP reception in 3GPP Long Term Evolution Advanced (LTE-A). Deployment scenarios are introduced for both homogeneous and heterogeneous networks, and many features are taken into consideration, such as receiver schemes, realistic channel estimation, and the impact of reference signal orthogonality. CoMP joint processing shows the best performance of the three schemes presented in that paper. As shown in the field test presented in [10], uplink Coordinated Multi-Point (CoMP) joint processing has the potential to improve uplink capacity by jointly processing the signals received from more antennas and nodes. The simulation evaluations and field test results show that uplink CoMP provides a large gain at the cell edge and can drastically improve cell-edge throughput. In next-generation heterogeneous networks uplink CoMP can become irreplaceable for interference suppression between macro cells and small cells.
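To make the joint-reception idea concrete, the short sketch below (an illustration only, not taken from [10]; all channel and SNR values are assumed) combines the signal of one cell-edge user received at two RRHs using maximal-ratio combining, one simple form of uplink joint processing, and compares the resulting error rate with single-RRH detection.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed average SNRs of a cell-edge user at two RRHs (linear scale).
snr_rrh1, snr_rrh2 = 1.0, 0.8

n = 100_000
bits = rng.integers(0, 2, size=(n, 2))
symbols = ((2 * bits[:, 0] - 1) + 1j * (2 * bits[:, 1] - 1)) / np.sqrt(2)  # QPSK

def receive(snr):
    """Flat Rayleigh fading plus unit-variance AWGN at the given average SNR."""
    h = (rng.standard_normal(n) + 1j * rng.standard_normal(n)) / np.sqrt(2)
    noise = (rng.standard_normal(n) + 1j * rng.standard_normal(n)) / np.sqrt(2)
    return h, np.sqrt(snr) * h * symbols + noise

h1, y1 = receive(snr_rrh1)
h2, y2 = receive(snr_rrh2)

def ber(est):
    b0 = (est.real > 0).astype(int)
    b1 = (est.imag > 0).astype(int)
    return 0.5 * (np.mean(b0 != bits[:, 0]) + np.mean(b1 != bits[:, 1]))

single = ber(np.conj(h1) * y1)                                   # one RRH only
joint = ber(np.sqrt(snr_rrh1) * np.conj(h1) * y1                 # MRC across RRHs
            + np.sqrt(snr_rrh2) * np.conj(h2) * y2)
print(f"BER single RRH: {single:.3f}   BER joint reception: {joint:.3f}")
```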

An overview of the theory and currently known techniques for multi-cell MIMO (multiple-input multiple-output) cooperation in wireless networks is presented in [11]. MIMO cooperation techniques between numerous cells are examined from the point of view of fundamental information-theoretic limits, a review of coding and signal processing algorithmic developments, and practical issues related to scalability. In [12] the authors propose a downlink antenna selection scheme which selects transmit antennas based on large-scale fading. The article describes the joint optimization of antenna selection, regularization factor, and power allocation to maximize the average weighted sum-rate; the joint optimization problem is decomposed into subproblems, each of which is solved by an efficient algorithm. Finally, recent works [11, 12] proved that cooperative techniques such as ICIC, CoMP and Massive MIMO can be enhanced in C-RAN.

B. Wireless fronthaul
The primary physical medium for the C-RAN fronthaul is optical, but some projects focus on wireless fronthaul [13, 15]. Innovative wireless backhaul and fronthaul solutions provide capacities up to several Gbps and enable high mobility, remote reconfiguration, quick and relatively cheap deployment, and suitability for any terrain. Wireless is best suited to outdoor fronthaul, where distances are short and capacity requirements are relatively low. In [13] the authors present the design alternatives, issues, and challenges of backhaul solutions for 2G (GSM, CDMA) and 3G (UMTS, CDMA2000) radio access networks (RANs). The article analyses the prevalent and upcoming backhaul technology trends for their technical and commercial feasibility. In [15] a novel end-to-end transport network solution is proposed to meet the operational and technical challenges of C-RANs. Additionally, as a complement to fiber, it is proposed to use wireless microwave links to provide higher performance gains.

C. Optical fronthaul
Dark fiber is the best fronthaul solution in terms of performance, but it requires a high number of fiber connections. In [1, 10, 15-17] the authors focus on the evaluation and optimization of optical transmission in the C-RAN fronthaul employing WDM, OTN, passive optical networks and CPRI over Ethernet. These solutions have higher latency in comparison with dark fiber, but can significantly reduce costs, so they can be used for outdoor small-cell fronthaul in dense urban areas. It is shown in [15] that C-RAN transport network strategies should flexibly support centralized and distributed radio baseband solutions, as well as being multiservice capable. Another approach [16] proposes a WDM-OFDMA Uni-PON architecture that enables dynamic resource allocation and supports variable-rate access. The use of OTN is defined as an optimal solution for IQ transport between the RRHs and the BBU pool when the mobile network operator has cost-efficient access to a legacy OTN network [17]. The authors show the possibility of transmitting radio interface protocols over OTN, which allows the benefits of C-RAN to be exploited.
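As a rough illustration of why fronthaul capacity is the central constraint in the subsections above, the sketch below computes the commonly quoted CPRI line rate for a single 20 MHz LTE antenna-carrier. The sample rate, IQ sample width, control-word overhead and line-coding overhead are assumed typical values, not figures taken from the cited papers.

```python
# Hedged back-of-the-envelope CPRI fronthaul rate for one 20 MHz LTE antenna-carrier.
# Assumptions: 30.72 Msps sampling, 15-bit I and Q samples,
# 16/15 CPRI control-word overhead, 8b/10b line coding.
sample_rate = 30.72e6          # samples per second
bits_per_sample = 2 * 15       # I + Q
control_overhead = 16 / 15
line_coding = 10 / 8

cpri_rate = sample_rate * bits_per_sample * control_overhead * line_coding
radio_rate = 150e6             # assumed ~150 Mbit/s over-the-air peak for comparison

print(f"CPRI rate per antenna-carrier: {cpri_rate / 1e9:.3f} Gbit/s")
print(f"Fronthaul / radio-interface ratio: {cpri_rate / radio_rate:.1f}x per antenna")
```

Several such antenna-carriers per sector multiply this figure, which is why the compression and functional-split options discussed in the following subsections matter.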


D. Energy and cost savings
This happens to be one of the most interesting topics in terms of meeting telecom business requirements, which follow the latest trends such as ecological friendliness. The authors of [1], [4], [6] present various energy saving solutions adopted from different systems to make C-RAN an ultimate solution and a background for the future development of "green telecom". The authors believe that the green agenda and a low-carbon economy are trends for the whole of human social development, to be achieved by moving towards architectural innovations, one of which is C-RAN. As for cost savings, C-RAN answers this challenge with efficient spectrum utilization techniques instead of extensive escalation of resource capacity to ensure adequate performance for end users.

E. IQ Compression
IQ (in-phase/quadrature) compression helps to reduce the need for fronthaul links with high bandwidth. On the fronthaul side the data rate can be up to 55 times higher than the data rate on the radio interface; this is caused by the CPRI IQ sample width and the modulation techniques. Compression schemes should be developed to optimize transmission over fronthaul links with limited capacity. In [18, 19] the authors propose to remove redundancies in the spectral domain, perform block scaling and use a non-uniform quantizer. The solution discussed in [20] proposes a distributed compression scheme combined with a base station selection algorithm. To preserve the benefits of BBU pooling it is essential either to increase the number of fiber fronthaul links or to use complex IQ compression solutions.

F. RAN virtualization
Virtualization makes it possible to create logically isolated subsystems over abstracted physical networks. In that case, network resources can be efficiently shared between numerous virtual subsystems. Most works on RAN virtualization focus on network resource and hardware virtualization and lead to BBU Pool virtualization. C-RAN is technically based on virtualization and the WNC (Wireless Network Cloud) concept [1, 2]. Zhao et al. [21] study the design issues of wireless virtualization in LTE and discuss different spectrum sharing strategies. An approach to implementing multiple wireless LANs on a single physical infrastructure is presented in [22]. A new LTE virtualization framework is described in [23]; it enables multiple virtual operators and eNBs to share spectrum. Numerous works are dedicated to the use of NFV and SDN in C-RAN [24-27], which solve the following problems:

• Mobility management. High-density mobile networks generate more handover procedures because of the small cell size. Thus, mobility management should be based not only on information about the quality of the radio coverage, but also on the current load of the network segments. SDN solutions make it easier to switch between radio access points and to implement load-control mechanisms.
• Connection of physically distributed network segments into a single logical structure. The data plane of each user can be split across several transmitting devices (for load balancing), while the signaling plane is logically centralized at the network controller.
• Energy and backhaul optimization. According to user requests and the current status of the RAN, certain backhaul equipment may be put on standby, which significantly improves the energy efficiency of the system.

SDN provides a clear separation between the control and data planes in C-RAN and enables simplified configuration of network devices from a single location [24]. The SDN paradigm can be efficiently utilized in the core network [25] and in the RAN control plane [26] as SoftRAN (Software Defined RAN). SoftRAN divides the control plane into two parts: local (at the network nodes) and global (at the central controller). The central controller abstracts the physically distributed nodes as a "virtual base station". This approach is a flexible alternative to a fully distributed control plane (as in LTE) and a fully centralized one (as in 3G, with the RNC). The CROWD project [27] proposes to apply SDN solutions for dynamic backhaul reconfiguration, connectivity management and MAC layer reconfiguration. With existing virtualization technologies, network functions can be deployed as software running on general-purpose hardware [28]. In this case NFV has many advantages, but the main challenge, the integration of software and hardware from different vendors, is still unresolved. The NFV working group focuses on core network virtualization. SDN and NFV can be deployed separately or in conjunction, according to MNO requirements.

G. Deployment scenarios
In [5, 14] the authors describe deployment scenarios of C-RAN, including a green solution, capacity boosting of existing networks and building from scratch. Optimal architectures for limited fronthaul resources and deployment cases for capacity boosting are proposed. As a green solution, the RRHs and the BBU Pool should be placed according to network planning. In [5] the authors maximize the statistical multiplexing gain by integrating office and residential base stations in one BBU Pool (a toy illustration of this effect is given after the list below). Various approaches to capacity improvement are described in [14]:

• HetNets: replacement of existing BBUs by a BBU Pool and the use of additional small cells.
• Cell split: splitting macro cells into smaller parts, which increases capacity gains and improves interference management.
• Overlay: adding frequency bands and RRHs; this scenario requires efficient interference management techniques.
• Super-hotspots: serving many users in one location.
• Railway/highway: users move fast, which requires frequent handovers.
• Centralized RAN: one BBU supports many RRHs.
• Cloud RAN Phase 1: resources are pooled; baseband processing is done using DSPs (digital signal processors).
• Cloud RAN Phase 2: resources are virtualized; baseband processing is done using GPPs (general purpose processors).
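The statistical multiplexing gain mentioned in subsection G can be illustrated with a toy calculation; the two daily load profiles below are synthetic assumptions, not data from [5]. Pooling cells whose busy hours do not coincide lets the BBU Pool be dimensioned for the peak of the sum rather than the sum of the peaks.

```python
# Toy illustration of BBU pooling gain with two complementary daily load profiles
# (office area busy by day, residential area busy in the evening). All numbers are
# synthetic and only illustrate peak-of-sum vs. sum-of-peaks dimensioning.
office = [2, 2, 2, 2, 2, 3, 5, 8, 10, 10, 10, 9, 9, 10, 10, 9, 7, 4, 3, 2, 2, 2, 2, 2]
residential = [3, 2, 2, 2, 2, 2, 3, 4, 4, 3, 3, 3, 3, 3, 3, 4, 6, 8, 10, 10, 9, 7, 5, 4]

sum_of_peaks = max(office) + max(residential)                   # separate per-site BBUs
peak_of_sum = max(o + r for o, r in zip(office, residential))   # one shared BBU Pool

print(f"Capacity needed with per-site BBUs: {sum_of_peaks} units")
print(f"Capacity needed with a BBU Pool   : {peak_of_sum} units")
print(f"Statistical multiplexing gain     : {sum_of_peaks / peak_of_sum:.2f}x")
```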

IV. CONCLUSIONS
The evolution of current mobile networks toward the next generation will require new approaches to RAN development. It has been shown that the Cloud RAN concept can significantly reduce capital costs and improve the performance of mobile networks by enabling inter-layer interaction, decoupling the scalability of RAN functionality, managing network load and resolving functional asymmetries of access points. This paper has surveyed the key challenges and candidate solutions for energy- and cost-efficient C-RAN deployment. Key problems related to fronthaul development, energy and cost savings, spectrum utilization, compression schemes and virtualization of BBU resources have been discussed. These issues open new perspectives for research in C-RAN, which should be focused on the synthesis of an optimal RAN architecture, since it simultaneously offers potential cost savings and improved network performance. However, the various deployment strategies of C-RAN still need to be validated in practice.

REFERENCES
[1] "C-RAN: The Road Towards Green RAN," China Mobile Research Institute, Tech. Rep., October 2011.
[2] Y. Lin, L. Shao, Z. Zhu, Q. Wang, and R. K. Sabhikhi, "Wireless network cloud: Architecture and system requirements," IBM Journal of Research and Development, January-February 2010.
[3] J. Segel, "lightRadio Portfolio: White Paper 3," Alcatel-Lucent Bell Labs, Tech. Rep., 2011.
[4] "ZTE Green Technology Innovations White Paper," ZTE, Tech. Rep., 2011.
[5] A. Checko, H. Holm, and H. Christiansen, "Optimizing small cell deployment by the use of C-RANs," in European Wireless 2014 (EW 2014).
[6] H. Jinling, "TD-SCDMA/TD-LTE evolution - Go Green," in Communication Systems (ICCS), 2010 IEEE International Conference on, 2010, pp. 301-305.
[7] H. Holma and A. Toskala, "LTE-Advanced: 3GPP Solution for IMT-Advanced," John Wiley and Sons, Ltd, 2012.
[8] R. Irmer, H. Droste, P. Marsch, M. Grieger, G. Fettweis, S. Brueck, H. P. Mayer, L. Thiele, and V. Jungnickel, "Coordinated multipoint: Concepts, performance, and field trial results," Communications Magazine, IEEE, vol. 49, no. 2, pp. 102-111, 2011.
[9] Y. Huiyu, Z. Naizheng, Y. Yuyu, and P. Skov, "Performance Evaluation of Coordinated Multipoint Reception in C-RAN Under LTE-Advanced Uplink," in Communications and Networking in China (CHINACOM), 2012 7th International ICST Conference on, 2012, pp. 778-783.
[10] L. Li, J. Liu, K. Xiong, and P. Butovitsch, "Field test of uplink CoMP joint processing with C-RAN testbed," in Communications and Networking in China (CHINACOM), 2012 7th International ICST Conference on, 2012, pp. 753-757.
[11] D. Gesbert, S. Hanly, H. Huang, S. Shamai Shitz, O. Simeone, and W. Yu, "Multi-Cell MIMO Cooperative Networks: A New Look at Interference," Selected Areas in Communications, IEEE Journal on, vol. 28, no. 9, pp. 1380-1408, 2010.
[12] A. Liu and V. Lau, "Joint power and antenna selection optimization for energy-efficient large distributed MIMO networks," in Communication Systems (ICCS), 2012 IEEE International Conference on, 2012, pp. 230-234.
[13] H. Raza, "A brief survey of radio access network backhaul evolution: part I," Communications Magazine, IEEE, vol. 49, no. 6, pp. 164-171, 2011.
[14] C. Chen, J. Huang, W. Jueping, Y. Wu, and G. Li, "Suggestions on Potential Solutions to C-RAN," NGMN Alliance, Tech. Rep., 2013.
[15] Z. Ghebretensae, K. Laraqui, S. Dahlfort, J. Chen, Y. Li, J. Hansryd, F. Ponzini, L. Giorgi, S. Stracca, and A. Pratt, "Transmission solutions and architectures for heterogeneous networks built as C-RANs," in Communications and Networking in China (CHINACOM), 2012 7th International ICST Conference on, 2012, pp. 748-752.
[16] B. Liu, X. Xin, L. Zhang, and J. Yu, "109.92-Gb/s WDM-OFDMA Uni-PON with dynamic resource allocation and variable rate access," Optics Express, Optical Society of America, vol. 20, no. 10, May 2012.
[17] A. Checko, G. Kardaras, C. Lanzani, D. Temple, C. Mathiasen, L. A. Pedersen, and B. Klaps, "OTN Transport of Baseband Radio Serial Protocols in C-RAN Architecture for Mobile Network Applications," MTI Mobile and Altera, Tech. Rep., March 2014.
[18] D. Samardzija, J. Pastalan, M. MacDonald, S. Walker, and R. Valenzuela, "Compressed Transport of Baseband Signals in Radio Access Networks," Wireless Communications, IEEE Transactions on, vol. 11, no. 9, pp. 3216-3225, September 2012.
[19] B. Guo, W. Cao, A. Tao, and D. Samardzija, "CPRI compression transport for LTE and LTE-A signal in C-RAN," in Communications and Networking in China (CHINACOM), 2012 7th International ICST Conference on, 2012, pp. 843-849.
[20] S.-H. Park, O. Simeone, O. Sahin, and S. Shamai (Shitz), "Robust and Efficient Distributed Compression for Cloud Radio Access Networks," Vehicular Technology, IEEE Transactions on, vol. 62, no. 2, pp. 692-703, February 2013.
[21] L. Zhao, M. Li, Y. Zaki, A. Timm-Giel, and C. Gorg, "LTE virtualization: From theoretical gain to practical solution," in Teletraffic Congress (ITC), 2011 23rd International, 2011, pp. 71-78.
[22] G. Aljabari and E. Eren, "Virtualization of wireless LAN infrastructures," in Intelligent Data Acquisition and Advanced Computing Systems (IDAACS), 2011 IEEE 6th International Conference on, vol. 2, 2011, pp. 837-841.
[23] M. Li, L. Zhao, X. Li, X. Li, Y. Zaki, A. Timm-Giel, and C. Gorg, "Investigation of network virtualization and load balancing techniques in LTE networks," in Vehicular Technology Conference (VTC Spring), 2012 IEEE 75th, 2012, pp. 1-5.
[24] H. Kim and N. Feamster, "Improving network management with software defined networking," Communications Magazine, IEEE, vol. 51, no. 2, pp. 114-119, 2013.
[25] X. Jin, L. E. Li, L. Vanbever, and J. Rexford, "SoftCell: Scalable and flexible cellular core network architecture," in Proceedings of the Ninth ACM Conference on Emerging Networking Experiments and Technologies, ser. CoNEXT '13. New York, NY, USA: ACM, 2013, pp. 163-174.
[26] A. Gudipati, D. Perry, L. E. Li, and S. Katti, "SoftRAN: Software defined radio access network," in ACM SIGCOMM Workshop on Hot Topics in Software Defined Networking (HotSDN '13), ACM SIGCOMM 2013, pp. 25-30.
[27] H. Ali-Ahmad, C. Cicconetti, A. de la Oliva, M. Draexler, R. Gupta, V. Mancuso, L. Roullet, and V. Sciancalepore, "CROWD: An SDN Approach for DenseNets," in Second European Workshop on Software Defined Networks (EWSDN), 2013, pp. 25-31.
[28] NFV Working Group, "Network Functions Virtualisation: Introductory White Paper," ETSI, 2012.

______________________________________________ 2015 1st International Conference Advanced Information and Communication Technologies

22

Software-Defined Network Controllers in the Cloud: Performance and Fault Tolerance Evaluation

Tkachova Olena

Yevsieieva Oksana

Telecommunication systems department Kharkiv National University of Radio Electronics Kharkiv, Ukraine e-mail: [email protected]

Telecommunication systems department Kharkiv National University of Radio Electronics Kharkiv, Ukraine e-mail: [email protected]

Abstract — This paper is devoted to the analysis of the interaction between the OpenStack network module and SDN. The main components of Neutron and their interaction with third-party elements are also described. Attention is focused on OpenStack-SDN integration, including the interaction between Neutron plug-ins and the SDN controller. Different types of SDN controllers for integrated OpenStack-SDN solutions were analyzed. In order to evaluate the performance and reliability of different SDN controllers, an experimental model of SDN-OpenStack integration was created. It allows obtaining such characteristics of the SDN-OpenStack solution as delay and the maximum number of UDP and TCP flows.

Keywords - OpenStack technology; module Neutron; SDN controller; cloud computing; performance and reliability evaluation.

I. INTRODUCTION
The OpenStack technology is the most popular of the existing management tools in cloud computing [1]. However, the convergence of OpenStack with traditional information systems often does not lead to successful results. The possibilities of the technology have significant limitations when virtual compute nodes are located in different zones of the network infrastructure. OpenStack convergence problems are often associated with limitations of load balancing algorithms, traffic filtering rules, and delays in processing and data transmission [2]. Today, the Software-Defined Networking (SDN) paradigm is used to ensure effective functioning and correct interaction between physical and virtualized elements. The main feature of SDN is centralized management and monitoring; these functions are performed by a single network element, the controller [3]. The controller is the key element of SDN that directly interacts with the OpenStack network module. Many companies, such as Cisco, HP, Juniper and IBM, provide solutions based on the SDN concept. In addition, they propose their own methods of integration into existing technology. The advantages of SDN integration into cloud technology are advanced management functions, optimal load balancing and traffic allocation [4]. Such an approach provides seamless convergence and compliance with the required quality of service. However, network solutions based on the SDN concept have several disadvantages. Firstly, the controller becomes a potential point of failure of the network, since network performance depends on the controller's performance and characteristics. Secondly, there is a lack of proper resources and knowledge for evaluating the integration of SDN controllers into OpenStack. Consequently, this has led to confusion in the market about the promised benefits of SDN in OpenStack.
Analysis of the interaction between OpenStack and the SDN controller, and performance and fault tolerance evaluation for different types of controllers, are therefore important tasks. The solution of these tasks will help to develop a series of recommendations for future advanced networks.

II. CONNECTION ESTABLISHMENT BETWEEN NEUTRON MODULE AND SDN CONTROLLER
A. Overview of the OpenStack network module
Neutron, the OpenStack network module, provides an interface between virtual machines and other network elements. The functional core of the module is based on an abstraction model that includes virtual networks, subnets, IP addresses and ports [5]. The main Neutron components are:
• Neutron-server - the main process of the OpenStack Networking server. It forwards API requests to the configured OpenStack Networking plug-in; the administrator of the cloud computing network chooses the appropriate plug-in type. In addition, Neutron includes three agents that interact with the main process through a message queue or via the standard API and allow the network to be configured.
• Neutron-dhcp-agent provides DHCP (Dynamic Host Configuration Protocol) services to all endpoints.
• Neutron-l2-agent provides L2 connectivity between virtual machines and the external network.
• Neutron-l3-agent provides L3/NAT functionality to allow virtual machines to access external networks.
Neutron's functionality has some significant disadvantages and limitations, in particular the absence of flexible interaction with the transport layer and with some types of routers. This leads to restrictions of functions and to scalability limitations in VLAN, firewall and NAT configuration. Such a situation does not allow the interaction of distributed computing nodes to be organized effectively via Neutron [6].
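The Neutron abstractions listed above (networks, subnets, IP addresses and ports) are exposed through Neutron's v2.0 REST API. The minimal sketch below creates a network and a subnet; the endpoint address and the authentication token are illustrative assumptions, not values from the paper.

```python
import requests

# Minimal sketch of the Neutron v2.0 REST API: create a network and a subnet.
# The controller hostname and the token are placeholders for illustration only.
NEUTRON = "http://controller:9696/v2.0"
HEADERS = {"X-Auth-Token": "ADMIN_TOKEN", "Content-Type": "application/json"}

net = requests.post(f"{NEUTRON}/networks",
                    json={"network": {"name": "demo-net", "admin_state_up": True}},
                    headers=HEADERS).json()["network"]

subnet = requests.post(f"{NEUTRON}/subnets",
                       json={"subnet": {"network_id": net["id"],
                                        "ip_version": 4,
                                        "cidr": "10.0.0.0/24"}},
                       headers=HEADERS).json()["subnet"]

print("network:", net["id"], "subnet:", subnet["cidr"])
```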


Software-Defined Networking is introduced to overcome these deficiencies of Neutron. SDN is a networking model that allows open API communication between the hardware (physical elements) and the virtualized elements [7].

B. OpenStack-SDN connection establishment
Information about network resources or topology changes arrives from Neutron at the SDN controller through the northbound API. The controller processes the new information, creates or updates the management rules and sends them to both physical and virtual network elements through the southbound API. Figure 1 presents the general idea of SDN controller integration into OpenStack [5]. The first step is to obtain a request for the introduction of a new service or the provision of an existing one. The Heat orchestrator generates queries to the primary components, Nova and Neutron (2); the flow rules describe how to steer the traffic between the ports, which is required to support traffic splitting among different network functions. Nova retains the requests until the last one is received; a new API, introduced in Nova and called by Heat, specifies constraints such as the minimum amount of bandwidth required in the connection between two VMs (3). Neutron creates a network connection between the virtual machines involved in service provision and data processing: it configures the network and subnet and determines the active IP addresses and ports of the virtual machines (4). This model is based on a set of rules generated by the Neutron module and delivered to the Nova kernel (5). Next, the Nova scheduler creates the virtual machines and provides information about them (IP addresses and active ports) to the Neutron module (6, 7). All network management rules created in Neutron are transmitted to the SDN controller only after the ports of the virtual machines receive the status "active" (8). The SDN controller generates its own set of commands based on FlowMod rules; this set of commands allows the components of the virtual and physical infrastructure to interact and allows the states of the network elements to be controlled and monitored (9).

Fig. 1. SDN Integration into OpenStack.
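Steps (8)-(9) above can be sketched in a few lines: rules are handed to the SDN controller only once a VM port reports the ACTIVE status, and the controller then turns them into flow entries. The controller URL and the rule format below are hypothetical placeholders, not a real controller API; only the Neutron port query reflects the actual v2.0 REST interface.

```python
import time
import requests

NEUTRON = "http://controller:9696/v2.0"
SDN_CTRL = "http://sdn-controller:8181/hypothetical/flow-rules"   # placeholder endpoint
HEADERS = {"X-Auth-Token": "ADMIN_TOKEN", "Content-Type": "application/json"}

def wait_until_active(port_id: str, timeout: int = 60) -> dict:
    """Poll Neutron until the given port reports status ACTIVE (step 8)."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        port = requests.get(f"{NEUTRON}/ports/{port_id}", headers=HEADERS).json()["port"]
        if port["status"] == "ACTIVE":
            return port
        time.sleep(2)
    raise TimeoutError(f"port {port_id} did not become ACTIVE")

def push_rule(port: dict) -> None:
    """Hand the rule for this port to the controller, which emits FlowMods (step 9)."""
    rule = {"match": {"mac": port["mac_address"]},
            "action": {"forward_to": port["network_id"]}}
    requests.post(SDN_CTRL, json=rule, headers=HEADERS)

port = wait_until_active("PORT_ID")   # PORT_ID is a placeholder
push_rule(port)
```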

Table I shows the main types of network operating systems and plug-ins that are used in the Neutron module for interaction with the SDN controller [7]. Neutron interacts with other network components, in particular with the SDN controller, through special plug-ins. The plug-ins provide correct communication algorithms and the transmission of management information. A variety of plug-ins has been developed [6, 8]; they have different features and performance, and the choice of plug-in type depends on the network operating system (NOS) of the controller. However, the existing NOSes differ significantly, which considerably affects the controller's performance. The following plug-ins are currently available: Open vSwitch, Cisco UCS/Nexus, Linux Bridge, Nicira Network Virtualization Platform, RYU OpenFlow Controller, NEC OpenFlow. Several types of network operating systems, such as OpenDaylight, RYU, Floodlight and NOX, allow the interaction between Neutron and the SDN controller to be organized [6-8].

TABLE I. CONTROLLERS AND PLUG-IN TYPES

                      Open Daylight     Floodlight        RYU               POX
  Platform            Linux             Linux, Windows    Linux, Windows    Linux, Windows
  Neutron plug-in     Modular Layer 2   REST Proxy        RYU plug-in       No
  Integration level   High              Middle            High              -
  OpenFlow support    1.0, 1.3          1.0               1.0, 1.2, 1.3     1.0

III. PERFORMANCE ANALYSIS OF SDN CONTROLLERS
The integration of SDN controllers into OpenStack should be evaluated by the following parameters: scalability coefficient, delay time, and packet loss ratio under different load types. It is reasonable to note that the controller's parameters strongly depend on the NOS type and the selected Neutron plug-in. Let us consider a fragment of a network that consists of multiple virtual machines (a Management Node and a User Node) and a remote physical node. Each experiment was run on the exact same setup, described below.
• Dataplane components: two servers with 24-core Xeon X56560 @ 2.67 GHz and 128 GB RAM running CentOS 6.4.
• Workload VM information: the two VMs spawned for the functionality and dataplane tests had 4 vCPUs pinned 1:1 to host CPUs, 8 GB memory, and a 12 GB hard disk in the qcow2 format.
• Controller server information: each SDN solution had a controller running on a CentOS 6.4 server with an 8-core Xeon and 32 GB RAM.
Figure 2 shows the fragment of the experimental network.

Fig. 2. Fragment of the experimental network.

A point-to-point topology (Gigabit Ethernet) was considered as the physical connection. The logical connection between the VMs and the network environment is provided through a GRE tunnel. In the experiment the characteristics of the following SDN controllers are considered: Open Daylight, RYU, Floodlight and POX. During the experiment the characteristics of the virtual machines remain constant; the type of the controller's network operating system is changed in each part of the experiment. To evaluate the basic network parameters of the controllers, the following metrics and commands were used:
• Average flow setup latency (ms):
$ for i in `seq 1 20`; do ping -c 1 -q 10.10.10.2; done
• Average steady-state latency (ms):
$ ping -c 50000 -f 10.10.10.2
• Maximum TCP unicast throughput under varying MSS (90 bytes, 1490 bytes, 9000 bytes):
$ for i in 90 1490 9000; do iperf -c 10.10.10.2 -t 60 -m -M $i; done
• UDP under a varying number of parallel sessions (1 session and 3 sessions):
$ for i in 1 3; do iperf -c 10.10.10.2 -u -t 60 -b 10G -P $i; done
• Maximum allowed TCP flows between known hosts without drops (measured in flows per second).
The interaction between the OpenStack and SDN interfaces is shown in Fig. 3.

Fig. 3. The interaction of OpenStack and SDN interfaces.

Essentially, these experiments examined the features of the network-virtualization solution, including the following [8, 9]:
• the ability to handle overlapping IP ranges and overlapping MAC addresses;
• the ability to provision QoS configurations on a per-virtual-network basis;
• the ability to live-migrate virtual machines across L2 and L3 boundaries;
• fault tolerance of the controller, and limited impact of faults on the data plane (vSwitch, ports, VMs).
The results obtained by executing these commands for the different types of controllers are shown in Table II. The obtained results help to determine the current scope of the Open Daylight, RYU and Floodlight controllers integrated into OpenStack. According to the obtained results, the Open Daylight flow setup latency and average steady-state latency are higher than those of RYU and Floodlight. However, Open Daylight's fault tolerance is significantly higher, because the transmission losses for multiple flows were minimal. POX does not support integration with OpenStack, as our results confirmed. Floodlight demonstrated the highest transmission speed for TCP and UDP data flows and the minimum delay time, which suggests that Floodlight has the highest performance of the reviewed controllers.

TABLE II. RESULTS OF THE EXPERIMENT

  Metric                                 Open Daylight              RYU                        Floodlight
  Average flow setup latency (ms)        10.48                      6.9                        5.4
  Average steady-state latency (ms)      3.3                        2.7                        1.56
  Max TCP unicast throughput vs. MSS     MSS 1490: 18 MBps          MSS 1490: 29.1 MBps        MSS 1490: 31.2 MBps
                                         MSS 9000: 19 MBps          MSS 9000: 29.3 MBps        MSS 9000: 48.7 MBps
  UDP with parallel sessions             P=1: 11.05 MBps, 0% loss   P=1: 21.9 MBps, 15% loss   P=1: 28.4 MBps, 27% loss
                                         P=3: 1.8 MBps, 2% loss     P=3: 2.1 MBps, 10% loss    P=3: 3.31 MBps, 15% loss
  Max TCP flows without drops            100 flows/s: 0% loss       100 flows/s: 0% loss       100 flows/s: 0% loss
                                         1000 flows/s: 3% loss      1000 flows/s: 7% loss      1000 flows/s: 10% loss
                                         10000 flows/s: 98% loss    10000 flows/s: 100% loss   10000 flows/s: 100% loss
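For reproducibility, the flow setup latency figures in Table II could be gathered along the following lines. This is an illustrative sketch, not the authors' tooling; the target address is the one used in the commands above, and the output parsing assumes the usual Linux ping summary line.

```python
import re
import statistics
import subprocess

# Repeat single-packet pings so that each probe triggers a fresh flow setup,
# parse the reported RTT and average it, mirroring the measurement above.
TARGET = "10.10.10.2"

def single_ping_rtt(host: str) -> float:
    out = subprocess.run(["ping", "-c", "1", "-q", host],
                         capture_output=True, text=True, check=True).stdout
    # Typical summary line: "rtt min/avg/max/mdev = 0.345/0.345/0.345/0.000 ms"
    match = re.search(r"= [\d.]+/([\d.]+)/", out)
    if not match:
        raise ValueError("could not parse ping output")
    return float(match.group(1))

rtts = [single_ping_rtt(TARGET) for _ in range(20)]
print(f"average flow setup latency: {statistics.mean(rtts):.2f} ms "
      f"(stdev {statistics.stdev(rtts):.2f} ms)")
```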

IV. CONCLUSIONS
The OpenStack technology makes it possible to organize effective establishment, provisioning and support processes in a cloud computing network. However, the technology has a number of disadvantages that limit the widespread use of OpenStack in convergence with other networks. The lack of flexible interaction with transport layer protocols leads to restrictions in scalability, load balancing, filtering and NAT traversal. The obtained results show that SDN controllers currently have the capability to handle around 1000 flows per second, whereas a computing node handles around 100000 flows per second. This clearly indicates a huge gap between the current capabilities of SDN controllers and the demands of cloud computing.

REFERENCES
[1] SDN Architecture [Online]. Available: https://www.opennetworking.org/images/stories/downloads/sdn-resources/technical-reports/TR_SDN_ARCH_1.0_06062014.pdf
[2] Software-Defined Networking: The New Norm for Networks [Online]. Available: https://www.opennetworking.org/images/stories/downloads/sdn-resources/white-papers/wp-sdn-newnorm.pdf
[3] OpenStack Cloud Administrator Guide [Online]. Available: http://docs.openstack.org/admin-guide-cloud/index.html
[4] A10 Networks OpenStack Integration. Customer Driven Innovation [Online]. Available: https://www.a10networks.com/sites/default/files/resource-files/A10-SB-19107-EN.pdf
[5] A. Devlic, W. John, and P. Skoldstrom, "A Use-Case Based Analysis of Network Management Functions in the ONF SDN Model," in European Workshop on Software Defined Networking, 2012, pp. 85-90. doi: 10.1109/EWSDN.2012.11.
[6] S. Myung-Ki, N. Ki-Hyuk, and K. Hyoung-Jun, "Software-defined networking (SDN): A reference architecture and open APIs," in International Conference on ICT Convergence (ICTC), 2012, pp. 360-361. doi: 10.1109/ICTC.2012.6386859.
[7] Test suite for SDN-based Network Virtualization [Online]. Available: http://sdnhub.org/projects/sdn-test-suite/
[8] OpenDaylight Integration [Online]. Available: http://openstack.redhat.com/OpenDaylight_intergration
[9] Details of Ryu in cooperation with OpenStack Grizzly [Online]. Available: https://github.com/osrg/ryu/wiki/Details-of-Ryu-in-cooperation-with-OpenStack-Grizzly


Research of Architecture and Features of Network Processors

Chaikovsky Igor

Chaban Ksenia

Department of Telecommunication Lviv Polytechnic National University Lviv, Ukraine

Department of Telecommunication Lviv Polytechnic National University Lviv, Ukraine

Nykonchuk Petro Department of Telecommunication Lviv Polytechnic National University Lviv, Ukraine e-mail: [email protected]

Abstract - Features of the structure, functions and applications of network processors are considered, and examples of specific network processor architectures are summarized.
Keywords - network processor; information network; router; pipeline processing; parallel processing.

I. INTRODUCTION
The increase in the number of users of information networks and the growth of the volume of information transmitted through them have considerably raised the bandwidth requirements of routers, which therefore use specialized network processors. These are designed to provide both performance and flexibility through parallel and programmable architectures.

II. HARDWARE DEVELOPMENT FOR DATA PROCESSING IN INFORMATION NETWORKS

Information networks have passed through several stages of development. The first networks (1975-1980) were characterized by low data rates; the data were processed by personal computers with special software, whose hardware was built on universal processors. Figure 1 shows a network device based on the PC architecture.

Fig. 1. Network device based on the PC architecture.

The next step (early 1990s) was the broad introduction of ASIC (Application-Specific Integrated Circuit) chips into networking equipment, together with universal processors that handled the small percentage of packets associated with managing network routing and device configuration. Due to the lack of programmability, custom-chip processors lacked the flexibility of universal processors, where changes are made at the level of software replacement; therefore, the development life cycle of a network device was too long. The next step in the development of network devices (since 2000) was to use both approaches in a new generation of so-called specialized network processors. Such devices have a completely specialized architecture that is based on the RISC concept and contains additional functional blocks for packet processing. Thus, network processors combine custom chips and programmable universal processors [1].

III. GENERAL CHARACTERISTICS OF NETWORK PROCESSORS
A network processor (NP) is a specialized programmable device intended for use in network equipment. The main requirement is the ability of the NP to handle the flow of packets at the speed of the channel to which it is connected. The architectures of network processors from different manufacturers vary, but they all follow the same basic concepts and structures [2]. The main differences from universal processors are that most network processor instruction sets are based on the RISC architecture and that the NP contains additional functional blocks implementing packet-processing tasks. Figure 2 shows a general block diagram of a network processor, which includes the following elements:
• a RISC core;
• an internal buffer;
• high-speed memory interfaces;
• PCI interfaces;
• an interface for communication with a universal processor;


• high-speed interfaces for transferring information packets [3].

Fig. 2. General block diagram of a network processor.

To achieve high-performance packet processing, the following architectural solutions are used in network processors:
• a single-threaded processor with a high clock frequency;
• parallel processing of packet streams;
• pipelined packet processing.
The growth of single-threaded architecture performance is almost exhausted at the moment. The peculiarity of the parallel architecture is the need to select and implement an effective hardware mechanism for managing the input packet queues [4]. In a pipelined architecture the data stream moves through functional blocks, each of which performs the necessary processing of the packets. A combined pipeline-parallel architecture can also be applied.
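As a purely conceptual illustration (not tied to any specific NP product discussed below), the sketch contrasts the two options just described: a pipeline, where every packet traverses a fixed sequence of stages, and parallel processing, where whole packets are dispatched to independent workers.

```python
from concurrent.futures import ThreadPoolExecutor

# Toy packets and three processing stages, standing in for the specialized
# functional blocks of a network processor.
packets = [{"id": i, "dst": f"10.0.0.{i % 4}"} for i in range(8)]

def parse(pkt):    pkt["parsed"] = True;                return pkt
def classify(pkt): pkt["queue"] = hash(pkt["dst"]) % 2; return pkt
def forward(pkt):  pkt["sent"] = True;                  return pkt

STAGES = (parse, classify, forward)

def pipeline(pkts):
    """Pipelined processing: each packet passes through every stage in order."""
    out = []
    for pkt in pkts:
        for stage in STAGES:
            pkt = stage(pkt)
        out.append(pkt)
    return out

def parallel(pkts, workers=4):
    """Parallel processing: complete packets are handled by independent workers."""
    def handle(pkt):
        for stage in STAGES:
            pkt = stage(pkt)
        return pkt
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(handle, pkts))

print(len(pipeline(list(packets))), "packets processed via the pipeline")
print(len(parallel(list(packets))), "packets processed via parallel workers")
```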

IV. APPLICATION AREAS OF NETWORK PROCESSORS
Network processors are used in the production of various types of network equipment [5]:
• routers and switches;
• firewalls;
• session border controllers;
• intrusion detection devices;
• intrusion prevention devices;
• network monitoring systems.
With the increasing number of network applications, the processor market has divided into three major areas of network equipment: core devices, border devices and access devices. Each of these areas has different target applications and performance. Core devices are located within the network; they are the most dependent on performance and the least sensitive to flexibility. Examples of such devices are terabit and gigabit routers. Border devices are located between the core network and the access devices; examples are URL load balancers and firewalls. The access equipment comprises various devices for access to the network (cable modems, wireless access points, etc.) [4]. Each level requires a different ratio of network performance, functionality and cost. To meet these needs effectively, network processors should be optimized not only for specific equipment, but also for the services provided in each segment of the network infrastructure.

V. NETWORK PROCESSOR MARKET
The network processor market is represented by many companies and corporations. Intel is a leader in production with its IXP line of network processors. The IXP12xx and IXP28xx series are used in access networks and in session border controllers with capacities of 1.6 Gb/s to 10 Gb/s; architecturally they consist of a StrongARM core and six programmable elementary processors (microengines) that process information at frequencies from 232 MHz to 1.4 GHz [5]. EZchip is represented by the NP-3 [6] and NP-4 network processors, providing a bandwidth of 30 to 100 Gb/s; architecturally they include traffic managers for incoming and outgoing packets, blocks for lookup, routing, switching and security policy enforcement, and a hardware implementation of the OAM (Operation, Administration, and Maintenance) protocol, which simplifies the administration and support of routers [7]. LSI produces the ACP (Axxia Communication Processors) family of communication processors, which combine general-purpose computing cores and multiple specialized cores for various packet-processing tasks; the most productive is the APP3100 with a bandwidth of up to 2 Gb/s. The PMC-Sierra WinPath3 NP has a throughput of 10 Gb/s and is clocked at 650 MHz; it contains packet classification blocks, security policy blocks and cryptography blocks [8]. The highly productive NFP-6xxx NP from Netronome has a throughput of 200 Gb/s, works at a frequency of 1.2 GHz and can handle more than 3 billion queries [9].

VI. CONCLUSIONS
Network processors are aimed at the rapid development of flexible and easily modified hardware solutions for information networks. The purpose of a network processor is an effective combination of the flexibility of programmable processors with special high-speed circuits. Some experience in the design and operation of network processors has been accumulated, but the most efficient hardware architecture of network processors has not yet been fully identified. Processor options built from combined parallel-pipeline structures do not always meet the constantly increasing performance demands. Software development for network processors can be difficult, because most high-performance network processors require low-level programming. Further work in this direction will study the software and hardware aspects of network processors.

REFERENCES
[1] Y.V. Ladyzhenskii, "Architecture and development trends of network processors," Donetsk, 2011, vol. 1.
[2] V.I. Grishchenko and Yu.V. Ladyzhenskiy, "Modeling routers on multicore network processors," Donetsk, 2010, article 1.
[3] M. Ahmadi and S. Wong, "Network Processors: Challenges and Trends," in Proceedings of the 17th Annual Workshop on Circuits, Systems and Signal Processing (ProRisc 2006), 2006, pp. 223-232.
[4] E.A. Kucheryavy, Traffic Management and Quality of Service in the Internet. St. Petersburg: Science and Technology, 2004, p. 336.
[5] A.A. Zarubin, Microprocessor control software architecture IXA: guidelines for practical training 200900. St. Petersburg, 2003, pp. 6-17.
[6] NP-3: 30-Gigabit Network Processor with Integrated Traffic Management. Product Brief: EZchip Technologies, 2010, p. 4.


Wireless and mobile communication

Research of Factors Influencing the Subchannel Allocation to Subscriber Stations in WiMAX

Lemeshko Oleksandr

Hailan Ahmad

Telecommunication Systems Department Kharkiv National University of Radio Electronics Kharkiv, Ukraine

College of Computer Science and Mathematics Thi-Qar University Nasiriya, Iraq e-mail: [email protected]

Haider Dheyaa Kamil Al-Janabi Telecommunication Systems Department Odesa National Academy of Telecommunications named after O.S. Popov Odesa, Ukraine e-mail: [email protected]

Hojayev Oraz
Telecommunication Systems Department
Kharkiv National University of Radio Electronics
Kharkiv, Ukraine
e-mail: [email protected]

Abstract — A model of subchannel allocation in a WiMAX network, based on the solution of an optimization problem associated with maximizing the lower bound of the bandwidth allocated to each subscriber station according to its quality of service requirements for the access rate, is investigated. Research on the proposed model confirmed the adequacy and effectiveness of the solution in terms of providing different types of service level (with and without guarantees) to subscriber stations. The results showed that, when there is a deficit of network resources, the number of subchannels allocated to subscriber stations and the provided level of quality of service are reduced adaptively, depending on the type of possible guarantees and on the requirements regarding the access rate.

Keywords — WiMAX; OFDMA; subchannel; allocation; subscriber station

I. INTRODUCTION
WiMAX uses adaptive multiplexing based on OFDMA (Orthogonal Frequency-Division Multiple Access), which allows several subscribers to work in one timeslot on different subchannels. Thus, the effectiveness of using WiMAX depends on the quality of the solution of the problem of allocating the time and frequency resources formed at the physical and data link layers of the OSI model. Each user (Subscriber Station, SS) in a WiMAX network is allocated, according to its quality of service (QoS) requirements, a certain set of bursts in the frame structure, for which a number of frequency subchannels and timeslots are assigned; a burst is therefore the frequency-time resource in WiMAX technology. However, existing methods (schedulers) of frequency-time resource allocation, such as Proportional Fair Scheduling, Round Robin Scheduler, Max C/I Ratio and Best CQI Scheduling [1], do not satisfy the requirements for differentiation or QoS guarantees. In this regard, there is an actual problem associated with the improvement of frequency and time resource allocation methods in WiMAX technology.

II. MODEL OF SUBCHANNEL ALLOCATION IN WIMAX
In the model of subchannel allocation to subscriber stations it is assumed that the following inputs are known: the bandwidth of the used frequency channel, from the range 1.25 MHz to 20 MHz; the selected mode of subchannel usage (FUSC, PUSC, OPUSC, OFUSC, or TUSC); the total number N of SSs in the network; the number K of subchannels used, which depends on the selected channel bandwidth; the required transmission rate R_{req}^{n} of the n-th SS (Mbps); and the capacity R^{n,k} of the k-th subchannel allocated to the n-th SS.
Taking into account that the useful part of the symbol has a fixed duration T_b = 89.6 µs, the number of symbols in a frame takes the values 19, 24, 39, 49, 79, 99, 124 or 198, according to the indicated frame sizes. Moreover, between the symbols there is a guard interval T_g, which can take four values relative to the length of the useful part of the symbol. The capacity R^{n,k} of the k-th subchannel allocated to the n-th SS represents the number of bits transmitted per time unit (second) and can be calculated according to the formula [2-4]:

    R^{n,k} = \frac{R_c^{n,k} K_b^{n,k} K_s (1 - \mathrm{BLER})}{T_b + T_g + T_{RTG} + T_{TRG}} ,    (1)

where R_c^{n,k} is the rate of the code used for signal coding of the n-th SS; K_b^{n,k} is the bit load of a symbol of the n-th SS; K_s is the number of subcarriers used for data transmission in one subchannel; T_{RTG} = 105 µs is the duration of the switching interval from receiving to transmission (receive/transmit transition gap, RTG); T_{TRG} = 60 µs is the duration of the switching interval from transmission to receiving (transmit/receive transition gap, TRG); and BLER is the probability of block error obtained at the expense of the Hybrid Automatic Repeat Request (HARQ) mechanism [1].
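For a quick sanity check of (1), the snippet below evaluates the formula for one assumed parameter set (64-QAM with rate-3/4 coding, 24 data subcarriers per subchannel, 1% BLER); the numerical values are illustrative assumptions, not figures from the paper.

```python
# Illustrative evaluation of formula (1); all parameter values are assumptions.
R_c = 3 / 4            # code rate of the n-th SS
K_b = 6                # bits per symbol (64-QAM)
K_s = 24               # data subcarriers per subchannel
BLER = 0.01            # block error probability
T_b = 89.6e-6          # useful symbol duration, s
T_g = T_b / 8          # guard interval (one of the allowed fractions of T_b), s
T_RTG = 105e-6         # receive/transmit transition gap, s
T_TRG = 60e-6          # transmit/receive transition gap, s

R_nk = (R_c * K_b * K_s * (1 - BLER)) / (T_b + T_g + T_RTG + T_TRG)
print(f"R^(n,k) = {R_nk / 1e3:.1f} kbit/s per subchannel")
```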

While solving the problem of subchannel allocation within the represented model it is necessary to calculate the control variables x_{nk}, which define the order of subchannel allocation. According to the physics of the problem, the following limitation is imposed on the control variables:

    x_{nk} \in \{0, 1\}, \quad (n = \overline{1,N}, \; k = \overline{1,K}),    (2)

where x_{nk} = 1 if the k-th subchannel is allocated to the n-th SS, and x_{nk} = 0 otherwise. The total number of control variables depends on the number of subscriber stations in the network and on the number of used subchannels, and is defined by the product N \cdot K. The condition that each subchannel is fixed to only one subscriber station is defined by the expression

    \sum_{n=1}^{N} x_{nk} = 1, \quad (k = \overline{1,K}).    (3)

The condition that the transmission rate scheduled for the n-th subscriber station over the allocated subchannels is not lower than the required rate when a service guarantee is provided is defined by the expression

    \sum_{k=1}^{K} R^{n,k} x_{nk} \geq \delta_n R_{req}^{n}, \quad (n = \overline{1,N}),    (4)

where \delta_n = 1 if a service guarantee is necessary for the n-th SS, and \delta_n = 0 otherwise.

For optimal balancing of the number of subchannels allocated to each SS, additional limiting conditions on the control variables x_{nk} are introduced into the system:

    \frac{\sum_{k=1}^{K} R^{n,k} x_{nk}}{(Pr_n + 1) R_{req}^{n}} \geq \alpha, \quad (n = \overline{1,N}),    (5)

where Pr_n is the priority of the n-th subscriber station, and \alpha is also a control variable, characterizing the lower bound of the satisfaction level of the QoS requirements for the access rate; in general \alpha \geq 0.

To improve QoS in the WiMAX network when solving the problem of balancing the number of subchannels allocated to the SSs, the lower bound of meeting the QoS requirements for the access rate has to be maximized, i.e.

    \alpha \rightarrow \max.    (6)

Thus, the model of subchannel allocation to subscriber stations in a WiMAX network is based on the solution of an optimization problem associated with maximizing the lower bound of the bandwidth allocated to each subscriber station (6) according to its QoS requirements for the access rate. The constraints of the optimization problem are conditions (1)-(5). The formulated optimization problem belongs to the class of mixed-integer linear programming problems.

III. RESEARCH OF SUBCHANNEL ALLOCATION WITH AND WITHOUT GUARANTEE OF ACCESS RATE
The tasks of ensuring QoS in WiMAX networks are very important, the basic QoS indicator being the access rate. Depending on the type of provided service, the required access rate can be differentiated and in some cases even guaranteed. It is important to note that the same subchannel will provide a different rate for different SSs (depending on the SNR, the selected modulation and coding scheme, and the priority), while the amount of the total available resource depends on the channel bandwidth (the number of subcarriers and subchannels). The following factors influencing the subchannel allocation were identified during the research: the number of SSs; the SS requirements for quality of service (access rate); the signal-to-noise conditions (SNR); the channel bandwidth; the number of used subchannels; the selected mode (FUSC, PUSC, OPUSC, OFUSC, TUSC); the service model (with or without an access rate guarantee); and the priorities of the SSs. Besides, better balancing is achieved with a higher channel bandwidth and number of subchannels and with less differentiated QoS requirements.

IV. CONCLUSION
A model for subchannel allocation in a WiMAX network, (1)-(6), was presented, in which balancing of the number of subchannels allocated to subscriber stations is based on the solution of an optimization problem associated with maximizing the lower bound of the bandwidth allocated to each subscriber station (6) according to its QoS requirements for the access rate. The constraints stated in solving the optimization problem are conditions (1)-(5). The formulated optimization problem belongs to the class of mixed-integer linear programming problems, because some of its variables are Boolean, the balancing variable \alpha is a positive real variable, and the objective function (6) and the constraints (2)-(5) are linear. Research on the proposed model (1)-(5) confirmed the adequacy and effectiveness of the solutions as a whole in terms of providing different types of service level (with and without guarantees) to subscriber stations.

REFERENCES
[1] S.R. Puranik, M. Vijayalakshmi, and L. Kulkarni, "A Survey and Analysis on Scheduling Algorithms in IEEE 802.16e (WiMAX) Standard," International Journal of Computer Applications, pp. 110, October 2013.
[2] S. Garkusha, Yu. Andrushko, and O. Lemeshko, "Analysis Results of WIMAX Downlink Traffic Management Model in Congestion Conditions," Proceedings of World Telecommunications Congress 2014 (WTC 2014), pp. 1-4, June 2014.
[3] A.V. Lemeshko and S.V. Garkusha, "Model of time-frequency resource allocation in WiMAX aimed at improving the electromagnetic compatibility," Proceedings of the 2013 IX International Conference on Antenna Theory and Techniques (ICATT), pp. 175-177, September 2013.
[4] A.V. Lemeshko, H.D. Al-Janabi, and A.M.K. Al-Dulaimi, "Model progress of subchannel distribution in WiMAX in antennas system," X Anniversary International Conference on Antenna Theory and Techniques, ICATT'2015, pp. 276-278, April 2015.

The Problem of Processing Digital Signals of the 5G Format in Mobile Communication and its Solution

Bubnov N.S., Svetsinskaya E.S., Yaschuk A.S., Sunduchkov K.S.
Department of Information and Telecommunication Networks, Institute of Telecommunication Systems, NTUU "KPI"
Kiev, Ukraine
e-mail: [email protected]

Golik A.L., Sunduchkov A.K.
National Institute of Telecommunications
Kiev, Ukraine

Abstract — This work is devoted to the basic solutions that allow processing digital signals of the 5G format in mobile communication. Two network architectures for delivering signals to each base station and a receiver architecture for the mobile terminal with the required operating frequency band of 4 GHz are considered.

Keywords — 5G; highway; mobile network; services.

I. INTRODUCTION
It is known [1-3] that mobile communication for subscribers moving at speeds of 250-350 km/h, whether on a highway or in a railway express, based on traditional methods (handover technology) of establishing communication between a base station (BS) and the mobile terminal (MT), cannot simultaneously provide each subscriber of a network of 1000 or more subscribers with services at a data rate of 10-20 Mbit/s. However, there is such a need in the community, and scientists continue to storm these heights. When talking about the 5G format in mobile communications, one must first of all imagine what it can give mobile subscribers. In a "smart house" an abundance of information is generated that may be requested by the subscriber (including at the mobile terminal) at any moment of time for online intervention to adjust the control program of the house. Extreme situations are not the only source of the need for high-speed data transmission: solving problems with distributed databases, or parallel computing using multiple computers placed in a particular area, also involves large amounts of data, high data transfer speeds, etc. The organization of a "direct" channel for delivering the information requested by subscribers to mobile terminals (MTs) moving at speeds of up to 250 km/h on highways and up to 350 km/h in express trains, with 1000-2000 MTs in the network, requires distribution networks with capacities, as will be shown below, of up to several tens of Gbit/s. Today the leading technology in mobile networks is LTE (Long-Term Evolution) of the 3GPP international consortium. LTE networks are already deployed in many countries. A literature review shows that German automakers already equip their cars with LTE access points with data rates per subscriber from 23 to 70 Mbit/s, but with a limited subscriber movement speed [1-4].


Analysis has shown that a further increase of the data transfer speed in mobile networks is limited by the available radio bandwidth and by the principles of subscriber mobility using conventional handover procedures. Leading experts from Deutsche Telekom, NTT DoCoMo, Amtel, Samsung, Telefonica, Vodafone and others have already formed their vision and technical requirements for the next-generation mobile communication standard, 5G / IMT-2020. In particular, the following data are given in [5]: research on 5G technology began in 2012 in France with the achievement of rates of over 4 GB/s. In 2013 a new step towards 5G was made in Japan: the hardware company NTT DoCoMo demonstrated the ability to transfer data from a user at speeds of up to 10 Gbit/s (uplink) at a frequency of 11 GHz with a 400 MHz bandwidth; the data transfer was carried out to a vehicle moving at a speed of 9 km/h. At the Technical University of Dresden a modern 5G laboratory was opened at the Vodafone chair of 5G mobile communication systems. The 5G laboratory includes network hardware and software, computer chips, spectrometers and cloud computing services.

II. MAIN PART
A. Architecture of the heterogeneous network delivering signals to each base station
A heterogeneous network architecture delivering digital service signals to a large number of subscribers (about 2000 subscribers) at 10-12 Mbit/s each has to provide a network bandwidth of 24 Gbit/s (2000 subscribers × 12 Mbit/s = 24 Gbit/s), which with QAM-64 (Km = 6 bits per symbol) would require an operating frequency band of 4 GHz (24 Gbit/s : 6 → 4 GHz). This determines the nature of the distribution networks. Two types of distribution lines to the base stations are considered: I - a hybrid network that includes, besides the service providers, a channelization center with two lasers, an external optical modulator, channelization apparatus and a distribution network of fiber-optic links (FOL); II - a heterogeneous network, in which the distribution section consists of satellite radio channels built around a transponder. Both networks are able to provide a total operating frequency band of 4 GHz or more. Block diagrams of these networks are shown in Fig. 1 (A) and (B).
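The dimensioning in subsection A can be reproduced with a few lines of arithmetic; the per-subscriber rate, subscriber count and the QAM-64 spectral assumption are the values quoted above.

```python
# Aggregate-throughput and band estimate from subsection A (values from the text).
subscribers = 2000
rate_per_subscriber = 12e6        # bit/s
bits_per_symbol = 6               # QAM-64, Km = 6

aggregate = subscribers * rate_per_subscriber   # required network bandwidth
band_hz = aggregate / bits_per_symbol           # crude 1 symbol/s-per-Hz assumption

print(f"Aggregate throughput: {aggregate / 1e9:.0f} Gbit/s")
print(f"Required operating band: {band_hz / 1e9:.0f} GHz")
```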


The formation of OFDM symbols in the channel-forming apparatus is described in the authors' publications [3].

Fig. 1. Structural schemes of the distribution networks based on fiber-optic links (A) and on satellite radio channels (B).

B. Receiver architecture of the mobile terminal
The features of the network architecture provide simultaneous transmission of all services (for example, 2048 services with a bandwidth of 3-12 Mbit/s each) to all MTs. The analog-to-digital converters (ADCs) in the receiver are limited by the input frequency of the converted signal; currently this value is about 1 GHz [5]. With QAM-64 modulation the signal of 2048 services occupies 1-4 GHz. Consequently, it is necessary to break the services up into groups at the central service station and to form OFDM symbols for each group. The task of receiving the wideband signal and extracting the necessary services is assigned to the user's MT. In the millimeter range a separate group with the necessary service cannot be selected because of the limitations of band-pass filters. This means that the entire received signal has to be converted to an intermediate frequency (IF). One conversion is not enough, because a 4 GHz-wide signal cannot be placed in the range from 0 to 1 GHz, so another conversion step is needed; increasing the number of conversions to 3 or 4 is inappropriate both technically and economically. Fig. 2 shows the architecture of the MT receiver.

Fig. 2. Receiver architecture of the mobile terminal.

The input circuit 1 receives the signal in the millimeter range; amplifier 2 amplifies it to the required level; the first conversion stage (3-5) brings the signal to the first IF; filter 6 selects from the 4 GHz band one group or a group of services with a total bandwidth of these service signals ... IF; 11 - a quadrature mixer; 12 - an ADC; 13 - a digital OFDM receiver. The frequency axis 15 schematically shows the arrangement of the analog signals in the millimeter range 14, a group or a group of groups of services 16, and the service groups 17 in which the services are ordered.

III. CONCLUSION