June 1–4, 2011, KUSADASI, AYDIN, TURKEY

PROCEEDINGS ABSTRACTS
Gediz University Publications
www.gediz.edu.tr

2nd International Symposium on Computing in Science & Engineering

EDITOR
Prof. Dr. Mustafa GÜNEŞ

EDITORIAL BOARD
Asst. Prof. Dr. Ali Kemal ÇINAR
Asst. Prof. Dr. İbrahim GÜRLER

PUBLICATION COMMITTEE
Instr. Mümin ÖZCAN
Res. Asst. Ayşegül GÜNGÖR
Res. Asst. Gülşen ŞENOL
Res. Asst. Mehtap ÖZDEMİR KÖKLÜ
Res. Asst. Dr. Özlem KİREN GÜRLER
Res. Asst. Şerife TOZAN
Instr. Yavuz İNCE
Res. Asst. Gökhan AKYOL
Res. Asst. Selma USLUSOY
Res. Asst. Zerife YILDIRIM
Res. Asst. Mehmet ÇAPUTÇU

Gediz University Publication No. GU-003. Publishing Date: May 2011. http://iscse2011.gediz.edu.tr

ISBN: 978-605-61394-2-0
© ISCSE 2011 (International Symposium on Computing in Science & Engineering). All rights reserved. Copyright Gediz University. No part of this publication may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, electronic, mechanical, photocopying, recording or otherwise, without the prior written permission of the publisher, Gediz University. No responsibility is assumed by the publisher for any injury and/or damage to persons or property as a matter of products liability, negligence or otherwise, or from any use or operation of any methods, products, instructions or ideas contained in the material herein. Printed in Izmir, Turkey.

June 1–4, Kusadasi, Aydin, Turkey. http://iscse2011.gediz.edu.tr


MAIN SPONSORS


PREFACE

It was only a year ago, in June 2010, that the Faculty of Engineering and Architecture of Gediz University inaugurated an international symposium at which 291 research outputs were presented, with the participation of more than 400 academicians and researchers from 106 universities around the globe. The high level of interest and the success of this 1st symposium gave us the motivation and encouragement to reorganize the event in the following year and to maintain it on a yearly basis. Accordingly, this year the 2nd International Symposium on Computing in Science & Engineering (ISCSE 2011) was held at Kuşadası (on the western coast of Turkey) during 1–4 June 2011. The 629 participants from a total of 27 countries, 60 of whom were postgraduate students, had opportunities for scientific discussions and exchanges that greatly contributed to the success of the symposium. The participating countries included Algeria, Albania, Australia, Bahamas, Belgium, Bosnia Herzegovina, Burkina Faso, Cyprus, France, Gambia, Germany, Ghana, Indonesia, Iran, Iraq, Ireland, Italy, Jordan, Northern Cyprus, Malaysia, Mali, Qatar, Saudi Arabia, Syria, Sweden, Tunisia, Turkey and the USA.

The main objective of ISCSE 2011 was to bring together national and international researchers and practitioners to discuss and exchange scientific theories and engineering applications over a wide spectrum of topics. The symposium coverage was intentionally kept as comprehensive as possible in order to create platforms of cooperation for scientists and to promote and facilitate future interdisciplinary research. With these aims in mind, the Symposium Organizing Committee determined xx topics, including, for instance, Quantum Computation, Robotics and Automation, Artificial Intelligence, DNA Computing, Fuzzy Logic and Discrete Optimization. ISCSE 2011 received a total of 300 full-paper and 42 poster submissions. The full-paper submissions were sent to the members of the Scientific Committee and to additional reviewers for evaluation. Though the full list would exceed the space allotted here, we would like to thank the 70 reviewers for their hard work and dedication in helping us design such a high-quality program of talks.

Besides the symposium talks and poster presentations, the program included 5 plenary lectures covering various fields of the symposium, given by the invited speakers: Prof. Dr. Vesa A. NISKANEN from the University of Helsinki on 'Aspects of Approximate Probability'; Prof. Dr. Burhan TURKSEN from ETU University of Economics and Technology on 'Review of Fuzzy System Model: Philosophical Grounding, Rule Bases and Fuzzy Function'; Prof. Dr. Ibrahim A. AHMAD and Assoc. Prof. Dr. Dursun DELEN from Oklahoma State University on 'Reversed Hazard Rate and Mean Idle Time: Two New Notions in Quantifications Aging Derived from Stochastic Orderings and Applications' and 'Business Intelligence and Advanced Analytics for Real World Problems', respectively; and Prof. Dr. Mike NACHTEGAEL from Ghent University on 'The Application of Fuzzy Logic for Noise Reduction in Image Processing'. Their invaluable contributions and illuminating public talks are acknowledged and greatly appreciated. Without their participation, the symposium would have been incomplete.

Knowing that the organization of such an international event requires great effort and time, we would like to extend our appreciation to the Rector of Gediz University, Prof. Dr. Seyfullah Çevik, and to the members of the organizing committee, including Mustafa Akdag, Haluk Gumuskaya, Serdar Korukoğlu, Vedat Pazarlioglu, Mehmet Pakdemirli, Fuat Okumus, Ozlem Erkarslan, Ibrahim Gurler, Aziz Kolkiran, Ozan Cakir, Fatih Mümtaz Duran, Hadi Zareie, Haidar Sharif, Seza Filiz, Ali Kemal Cinar, Yavuz Bayam, Selim Solmaz, Ugur Turkan, Mehmet Aksarayli, Emin Ozyilmaz, Gokhan Cayli, Mumin Ozcan, Salahattin Yildiz, M. Rıdvan Özel, Ozlem Kiren Gurler, Aysegul Gungor, Mehtap Ozdemir Koklu, Gulsen Senol, Ibrahim Cakirlar, Mehmet Çaputçu, Zerife Yildirim and Serife Tozan Ruzgar.

Last but not least, the symposium would not have been possible without the generous financial and in-kind support provided by our sponsors. The contributions received from Tubitak, Turksat, BTK (Information and Communication Technologies Authority of Turkey), Petkim, Kavuklar Group, Orkide Group, MEGA International Communication, Emarket Turkey, Derici Group, Cisco and Hamle Digital are greatly appreciated.

Given the enormous speed with which science and engineering applications have been advancing in the areas covered by ISCSE, we expect that the next symposium will be as exciting and stimulating as this year's, as seen through the pages of this proceedings volume. Please stay tuned to the web site of Gediz University for updates and details. Looking forward to seeing you all, along with new researchers, at ISCSE 2012.

Prof. Dr. Mustafa GÜNEŞ
Symposium Coordinator


HONORARY CHAIR
Prof. Dr. Seyfullah CEVIK, Gediz University, Rector

SYMPOSIUM CHAIR
Prof. Dr. Mustafa GUNES, Gediz University, Dean of the Faculty of Engineering and Architecture

SYMPOSIUM ORGANIZING COMMITTEE
Prof. Dr. Mustafa GUNES, Gediz University (TR)
Prof. Dr. Mustafa AKDAG, Gediz University (TR)
Prof. Dr. Haluk GUMUSKAYA, Gediz University (TR)
Prof. Dr. Lotfi Askar ZADEH, UC Berkeley (USA)
Prof. Dr. Mehmet SEZER, Mugla University (TR)
Prof. Dr. Serdar KORUKOĞLU, Ege University (TR)
Prof. Dr. M. Vedat PAZARLIOGLU, Dokuz Eylul University (TR)
Prof. Dr. Mehmet PAKDEMIRLI, Celal Bayar University (TR)
Prof. Dr. Serkan ERYILMAZ, Atilim University (TR)
Assoc. Prof. Dr. Fuat OKUMUS, Gediz University (TR)
Assoc. Prof. Dr. Ozlem ERKARSLAN, Gediz University (TR)
Asst. Prof. Dr. Ibrahim GURLER, Gediz University (TR)
Asst. Prof. Dr. Aziz KOLKIRAN, Gediz University (TR)
Asst. Prof. Dr. Ozan CAKIR, Gediz University (TR)
Asst. Prof. Dr. Fatih Mümtaz DURAN, Gediz University (TR)
Asst. Prof. Dr. Hadi ZAREIE, Gediz University (TR)
Asst. Prof. Dr. Md. Haidar SHARIF, Gediz University (TR)
Asst. Prof. Dr. Seza FILIZ, Gediz University (TR)
Asst. Prof. Dr. Ali Kemal CINAR, Gediz University (TR)
Asst. Prof. Dr. Yavuz BAYAM, Gediz University (TR)
Asst. Prof. Dr. Selim SOLMAZ, Gediz University (TR)
Asst. Prof. Dr. Ugur TURKAN, Gediz University (TR)
Asst. Prof. Dr. Mehmet AKSARAYLI, Dokuz Eylul University (TR)
Asst. Prof. Dr. Emin OZYILMAZ, Ege University (TR)
Instr. Dr. Gokhan CAYLI, Gediz University (TR)
Instr. Mumin OZCAN, Gediz University (TR)
Instr. Salahattin YILDIZ, Gediz University (TR)
Res. Asst. Dr. Ozlem KIREN GURLER, Dokuz Eylul University (TR)
Res. Asst. Aysegul GUNGOR, Gediz University (TR)
Res. Asst. Mehtap OZDEMIR KOKLU, Gediz University (TR)
Res. Asst. Gulsen SENOL, Gediz University (TR)
Res. Asst. Ibrahim CAKIRLAR, Gediz University (TR)
Res. Asst. Zerife YILDIRIM, Dokuz Eylul University (TR)
Lab. Tech. Serife TOZAN RUZGAR, Gediz University (TR)

SPONSORS AND SUPPORTERS
Turksat Satellite Communication, Cable TV and Operation Inc. Co. (TÜRKSAT)
Information and Communication Technologies Authority of Türkiye (BTK)
Scientific and Technological Research Council of Türkiye (TÜBİTAK)
Petrochemical Holding A.S. (PETKIM)
Kavuklar Group
Orkide Group
Mega International Communication
Emarket Turkey
Derici Group
CISCO
Hamle Tasarım


SCIENTIFIC COMMITTEE
Prof. Dr. Ronald R. YAGER, Iona College (USA)
Prof. Dr. Reza LANGARI, Texas A&M University (USA)
Prof. Dr. Yalcın CEBI, Dokuz Eylül University (TR)
Prof. Dr. Mustafa GUDEN, Izmir Institute of Technology (TR)
Prof. Dr. Ali CALISKAN, Ege University (TR)
Prof. Dr. Mustafa GUNES, Gediz University (TR)
Prof. Dr. Serdar KURT, Dokuz Eylül University (TR)
Prof. Dr. Orhan GUVENEN, Bilkent University (TR)
Prof. Dr. Mehmet SEZER, Mugla University (TR)
Prof. Dr. Daniel LANE, University of Ottawa (CA)
Prof. Dr. Ali SURMEN, Bursa Technical University (TR)
Prof. Dr. Tuncer OREN, University of Ottawa (CA)
Prof. Dr. Salih OKUR, Izmir Institute of Technology (TR)
Prof. Dr. Robert SHORTEN, National University of Ireland-Maynooth (IRL)
Prof. Dr. Mustafa AYTAC, Uludag University (TR)
Prof. Dr. Nihat BOZDAĞ, Gazi University (TR)
Prof. Dr. Ahmet GOKÇEN, Istanbul University (TR)
Prof. Dr. Etienne E. KERRE, Ghent University (BE)
Prof. Dr. Saban EREN, Yasar University (TR)
Prof. Dr. Osman BALCI, University of Virginia (USA)
Prof. Dr. İrfan ALAN, Ege University (TR)
Prof. Dr. Martin CORLESS, Purdue University (USA)
Prof. Dr. Mehmet Cudi OKUR, Yaşar University (TR)
Prof. Dr. Gultekim CETINER, Yalova University (TR)
Prof. Dr. J.N.K. RAO, Carleton University (CA)
Prof. Dr. Serkan ERYILMAZ, Izmir University of Economics (TR)
Prof. Dr. Said Ali HASAN, King Abd. University (KSA)
Prof. Dr. Senay UÇDOGRUK, Dokuz Eylül University (TR)
Prof. Dr. Orhan TORKUL, Sakarya University (TR)
Prof. Dr. Levent SENYAY, Dokuz Eylül University (TR)
Prof. Dr. Gulhan ASLIM, Ege University (TR)
Prof. Dr. Masoud NIKRAVESH, UC Berkeley (USA)
Prof. Dr. Cemali DINCER, Izmir University of Economics (TR)
Prof. Dr. Erdal CELIK, Dokuz Eylul University (TR)
Prof. Dr. Ismihan BAYRAMOGLU, Izmir University of Economics (TR)
Prof. Dr. Jorg FLIEGE, The University of Southampton (UK)
Prof. Dr. Mike NACHTEGAEL, Ghent University (BE)
Prof. Dr. Talip ALP, Yalova University (TR)
Prof. Dr. Ibrahim A. AHMAD, Oklahoma State University (US)
Prof. Dr. Selim ZAIM, Fatih University (TR)
Prof. Dr. Yavuz AKBAS, Ege University (TR)
Prof. Dr. Turan BATAR, Dokuz Eylul University (TR)
Prof. Dr. Harun TASKIN, Sakarya University (TR)
Prof. Dr. Efendi NASIBOGLU, Dokuz Eylül University (TR)
Prof. Dr. Gulen CAGDAS, Istanbul Technical University (TR)
Prof. Dr. Ugur CAM, Dokuz Eylül University (TR)
Prof. Dr. Gokmen TAYFUR, Izmir Institute of Technology (TR)
Prof. Dr. Murat SOYGENIS, Yıldız Technical University (TR)
Prof. Dr. Yunus CENGEL, Yıldız Technical University (TR)
Prof. Dr. Haluk GUMUŞKAYA, Gediz University (TR)
Assoc. Prof. Dr. Nurullah UMARUSMAN, Aksaray University (TR)
Assoc. Prof. Osman I. TAYLAN, King Abd. University (KSA)
Assoc. Prof. İbrahim DARRAB, King Abd. University (KSA)
Assoc. Prof. Dr. Dursun DELEN, Oklahoma State University (US)
Assoc. Prof. Dr. Mustafa TOPARLI, Dokuz Eylül University (TR)


SCIENTIFIC COMMITTEE
Assoc. Prof. Dr. Musa ALÇI, Ege University (TR)
Assoc. Prof. Dr. Sinan SENER, Istanbul Technical University (TR)
Assoc. Prof. Dr. Yusuf OYSAL, Anadolu University (TR)
Assoc. Prof. Dr. Arzu GONENC SORGUC, Middle East Technical University (TR)
Assoc. Prof. Dr. Can BAYKAN, Middle East Technical University (TR)
Assoc. Prof. Dr. Ali Ihsan NESLITURK, Izmir Institute of Technology (TR)
Assoc. Prof. Dr. Gamze TANOGLU, Izmir Institute of Technology (TR)
Asst. Prof. Dr. Muhammed CINSDIKICI, Ege University (TR)
Asst. Prof. Dr. Hadi ZAREIE, Gediz University (TR)
Asst. Prof. Dr. Selim SOLMAZ, Gediz University (TR)
Asst. Prof. Dr. Murat CAKIRLAR, Sakarya University (TR)
Asst. Prof. Dr. Ozdemir CETIN, Sakarya University (TR)
Asst. Prof. Dr. A. Turan OZCERIT, Sakarya University (TR)
Asst. Prof. Dr. Uğur TURKAN, Gediz University (TR)
Asst. Prof. Dr. Yavuz BAYAM, Gediz University (TR)
Asst. Prof. Dr. Koray KORKMAZ, Izmir Institute of Technology (TR)
Asst. Prof. Dr. Nurcan BAYKUS, Dokuz Eylul University (TR)
Asst. Prof. Dr. Haldun SARNEL, Dokuz Eylul University (TR)
Asst. Prof. Dr. Cuneyt AKINLAR, Anadolu University (TR)
Asst. Prof. Dr. Ahmed FREEWAN, University of Jordan (JO)
Asst. Prof. Dr. Mustafa Emre İLAL, Izmir Institute of Technology (TR)
Asst. Prof. Dr. Ahmet ZENGİN, Sakarya University (TR)
Asst. Prof. Dr. Kadir ERKAN, Yıldız Technical University (TR)
Asst. Prof. Dr. H. Secil ARTEM, Izmir Institute of Technology (TR)
Asst. Prof. Dr. Osman Caglar AKIN, Fatih University (TR)
Asst. Prof. Dr. Alpaslan DUYSAK, Dumlupınar University (TR)
Asst. Prof. Dr. Jens ALLMER, Izmir Institute of Technology (TR)
Asst. Prof. Dr. Yenal AKGUN, Gediz University (TR)
Asst. Prof. Dr. Ayce DOSEMECILER, Gediz University (TR)
Asst. Prof. Dr. Sabri ALPER, Gediz University (TR)
Asst. Prof. Dr. MD Haidar SHARIF, Gediz University (TR)
Asst. Prof. Dr. Hurevren KILIÇ, Atılım University (TR)
Asst. Prof. Dr. Aysegul ALAYBEYOĞLU, Celal Bayar University (TR)
Asst. Prof. Dr. Ibrahim GURLER, Gediz University (TR)
Asst. Prof. Dr. Sahin UYAVER, Istanbul Commerce University (TR)
Asst. Prof. Dr. Fahrettin ELDEMIR, Fatih University (TR)
Asst. Prof. Dr. Ozan CAKIR, Gediz University (TR)
Assoc. Prof. Dr. Fuat OKUMUS, Gediz University (TR)
Asst. Prof. Dr. Sinan KOKSAL, Celal Bayar University (TR)
Asst. Prof. Dr. Ahmet Afsin KULAKSIZ, Selcuk University (TR)
Assoc. Prof. Dr. Bulent EKICI, Marmara University (TR)
Assoc. Prof. Dr. Allaberen ASHYRALYEV, Fatih University (TR)
Assoc. Prof. Dr. Ipek DEVECI, Dokuz Eylul University (TR)
Asst. Prof. Dr. Emel KURUOĞLU, Dokuz Eylul University (TR)
Asst. Prof. Dr. Mehmet AKSARAYLI, Dokuz Eylul University (TR)
Asst. Prof. Dr. Istem KESER, Dokuz Eylul University (TR)
Asst. Prof. Dr. Nahit EMANET, Fatih University (TR)
Dr. Mehmet Emre GULER, Yuzuncu Yil University (TR)
Dr. Emre ERCAN, Ege University (TR)
Dr. Gokhan CAYLI, Gediz University (TR)
Dr. Murat TANIK, Dokuz Eylul University (TR)
Dr. Ozlem KIREN GURLER, Dokuz Eylul University (TR)
Dr. Efe SARIBAY, Dokuz Eylul University (TR)


CONTENTS

Proceeding Number: 100/01 (p. 1): ISeeMP: A Benchmark Software for Multithreading Performance of Image Segmentation Using Clustering and Thresholding Techniques
Proceeding Number: 100/03 (p. 4): Estimation of Dominant Motion Direction on Video Sequences
Proceeding Number: 100/05 (p. 7): Optimization of cos(x) and sin(x) for High-performance Computing
Proceeding Number: 100/06 (p. 10): A Comprehensive Two Level Description of Turkmen Morphology
Proceeding Number: 100/07 (p. 13): An Overview of Two Level Finite State Kyrgyz Morphology

Proceeding Number: 100/09 (p. 16): Examining the Impacts of Stemming Techniques on Turkish Search Results by Using Search Engine for Turkish
Proceeding Number: 100/11 (p. 20): Development of New VO2max Prediction Models by Using Artificial Neural Networks
Proceeding Number: 100/12 (p. 24): Vulnerability Assessment of IMS SIP Servers with TVRA Methodology

Proceeding Number: 100/13 (p. 27): Predicting the Performance Measures of a Message Passing Multiprocessor Architecture by Using Artificial Neural Networks
Proceeding Number: 100/14 (p. 30): Symbolic Computation of Perturbation-Iteration Solutions For Differential Equations
Proceeding Number: 100/15 (p. 33): A Lightweight Parser for Extracting Useful Contents from Web Pages

Proceeding Number: 100/16 (p. 36): Applying Incremental Landmark Isomap Algorithm to Improving Detection Rate in Intrusion Detection System
Proceeding Number: 100/18 (p. 40): Prime Numbers for Secure Cryptosystems and Primality Testing on MultiCore Architectures
Proceeding Number: 100/19 (p. 43): Rule Base Representation in XML
Proceeding Number: 100/21 (p. 46): Effectiveness of Standard Deviation of Fundamental Frequency for the Diagnosis of Parkinson's Disease


Proceeding Number: 100/22 (p. 49): A New Modified Modular Exponentiation Algorithm
Proceeding Number: 100/24 (p. 52): Improving PMG Mechanism for SPIT Detection
Proceeding Number: 100/27 (p. 56): Prediction Of Warp-Weft Densities In Textile Fabrics By Image Processing
Proceeding Number: 100/30 (p. 59): Software Automated Testing: Best Technique for Preventing Defects

Proceeding Number: 100/31 (p. 63): Comparing Classification Accuracy of Supervised Classification Methods Applied on High-Resolution Satellite Images

Proceeding Number: 100/35 (p. 67): The Relationship Between the Angle of Repose and Shape Properties of Granular Materials Using Image Analysis
Proceeding Number: 100/36 (p. 71): An Approach to Part of Speech Tagging for Turkish
Proceeding Number: 100/37 (p. 75): Face Modeling and Synthesis Using 3-Dimensional Facial Feature Points
Proceeding Number: 100/43 (p. 78): Performance Analysis of Eigenfaces Method in Face Recognition System
Proceeding Number: 100/46 (p. 81): Support Vector Machines with the COIL-20 Image Library Classification Practice
Proceeding Number: 100/50 (p. 85): Learning Management System Design And Network Access Security With RFID

Proceeding Number: 100/51 (p. 89): PSO based Trajectory Tracking PID Controller for unchattering control of Triga Mark-II Nuclear Reactor Power Level
Proceeding Number: 100/54 (p. 92): Person Dependent Model Based Facial Expression Recognition
Proceeding Number: 100/55 (p. 94): Privacy Aspects of Newly Emerging Networking Environments: An Evaluation

Proceeding Number: 100/56 (p. 97): Realization of Campus Automation Web Information System in Context of Service Unity Architecture
Proceeding Number: 100/57 (p. 103): Application Specific Cluster-Based Architecture for Wireless Sensor Networks


Proceeding Number: 100/58 (p. 108): Scalability Evaluation of the Wireless AD-HOC Routing Protocols in ns-2 Network Simulator
Proceeding Number: 100/59 (p. 111): A Design for Practical Fault Tolerance in Java Accessing Native Code
Proceeding Number: 100/60 (p. 114): On the Cache Performance of Time-Efficient Sorting Algorithms
Proceeding Number: 100/61 (p. 117): Performance Comparison of A Homogeneous Linux Cluster and A Heterogeneous Windows Cluster
Proceeding Number: 100/62 (p. 120): Author Identification Feature Selection by Genetic Algorithm
Proceeding Number: 100/64 (p. 124): A Real-Time TTL based Downlink Scheduler for IEEE 802.16 WiMAX
Proceeding Number: 100/65 (p. 127): Morphological Disambiguation via Conditional Random Fields

Proceeding Number: 100/67 (p. 130): Classification of Alcoholic Subjects using Multi Channel ERPs based on Channel Optimization Algorithm
Proceeding Number: 100/68 (p. 133): E-Learning Content Authoring Tools and Introducing a Standard Content Constructor Engine

Proceeding Number: 100/69 (p. 136): Investigating Optimum Resource Allocation in University Course Timetabling using Tabu Search: An Incremental Strategy
Proceeding Number: 100/70 (p. 139): PSOMDM: Faster Parallel Self Organizing Map by Using Division Method
Proceeding Number: 100/71 (p. 142): A Real-Time Generation and Modification of GIS-based Height Maps
Proceeding Number: 100/73 (p. 146): GIS Application Design for Public Health Care System Using Dijkstra Algorithms
Proceeding Number: 100/74 (p. 149): A Case Study about Being a CMMI-Level 3 Awarded Organization in One-Year Time

Proceeding Number: 100/77 (p. 154): A Comparative Analysis of Evolution of Neural Network Topologies for the Double Pole Balancing Problem

Proceeding Number: 100/80 (p. 157): Metrics Threshold Values vs. Machine Learners: A Preliminary Study of Cross-Company Data in Detecting Defective Modules


Proceeding Number: 100/82 (p. 161): Complexity of Extremal Set Problem
Proceeding Number: 100/83 (p. 165): Simulating Annealing Based Parameter Selection in Watermarking Algorithms
Proceeding Number: 100/84 (p. 168): M-SVD Based Image and Video Quality Measures
Proceeding Number: 100/87 (p. 171): Farsi / Arabic Printed Page Segmentation Using Connected Component and Clustering
Proceeding Number: 100/88 (p. 174): Investigation of Several Parameters on Goldbach Partitions
Proceeding Number: 100/90 (p. 178): P2P Architecture for Secure Message Transmission System
Proceeding Number: 100/92 (p. 181): A Notification Service for Pervasive Healthcare Applications Using Multiple ANN Engines
Proceeding Number: 100/95 (p. 185): Decision Tree Algorithms for Chronic Illnesses Diagnosis and Reporting
Proceeding Number: 100/96 (p. 188): Model Based Analysis of Lung Mechanics via Respiratory Maneuvers
Proceeding Number: 100/97 (p. 192): Genetic Algorithm Based Energy Efficient Clustering for Wireless Sensor Networks
Proceeding Number: 100/98 (p. 195): Using Binary Classifiers for Information Security Risk Analysis: A Case Study
Proceeding Number: 100/101 (p. 198): Test Based Software Development and a Sample Application
Proceeding Number: 100/103 (p. 204): A Lightweight Wireless Protocol Based on IEEE 802.11 for Embedded Telerobotics Systems
Proceeding Number: 100/104 (p. 209): Effort Estimation Using Use-Case Points Method for Object-Oriented Software Projects
Proceeding Number: 200/01 (p. 211): Broadband Impedance Matching via Genetic Algorithm
Proceeding Number: 200/02 (p. 213): An Explicit Model Predictive Controller Design for a Chaotic Chua Circuit

Proceeding Number: 200/03 (p. 215): Adaptive Model Predictive Temperature Control of an Exothermic Hybrid Batch Reactor with Discrete Inputs Based on Genetic Algorithm


Proceeding Number: 200/04 (p. 218): A New Key Management Scheme for SCADA Networks
Proceeding Number: 200/05 (p. 220): FPGA Based Wireless Multi-Node Transceiver and Monitoring System
Proceeding Number: 200/07 (p. 222): 2-D High Resolution Image Tiling: A Parallel Processing Phase-only-correlation Approach
Proceeding Number: 200/08 (p. 224): Modeling Multivariate Time Series by Charge System Search

Proceeding Number: 200/10 (p. 226): Enhanced Power Amplifier Design with Active Biasing for 2.4 GHz ISM Band RF Front-End Modules in Wireless Communication Systems
Proceeding Number: 200/11 (p. 228): Hardware Implementation of Spectral Modular Multiplication on FPGAs
Proceeding Number: 200/14 (p. 230): Efficient SoC Design for Accelerator of Message Authentication and Data Integrity on FPGAs
Proceeding Number: 200/18 (p. 232): Application Oriented Cross Layer Framework for WSNs
Proceeding Number: 200/20 (p. 235): Inertia Weight for Evolutionary Programming with Levy Distribution Function
Proceeding Number: 200/21 (p. 237): The Memcapacitor-Capacitor Problem
Proceeding Number: 200/22 (p. 241): Adaptive Feedback Linearizing Tracking Control of a Quadrotor Helicopter

Proceeding Number: 200/28 (p. 244): The Adaptive Front Lighting System Based on Road Image Processing and Lane Detection
Proceeding Number: 200/30 (p. 246): Realtime Parameter Estimation, Calibration and Simulation of a DC Motor

Proceeding Number: 200/33 (p. 249): A Comparative Study on Optimum PID Tuning with Differential Evolution and Particle Swarm Optimization
Proceeding Number: 200/35 (p. 251): Telemeter Application: Online Monitor and Control of an Open-Field Converter over GPRS
Proceeding Number: 200/36 (p. 254): Design of a Low-Power CMOS Analog Neural Network


Proceeding Number: 200/38 (p. 256): Added AGC PA Design For 2.45 GHz ISM Band Low Power Transceiver Systems

Proceeding Number: 200/39 (p. 258): Design of CMOS Analog Operational Amplifiers using EKV Transistor Model and Multi-Objective Optimization
Proceeding Number: 200/41 (p. 260): Bullet Matching Using Profile Clustering
Proceeding Number: 200/42 (p. 262): Knowledge-Based Design of Low-Power Analog Integrated Circuits
Proceeding Number: 200/43 (p. 264): Developing Tracking System and Monitoring for a Model Glider
Proceeding Number: 200/45 (p. 266): Adaptive Trajectory Tracking Control of Wheeled Mobile Robot with an Inverted Pendulum

Proceeding Number: 200/46 (p. 268): I-PD Controller Design for A 4-Pole Hybrid Electromagnet on the Basis of Coefficient Diagram Method (CDM)

Proceeding Number: 300/01 (p. 270): A Fuzzy Inference System for Outsourcing Arrangements Based on the Degree of Outsourcing and Ownership

Proceeding Number: 300/02 (p. 274): Provider Selection and Task Allocation Problems under Fuzzy Quality of Service Constraints and Volume Discount Pricing Policy for Telecommunication Network
Proceeding Number: 300/03 (p. 277): A Fuzzy Multicriteria SWOT Analysis
Proceeding Number: 300/06 (p. 280): Forecasting Daily Returns Of Istanbul Stock Exchange National 100 Index Using Expert Systems
Proceeding Number: 300/07 (p. 284): Fuzzy QFD Approach for Product Development
Proceeding Number: 300/11 (p. 288): Forecasting Intermittent Demand with Neural Networks
Proceeding Number: 300/16 (p. 291): A Review Of Simulation Tools For Hospital Systems
Proceeding Number: 300/17 (p. 295): Production Scheduling Using Simulation Method Faborg-Sim With Priority Rules
Proceeding Number: 300/23 (p. 300): A Novel Approach of Graph Coloring for Solving University Course Timetabling Problem


Proceeding Number: 300/25 (p. 303): A DEMATEL Method to Evaluate the Dimensions of Electronic Service Quality: An Application of Internet Banking
Proceeding Number: 300/29 (p. 306): Effect of Nozzle Injector Angle on Airflow Character and Fluid Variables

Proceeding Number: 300/30 (p. 309): Performance Analysis of a Successive Approximation Classifier for Road Extraction in Satellite Images

Proceeding Number: 300/32 (p. 312): An Overview of Data Mining Tools for Quality Improvement in Manufacturing
Proceeding Number: 300/33 (p. 315): Exploring E-commerce Use in Meerschaum Products Marketing: A Case Study
Proceeding Number: 300/35 (p. 319): Automated Negotiation with Issue Trade-Offs: Modified Even-Swaps for Bargaining

Proceeding Number: 300/37 (p. 322): Scenario Analysis of Scheduling Load-Haul-Dump Vehicles in Underground Mines Using Simulation
Proceeding Number: 300/39 (p. 324): A Proposed Model for Web Service-Oriented Content Management System
Proceeding Number: 300/41 (p. 328): Assembly Line Balancing With Axiomatic Design Approach

Proceeding Number: 300/42 (p. 331): Embedded Hybrid Approaches for the Capacitated Lot Sizing Problem with Set-up Carryover and Backordering
Proceeding Number: 300/43 (p. 335): A Minimum Spanning Tree Based Heuristic for Clustering High Throughput Biological Data
Proceeding Number: 300/45 (p. 338): Solving Vehicle Deployment Planning Problem by using Agent Based Simulation Modeling
Proceeding Number: 300/46 (p. 341): A Simulation Model to Improve Customer Satisfaction for Sales Points in Mobile Company
Proceeding Number: 300/50 (p. 343): Neural Network and Nonlinear Modeling of Daily Milk Yields

Proceeding Number: 400/01 (p. 347): Estimation of Wind Energy Potential Based on Frequency Distributions in the Mediterranean Sea Region of Turkey


Proceeding Number: 400/04 (p. 349): Performance Evaluation of a Reinforced Concrete Structure According to Turkish Earthquake Code-2007 and Fema-356
Proceeding Number: 400/05 (p. 352): Design and Manufacturing of a Turkish Tirkeş Bow by Composite Material
Proceeding Number: 400/06 (p. 354): Analysis of Reinforced Concrete Structures Under the Effect of Various Surface Accelerations
Proceeding Number: 400/07 (p. 356): On New Symplectic Approach for Primary Resonance of Beams Carrying a Concentrated Mass

Proceeding Number: 400/08 (p. 358): Artificial Neural Networks Analysis of Reinforced Concrete Sections According to Curvature Ductility
Proceeding Number: 400/09 (p. 360): Web-based Simulation of a Lathe using Java 3D API
Proceeding Number: 400/10 (p. 362): Effect Of Slice Thickness Variation On Free Vibration Properties of Micro-ct Based Trabecular Bone Models

Proceeding Number: 400/12 (p. 365): Integrated Decentralized Automotive Dynamics Tracking Controllers that Account for Structural Uncertainty
Proceeding Number: 400/13 (p. 367): Development of a Vehicle Dynamics Prototyping Platform based on a Remote Control Model Car
Proceeding Number: 400/14 (p. 369): Comparison of Metaheuristic Search Techniques in Finding Solution of Optimization Problems
Proceeding Number: 400/15 (p. 372): Ballistic Behavior of Perforated Armor Plates Against 7,62 mm Armor Piercing Projectile
Proceeding Number: 400/16 (p. 375): Noise Level Optimisation of a Midibus Air Intake Pipe by Using Numerical & Analytical Methods
Proceeding Number: 400/18 (p. 377): Dynamic Analysis of a Singular Steel Pile due to Wave Loads
Proceeding Number: 400/19 (p. 380): Equal Channel Angular Extrusion (ECAE) Die
Proceeding Number: 400/20 (p. 384): The Role of Finite Element Method in the Stent Design Methodology

Proceeding Number: 400/21 (p. 386): A New Finite Element Formulation for Dynamic Response of a Rectangular Thin Plate Under an Accelerating Moving Mass


Proceeding Number: 400/24 (p. 388): Soil-Structure Interaction of RC Structures in Manisa by Linear Time History Analysis
Proceeding Number: 400/25 (p. 390): Direct Perturbation Analyses of Nonlinear Free Vibrations of Kelvin-Voigt Viscoelastic Beams
Proceeding Number: 400/27 (p. 393): Methodology of Crashworthiness Simulations
Proceeding Number: 400/28 (p. 396): Investigation of Variation Depend on Vibration of Mechanical Properties of Composite Materials
Proceeding Number: 400/29 (p. 397): Effect of the Masonry Wall Stuccos to the Seismic Behavior of the Steel Structures

Proceeding Number: 400/36 (p. 399): Soil-Structure Interaction of RC Structures in Different Soils by Nonlinear Static and Dynamic Analyses

Proceeding Number: 400/37 (p. 402): Simulation of Lid-driven Cavity Flow by Parallel Implementation of Lattice Boltzmann Method on GPUs

Proceeding Number: 400/39 (p. 403): Weight Optimization of the Laminated Composites as an Aircraft Structure Material Using Different Failure Criteria
Proceeding Number: 400/41 (p. 407): Consideration of Heat Treatment Stages in the Simulation of the Formability of AA2024 T3 Alloys
Proceeding Number: 400/42 (p. 409): Numerical Modeling of Dynamic Behavior of Subsea Pipes Acting on Internal and External Flow
Proceeding Number: 400/45 (p. 411): Determination of Impact Behavior Depending On Span of Fin in Aluminum Composites
Proceeding Number: 400/47 (p. 413): Numerical Investigation on Diesel Spray Formation and Combustion

Proceeding Number: 400/48 Investigation of Calcination Conditions on Kirka Tincal Ore As Opposed to Wet Concentration Methods Proceeding Number: 400/51

415

417

Nonlinear Vibration of Fractional Visco-elastic String Proceeding Number: 400/52 Stress Evaluations During Endodontic Treatment by Using Three Dimensional Finite Element Method

June 1 -4, Kusadasi, Aydin, Turkey http://iscse2011.gediz.edu.tr

ix

420

2 nd International Symposium on Computing in Science & Engineering

Proceeding Number: 400/53 Application of Surface Response Method for The Bioactivity of Alkali Treated Ti6Al4V Open Cell Foam

423

Proceeding Number: 400/54 Boundary Layer Equations and Lie Group Analysis of a Sisko Fluid of the Paper Prepared for ISCSE 2011

425

Proceeding Number: 400/55 Numerical Solution of the Single Degree of Freedom System by a Practical Collocation Method Based on the Chebyshev Polynomials Proceeding Number: 400/56 An Investigation on Implant Supported Fully Edentulous Mandible with the Use of 3D FEA Proceeding Number: 400/57 Artificial Intelligent Applications for the Forecasting of Annual Wind Energy Output

427

430

433

Proceeding Number: 400/59 Comparison of Perturbation – Iteration Method with Homotopy Perturbation Method and Explicit Analysis for Poisson – Boltzmann Equation of the Paper Prepared for ISCSE 2011 Proceeding Number: 400/60 A Generalized Solution Algorithm for Cubic Nonlinear Vibration Model with Parametric Excitation

436

438

Proceeding Number: 400/61 ANN-based Investigation of Performance and Emissions of a Diesel Engine Using Diesel and Biodiesel Proceeding Number: 400/62 Feedback to Machine Design – Assuring Maintenance Data to Machine Design

440

444

Proceeding Number: 400/63 Estimation of Incoming Ocean Waves using Kalman Filter for use in Adaptive Ocean Power Converter

446

Proceeding Number: 400/64 Parametric Mass Optimization of Vehicle Suspension System Component under Different Load Cases for ISCSE 2011 Proceeding Number: 400/65 Evaluation of Resilient Modulus of Some Fine-Grained Subgrade Soils Proceeding Number: 400/66 Optimization of Plate Heat Exchanger Design Considering Heat Transfer and Mechanical Strength Proceeding Number: 400/67 Comparison of Turbulence Models for a Heavy Duty CI Engine

448

449

452

453

Proceeding Number: 400/68 Elastic Plastic and Residual Stress Analysis of Simply Supported Thermoplastic Composite Beams Under a Transverse Uniformly Distributed Load

June 1 -4, Kusadasi, Aydin, Turkey http://iscse2011.gediz.edu.tr

x

456

2 nd International Symposium on Computing in Science & Engineering

Proceeding Number: 500/01 Design of Stable Takagi-Sugeno Fuzzy Control System via LMIs with Constraint on the Input/Output & Initial State Independence

458

Proceeding Number: 500/02 Fuzzy Logic Control Design for a 2-Link Robot Manipulator in MATLAB/Simulink via Robotics Toolbox Proceeding Number: 500/03 Neural Network Based Fuzzy Time Series Approach by Using C Programming Proceeding Number: 500/04 Parameter Selection of Fuzzy Nonparametric Local Polynomial Regression Proceeding Number: 500/05 Optimization of Fuzzy Membership Functions of TTFLC using PSO Proceeding Number: 500/07 Performance Analysis of Industrial Enterprises via Data Mining Approach Proceeding Number: 500/08 Trajectory Tracking Speed Control of Hydraulic Motor with Fuzzy Logic and PID Algorithms Proceeding Number: 500/09

461

464

467

470

472

475

477

The Notion of Fuzzy Soft Function and a Result Proceeding Number: 500/10 A Fuzzy Weighted PSSM for Splice Sites' Identification Proceeding Number: 500/13 Fuzzy Logic User Interface Software for Evaluation of Wooden Material Combustion Performance

479

483

Proceeding Number: 500/14 Reconstructing Non-Tidal Component of Historical Water Level Data with Artificial Neural Network Proceeding Number: 600/01

486

488

New Level of Precision in Architecture Proceeding Number: 600/02 A Novel Mechanism to Increase the Form Flexibility of Deployable Scissor Structures

490

Proceeding Number: 600/06 Study on Number Theoretical Construction and Prediction of Two Dimensional Acoustic Diffusers for Architectural Applications

492

Proceeding Number: 600/07 A CAD-based Modeling for Dynamic Visualization of Urban Environments in Piecemeal (Incremental) Growth Proceeding Number: 600/09 Analysis of Work Space Functional Relationship Diagrams: A Case Study at Gediz University

June 1 -4, Kusadasi, Aydin, Turkey http://iscse2011.gediz.edu.tr

xi

495

497

2 nd International Symposium on Computing in Science & Engineering

Proceeding Number: 700/02 Cubic B-Spline Differential Quadrature Methods and Stability for Burgers' Equation

499

Proceeding Number: 700/03 HYDROTAM: A Three-Dimensional Numerical Model to Simulate Coastal Flow and Transportation Mechanism Proceeding Number: 700/04 Layer Thickness of the Breakwaters in Marine Transportation

501

503

Proceeding Number: 700/05 Positive Solutions for Third-Order M-Point Boundary Value Problems for an Increasing Homeomorphism and Homomorphism with Sign Changing Nonlinearity on Time Scales

505

Proceeding Number: 700/06 A Collocation Method for Solving System of Linear Fredholm-Volterra Integral Equations with Variable Coefficients Proceeding Number: 700/07 Monte Carlo Simulation of Thin Film Growth With Crystallization Prepared for ISCSE 2011 Proceeding Number: 700/08 Iterative Operator Splitting Method to Solute Transport Model: Analysis and Application Proceeding Number: 700/09

507

509

511

513

Transition from Line Congruence to Screw System Proceeding Number: 700/12 Theoretical Investigation of the Solution of the Thin Boundary Layer Problem on the Half Plane Proceeding Number: 700/13

515

517

The Dual Drawing Method of the Helicoid Proceeding Number: 700/14 Finite Difference Method for Multidimensional Elliptic Equations with Bitsadze Samarskii Dirichlet Conditions

519

Proceeding Number: 700/15 Taylor Polynomial Solution of Hyperbolic Type Partial Differential Equation with Variable Coefficients Proceeding Number: 700/16 Approximate Solutions of a Parabolic Inverse Problem with Dirichlet Condition Proceeding Number: 700/20

521

523

525

A Novel Optimized Generating the Subsets of a Set Proceeding Number: 700/21 A Trial Equation Method and Its Applications to Nonlinear Equations

June 1 -4, Kusadasi, Aydin, Turkey http://iscse2011.gediz.edu.tr

xii

527

2 nd International Symposium on Computing in Science & Engineering

Proceeding Number: 700/22 On the Numerical Solution of Fractional Parabolic Partial Differential Equations with the Dirichlet Condition

529

Proceeding Number: 700/24 On the Modified Crank-Nicholson Difference Schemes for Parabolic Equation Arising in Determination of a Control Parameter Proceeding Number: 700/26 On the Approximate Solution of Ultra Parabolic Equation Proceeding Number: 700/27

531

533

535

Some Numerical Methods on Multiplicative Calculus Proceeding Number: 700/30 Cubic B-spline Collocation Method for Space-splitted one- dimensional Burgers’ Equations

537

Proceeding Number: 700/31 A New Hermite Collocation Method for Solving Differential Equations of Lane-Emden Type Prepared for ISCSE 2011

540

Proceeding Number: 700/33 The Analytical and a Higher-Accuracy Numerical Solution of a Free Boundary Problem in a Class of Discontinuous Functions

542

Proceeding Number: 700/35 A Note on the Difference Scheme of Multipoint Nonlocal Boundary Value Problems for EllipticParabolic Equations

544

Proceeding Number: 700/38 Testing the Validity of Babinet's Principle in the Realm of Quantum Mechanics with a Numerical Case Study of an Obstacle

546

Proceeding Number: 700/39 A Computational Study of the Linear and Nonlinear Opotical Properties of Aminopyridines, Aminopyrimidines and Aminopyrazines Proceeding Number: 700/40 An Application Of An Analytical Technique For Solving Nonlinear Evolution Equations Proceeding Number: 700/41 Optimum Design of Open Canals by Using Bees Algorithm Proceeding Number: 700/43 NBVP with Two Integral Conditions for Hyperbolic Equations Proceeding Number: 700/44 Reflection Investigation In Metamaterial Slab Waveguides Proceeding Number: 700/46 Theoretical Implementation of Three Qubit Hadamard Gate for SI(S=3/2 , I=1/2) Spin System

June 1 -4, Kusadasi, Aydin, Turkey http://iscse2011.gediz.edu.tr

xiii

548

551

553

556

558

560

2 nd International Symposium on Computing in Science & Engineering

Proceeding Number: 700/47 Data Partitioning Through Piecewise Based Generalized HDMR: Univariate Case

562

Proceeding Number: 700/48 The Solution of Heat Problem With Collocation Method Using Cubic B-Splines Finite Element of the Paper Prepared for ISCSE 2011 Proceeding Number: 700/49 A Node Optimization in Piecewise High Dimensional Model Representation

565

567

Proceeding Number: 700/50 On the structure of Fractional Spaces Generated by the Positive Difference Operators in a Banach Space Proceeding Number: 700/51

570

573

Discrete Fractional Calculus with Nabla Operator Proceeding Number: 700/52 Deterministic and Stochastic Bellman's Optimality Principles on Isolated Time Domains and Their Applications in Finance Proceeding Number: 700/53 The Rational Expectations: A New Formulation of the Single-Equation Model Proceeding Number: 700/55 Evaluation and Comparison of Diagnostic Test Performance Based on Information Theory Proceeding Number: 700/56 Spin Polarized Transport Properties of Disordered Systems Proceeding Number: 700/58 Iterative Splitting Methods for Schrödinger Equation with Time-dependent Potential

575

577

579

581

583

Proceeding Number: 700/59 Taylor Polynomial Solution of Difference Equation with Constant Coefficients via Using Time Scale Extension Proceeding Number: 700/60

585

587

Sudoku Puzzle Solving with Bees Algorithm Proceeding Number: 700/61 Construction Simulation for Cost and Duration Estimation Proceeding Number: 700/62 On Morgan-Voyce Polynomials Approximation For Linear Fredholm Integro-Differential Equations Proceeding Number: 700/63 Numerical Solution of the Quasilinear Parabolic Problem with Periodic Boundary Condition Proceeding Number: 700/65

589

591

593

596

On Bernstein-Schoenberg Operator

June 1 -4, Kusadasi, Aydin, Turkey http://iscse2011.gediz.edu.tr

xiv

2 nd International Symposium on Computing in Science & Engineering

Proceeding Number: 700/66 A Matrix Method for Approximate Solution of Pantograph Equations in Terms of Boubaker Polynomials Proceeding Number: 700/67

598

600

Fiber Bundles in Digital Images Proceeding Number: 700/68 On Numerical Solution of Multipoint Nonlocal Hyperbolic-Parabolic Equations with Neumann Condition Proceeding Number: 700/69

602

604

On de Casteljau Type Algorithms Proceeding Number: 700/70 Numerical Solution of the Inverse Problem of Finding the Time-dependent Diffusion Coefficient of the Heat Equation from Integral Overdetermination Data

606

Proceeding Number: 700/71 Finite Difference and Iteration Methods for Fractional Hyperbolic Partial Differential Equations with the Neumann Condition Proceeding Number: 700/73 Half Quadratic Biased Molecular Dynamics on DNA Rotations

608

611

Proceeding Number: 700/74 Computational Studies on Identifying Pharmacophore for the Inhibition of Cellular Protein and DNA Synthesis by a Series of Thiosemicarbazone and Thiosemicarbazide Derivatives Proceeding Number: 700/76

614

616

Monte Carlo Simulation of the Methanol Trimer Proceeding Number: 800/03

618

Optimization of Izmir Alsancak Port Stock Yard Proceeding Number: 800/04 Diffusion Bridge Method in Inference of Complex Biochemical Systems Proceeding Number: 800/07

621

623

Investigating Zipf’s Laws on Turkish Proceeding Number: 800/08 A Novel Objective Function Embedded Genetic Algorithm for Adaptive IIR Filtering and System Identification Proceeding Number: 800/09 Comparison of Various Distribution-Free Control Charts with Respect to FAR Values Proceeding Number: 800/10 Multivariate Regression Splines and their Bayesian Approaches in Nonparametric Regression

June 1 -4, Kusadasi, Aydin, Turkey http://iscse2011.gediz.edu.tr

xv

625

628

630

2 nd International Symposium on Computing in Science & Engineering

Proceeding Number: 800/11 Forecasting via MinMaxEnt Modeling: An Application on the Unemployment Rate

633

Proceeding Number: 800/12 Comparison of Simplified Bishop and Simplified Janbu Methods in the Determination of the Factor of Safety of Three Different Slopes Subjected to Earthquake Forces Proceeding Number: 800/13 Comparison of MaxMaxEnt and MinMaxEnt Distributions for Time Series in the Sense of Entropy

635

637

Proceeding Number: 800/14 Examining EEG Signals with Parametric and Non- Parametric Analyses Methods in Migraine Patients and Migraine Patients during Pregnancy Proceeding Number: 800/15 The Statistical Analysis of Highly Correlated Gaussian Noise in a Double Well Potential Proceeding Number: 800/16 A Novel Sentiment Classification Approach Based on Support Vector Machines

639

641

643

Proceeding Number: 800/18 Use of A Combination of Statistical Computing Methods in Determining Traffic Safety Risk Factors on Low-Volume Rural Roads in Iowa, USA Proceeding Number: 800/19 Detecting Similarities of EEG Responses in Dichotic Listening Proceeding Number: 800/22 Applying Decision Tree on Incident Management Data Base for Service Level Agreement

646

649

651

Proceeding Number: 900/02 An Alternative Approach to Promoting Traffic Safety Culture in Communities: Traffic Safety Data Service – The Iowa Case Proceeding Number: 900/03

654

658

Finite Element Analyses of Composite Steel Beams Proceeding Number: 900/04 Digital Analysis of Historical City Maps by Using Space Syntax Techniques: Izmir Case Proceeding Number: 900/05 Knowledge Representation by Geo-Information Retrieval in City Planning

June 1 -4, Kusadasi, Aydin, Turkey http://iscse2011.gediz.edu.tr

xvi

659

661

2 nd International Symposium on Computing in Science & Engineering

Proceeding Number: 300/28 Parameter Design of Iterative Feedback Tuning Method Using Analysis of Variance for First Order Plus Dead Time Models

663

Proceeding Number: 400/58 Computational Solution of the Velocity and Wall Shear Stress Distribution Inside a Coronary By-Pass Graft to Artery Connection Under Steady Flow Conditions Proceeding Number: 700/11 Computational Study of Isomerization in 4-substituted Stilbenes Proceeding Number: 800/23 Analysis of Highway Crash Data by Negative Binomial and Poisson Regression Models

June 1 -4, Kusadasi, Aydin, Turkey http://iscse2011.gediz.edu.tr

xvii

665

668

669


Proceeding Number: 100/01

ISeeMP: A Benchmark Software for Multithreading Performance of Image Segmentation using Clustering and Thresholding Techniques

Bahadir KARASULU, Canakkale Onsekiz Mart University, Computer Engineering Dept., Engineering and Architecture Faculty, Terzioglu Kampusu, 17020, Canakkale, Turkey. [email protected], [email protected]

Keywords: image processing, image segmentation, distributed and parallel processing, high performance computing

INTRODUCTION

Multiprocessing is the use of two or more central processing units (CPUs) within a single computer system. Two considerations are important when using multi-core processors to improve the single-program performance of color or grayscale image filtering or segmentation (FoS): optimal application performance and reduced execution time. Optimal application performance is achieved by using threads effectively to partition software workloads [1]. Software developers should make optimal use of resources when coding thread-level parallelism (TLP) based applications that segment high-definition (HD) images with FoS techniques. In my study, benchmark software is developed to evaluate the TLP performance of four well-known image segmentation techniques. The software benchmarks the single- and multi-thread performance of the related FoS techniques using the OpenMP (Open Multi-Processing) [2], [3] and OpenCV (Open Source Computer Vision Library) [4] infrastructure, and it plots the performance results via its built-in graphic viewer. The software is called ISeeMP, an acronym for Image Segmentation Extended via Multi-Processing. The software offers an original approach to the mutual performance comparison of well-known image FoS techniques.

LITERATURE REVIEW

In parallel program execution, multi-core processors support executing threads in parallel [1], so the cost of communication is low. Computationally intensive processes such as image FoS can therefore be executed more efficiently on systems based on multi-core processors. OpenMP (Open Multi-Processing) is an application program interface (API) used to explicitly direct multi-threaded (i.e., TLP) and shared-memory parallelism [2], [3]. Image segmentation is a low-level computer vision task. There are clustering techniques for image segmentation such as k-means [5] and mean-shift [6] clustering. Thresholding is the simplest segmentation process; Otsu's method and the entropy method are optimal thresholding techniques [7], [8]. Similar benchmark systems and software have been published in the literature. Venkata et al. [9] presented the San Diego Vision Benchmark Suite (SD-VBS). There are other benchmark suites, such as MediaBench [10] and SPEC CPU 2000 [11], which include different algorithms from different research areas.
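For concreteness, the Otsu thresholding mentioned above [7] can be sketched in a few lines. This is an illustrative re-implementation from the published description, not ISeeMP's code; the function name and the toy histogram are invented for the example:

```python
def otsu_threshold(hist):
    """Otsu's method: pick the threshold maximizing between-class variance.

    hist: list where hist[g] is the count of pixels with gray level g.
    Returns threshold t; pixels <= t form class 0, the rest form class 1.
    """
    total = sum(hist)
    total_sum = sum(g * h for g, h in enumerate(hist))
    best_t, best_var = 0, -1.0
    w0 = 0        # pixel count of class 0 so far
    sum0 = 0.0    # gray-level mass of class 0 so far
    for t in range(len(hist) - 1):
        w0 += hist[t]
        sum0 += t * hist[t]
        w1 = total - w0
        if w0 == 0 or w1 == 0:
            continue  # one class empty: variance undefined
        m0, m1 = sum0 / w0, (total_sum - sum0) / w1
        var_between = w0 * w1 * (m0 - m1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

# Bimodal toy histogram over gray levels 0..7: dark peak at 1, bright peak at 6
hist = [2, 10, 3, 0, 0, 3, 12, 2]
print(otsu_threshold(hist))  # -> 2 (levels 0-2 become the dark class)
```

The method only needs one histogram pass plus one pass over candidate thresholds, which is why it parallelizes well at the histogram-building stage.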

METHODS

I developed benchmark software in which four different segmentation techniques are implemented and tested on the same platform in the same way. A home-made image dataset was used to test the software; it contains 10 color images of 24 bits and 2048 x 1536 pixels [12]. From five test executions, I computed the sequential and parallel execution times of each image segmentation method via the benchmark software, and then compared these results via plots covering simple models for the speedup and efficiency of each segmentation method's TLP performance. The speedup S is the ratio between the sequential execution time T_seq and the parallel execution time T_par of the related segmentation algorithm:

S = T_seq / T_par    (1)

In my study, the efficiency E of a parallel implementation of an image segmentation method is a measure of processor utilization; I defined it as the speedup divided by the number p of processors (or threads) used:

E = S / p    (2)

I analyzed and discussed these results via the related tables in my paper.

FINDINGS & CONCLUSION

My testbed hardware suite involves two computers equipped with different kinds of CPUs (one with a single-core and one with a dual-core processor). For some images and/or some methods, the test results obtained on the dual-core hardware are better than those on the single-core hardware. Overall, the most successful parallelization is achieved for the mean-shift method, whose performance results are better than those of the other methods. The overall result can be seen from the related speedup and efficiency plots and result tables as well.
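The speedup and efficiency measures of Eqs. (1) and (2) can be computed as follows. This is a minimal sketch, not part of ISeeMP; the timing values are hypothetical:

```python
def speedup(t_seq, t_par):
    """Speedup: sequential execution time over parallel execution time, Eq. (1)."""
    return t_seq / t_par

def efficiency(t_seq, t_par, n_threads):
    """Efficiency: speedup divided by the number of threads used, Eq. (2)."""
    return speedup(t_seq, t_par) / n_threads

# Hypothetical timings (seconds) for one segmentation run on 2 threads
s = speedup(8.0, 5.0)        # 1.6x faster than the sequential run
e = efficiency(8.0, 5.0, 2)  # 0.8, i.e. 80% processor utilization
```

Efficiency close to 1.0 indicates the extra threads are well utilized; values well below 1.0 point to synchronization or memory-bandwidth overhead.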

REFERENCES

[1] Akhter, S. and Roberts, J. (2006). Multi-Core Programming: Increasing Performance through Software Multithreading. Intel Press. ISBN: 0-9764832-4-6. USA.
[2] Slabaugh, G.; Boyes, R.; and Yang, X. (2010). Multicore Image Processing with OpenMP. IEEE Signal Processing Magazine [Applications Corner] 27(2): 134-138.
[3] Packirisamy, V. and Barathvajasankar, H. (2008). OpenMP in Multicore Architectures. Technical report, University of Minnesota. http://www.cs.umn.edu/~harish/reports/openMP.pdf


[4] Bradski, G. and Kaehler, A. (2008). Learning OpenCV: Computer Vision with the OpenCV Library. O'Reilly Media, Inc., 1005 Gravenstein Highway North, Sebastopol, CA 95472. ISBN: 978-0-596-51613-0.
[5] Ng, H.P.; Ong, S.H.; Foong, K.W.C.; Goh, P.S.; and Nowinski, W.L. (2006). Medical Image Segmentation Using K-Means Clustering and Improved Watershed Algorithm. In Proceedings of the IEEE Southwest Symposium on Image Analysis and Interpretation, 61-65.
[6] Comaniciu, D. and Meer, P. (2002). Mean Shift: A Robust Approach toward Feature Space Analysis. IEEE Transactions on Pattern Analysis and Machine Intelligence 24(5): 603-619.
[7] Otsu, N. (1979). A Threshold Selection Method from Gray-Level Histograms. IEEE Transactions on Systems, Man and Cybernetics 9(1): 62-66.
[8] Pun, T. (1980). A New Method for Grey-Level Picture Thresholding Using the Entropy of the Histogram. Signal Processing 2: 223-237.
[9] Venkata, S.K.; Ahn, I.; Jeon, D.; Gupta, A.; Louie, C.; Garcia, S.; Belongie, S.; and Taylor, M.B. (2009). SD-VBS: The San Diego Vision Benchmark Suite. In Proceedings of the IEEE International Symposium on Workload Characterization (IISWC 2009), 55-64, 4-6 Oct. 2009.
[10] Lee, C.; Potkonjak, M.; and Mangione-Smith, W.H. (1997). MediaBench: A Tool for Evaluating and Synthesizing Multimedia and Communications Systems. In MICRO 30: Proceedings of the 30th Annual ACM/IEEE International Symposium on Microarchitecture, 330-335. Washington, DC, USA.
[11] SPEC (2000). SPEC CPU 2000 Benchmark Specifications. http://www.spec.org/cpu2000/. Accessed on 01 March 2011.
[12] ISeeMP Image Dataset and Application Website (2011). http://efe.ege.edu.tr/~karasulu/iseemp/. Accessed on 01 March 2011.


Proceeding Number: 100/03

Estimation of Dominant Motion Direction on Video Sequences

Salih GÖRGÜNOĞLU, Karabuk University, Department of Computer Engineering, Karabuk, Turkey, [email protected]
Şafak ALTAY, Karabuk University, Department of Electronics and Computer Education, Karabuk, Turkey, [email protected]
Baha ŞEN, Karabuk University, Department of Computer Engineering, Karabuk, Turkey, [email protected]

Keywords: motion estimation, k-means clustering, video processing

INTRODUCTION

Motion estimation is widely used, especially for tracking human movements, which is one of the important issues in security. Regularly checking the individual movements of a person as well as group movements is necessary in environments where providing security is important. In this study, a system for estimating the dominant motion direction in video sequences, based on the k-means clustering method, was developed. The video sequences within the scope of this study consist of human movements; people move individually or in groups. The aims of the system are to determine motion vectors accurately in video sequences and to apply the clustering process properly, so that the dominant motion direction is estimated accurately. The dominant-motion-direction estimation software was developed in C#, a visual programming language. The user can select any video, start and stop the motion analysis, and also see the motion vectors and processing times on the software's interface.

LITERATURE REVIEW

Many academic studies have been made on motion analysis and motion estimation. Some of them examined only the motion of people. One of them is nonparametric density estimation with adaptive, anisotropic kernels for human motion tracking [1], in which the authors suggest modeling priors on human motion by means of nonparametric kernel densities. Some studies examined groups of people and evaluated their motion; one of them is tracking groups of people [2].


Moreover, there are studies that estimate traffic density by watching the motion of vehicles in traffic and finding regions with high traffic density. One of them is estimation of vehicle velocity and traffic intensity using rectified images [3]; another is vehicle detection and counting by using real-time traffic flux through a differential technique and performance evaluation [4]. Automatic density estimation and vehicle classification for traffic surveillance systems using neural networks is a study that used artificial intelligence [5]. Motion estimation is also widely used in medicine; examples include correlation analysis of ultrasonic speckle signals for correlation-based motion estimation [6] and motion estimation for abdominal MR [7]. Studies have also been made on different block matching algorithms such as the spiral search, diamond search, adaptive search, and orthogonal search algorithms [8-12]. In this study, the full search algorithm, which is a block matching algorithm, is used for finding motion vectors, and the k-means clustering method is used for clustering the motion vectors.

METHODS

Video sequences consist of frames, and consecutive frames are compared for motion estimation. The two frames to be compared are selected five frames apart. Various pre-processes are applied to the frames after their selection. After these processes, the two frames are compared to find the moving regions: a new image is formed in which regions that are white in the first frame and black in the second frame appear white, and everything else is black. Regions of white pixels in the new image indicate the moving regions. After pre-processing ends, motion vectors are found by using the Full Search algorithm, which is one of the block matching algorithms. First, the frames are divided into 16x16-pixel blocks. Each block of the first frame's moving regions is matched by scanning over a 3x3-block region of the second frame; the 16x16-pixel region with the lowest Sum of Absolute Differences (SAD) is matched with the block. The motion vector is formed from the displacement between the block and the matched region. The motion vectors are then clustered by the k-means method for estimation of the dominant motion direction. In the developed system, the user can separate the motion vectors into 1, 2, or 3 groups and see the centres of the clusters.
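The Full Search step described above can be sketched as follows. This is a scaled-down illustration with 2x2 blocks on an 8x8 frame rather than the paper's 16x16 blocks over a 3x3-block search region; the function names and toy frames are invented for the example:

```python
def sad(block_a, block_b):
    """Sum of Absolute Differences between two equal-sized blocks."""
    return sum(abs(a - b) for row_a, row_b in zip(block_a, block_b)
                          for a, b in zip(row_a, row_b))

def full_search(prev, curr, by, bx, bsize, radius):
    """Full Search block matching: find the motion vector of the block at
    (by, bx) in `prev` by testing every displacement within +/-radius in
    `curr`. Frames are 2-D lists of gray values; returns (dy, dx) with the
    lowest SAD."""
    h, w = len(curr), len(curr[0])
    ref = [row[bx:bx + bsize] for row in prev[by:by + bsize]]
    best, best_mv = None, (0, 0)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            y, x = by + dy, bx + dx
            if y < 0 or x < 0 or y + bsize > h or x + bsize > w:
                continue  # candidate block would leave the frame
            cand = [row[x:x + bsize] for row in curr[y:y + bsize]]
            cost = sad(ref, cand)
            if best is None or cost < best:
                best, best_mv = cost, (dy, dx)
    return best_mv

# Toy 8x8 frames: a bright 2x2 patch moves 1 px down and 2 px right
prev = [[0] * 8 for _ in range(8)]
curr = [[0] * 8 for _ in range(8)]
for r in range(2):
    for c in range(2):
        prev[2 + r][2 + c] = 255
        curr[3 + r][4 + c] = 255
print(full_search(prev, curr, 2, 2, 2, 3))  # -> (1, 2)
```

Full Search is exhaustive, which is why the faster diamond, spiral, and orthogonal search variants cited in the literature review trade a small accuracy loss for far fewer SAD evaluations.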

FINDINGS & CONCLUSION

As a result, a system for estimating the dominant motion direction in video sequences, based on the k-means clustering method, was developed in this study. First, pre-processes were applied to the video sequences; then the motion vectors were formed by the Full Search algorithm; finally, the dominant motion direction was estimated by the k-means algorithm. The pre-processes and algorithms were analysed and evaluated in terms of time. The dominant-motion-direction estimation software was developed in the C# programming language for the implementation of all processes. The user can run and analyse all algorithms with this software and also see the results on its interface. The analysis showed that the total processing time of the algorithms varies between 300 and 700 ms according to the intensity of motion.

REFERENCES

1. Brox, T.; Rosenhahn, B.; Cremers, D.; and Seidel, H.P. (2007). Nonparametric Density Estimation with Adaptive, Anisotropic Kernels for Human Motion Tracking. Workshop on Human Motion. Rio de Janeiro, Brazil. 152-165.
2. McKenna, S.J.; Jabri, S.; Duric, Z.; Wechsler, H.; and Rosenfeld, A. (2000). Tracking Groups of People. Computer Vision and Image Understanding 80(1): 42-56.
3. Maduro, C.; Batista, K.; Peixoto, P.; and Batista, J. (2009). Estimation of Vehicle Velocity and Traffic Intensity Using Rectified Images. IbPRIA 4th Iberian Conference on Pattern Recognition and Image Analysis. Povoa de Varzim, Portugal. 64-71.
4. Mohana, H.S.; Ashwathakumar, M.; and Shivakumar, G. (2009). Vehicle Detection and Counting by Using Real Time Traffic Flux through Differential Technique and Performance Evaluation. ICACC International Conference on Advanced Computer Control. Singapore. 791-795.
5. Ozkurt, C. and Camci, F. (2009). Automatic Density Estimation and Vehicle Classification for Traffic Surveillance Systems Using Neural Networks. Mathematical and Computational Applications 14(3): 187-196.
6. Bilge, H.Ş. (1997). Yapay açıklığa dayalı ultrasonik görüntüleme için hareket tahmini (Motion estimation for synthetic-aperture-based ultrasonic imaging). M.Sc. Thesis. Kırıkkale Üniversitesi Fen Bilimleri Enstitüsü, Kırıkkale. 1-25.
7. Şimşek Yıldırım, M. (2007). Abdominal MR görüntülerinde hareket tahmini (Motion estimation in abdominal MR images). M.Sc. Thesis. Gebze Yüksek Teknoloji Enstitüsü, Gebze. 15-31.
8. Kroupis, N.; Dasygenis, M.; Soudris, D.; and Thanailakis, A. (2005). A Modified Spiral Search Algorithm and Its Embedded Hardware Implementation. IEC. Prague. 375-378.
9. Zhu, S. and Ma, K.K. (2000). A New Diamond Search Algorithm for Fast Block-Matching Motion Estimation. IEEE Transactions on Image Processing 9(2): 287-290.
10. Koh, Y.J. and Yang, S.B. (1999). An Adaptive Search Algorithm for Finding Motion Vectors. IEEE TENCON. Cheju Island, South Korea. 186-189.
11. Metkar, S.P. and Talbar, S.N. (2010). Fast Motion Estimation Using Modified Orthogonal Search Algorithm for Video Compression. Signal, Image and Video Processing 4(1): 123-128.
12. Ezhilarasan, M. and Thambidurai, P. (2008). Simplified Block Matching Algorithm for Fast Motion Estimation in Video Compression. Journal of Computer Science 4(4): 282-289.


Proceeding Number: 100/05

Optimization of cos(x) and sin(x) for High-performance Computing

Md. Haris Udin Sharif, University of Asia Pacific, Bangladesh, [email protected]
Sahin Uyaver (Corresponding Author), TC Istanbul Commerce University, Istanbul, Turkey, [email protected]
Md. Haidar Sharif, Faculty of Engineering and Architecture, TC Gediz University, Izmir, Turkey, [email protected]

Keywords: high performance computing architecture

ABSTRACT

At the present day, high performance computing (HPC) architectures are designed to resolve heterogeneous, sophisticated scientific and engineering problems across an ever-expanding number of HPC and professional workloads. The computation of the fundamental functions sin(x) and cos(x) is not very frequent but is a somewhat time-consuming task in high-performance numerical simulations. In this paper, we address the problem of high-performance numerical computation of the sin(x) and cos(x) pair on the specific processor IA-64 and optimize it for a vector of input arguments x_i whose length must be an integer multiple of 4. We show that exploiting the processor micro-architecture together with manual optimization techniques improves the computing performance significantly compared to the standard math library functions with compiler optimizing options.

INTRODUCTION

The computation of the fundamental trigonometric functions sin(x) and cos(x) is not very frequent but is a somewhat time-consuming task in numerical simulations. For example, simulating scattering at the Coulomb potential in the Rutherford model requires calculating tens of millions of fundamental functions, e.g., sin(x), cos(x), etc. (see Figure 1). These functions can be computed accurately and efficiently by calling a math library routine, which will also deal with exceptional cases, e.g., the input argument x = 0. But the standard routine is often incapable of achieving the performance demanded by high-performance computing in simulation. Our earlier effort in constructing high-performance numerical computing for the specific processor IA-64 (Intel Architecture-64) optimized 1/x for a vector of input arguments x, processed more or less simultaneously.


In 2000, Strebel [1] optimized the sin(x) and cos(x) functions, along with others, for static-scheduling architectures such as the Alpha 21164; the Alpha 21164 microprocessor was designed for in-order (static) scheduling [1,2]. Nevertheless, that optimization is not workable for processors like the Itanium or even the Alpha 21264. We have therefore directed our efforts towards the Itanium, and hence our work can be considered an incremental improvement of the optimization results presented by him. We have optimized the sin(x) and cos(x) functions for the Itanium processor (IA-64) with a 1000 MHz clock. There are mainly two types of scheduling observable on the Itanium: (i) code scheduled to minimize the number of bundles; and (ii) code scheduled to minimize the number of clock cycles. We have emphasized the first kind of scheduling and the most common bundle type MMF (template values 14 and 15 [3]) throughout our optimizations. The Itanium processor core is capable of up to six issues per clock, with up to three branches and two memory references. The memory hierarchy consists of a three-level cache. The first level uses split instruction and data caches; floating-point data are not placed in the first-level cache. The second and third levels are unified caches, with the third level being an off-chip cache placed in the same container as the Itanium die. The IA-64 is a unique combination of innovative features such as explicit parallelism, predication, speculation and more; the architecture is designed to be highly scalable to meet the increasing performance requirements of various server and workstation market segments. The optimized implementations exploit the fact that one add and one multiply can be issued each cycle, with results becoming available after the respective result latencies. We have made extensive use of standard manual optimization techniques, e.g., loop unrolling and software pipelining, to compute sin(x) and cos(x) efficiently.
The loop unrolling technique combines several iterations of a loop into one basic block. Software pipelining is a way to overlap the execution of different instances of the loop body in a systematic way. The two are combined by first unrolling the loop, thereby increasing the size of the loop body to some extent, and then software pipelining the resulting code, which increases the potential for instruction-level parallelism a bit further.
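The effect of unrolling can be sketched schematically (illustrative only; the paper's actual kernels are IA-64 specific, and the function below is a hypothetical example, not taken from the paper). Unrolling by four creates independent dependency chains that an in-order, pipelined machine can overlap, which is exactly what software pipelining exploits:

```python
def dot_unrolled4(x, y):
    """Dot product with the loop unrolled by a factor of 4.

    Schematic illustration: the four independent accumulators form
    separate dependency chains, so add/multiply latencies can overlap
    on an in-order machine such as the Itanium.
    """
    n = len(x)
    assert n % 4 == 0, "vector length must be a multiple of 4"
    s0 = s1 = s2 = s3 = 0.0
    for i in range(0, n, 4):
        s0 += x[i] * y[i]          # four independent chains:
        s1 += x[i + 1] * y[i + 1]  # no accumulator depends on the
        s2 += x[i + 2] * y[i + 2]  # result of another, so the
        s3 += x[i + 3] * y[i + 3]  # operations can issue back to back
    return (s0 + s1) + (s2 + s3)
```

In Python this brings no speedup, of course; the point is only the dependency structure that a static scheduler can exploit.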

METHOD

For small |x| the trigonometric functions sin(x) and cos(x) can be efficiently computed using a few terms of the Taylor series. Instead of using these formulas for larger values of x as well, we may use the addition theorems of the trigonometric functions together with a table lookup for some x_t ≈ x, writing x = x_t + x_s with a small remainder x_s. Since x_s is small we have cos(x_s) ≈ 1, so it is a good idea not to compute cos(x_s) directly but to work with u_s = cos(x_s) - 1 instead. A schematic representation of the code which calculates the sin(x)/cos(x) pair is given below. For simplicity, we assume that the number of bits for the table lookup is b=7. In this algorithm, the way y_h is computed is only valid if floating point numbers are always rounded to 53 mantissa bits.
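The table-lookup scheme can be sketched as follows. This is an illustrative reconstruction under stated assumptions (grid spacing of 2π/2^b, a fifth-order series, no exploitation of table symmetries), not the paper's IA-64 implementation:

```python
import math

B = 7                                 # bits used for the table index
N = 1 << B                            # table resolution over [0, 2*pi)
STEP = 2.0 * math.pi / N

# Lookup tables for sin and cos at the grid points x_t = k * STEP.
SIN_T = [math.sin(k * STEP) for k in range(N)]
COS_T = [math.cos(k * STEP) for k in range(N)]

def sincos(x):
    """Compute (sin(x), cos(x)) via table lookup plus a short Taylor series.

    x is reduced to x_t + x_s with |x_s| <= STEP/2; the addition
    theorems combine the tabulated values with the series for the
    remainder, using u_s = cos(x_s) - 1 for accuracy, as described above.
    """
    k = round(x / STEP)
    x_s = x - k * STEP                           # small remainder
    # Truncated Taylor series in the small argument x_s.
    s_s = x_s - x_s**3 / 6.0 + x_s**5 / 120.0    # sin(x_s)
    u_s = -x_s**2 / 2.0 + x_s**4 / 24.0          # cos(x_s) - 1
    st, ct = SIN_T[k % N], COS_T[k % N]
    # Addition theorems, written so the tabulated value dominates.
    sin_x = st + (st * u_s + ct * s_s)
    cos_x = ct + (ct * u_s - st * s_s)
    return sin_x, cos_x
```

With b=7 the remainder satisfies |x_s| ≤ π/128, so the truncated series is accurate to roughly double precision.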


Note that the lookup table does not have to consist of 2\times2^b entries, but we may exploit identities like cos(2\pi-x)=cos(x) and sin(\pi/2-x)=cos(x). Our implementation uses a lookup table for cos(x_t) for the range 0\leq x_t \leq \pi containing 2^{b-1}+1 entries.

RESULTS

To analyze the efficiency of the optimized sin(x)/cos(x) pair for varying vector length n, we model the time required for the task with a per-element cost t_0 = 16 ns plus overheads. The parameter c_1 is determined by the overhead which occurs on every iteration of the innermost loop, such as a taken-branch penalty, whereas the parameter c_2 measures the overhead which occurs once per function call, i.e., once per vector. Small portions of the overhead c_2 are, for example, function call and return, register save and restore, and the penalty due to non-optimal pre-loop and post-loop code.
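One plausible reading of this overhead model (our assumption; the abstract does not spell out the formula) is t(n) ≈ a·n + c_2, where the slope a bundles the per-element cost t_0 with the amortized per-iteration overhead c_1. Given measured timings, the parameters can be recovered by a least-squares line fit:

```python
def fit_overheads(samples):
    """Least-squares fit of the timing model t(n) = a*n + c2.

    samples: list of (n, t) pairs with t in nanoseconds.  The slope a
    is the asymptotic cost per element; the intercept c2 is the
    once-per-call overhead described above.
    """
    m = len(samples)
    sx = sum(n for n, _ in samples)
    sy = sum(t for _, t in samples)
    sxx = sum(n * n for n, _ in samples)
    sxy = sum(n * t for n, t in samples)
    a = (m * sxy - sx * sy) / (m * sxx - sx * sx)   # ns per element
    c2 = (sy - a * sx) / m                          # ns per call
    return a, c2
```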

REFERENCES

[1] R. Strebel. Pieces of software for the Coulombic m-body problem. Diss. ETH No. 13504, 2000.
[2] Compaq Computer Corporation. Alpha Architecture Handbook. 4th edition, 1998.
[3] D. Patterson and J. Hennessy. Computer Architecture: A Quantitative Approach. Morgan Kaufmann Publishers, Inc., 3rd edition, 2003.


Proceeding Number: 100/06

A Comprehensive Two Level Description of Turkmen Morphology
Mehmet Kara, Istanbul University, Contemporary Turkic Languages and Literatures Dept, Istanbul, Turkey, [email protected]; Maxim Shylov, Fatih University, Computer Eng Dept, Istanbul, Turkey, [email protected]; Atakan Kurt, Fatih University, Computer Eng Dept, Istanbul, Turkey, [email protected]

Keywords :

Turkmen, phonology, orthography, two level morphology, machine translation, applied linguistics

INTRODUCTION

In this paper we present a two level description of the Turkmen language. Turkmen is a Turkic language and the official language of Turkmenistan. It is spoken by more than 6 million people, mostly in Central Asia. We describe the Turkmen orthography using the two level rules of Koskenniemi. These orthographic rules, which govern the phonology of the language during word formation, are essential to morphological parsing and generation. We then represent the Turkmen morphotactics using finite state machines. Turkmen, like Turkish, is an agglutinative language with a rich set of inflectional and derivational morphemes. Words are formed by affixing these morphemes to the root words successively. The FSMs for nominal, verbal and adverbial morphotactics describe in detail how the words of the language can be formed. The orthographic rules and morphotactics are implemented in the Dilmac Machine Translation Framework by encoding them in XML files. We have created a lexicon of root words in Turkmen to test the morphological parsing. We present a number of nominal, verbal and adverbial word formation examples to demonstrate the system.

Turkmen belongs to the Oghuz group of the Turkic languages. In the past the Arabic script, the Unified Turkish Latin Alphabet (UTLA), and the Cyrillic script were used; in 1995 the "Täze Elipbiÿi" or New Alphabet was formally introduced, and it officially came into use in 1996. Like the rest of the Turkic languages, Turkmen is agglutinative, meaning that most grammatical functions are indicated by attaching suffixes to the stems of words. One of the most notable features of the Turkmen language is vowel harmony. All vowels can be classified as front vowels or back vowels. In the Turkmen language, if there is a back vowel in the first syllable of a word, back vowels are also used in the following syllables; the same holds for front vowels. In this study we present a description of the Turkmen morphology in the two level morphological model of Koskenniemi. Our purpose is to describe the Turkmen phonology and morphology in a formal and precise manner in order to develop a machine translation system between Turkmen and Turkish. Since both languages belong to the same language family and have similar morphology and grammar, a morphological machine translation is possible between these languages.

RELATED WORK

Machine translation is one of the major problems of natural language processing. Machine translation between related languages is feasible because of the similarities and commonalities in morphology, syntax and lexicon. Machine translation systems between Czech and Slovak, Czech and Polish, and Spanish and Catalan have recently been developed. These projects are practical proof that successful translation between related languages can be built without too much effort. In a similar effort, machine translation between Turkic languages should be possible, since they obey the same morphological and syntactic structures to a great degree. Turkish morphology and syntax have been studied in depth from a computational perspective by Oflazer and other scholars, making it possible to build morphological machine translation systems. Initial work on the Azerbaijani, Uygur and Crimean Tatar languages has been produced in different studies.

METHODS

Like other Turkic languages, Turkmen is agglutinative and employs vowel harmony. It has a rich set of inflectional and derivational suffixes and is able to generate a huge number of words. A word is formed by affixing inflectional and derivational morphemes to roots or stems successively, as described by the Turkmen morphotactics. During the formation of words, phonological changes frequently occur. These changes are governed by the phonological or orthographic rules of Turkmen. These two level orthographic rules define how and when phonological events such as vowel harmony occur. The surface level word corresponds to the written form of a word after affixing morphemes to the root at the lexical level. We present a comprehensive list of two level rules covering most of the Turkmen phonology. An example is given below:

k:g => _ +:0 (@:0) V

which states that the last k of a word becomes g whenever a morpheme starting with a vowel is affixed to it. An application of this rule would be:

Lexical: kirjimek+nH (dirty + Acc)
Surface: kirjimeg00i (kirjimegi)
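The k:g rule above can be sketched in a few lines of code. This is a simplified illustration of one rule, not the Dilmac implementation; the vowel set is an assumption for the example:

```python
VOWELS = set("aeiouäöüý")   # simplified Turkmen vowel set (assumption)

def affix(root, morpheme):
    """Affix a morpheme to a root, applying the k:g rule sketched above.

    A root-final 'k' surfaces as 'g' when the affixed morpheme begins
    with a vowel; otherwise the root is unchanged.
    """
    if root.endswith("k") and morpheme and morpheme[0] in VOWELS:
        root = root[:-1] + "g"
    return root + morpheme
```

For instance, affixing the surface accusative "i" to "kirjimek" yields "kirjimegi", matching the example above.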

Then we move on to describing Turkmen morphology using finite state machines (FSM). A finite state machine, which in principle is a directed graph, consists of a set of states and a set of transitions among these states, as shown in the figure below. Transitions such as +lAr, +Hm, +CI are the edges of the graph, labeled with inflectional or derivational morphemes and defining in what order those morphemes can be affixed to a word. The intermediate states, such as [Plural], in a way represent partial words and their part-of-speech tagging. The initial states, such as [Noun] in the figure, represent the root words from a lexicon and their part of speech, such as noun, verb, adverb, adjective, etc. The final states, such as [1 Prs Possessive], represent full words created by starting with a root word in an initial state and affixing the morphemes on the transitions to the partial words in each intermediate state. We define the nominal, verbal and adverbial morphotactics of the language using this FSM model.

FINDINGS & CONCLUSION

Finally, we provide a number of examples from Turkmen demonstrating the expressive power of two level rules and finite state machines. We have implemented almost all of the Turkmen morphology in the Dilmac Machine Translation Framework. We have formulated 41 orthographic rules and covered about 250 morphemes in the morphotactics, which is done by encoding the morphology in a specific XML format. This is, to our knowledge, the most comprehensive study on the Turkmen language, which is the first precondition for a successful machine translation between Turkmen and Turkish. We provide parsing examples using Dilmac. The Dilmac software framework is primarily developed for morphological machine translation between Turkic languages. Our ultimate goal in the Dilmac project is to translate most Turkic languages to Turkish and vice versa.
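The FSM traversal of morphotactics described above can be sketched as a transition table. The states and morpheme tags below are a tiny hypothetical fragment for illustration; the paper's FSMs are far richer:

```python
# Minimal nominal morphotactics fragment (hypothetical example).
TRANSITIONS = {
    ("Noun", "+lAr"): "Plural",
    ("Plural", "+Hm"): "1PrsPossessive",
    ("Noun", "+Hm"): "1PrsPossessive",
}
FINAL = {"Plural", "1PrsPossessive"}

def parse(morphemes, state="Noun"):
    """Follow transitions for a sequence of morpheme tags.

    Returns the reached state if the sequence is a valid affixation
    order ending in a final state, otherwise None.
    """
    for m in morphemes:
        nxt = TRANSITIONS.get((state, m))
        if nxt is None:
            return None          # no edge: affixation order is invalid
        state = nxt
    return state if state in FINAL else None
```

Here parse(["+lAr", "+Hm"]) succeeds (plural then possessive), while the reversed order is rejected, which is exactly the ordering constraint the edges encode.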

REFERENCES

1. Altıntaş K. & Çiçekli İ., "A Morphological Analyzer for Crimean Tatar", Proceedings of the 10th Turkish Symposium on Artificial Intelligence and Neural Networks, TAINN, pp. 180-189, North Cyprus, 2001
2. Canals-Marote R., Esteve-Guillén A., Garrido-Alenda A., Guardiola-Savall M.I., Iturraspe-Bellver A., Montserrat-Buendia S., Pérez-Antón-Rojas P., Ortiz-Pina S., Pastor-Antón H. & Forcada M.L., "interNOSTRUM: a Spanish-Catalan Machine Translation System", Machine Translation Review, Vol. 11, pp. 21-25, 2000
3. Dvořák B., Homola P. & Kuboň V., "Exploiting similarity in the MT into a minority language", LREC-2006: Fifth International Conference on Language Resources and Evaluation, Genoa, Italy, 2006
4. Garrido-Alenda A., Gilabert-Zarco P., Pérez-Ortiz J.A., Pertusa-Ibáñez A., Ramírez-Sánchez G., Sánchez-Martínez F., Scalco M.A. & Forcada M.L., "Shallow Parsing for Portuguese-Spanish Machine Translation", TASHA 2003: Workshop on Tagging and Shallow Processing of Portuguese, Lisbon, Portugal, 2003
5. Jurafsky D. & Martin J. H., Speech and Language Processing: An Introduction to Natural Language Processing, Computational Linguistics, and Speech Recognition, Prentice Hall, New Jersey, 2000
6. Hajič J., Hric J. & Kuboň V., "Machine translation of very close languages", Proceedings of the Sixth Conference on Applied Natural Language Processing, pp. 7-12, 2000
7. Hajič J., Homola P. & Kuboň V., "A simple multilingual machine translation system", MT Summit IX, New Orleans, USA, 2003
8. Kara M., Türkmen Türkçesi Grameri, Ankara, 2005
9. Koskenniemi K., "Two-level morphology: A general computational model of word-form recognition and production", Tech. Rep. Publication No. 11, Department of General Linguistics, University of Helsinki, 1983
10. Oflazer K., "Two-level Description of Turkish Morphology", Literary and Linguistic Computing, Vol. 9, No. 2, 1994
11. Tantuğ A. C., Adalı E. & Oflazer K., "Computer Analysis of the Turkmen Language Morphology", in T. Salakoski (Eds.), FinTAL 2006, Lecture Notes in Computer Science, pp. 186-193, Springer, 2006
12. Tantuğ A. C., Adalı E. & Oflazer K., "A MT System from Turkmen to Turkish Employing Finite State and Statistical Methods", Proceedings of MT Summit XI, 2007


Proceeding Number: 100/07

An Overview of Two Level Finite State Kyrgyz Morphology
Zeliha GÖRMEZ, Istanbul University, Computer Eng Dept, Istanbul, Turkey, [email protected]; Atakan KURT, Fatih University, Computer Eng Dept, Istanbul, Turkey, [email protected]; Mehmet KARA, Istanbul University, Contemporary Turkic Languages and Literatures Dept, Istanbul, Turkey, [email protected]; Kalmamat KULAMSHAEV, Contemporary Turkic Languages and Literatures Dept, Istanbul, Turkey, [email protected]

Keywords :

Kyrgyz, morphology, phonology, orthographic rules, two level morphology, finite state machine, machine translation

INTRODUCTION

In this paper we present an overview of finite state Kyrgyz morphology using Koskenniemi's two level morphology [Koskenniemi 1983], along with a set of orthographic rules modeling the phonological dynamics during word formation. Morphology is one of the fundamental building blocks of any natural language application. This is especially so for agglutinative languages, in which most words are formed by joining morphemes. Accurate morphological parsing is required for a proper syntactic and semantic analysis of sentences, upon which many applied linguistics applications are built. Developing a comprehensive Kyrgyz morphology is part of our ongoing Kyrgyz-Turkish Machine Translation Project. We first describe the Kyrgyz orthography using two level rules. These orthographic rules, formulated for correct spelling during word formation, are essential to morphological parsing and generation. We then represent the Kyrgyz morphotactics using finite state machines (FSM). The FSMs for nominal, verbal and adverbial morphotactics describe in detail how the words of the language can be formed. We have created a lexicon of root words in Kyrgyz to test the morphological parsing. We present a number of nominal, verbal and adverbial word formation examples to demonstrate the system.


RELATED WORK

Machine translation is one of the major problems of natural language processing. Machine translation between related languages is feasible because of the similarities and commonalities in morphology, syntax and lexicon. Machine translation systems between Czech and Slovak, Czech and Polish, and Spanish and Catalan have recently been developed. These projects are practical proof that successful translation between related languages can be built without too much effort. In a similar effort, machine translation between Turkic languages should be possible, since they obey the same morphological and syntactic structures to a great degree. Turkish morphology and syntax have been studied in depth from a computational perspective by Oflazer and other scholars, making it possible to build morphological machine translation systems. Initial work on the Azerbaijani and Uygur languages has been produced in different studies.

METHODS

Kyrgyz is one of the major Turkic languages. It is the official language of the independent state of Kyrgyzstan and is spoken by more than 4 million people in Central Asia. Kyrgyz is written in a modified version of the Cyrillic script. Like other Turkic languages, Kyrgyz is agglutinative. It has a rich set of inflectional and derivational suffixes. A word is formed by affixing inflectional and derivational morphemes to roots successively. During the formation of words, certain phonological rules must be followed. These orthographic rules are expressed in a certain formalism, as shown here:

L:t => Cs +:0 _

This rule states that the letters in the lexical set L turn into the letter t if the last letter of the word is one of the letters in Cs. Lexical symbols such as L and Cs are actually sets of letters of the language: Cs = {f, s, t, k, ç, ş, h, p}, L = {l, d, t}. An application of the above rule is given in the following example:

Lexical: tiş+LA (dişle)
Surface: tiş0te (tişte)

In this example the surface word tişte is formed by affixing the +LA morpheme to the root tiş; the l at the lexical level becomes t at the surface, obeying the rule above. We describe Kyrgyz morphology using finite state machines (FSM). A finite state machine, as shown below, is in principle a directed graph: it consists of a set of states and a set of transitions among these states.

Transitions are the edges of the graph, labeled with inflectional or derivational morphemes and defining in what order those morphemes can be affixed to a word. The intermediate states, such as [Plural], in a way represent partial words and their part-of-speech tagging. The initial states, such as [Noun], represent the root words from a lexicon and their part of speech, such as noun, verb, adverb, adjective, etc. The final states, such as [Noun][Plural][1 PrsPossessive], represent full words created by starting with a root word in an initial state and affixing the morphemes on the transitions to the partial words in each intermediate state, such as Noun+lAr or Noun+lAr+Hm. We define the nominal, verbal and adverbial morphotactics of the Kyrgyz language using this finite state model.

FINDINGS & CONCLUSION

We have implemented most of the Kyrgyz morphology in the Dilmac Machine Translation Framework. We have defined 37 orthographic rules and covered about 90 derivational and 50 inflectional morphemes. This is, to our knowledge, the first such study on the Kyrgyz language, which is the first precondition for a successful machine translation between Kyrgyz and Turkish. As a result of this study we now have a morphological parser and a morphological generator for Kyrgyz. We provide a number of examples from Kyrgyz demonstrating the expressive power of two level rules and finite state machines. The Dilmac framework is primarily developed for morphological machine translation between Turkic languages. Our ultimate goal in this project is to develop a Kyrgyz-Turkish machine translation system using Dilmac.

REFERENCES

1. Kasapoğlu Çengel H., Kırgız Türkçesi, in Türk Lehçeleri Grameri [Prof. Dr. Ahmet Ercilasun], pp. 481-542, Ankara, Akçağ, 2007
2. Altıntaş K. & Çiçekli İ., "A Morphological Analyzer for Crimean Tatar", Proceedings of the 10th Turkish Symposium on Artificial Intelligence and Neural Networks, TAINN, pp. 180-189, North Cyprus, 2001
3. Hamzaoglu I., Machine Translation from Turkish to Other Turkic Languages and an Implementation for the Azeri Language, MSc Thesis, Bogazici University, Istanbul, 1993
4. Koskenniemi K., "Two-level morphology: A general computational model of word-form recognition and production", Tech. Rep. Publication No. 11, Department of General Linguistics, University of Helsinki, 1983
5. Hajič J., Hric J. & Kuboň V., "Machine translation of very close languages", Proceedings of the Sixth Conference on Applied Natural Language Processing, pp. 7-12, 2000
6. Oflazer K., "Two-level Description of Turkish Morphology", Literary and Linguistic Computing, Vol. 9, No. 2, 1994
7. Tantuğ A. C., Adalı E. & Oflazer K., "Computer Analysis of the Turkmen Language Morphology", in T. Salakoski (Eds.), FinTAL 2006, Lecture Notes in Computer Science, pp. 186-193, Springer, 2006
8. Tantuğ A. C., Adalı E. & Oflazer K., "A MT System from Turkmen to Turkish Employing Finite State and Statistical Methods", Proceedings of MT Summit XI, 2007
9. Shylov M., "Two Level Turkmen Morphology and Dilmac Machine Translation Framework", MS Thesis, Computer Eng Dept, Fatih University, 2008
10. Orhun M., Adalı E. & Tantuğ C., Uygur Dili ve Makineli Çeviri, Akademik Bilişim, 2008


Proceeding Number: 100/09

Examining the Impacts of Stemming Techniques on Turkish Search Results by Using Search Engine for Turkish
Erdinc UZUN, Namik Kemal University, Computer Engineering Department, Corlu/Tekirdag, Turkey, [email protected]

Keywords :

Turkish Web IR, Stemming Methods, Agglutinative Languages

INTRODUCTION

The major aim of search engines is deciding which documents in a text collection should be returned by the search algorithm to satisfy the user's need for information. They use stemming and various other techniques to provide the best results first. For example, Google relies heavily on links when it comes to determining the ranking of a web site. Moreover, the ranking process varies widely from one engine to another. In addition, we do not know which stemming technique is used by search engines for Turkish, and alternative stemming techniques are not supported by commercial search engines. Hence, we cannot examine all possible cases in terms of information retrieval for Turkish. Therefore, we developed a search engine for Turkish (SET; the classes of SET, developed in the C# programming language, are open source and available via the web page http://bilgmuh.nku.edu.tr/SET/). To the best of our knowledge, this is the first online project for research on Turkish IR (Information Retrieval). With a custom search engine, we can examine the effects of stemming techniques in Turkish IR.

LITERATURE REVIEW

In Turkish, Ekmekcioglu and Willett [1] compared the effectiveness of information retrieval with stemmed and non-stemmed query terms, using 6,289 Turkish news articles and 50 queries. Sever and Bitirim [2] evaluated the effectiveness of a new stemming algorithm, FINDSTEM, which employs inflectional and derivational stemmers; their algorithm provides a 25% retrieval precision improvement with respect to no stemming. Pembe and Say [3] investigated whether NLP techniques can improve the effectiveness of information retrieval. Can et al. [4] compared the effects of four different stemming options by using a large-scale collection and 72 ad hoc queries. All these studies are local; to the best of our knowledge, this study is the first online project in Turkish IR.

SET has four main modules: a crawler, an indexer, a ranking module and a searching module. The crawler module downloads Milliyet news and automatically creates XML files. The indexer module prepares an index to enable rapid search. The searching module displays the search results, which are sorted by the ranking module. In Turkish, only a very limited number of test collections are available for IR studies; we suffered from similar limitations in creating datasets in our previous studies [5], [6]. For this study, we have developed a crawler that collects news specifically from Milliyet. Using this crawler, we were able to access Milliyet news stretching from 2004 to 2007 and create our test collection.

The indexer produces an inverted file that stores information about words and files to enable a rapid search. Before words are stored in the inverted file, the most widely used technique is stemming, the process of reducing inflected or derived words to their stems. SET uses no stemming (NS), the stemmer of Zemberek (http://code.google.com/p/zemberek/) and the word truncation technique (WT). No stemming uses the original words for indexing. Zemberek (Turkish Stemmer - TS) is a morphological analyzer that can remove both the inflectional and the derivational suffixes of words. In the word truncation technique, the first n characters of each word form its stem, and words with at most n characters are not truncated. In Turkish IR research, Sever and Tonta [7] proposed the values 5, 6 and 7 for n; however, their proposition is intuitive, based on their observation that truncated and actual Turkish words display similar frequency distributions. Can et al. [4] indicate that n=5 is appropriate for Turkish IR. Hence, we select n=5 in this study.
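The word truncation technique is simple enough to state in one line (the example word is hypothetical, not from the study's collection):

```python
def truncate_stem(word, n=5):
    """Word-truncation stemming (WT): keep the first n characters.

    Words of n characters or fewer are left untouched, as described
    above; n=5 is the value adopted in the study.
    """
    return word[:n]

# e.g. truncate_stem("bilgisayarlar") -> "bilgi"
```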
In the ranking, the tf-idf weight is used, a statistical measure of how important a word is to a document in a text collection. This weight has two main components. The first, the term frequency (tf) component, depends on the frequency with which a query term occurs in a given document. The other, the document frequency component, depends on how frequently the term occurs across all documents; in fact, we are really interested in the inverse document frequency (idf), which measures the relative rarity of a term.
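A textbook sketch of this weighting, with raw counts for tf and a logarithmic idf (the exact variant used by SET is not specified in the abstract, so these choices are assumptions):

```python
import math

def tf_idf(term, doc, corpus):
    """tf-idf weight of a term in one document of a corpus.

    doc and the members of corpus are token lists.  tf is the raw
    count of the term in the document; idf is the log of the inverse
    document frequency (zero if the term appears nowhere).
    """
    tf = doc.count(term)
    df = sum(1 for d in corpus if term in d)   # document frequency
    idf = math.log(len(corpus) / df) if df else 0.0
    return tf * idf
```

A term occurring often in one document but rarely in the collection thus receives a high weight, which is the behavior the ranking relies on.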

METHODS

The test collection used in this study contains 240,364 documents, and its size is around 511 MB in UTF-8. Overall, the collection contains 94% alphabetic, 4% numeric and 2% alphanumeric characters. Each article contains 309 words (tokens) on average, without stopword elimination. The average length of a word is 6.60 characters. The search module in SET is a web interface for specifying queries. The most widely used retrieval model, the Boolean model, is used in SET; this means that the Boolean logic operators (AND, OR, NOT) and parentheses can be used in search queries. The evaluator module is a web interface for examining search results. In this evaluator, we concentrate on precision at 10 documents retrieved (P@10), precision at 20 documents retrieved (P@20), and mean uninterpolated average precision (MAP). P@10 and P@20 are simple and intuitive. However, MAP is based on a much wider set of information than P@10 and P@20: it is the mean of the precision scores obtained after each relevant document is retrieved. As MAP is a more reliable measure of IR effectiveness [8], we use MAP for comparison in this study.
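The per-query quantity averaged by MAP can be sketched as follows (a standard formulation of uninterpolated average precision; the document identifiers are illustrative):

```python
def average_precision(ranking, relevant):
    """Uninterpolated average precision for one query.

    ranking: documents in retrieval order; relevant: set of relevant
    documents.  Precision is measured at the rank of each relevant
    document retrieved and averaged over all relevant documents, as
    described above.  MAP is the mean of this value over all queries.
    """
    hits, precisions = 0, []
    for rank, doc in enumerate(ranking, start=1):
        if doc in relevant:
            hits += 1
            precisions.append(hits / rank)
    return sum(precisions) / len(relevant) if relevant else 0.0
```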

FINDINGS & CONCLUSION

110 ad hoc queries were prepared to evaluate the effectiveness of the stemming techniques. These queries are divided into three subgroups by using the MAP values of NS and TS:

• 42 negative cases, in which stemming techniques are ineffective
• 46 positive cases, in which stemming techniques are effective
• 16 equal cases, in which the MAP values are equal

When the average performance over the 110 queries is evaluated, the stemming techniques appear slightly better than the alternatives: the improvements in the search results of TS and WT5 over NS are 6.68% and 13.16%, respectively. The average count of retrieved search results shows that NS can be used to narrow the search results compared with the other methods; on the other hand, TS is an appropriate technique for obtaining more search results than the others. In this study, we describe an online Turkish search engine and evaluate the effects of stemming techniques on the Turkish search results obtained from it. Stemming techniques usually give satisfactory results in IR [9], [10]. Still, our study indicates that there are cases in which stemming proves inadequate. The suffixes used in a user's query term can be critical in some cases, yet stemming techniques eliminate the importance of these suffixes. Regular agglutinative languages such as Turkish encode more information with suffixes than other languages do. Because of that, not only stems but also suffixes are important in agglutinative-language IR.

REFERENCES

1. Ekmekcioglu, F.C. and Willett, P. (2000). Effectiveness of stemming for Turkish text retrieval. Program, 34(2), pp. 195-200.
2. Sever, H. and Bitirim, Y. (2003). FindStem: Analysis and evaluation of a Turkish stemming algorithm. Lecture Notes in Computer Science, 2857, pp. 238-251.
3. Pembe, F. C. and Say, A.C.C. (2004). A linguistically motivated information retrieval system for Turkish. Lecture Notes in Computer Science, 3280, pp. 741-750.
4. Can, F., Kocberber, S., Balcik, E., Kaynak, C., Ocalan, H. C. and Vursavas, O. M. (2008). Information retrieval on Turkish texts. Journal of the American Society for Information Science and Technology, Vol. 59, No. 3, pp. 407-421.
5. Uzun, E., Kılıçaslan, Y., H.Agun, V. and Ucar, E. (2008). Web-based Acquisition of Subcategorization Frames for Turkish, ICAISC 2008, Editors: Rutkowski L. et al., ISBN 978-83-60434-50-5, pp. 599-607.


6. Uzun, E. (2007). An internet-based automatic learning system supported by information retrieval [İnternet tabanlı bilgi erişimi destekli bir otomatik öğrenme sistemi], Ph.D. thesis, Department of Computer Engineering, Trakya University, Edirne, Turkey.
7. Sever, H. and Tonta, Y. (2006). Truncation of Content Terms for Turkish, CICLing 2006, Mexico City, Mexico.
8. Sanderson, M. and Zobel, J. (2005). Information retrieval system evaluation: Effort, sensitivity, and reliability, ACM SIGIR '05, pp. 162-169.
9. Krovetz, R. (1993). Viewing morphology as an inference process, ACM SIGIR '93, Pittsburgh: ACM, pp. 191-202.
10. Savoy, J. (2006). Light stemming approaches for the French, Portuguese, German and Hungarian languages, ACM SAC '06, pp. 1031-1035.


Proceeding Number: 100/11

Development of New VO2max Prediction Models by Using Artificial Neural Networks
M. Fatih AKAY, Cukurova University, Dept. of Computer Engineering, Adana, TURKEY, [email protected]; Noushin Shokrollahi, Cukurova University, Dept. of Computer Engineering, Adana, TURKEY, [email protected]; Erman Aktürk, Cukurova University, Dept. of Computer Engineering, Adana, TURKEY, [email protected]; James D. George, Brigham Young University, Dept. of Exercise Sciences, Provo, UT, USA, [email protected]

Keywords :

neural networks, maximal oxygen uptake

INTRODUCTION

Cardiorespiratory fitness (CRF) is the ability to perform dynamic, moderate-to-high intensity exercise using the large muscle groups for long periods of time [1]. CRF depends on the respiratory, cardiovascular and skeletal muscle systems and is therefore an important component of health and physical fitness [2]. The standard test for determining CRF is the measurement of maximal oxygen uptake (VO2max) during a maximal graded exercise test [3]. VO2max is the most accurate way to assess CRF; however, maximal tests require expensive gas analysis and ventilation equipment [4]. In situations where this equipment is not available, the subject can still perform the maximal exercise test, and his/her VO2max can be predicted by using various demographic (age, gender) and biometric (body mass) information together with maximal test variables (heart rate, rating of perceived exertion, treadmill speed and grade). There are only a few studies in the literature that used maximal variables to predict VO2max. Most of these studies used multiple linear regression (MLR) for developing VO2max prediction equations. Also, cross validation was not used in any of these studies, which raises a question about the reliability of the presented results. The purpose of this paper is twofold. The first aim is to develop new VO2max prediction models based on maximal variables by using artificial neural networks (ANN). The second is to enhance the prediction accuracy of the developed models by including questionnaire variables in the models. To the best of our knowledge, such hybrid models (i.e. models that include both maximal test variables and questionnaire data) do not appear in the literature.

LITERATURE REVIEW


Froelicher et al. [5] investigated the hypothesis that an individual's maximal oxygen consumption can be realistically predicted from the maximal time achieved in the Balke treadmill protocol. The oxygen consumption in the final minute of exercise of 1025 healthy men who performed a maximal effort in the Balke protocol was linearly regressed on their maximal treadmill time using a least-squares fit technique. Their results for standard error of estimate (SEE) and multiple correlation coefficient (R) were 4.26 and 0.72, respectively. Foster et al. [6] used the time variable and showed that nonlinear regression could produce more accurate results than linear regression for VO2max prediction. The SEE and R of their models were reported to be 3.35 and 0.97, respectively. Patton et al. [7] used a maximal predictive cycle ergometer test for estimating VO2max. The test consisted of pedaling a cycle ergometer (Monark) at 75 rev/min, beginning at an intensity of 37.5 watts and increasing by this amount each minute until the subject could no longer maintain the pedal rate. The highest work rate achieved was recorded as the endpoint of the test and used to construct regression equations to predict VO2max. This was compared with two direct measures of VO2max (an interrupted treadmill run and an interrupted cycle ergometer procedure at 60 rev/min) and with the submaximal predictive test. The R of their models was 0.85; however, they did not report any results for SEE. Storer et al. [8] hypothesized that cycle ergometer VO2max could be accurately predicted due to its more direct relationship with watts. Therefore, they developed an equation including the variables watts, weight and age. They reported that the R of their model was 0.94.

METHODS

The dataset used in this study includes 100 healthy volunteers (50 females and 50 males) ranging in age from 18 to 65 years. All subjects were recruited from the Y-Be-Fit Wellness Program at Brigham Young University (USA) and from the employees of the LDS Hospital in Salt Lake City, Utah. All subjects performed a maximal test using a modified version of the Arizona State University maximal protocol [9], with only a slight change in the warm-up, to assess VO2max. The dataset contains the maximal test variables heart rate, grade and self-reported rating of perceived exertion from the treadmill test, and also the non-exercise variables gender, age, body mass index (BMI), perceived functional ability (PFA) and physical activity rating (PA-R) from questionnaires. A multilayer feed-forward ANN [10] is used to develop several VO2max prediction models based on maximal and non-exercise variables. The inputs and the output of the network are normalized so that the values range from -1 to +1. A tan-sigmoid transfer function is used in the hidden layer, and a pure linear transfer function is used in the output layer. The Levenberg-Marquardt algorithm is utilized for training the network. Using 10-fold cross validation on the dataset, SEE and R values are calculated.
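The normalization and cross-validated evaluation steps described above can be sketched as follows. The network itself is omitted (any trainable regressor can stand in for the Levenberg-Marquardt ANN), the helper names are illustrative, and the SEE shown is the root-mean-square residual, one common convention for the standard error of estimate.

```python
import math
import random

def scale_to_unit(x, lo, hi):
    """Linearly map a value from [lo, hi] to [-1, +1], as done before training."""
    return 2.0 * (x - lo) / (hi - lo) - 1.0

def kfold_indices(n, k=10, seed=0):
    """Shuffle sample indices and split them into k roughly equal folds."""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)
    return [idx[i::k] for i in range(k)]

def see(y_true, y_pred):
    """Standard error of estimate: root mean squared residual."""
    n = len(y_true)
    return math.sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / n)

def pearson_r(y_true, y_pred):
    """Correlation coefficient between measured and predicted VO2max."""
    n = len(y_true)
    mt = sum(y_true) / n
    mp = sum(y_pred) / n
    cov = sum((t - mt) * (p - mp) for t, p in zip(y_true, y_pred))
    st = math.sqrt(sum((t - mt) ** 2 for t in y_true))
    sp = math.sqrt(sum((p - mp) ** 2 for p in y_pred))
    return cov / (st * sp)
```

In a full evaluation, each of the 10 folds would in turn serve as the test set while the network is trained on the remaining 9, and SEE and R would be computed on the pooled out-of-fold predictions.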


FINDINGS & CONCLUSION

Six different VO2max prediction models have been developed. Table 1 shows the variables used in each model along with the SEE and R values. Considering Table 1, the following conclusions can be reached:

• ANN performs better than MLR for prediction of VO2max. The reported R values for all ANN-based models are higher than those reported for the MLR-based models, and the reported SEE values for all ANN-based models are lower than those reported for the MLR-based models.

• The accuracy of VO2max prediction models based on maximal test variables can be improved by including questionnaire variables in the models. Adding the questionnaire variables PFA and PA-R to models 1, 2 and 3 decreased their SEE values by 37.1%, 37.88% and 27.62%, respectively.

• The most accurate VO2max prediction model is the one that includes the variables gender, age, BMI, heart rate, PFA and PA-R.

Table 1: Results for ANN-based and MLR-based models

  Model no  Variables                         ANN SEE  ANN R  MLR SEE  MLR R
  1         Gender, Age, BMI, RPE             3.72     0.86   5.44     0.65
  2         Gender, Age, BMI, HR              3.59     0.86   5.43     0.65
  3         Gender, Age, BMI, GRD             3.15     0.89   5.09     0.72
  4         Gender, Age, BMI, RPE, PFA, PA-R  2.34     0.93   3.82     0.86
  5         Gender, Age, BMI, HR, PFA, PA-R   2.23     0.94   3.77     0.86
  6         Gender, Age, BMI, GRD, PFA, PA-R  2.28     0.94   3.73     0.87

RPE, self-reported rating of perceived exertion from treadmill test; HR, heart rate; GRD, grade

REFERENCES

1. J. H. Wilmore, Physiology of Sport and Exercise, 4th ed. USA: Human Kinetics, 2008.
2. American College of Sports Medicine, ACSM's Guidelines for Exercise Testing and Prescription, 6th ed. Philadelphia: Lippincott Williams & Wilkins, 2000.
3. Jackson et al., "Prediction of functional aerobic capacity without exercise testing," Medicine and Science in Sports and Exercise, vol. 22, no. 6, pp. 863-870, 1990.
4. S. Chatterjee et al., "Prediction of maximal oxygen consumption from body mass, height and body surface area in young sedentary subjects," Indian J Physiol Pharmacol, vol. 50, no. 2, pp. 181-186, 2006.
5. V. F. Froelicher and M. C. Lancaster, "The prediction of maximal oxygen consumption from a continuous exercise treadmill protocol," Am Heart J, vol. 87, pp. 445-450, 1974.
6. C. Foster et al., "Generalized equations for predicting functional capacity from treadmill performance," Am Heart J, vol. 107, pp. 1229-1234, 1984.


7. J. F. Patton et al., "Evaluation of a maximal predictive cycle ergometer test of aerobic power," Eur J Appl Physiol, vol. 49, pp. 131-140, 1982.
8. T. W. Storer et al., "Accurate prediction of VO2max in cycle ergometry," Med Sci Sports Exerc, vol. 22, pp. 704-712, 1990.
9. J. D. George, "Alternative approach to maximal exercise testing and VO2max prediction in college students," Research Quarterly for Exercise and Sport, vol. 67, pp. 452-457, 1996.
10. E. Alpaydin, Introduction to Machine Learning, 2nd ed. London, England: The MIT Press, 2010.


Proceeding Number: 100/12

Vulnerability Assessment of IMS SIP Servers with TVRA Methodology

Afsaneh Madani, Iran University of Science and Technology, Virtual Center, Tehran, Iran, [email protected]
Hassan Asgharian, Iran University of Science and Technology, Computer Engineering Department, Tehran, Iran, [email protected]
Ahmad Akbari, Iran University of Science and Technology, Computer Engineering Department, Tehran, Iran, [email protected]

Keywords :

ETSI TVRA Method, IMS SIP Server, Threat, Risk Analysis

INTRODUCTION

IMS (IP Multimedia Subsystem) is considered by ETSI as the core network of NGN (Next Generation Networks). Decomposition of the IMS core network has resulted in a rapid increase of control and signaling messages, which makes security a required capability for IMS commercialization. The control messages are transmitted using SIP (Session Initiation Protocol), an application layer protocol. IMS networks are more secure than other typical SIP-based networks, such as VoIP networks, because of their mandatory authentication at registration time. This paper studies the security of the main SIP servers of IMS (the x-CSCF servers) based on the ETSI Threat, Vulnerability and Risk Analysis (TVRA) method. This method is used as a tool to identify potential risks to a system based upon the likelihood of an attack and the impact that such an attack would have on the system. After identifying the assets and weaknesses of IMS SIP servers and finding the vulnerabilities of these hardware and software components, we propose some security hints that can be used for secure deployment of IMS SIP servers.

LITERATURE REVIEW

The importance of system security and assets has led to different methods for risk assessment and threat analysis. These methodologies differ in simulation and implementation details, but the common goal of all of them is an appropriate response to system requirements, vulnerabilities, threats and activities that could destroy system assets. Providing suggestions for the protection of system privacy, integrity and availability is the main output of vulnerability assessment. TVRA is standardized by ETSI as a method of systematic modeling to identify threats, specify protection requisites to reduce risks and enhance system security. With a


TVRA-modelled system, the weaknesses, threats, security objectives and overall security arrangements are obtained. The TVRA models a system as consisting of assets. An asset may be physical, human or logical. Assets have weaknesses that may be attacked by threats, which may lead to an unwanted incident breaking certain pre-defined security objectives. Vulnerability, consistent with the definition given in ISO/IEC 13335, is modelled as the combination of a weakness and the threats that can exploit it. The goal of secure design is to reduce unwanted incidents and attacks. An unwanted incident depends upon a weakness in an asset [1]. In the TVRA method, modelling is completed with the following steps [2]:

1) Security objective and requirement setting
2) Asset and value determination
3) Weakness identification
4) Attack and threat classification
5) Security countermeasure definition

Vulnerability assessment determines the countermeasures necessary in an implementation. This paper applies the TVRA methodology to identify IMS SIP server vulnerabilities. First, an IMS network modelled by TVRA is explained as related work, and then the model of the IMS SIP server is given. The resulting table is used to balance the design of SIP servers against suitable countermeasures that reduce attack potential. IMS TVRA modelling is one of the successful examples in this field: the vulnerabilities have been prepared and introduced as components of the TVRA modelling [3]. That work analyses IMS as a system with a comprehensive treatment of the vulnerabilities associated with the weaknesses; every asset in the IMS is classified together with its associated weaknesses, the threatening attacks and the security objectives broken when an attack occurs. The final results of this model show that most of the IMS network's assets reside in SIP session information and signalling messages. Choosing weak authentication methods and transmitting messages with no confidentiality mechanism are the main IMS vulnerabilities. HTTP digest authentication is a weaker authentication method than IMS AKA, but it is more common; IMS AKA is more secure but is a heavier authentication method [3]. These results are useful at implementation time of an IMS network. The optional cases in the standards may become necessary once the vulnerability analysis results are completed in ETSI TVRA modelling. Other examples of component modelling, such as IPTV or NGN services, are given in the ETSI standards [4], [5].
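The five TVRA steps culminate in ranking unwanted incidents so that countermeasures target the highest risks first. A minimal sketch of that final ranking step is shown below; the class and the 1-3 scales are illustrative only, since the normative ETSI TS 102 165-1 calculation uses its own attack-potential and impact tables.

```python
from dataclasses import dataclass

@dataclass
class Vulnerability:
    asset: str        # physical, human or logical asset, e.g. "SIP session information"
    weakness: str     # e.g. "signalling without a confidentiality mechanism"
    threat: str       # e.g. "eavesdropping"
    likelihood: int   # 1 (unlikely) .. 3 (likely) -- illustrative scale
    impact: int       # 1 (low) .. 3 (high)        -- illustrative scale

    @property
    def risk(self):
        # Risk as likelihood x impact; the ETSI method maps such a
        # product onto minor / major / critical bands.
        return self.likelihood * self.impact

def rank_by_risk(vulns):
    """Order vulnerabilities so countermeasures address the highest risk first."""
    return sorted(vulns, key=lambda v: v.risk, reverse=True)
```

For example, a Register/Invite flooding DoS with likelihood 3 and impact 3 would rank above an interception threat with likelihood 2 and impact 2.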

METHODS

System security modeling in TVRA design is based on system requirements and application types. The method suggests seven design steps that can be combined with each other. There are three different CSCF entities in the IMS network core: proxy (P-CSCF), interrogating (I-CSCF) and serving (S-CSCF). The P-CSCF is the entrance gateway to the IMS system that connects users, the S-CSCF is the actual SIP server, and the I-CSCF allocates an S-CSCF to each user. We perform vulnerability analysis of the IMS SIP servers by using TVRA in major steps that are described in detail. We customized the TVRA operational steps for our application by re-defining them for IMS SIP proxies.


FINDINGS & CONCLUSION

Modeling a system before its final implementation leads to the determination of weaknesses, limitations and vulnerabilities, risk and vulnerability analysis, cost optimization and system security improvement. It is recognized that without a good understanding of the threats, the appropriate selection of countermeasures is very hard. Within ETSI, TVRA is used to identify risk and vulnerability to the system based upon the likelihood and impact of an attack. Special consideration is needed for IMS SIP servers, which may consume additional system resources to perform the needed deep inspection of signaling. It is suggested to add message validation and overload control mechanisms to the IMS P-CSCF server, and a request prioritization mechanism to the S-CSCF server for overload conditions. After analyzing the vulnerabilities in this study, we will draw use case diagrams and object views of the model in UML to complete the system modeling process. The following table shows the result of deploying TVRA on the IMS SIP servers:

  Attack                                 Weakness                                       Security Objective  Asset
  DOS (Register, Invite flooding)        Long processing time of "Register", "Invite"   Availability        SIP message
  Interception                           Signalling without confidentiality             Confidentiality     SIP message header
  Eavesdropping                          Weak authentication methods (Early IMS Auth.)  Confidentiality     User identity information
  DOS (sending illegal "Bye"/"Cancel"    "Bye", "Cancel" are not authenticated          Availability        SIP session information
  messages with session eavesdropping)
  DOS                                    Use of heavy algorithms in SigComp             Availability        P-CSCF server processing time

REFERENCES

[1] ETSI TS 102 165-1, (TISPAN); Methods and protocols; Part 1: Method and proforma for Threat, Risk, Vulnerability Analysis, 2006.
[2] ETSI TR 187 002, (TISPAN); TISPAN NGN Security (NGN_SEC); Threat, Vulnerability and Risk Analysis, V3.0.2, 2010.
[3] D. Wang and C. Liu, "Model-based vulnerability analysis of IMS network," Journal of Networks, vol. 4, no. 4, 2009.
[4] ETSI TR 187 014, (TISPAN); eSecurity; User Guide to eTVRA web-database, 2009.
[5] ETSI TR 187 011, (TISPAN); NGN Security; Application of ISO-15408-2 requirements to ETSI standards - guide, method and application with examples, 2008.
[6] 3GPP TS 23.228, 3GPP; Technical Specification Group Services and System Aspects; IP Multimedia Subsystem (IMS); Stage 2 (Release 10).
[7] 3GPP TS 33.203 V10.0.0 (2010-06), 3GPP; Technical Specification Group Services and System Aspects; 3G Security; Access security for IP-based services (Release 10), 2010.
[8] ITU-T, "Security architecture for systems providing end-to-end communications," ITU-T X.805, 2003.
[9] S. Ehlert, D. Geneiatakis and T. Magedanz, "Survey of network security systems to counter SIP-based denial-of-service attacks," Elsevier, 2009.
[10] Y. Rebahi, M. Sher and T. Magedanz, "Detecting flooding attacks against IP Multimedia Subsystem (IMS) networks," IEEE Conference.
[11] D. Sisalem, J. Floroiu, J. Kuthan, U. Abend and H. Schulzrinne, SIP Security, John Wiley, 2009.


Proceeding Number: 100/13

Predicting the Performance Measures of a Message Passing Multiprocessor Architecture by Using Artificial Neural Networks

M. Fatih AKAY, Cukurova University, Dept. of Computer Engineering, Adana, Turkey, [email protected]
Elrasheed I. M. ZAYID, Cukurova University, Dept. of Electrical and Electronics Engineering, Adana, Turkey, [email protected]

Keywords :

Multiprocessor architectures, neural networks

INTRODUCTION

The demand for ever more computing power has never stopped. A number of important problems have been identified in the areas of defense, aerospace, automotive applications and science whose solutions require a tremendous amount of computational power [1]. Parallel computers with multiple processors are opening the door to teraflops of computing performance to meet this increasing demand [2]. Message Passing (MP) is a popular programming model supported by parallel computers [3]. The performance analysis of a multiprocessor architecture employing the MP protocol is an important factor in designing such architectures. Statistical simulation is a powerful tool for evaluating the performance measures of a multiprocessor architecture; in such a simulation, the architecture is modeled by using specific probability distributions [4, 5]. The simulation can be time consuming, especially when the architecture to be modeled has many parameters that have to be tested with different values or probability distributions. The purpose of this paper is to develop artificial neural network (ANN) models to predict the performance measures of a multiprocessor architecture employing the message passing protocol. The architecture used is the Simultaneous Optical Multiprocessor Exchange Bus (SOME-Bus) [6, 7], a high-performance optical multiprocessor architecture.

LITERATURE REVIEW

There is only a single study in the literature showing that artificial intelligence techniques can be used to predict the performance measures of a multiprocessor architecture [4]. In that study, a broadcast-based multiprocessor


architecture called the SOME-Bus employing the distributed shared memory programming model was considered. The statistical simulation of the architecture was carried out to generate the dataset. The dataset contained the following variables: ratio of the mean message channel transfer time to the mean thread run time, probability that a block can be found in modified state, probability that a data message is due to a write miss, probability that a cache is full, and probability of having an upgrade ownership request. Support vector regression was used to build prediction models for estimating the average network response time (i.e. the time interval between the instant when a cache miss causes a message to be enqueued in the output channel and the instant when the corresponding data or acknowledge message arrives at the input queue), the average channel waiting time (i.e. the time interval between the instant when a packet is enqueued in the output channel and the instant when the packet goes under service) and the average processor utilization (i.e. the average fraction of time that threads are executing). It was concluded that the support vector regression model is a promising tool for obtaining the performance measures of a distributed shared memory multiprocessor.

METHODS

In this paper, OPNET Modeler [8] is used to simulate the SOME-Bus architecture employing the message passing protocol. Each node contains a processor station, in which the incoming messages are stored and processed, and a channel station, in which the outgoing messages are stored before they are transferred onto the network. The important parameters of the simulation are the number of nodes (selected as 16, 32 and 64), the number of threads executed by each processor (ranging from 1 to 6), the ratio of the mean thread run time to the mean channel transfer time (T/R, ranging from 0.05 to 1), the thread run time (exponentially distributed with a mean value of 100) and the pattern of destination node selection (uniform and hot region). The dataset obtained as a result of the simulation contains four input and five output variables. The input variables are T/R, node number, thread number and traffic pattern. The output variables are average processor utilization, average network response time, average channel waiting time, average input waiting time and average channel utilization. A multilayer feed-forward ANN [9] is used for developing the prediction models. The Levenberg-Marquardt backpropagation algorithm is used for training the networks. A tan-sigmoid transfer function is used in the hidden layer and a pure linear transfer function is used in the output layer.

FINDINGS & CONCLUSION

The following five error metrics are used for evaluating the accuracy of the prediction models: mean absolute error (MAE), root mean squared error (RMSE), relative absolute error (RAE), root relative squared error (RRSE) and correlation coefficient (r) [10]. Table 1 shows the performance of the prediction models. As is clearly seen from Table 1, except for the average network response time, ANN does a good job of predicting the performance measures of a multiprocessor architecture employing the message passing protocol.


Table 1. Summary of regression models to estimate MP architecture performance.

  Performance Measures           R     RMSE   MAE    RAE   RRSE
  Average channel waiting time   0.99  1.63   1.04   0.20  0.05
  Average channel utilization    0.99  0.01   0.00   0.16  0.03
  Average network response time  0.99  14.63  10.98  0.26  0.08
  Average processor utilization  0.99  0.09   0.05   0.24  0.06
  Average input waiting time     0.99  8.26   5.80   0.27  0.09
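The metrics reported in Table 1 have standard definitions; the sketch below computes all five for a vector of predictions, following the usual formulations (e.g. in Witten and Frank [10]), where the relative metrics compare the model against always predicting the mean of the measured values.

```python
import math

def _mean(xs):
    return sum(xs) / len(xs)

def regression_metrics(y_true, y_pred):
    """Return MAE, RMSE, RAE, RRSE and r for a set of predictions."""
    n = len(y_true)
    errors = [p - t for t, p in zip(y_true, y_pred)]
    mt = _mean(y_true)
    mae = sum(abs(e) for e in errors) / n
    rmse = math.sqrt(sum(e * e for e in errors) / n)
    # Relative errors: model error divided by the error of the mean predictor.
    rae = sum(abs(e) for e in errors) / sum(abs(t - mt) for t in y_true)
    rrse = math.sqrt(sum(e * e for e in errors) /
                     sum((t - mt) ** 2 for t in y_true))
    # Pearson correlation between measured and predicted values.
    mp = _mean(y_pred)
    cov = sum((t - mt) * (p - mp) for t, p in zip(y_true, y_pred))
    r = cov / math.sqrt(sum((t - mt) ** 2 for t in y_true) *
                        sum((p - mp) ** 2 for p in y_pred))
    return {"MAE": mae, "RMSE": rmse, "RAE": rae, "RRSE": rrse, "r": r}
```

A perfect predictor gives MAE = RMSE = RAE = RRSE = 0 and r = 1; RAE and RRSE above 1 would mean the model is worse than simply predicting the mean.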

REFERENCES

1) J. Duato, S. Yalamanchili and L. Ni, Interconnection Networks: An Engineering Approach, Morgan Kaufmann, 2003.
2) D. E. Culler, J. P. Singh and A. Gupta, Parallel Computer Architecture: A Hardware/Software Approach, Morgan Kaufmann Publishers, 1999.
3) F. Chan, J. Cao and Y. Sun, "High-level abstractions for message-passing parallel programming," Parallel Computing, pp. 1589-1621, 2003.
4) M. F. Akay and İ. Abasıkeleş, "Predicting the performance measures of an optical distributed shared memory multiprocessor by using support vector regression," Expert Systems with Applications, vol. 37, pp. 6293-6301, 2010.
5) T. F. Wenisch et al., "Statistical sampling of microarchitecture simulation," in Proceedings of the 20th Parallel and Distributed Processing Symposium, 2006, pp. 327-331.
6) C. Katsinis, "Performance analysis of the simultaneous optical multi-processor exchange bus," Parallel Computing, vol. 27, pp. 1079-1115, 2001.
7) C. Katsinis and D. Hecht, "Fault-tolerant DSM on the SOME-Bus multiprocessor architecture with message combining," in Proceedings of the 18th International Parallel and Distributed Processing Symposium (IPDPS'04), 2004.
8) OPNET Modeler, OPNET University Program, http://www.opnet.com/university_program.
9) E. Alpaydın, Introduction to Machine Learning, 2nd ed., London, England: MIT Press, 2010.
10) I. H. Witten and E. Frank, Data Mining: Practical Machine Learning Tools and Techniques, Morgan Kaufmann, 2005.


Proceeding Number: 100/14

Symbolic Computation of Perturbation-Iteration Solutions for Differential Equations

Ünal GÖKTAŞ, Turgut Özal University, Department of Computer Engineering, Ankara, Turkey, [email protected]

Keywords :

Perturbation, iteration, differential equation, Mathematica, symbolic computation

INTRODUCTION

The new perturbation-iteration algorithms (PIAs) in [1] for solving differential equations involve lengthy algebraic computations which are hard to do by hand. Computer algebra systems can be very useful for carrying out these computations. The PIAs in [1] have been implemented in Mathematica, a leading computer algebra system: the Mathematica package PerturbationIteration.m automatically carries out all the steps of the algorithms. A number of examples, including Bratu-type equations, will be demonstrated using the package PerturbationIteration.m.

LITERATURE REVIEW

While studying nonlinear mathematical models in physics and engineering, we often have to deal with a differential equation, a difference equation, or an integro-differential equation. Perturbation methods are widely used to solve these equations approximately [2]. Due to the small-parameter restriction of these methods, the solutions are valid only for weakly nonlinear systems.

The modified and multiple-scale Lindstedt-Poincaré methods [3, 4], the linearized perturbation method [5] and the homotopy perturbation method [6] are some of the methods which can be used for strongly nonlinear systems.

Iteration-perturbation methods [7-9], where the nonlinear terms are linearized using the results from the previous iteration, can also be used for strongly nonlinear systems. However, usually, in iteration-perturbation methods,


one has to use non-standard pre-transformations and transform the equations into a new form before applying the iteration procedure. Some of the algorithms can only work with specific assumptions.

As an extension of the PIAs for algebraic equations [10-12], the PIAs in [1] are applicable to a wide range of differential equations, and they do not require special transformations or initial assumptions.

METHODS

Three PIAs, namely PIA(1, 1), PIA(1, 2) and PIA(1, 3), are developed in [1] by taking one correction term in the perturbation expansion and, respectively, terms up to the first, second and third derivatives in the Taylor series expansion.
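For a single second-order equation F(y'', y', y, ε) = 0, the simplest algorithm, PIA(1, 1), can be sketched as follows (the notation here is illustrative and follows the construction in [1]): writing y_{n+1} = y_n + ε (y_c)_n and expanding F in a first-order Taylor series about ε = 0 yields a linear differential equation for the correction term:

```latex
% PIA(1,1) sketch: one correction term, first-order Taylor expansion.
% All partial derivatives are evaluated at (y_n'', y_n', y_n, 0).
F + F_{y''}\,\epsilon\,(y_c)_n''
  + F_{y'}\,\epsilon\,(y_c)_n'
  + F_{y}\,\epsilon\,(y_c)_n
  + F_{\epsilon}\,\epsilon = 0 .
```

Solving this linear equation for (y_c)_n at each iteration is the lengthy algebra that the Mathematica package automates.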

FINDINGS & CONCLUSION

The three PIAs described in [1] are all implemented in the package PerturbationIteration.m. The package is demonstrated on a number of differential equations including the Bratu type equations.

Generalization towards algorithms PIA(n, m) (n: number of correction terms in the perturbation expansion; m: order of derivatives in the Taylor series expansions; n ≤ m) is left for future work.

REFERENCES

[1] Y. Aksoy and M. Pakdemirli (2010). New perturbation-iteration solutions for Bratu-type equations. Computers and Mathematics with Applications, 59, 2802-2808.
[2] A. H. Nayfeh (1981). Introduction to Perturbation Techniques. John Wiley and Sons, New York.
[3] H. Hu (2004). A classical perturbation technique which is valid for large parameters. Journal of Sound and Vibration, 269, 409-412.
[4] M. Pakdemirli, M. M. F. Karahan and H. Boyacı (2009). A new perturbation algorithm with better convergence properties: Multiple Scales Lindstedt-Poincaré method. Mathematical and Computational Applications, 14, 31-44.
[5] J. H. He (2003). Linearized perturbation technique and its applications to strongly nonlinear oscillators. Computers and Mathematics with Applications, 45, 1-8.
[6] J. H. He (2003). Homotopy perturbation method: A new nonlinear analytical technique. Applied Mathematics and Computation, 135, 73-79.
[7] J. H. He (2006). Some asymptotic methods for strongly nonlinear equations. International Journal of Modern Physics B, 20, 1141-1199.
[8] J. H. He (2001). Iteration perturbation method for strongly nonlinear oscillations. Journal of Vibration and Control, 7, 631-642.


[9] T. Öziş and A. Yıldırım (2009). Generating the periodic solutions for forcing van der Pol oscillators by the iteration perturbation method. Nonlinear Analysis: Real World Applications, 10, 1984-1989.
[10] M. Pakdemirli and H. Boyacı (2007). Generation of root finding algorithms via perturbation theory and some formulas. Applied Mathematics and Computation, 184, 783-788.
[11] M. Pakdemirli, H. Boyacı and H. A. Yurtsever (2007). Perturbative derivation and comparisons of root-finding algorithms with fourth order derivatives. Mathematical and Computational Applications, 12, 117-124.
[12] M. Pakdemirli, H. Boyacı and H. A. Yurtsever (2008). A root finding algorithm with fifth order derivatives. Mathematical and Computational Applications, 13, 123-128.


Proceeding Number: 100/15

A Lightweight Parser for Extracting Useful Contents from Web Pages

Erdinc UZUN, Namik Kemal University, Computer Engineering Department, Corlu/Tekirdag, Turkey, [email protected]
Tarik YERLIKAYA, Trakya University, Computer Engineering Department, Edirne, Turkey, [email protected]
Meltem KURT, Trakya University, Computer Engineering Department, Edirne, Turkey, [email protected]

Keywords :

Parsers, DOM, Web Content Extraction, Time and Memory Consumption

INTRODUCTION

In many web content extraction applications, parsing is a crucial step for obtaining the necessary information from web pages. Web pages have a hierarchy of informational units called nodes, and the DOM (Document Object Model) is a way of describing those nodes and the relationships between them. A DOM parser is therefore the preferred approach in web content extraction [1], [2], [3], [4]. However, the major problems with using a DOM parser are time and memory consumption, which make it an inefficient solution for applications that need web content extraction. In the Search Engine for Turkish (SET - http://bilgmuh.nku.edu.tr/SET/) project, we encountered this inefficiency while developing an intelligent crawler that automatically extracts relevant content. Therefore, we developed a lightweight parser (namely the SET Parser) which utilizes regular expressions and string functions for extracting content between tags, reducing both kinds of consumption. In this study, we describe this parser and examine the improvements in time and memory. This examination indicates that the SET Parser is more useful for extracting contents from web pages.

LITERATURE REVIEW

Web pages often contain irrelevant content such as pop-up ads, advertisements, banners, unnecessary images and extraneous links around the body of an article, which distracts users from the actual content. There are many applications for the extraction of useful and relevant content from web pages, including text summarization, cell phone and PDA browsing, and speech rendering for the visually impaired [5], [6]. For extraction of relevant content, unnecessary tags and contents in web pages should be removed. In general, conventional techniques for extracting the content of a text require knowledge of the structure of the text [7]. Web pages are semi-structured documents, mostly written in HTML, which defines tags. These tags can be structured with a DOM parser.

June 1 -4, Kusadasi, Aydin, Turkey http://iscse2011.gediz.edu.tr


2nd International Symposium on Computing in Science & Engineering

The DOM is a W3C (World Wide Web Consortium) standard that defines how to access and parse documents. In DOM, everything is an object consisting of element names and their contents. A DOM parser presents a document as a tree structure in memory. However, the size of the DOM tree created from a web document is larger than the size of the original document [8], so memory and time consumption increase accordingly. For exceptionally large documents, parsing and loading the entire document can be slow and resource intensive. It is not an efficient solution for content extraction, because a DOM parser takes all tags into consideration. Cai et al. [9] proposed that only some tags (table, tbody, tr, td, p, ul and li) are very important and commonly used in web pages. Yerlikaya and Uzun [10] use the layout HTML tags div, table and ul for obtaining relevant contents. To parse these specific tags, regular expressions can be used instead of DOM. A regular expression is a pattern language for editing texts or obtaining sub-texts based on a set of defined rules, and regular expressions can be utilized to obtain information such as links and images from web pages. However, regular expressions break down on nested structures of unknown depth, where one layout element is stored within another. Therefore, we wrote an algorithm for obtaining inner contents from layout tags. In this algorithm, for example, to extract a div class="content" tag, a start tag "div" and an end tag "/div" are created as appropriate for the tag. The start and end positions are found using these tags; the "indexof" string function can be used for this task, and "substring" can be utilized to extract a proposed content between these positions. Afterwards, the counts of start and end tags are compared to decide whether the proposed content is correct. If the counts of start and end tags are equal, the content is added to the result array. While implementing this algorithm, it became clear that this final check increases the response time for contents that have many inner tags. Therefore, we examine the effect of the count of inner tags in the experiments.
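A minimal Python sketch of this balancing check, assuming an illustrative div extraction task (this is not the actual SET Parser code, which applies the same idea with "indexof"/"substring"-style string functions):

```python
def extract_blocks(html, tag="div"):
    """Collect every <tag ...>...</tag> span, widening each candidate span
    until its start-tag and end-tag counts balance (the nesting check)."""
    start_tag, end_tag = "<" + tag, "</" + tag + ">"
    results = []
    pos = html.find(start_tag)              # position of a candidate start tag
    while pos != -1:
        end = html.find(end_tag, pos)
        while end != -1:
            candidate = html[pos:end + len(end_tag)]
            # the proposed content is correct when the tag counts are equal
            if candidate.count(start_tag) == candidate.count(end_tag):
                results.append(candidate)
                break
            end = html.find(end_tag, end + 1)   # widen past the next end tag
        pos = html.find(start_tag, pos + 1)
    return results
```

On `<div class="a"><div class="b">x</div></div>`, the first candidate span ends at the inner `</div>` and fails the count check, so the span is widened to the outer `</div>` before being accepted.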

METHODS

In the experiments, a test collection of 2000 web pages was obtained from Milliyet (www.milliyet.com.tr) and Sabah (www.sabah.com.tr), with 1000 pages crawled for each newspaper. The total size of the collection is around 160.47 MB in UTF-8 format; a web document in it averages approximately 82.16 KB. The collection contains tags used to design the pages and content presented between the tags. A crude observation of the collections reveals that the Sabah collection contains more unnecessary tags and contents than Milliyet, considering that the sizes of the actual contents are approximately similar. In the experiments, we also take into account the effect of file size and the number of tags used. While structuring a web document, a DOM object takes all tags into consideration. In SET Parser, by contrast, two functions based on regular expressions were coded for extracting links and images. Unfortunately, regular expressions do not give accurate results on nested tag structures, so we added a function to SET Parser for obtaining inner contents from tags.
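For illustration, link and image extraction with regular expressions might look like the following sketch (the patterns here are naive, assume double-quoted attributes, and are not the SET Parser's actual expressions):

```python
import re

# illustrative patterns: href of <a> tags and src of <img> tags
LINK_RE = re.compile(r'<a\s[^>]*href="([^"]+)"', re.IGNORECASE)
IMAGE_RE = re.compile(r'<img\s[^>]*src="([^"]+)"', re.IGNORECASE)

def extract_links_and_images(html):
    """Return (link URLs, image URLs) found in an HTML string."""
    return LINK_RE.findall(html), IMAGE_RE.findall(html)
```

Patterns of this kind are fast because they scan the text once, without building any tree in memory.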


FINDINGS & CONCLUSION

Four analyses were performed to compare the DOM and SET parsers. In the first, we compare the parsing time for links and images. The second examines the parsing time for layout tags with nested structure. In the third, the impact of web page size on parsing time is examined. The last analysis investigates the memory consumption of the two parsers. These analyses indicate that the DOM parser needs considerably longer to parse than SET Parser. For example, the average count of inner tags in div class="pad10lrb_print_hidden" style="" of Sabah is approximately 3.00 and the parsing time is 5.54 milliseconds, whereas the average count of inner tags in div id="container_body" of Sabah goes up to 424.24 and the parsing time increases to 228.02 milliseconds. Even so, SET Parser is the more suitable solution for obtaining contents between tags: its average parsing time is 254.4 and 18.7 times shorter than that of the DOM parser for Milliyet and Sabah, respectively. The average size of a DOM tree created from these web pages is 9.58 times the average size of the pages themselves, whereas SET Parser improves memory usage by 87.00%. These analyses show that SET Parser is a more suitable solution for extracting contents than DOM; that is, a lightweight parser like the one in this study can be utilized for this task.

REFERENCES

1. Álvarez-Sabucedo L. M., Anido-Rifón L. E., Santos J. M. (2009). Reusing web contents: a DOM approach. Software: Practice and Experience, 39(3): 299-314.
2. Kaasinen E., Aaltonen M., Kolari J., Melakoski S., Laakko T. (2000). Two Approaches to Bringing Internet Services to WAP Devices. WWW9; 231-246.
3. Wong W. and Fu A. W. (2000). Finding Structure and Characteristics of Web Documents for Classification. In ACM SIGMOD 2000; Dallas, TX, USA.
4. Zheng Y., Cheng X., Chen K. (2008). Filtering noise in Web pages based on parsing tree. The Journal of China Universities of Posts and Telecommunications; 15, 46-50.
5. Buyukkokten O., Garcia-Molina H., Paepcke A. (2001). Accordion Summarization for End-Game Browsing on PDAs and Cellular Phones. In Proc. of Conf. on Human Factors in Computing Systems (CHI'01).
6. Buyukkokten O., Garcia-Molina H., Paepcke A. (2001). Seeing the Whole in Parts: Text Summarization for Web Browsing on Handheld Devices. In Proc. of 10th Int. World-Wide Web Conf.
7. Gupta S., Kaiser G. E., Grimm P., Chiang M. F., Starren J. (2005). Automating Content Extraction of HTML Documents. World Wide Web: Internet and Web Information Systems; 8, 179-224.
8. Wang F., Li J., Homayounfar H. (2007). A space efficient XML DOM parser. Data Knowl. Eng.; 185-207.
9. Cai D., Yu S., Wen J. R., Ma W. Y. (2003). Extracting content structure for web pages based on visual representation. APWeb'03: Proceedings of the 5th Asia-Pacific Web Conference on Web Technologies and Applications.
10. Yerlikaya T. and Uzun E. (2010). İnternet Sayfalarında Asıl İçeriği Gösterebilen Akıllı Bir Tarayıcı [A Smart Browser That Can Display the Main Content of Web Pages]. ASYU 2010; 21-24 June, Kayseri & Kapadokya, ISBN: 978-975-6478-60-8, 53-57.


Proceeding Number: 100/16

Applying Incremental Landmark Isomap Algorithm to Improving Detection Rate in Intrusion Detection System
Seyed Mehdi Iranmanesh, Iran University of Science and Technology, School of Computer Engineering, Tehran, Iran, [email protected]
Ahmad Akbari, Iran University of Science and Technology, School of Computer Engineering, Tehran, Iran, [email protected]

Keywords :

Intrusion Detection System, Manifold Learning, Landmark, Incremental Landmark Isomap, Dimension Reduction.

INTRODUCTION

In recent years, intrusion detection has emerged as an important technique for network security. Most intrusion detection systems use primary, raw features extracted from network connections without any preprocessing. Dimensionality reduction to transform features is crucial when data mining techniques are applied to intrusion detection, and many data mining algorithms have been used for this purpose. Manifold learning is an emerging and promising approach to nonlinear dimension reduction, and isometric feature mapping (Isomap) is one of the most prominent manifold learning methods investigated in this domain. Researchers have successfully applied Isomap to intrusion detection systems as a nonlinear dimension reduction method. However, manifold learning algorithms have some problems, such as operating in batch mode, so they cannot be applied efficiently to a data stream. Incremental Landmark Isomap, an incremental version of classical Landmark Isomap, can handle the problem of new data points. This method is applied to the NSL-KDD dataset and to some UCI datasets. The results demonstrate a higher detection rate for this method compared to classical Landmark Isomap.

LITERATURE REVIEW

With the rapid development of communication, real distance no longer restricts people from contacting each other. This helps people in many areas, such as business, entertainment, and education. In particular, the Internet has become an important component of business models [1]. Besides the improvement and


efficiency of the network, unauthorized activities by external attackers or internal sources have increased dramatically. The intrusion detection system, introduced by Anderson [2], is therefore an important research topic today. The aim of an intrusion detection system is to separate benign users from intruders and attackers with high accuracy. One of the important factors in achieving this aim is the input data and the features extracted from it: the more information the input features carry about benign and intruding users, the better the intrusion detection system can separate different types of users, and the more the security of the protected system/network is improved. From this point of view, preprocessing of the raw input features plays an important role in intrusion detection systems. Feature preprocessing methods are divided into two main groups, feature selection and feature transformation; we investigate feature transformation in this study. Typical datasets for intrusion detection are generally very large and multidimensional. With the growth of high-speed networks and distributed, network-based, data-intensive applications, storing, processing, transmitting, visualizing and understanding the data is becoming more complex and expensive. To tackle the problem of high-dimensional datasets, researchers have developed methods such as PCA [3] and MDS [4]. These methods fail to act properly on real-world datasets, because such datasets are non-linear. Non-linear manifold learning algorithms such as LLE [5] and Isomap [6] have been used for nonlinear dimension reduction; using non-linear instead of linear dimension reduction to map manifolds to their intuitive dimensions is more rational. Non-linear manifold learning methods fall into two main categories: methods that try to maintain the global structure of the dataset, and methods that maintain local structures.
For instance, Isomap is based on maintaining the global geometry of the dataset. Landmark Isomap [7] is a variation of Isomap which preserves all of its attractive attributes but is more efficient; it selects some data points as landmarks to construct the map, and the landmark points are chosen randomly. An important fact in the data mining domain is that sometimes information must be collected sequentially through a data flow. Manifold learning algorithms operate in "batch" mode, meaning that all data should be available during training, so they cannot be applied to a data flow. Incremental manifold learning was invented for this purpose: Law et al. [8] proposed incremental Isomap and incremental L-Isomap, which can handle unseen data and can be applied to a data stream. In this paper, we use the incremental version of L-Isomap. This method can handle the problem of data streams and, because of its landmark component, it also removes the bottlenecks of classical Isomap.

METHODS

Isomap is a generalization of MDS [4]. The algorithm contains three steps:
1. Construct a graph that represents the manifold of the data points in the high-dimensional space. This graph is built on the input data according to rules that reflect the structure in the neighborhood of the data points.
2. Compute the pairwise geodesic distance matrix D with the Floyd or Dijkstra algorithm.
3. Apply MDS to D.
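Steps 1 and 2 can be sketched as follows; this toy Python illustration (not the paper's implementation) builds a symmetric k-nearest-neighbour graph and runs Dijkstra from one source to obtain one row of the geodesic distance matrix D:

```python
import heapq
import math

def knn_graph(points, k):
    """Step 1: connect each point to its k nearest neighbours (Euclidean)."""
    n = len(points)
    graph = {i: [] for i in range(n)}
    for i in range(n):
        nearest = sorted((math.dist(points[i], points[j]), j)
                         for j in range(n) if j != i)[:k]
        for d, j in nearest:
            graph[i].append((j, d))
            graph[j].append((i, d))      # keep the graph symmetric
    return graph

def geodesic_row(graph, src):
    """Step 2: Dijkstra from src; shortest-path lengths along the graph
    approximate geodesic distances along the manifold."""
    dist = {src: 0.0}
    heap = [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, math.inf):
            continue                     # stale heap entry
        for v, w in graph[u]:
            nd = d + w
            if nd < dist.get(v, math.inf):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist
```

Running `geodesic_row` from every vertex (or, in Landmark Isomap, only from the landmark vertices) fills in D, after which MDS is applied.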


Landmark Isomap is a variation of Isomap which preserves all of its attractive attributes but is more efficient. It removes three bottlenecks of the classical Isomap algorithm: storage, calculation of the distance matrix, and the eigenvalue problem. Suppose that data points x_1, ..., x_n exist and that the batch Isomap algorithm has calculated y_1, ..., y_n, the embedded forms of the original data points in the low-dimensional space. Then a new data point x_{n+1} arrives. The goal of incremental Isomap [8] is to update the transformed data points so as to best preserve the updated geodesic distances. This is done in three stages:
1. Updating the geodesic distances for the original n vertices.
2. Updating the embedded data points with respect to the new geodesic distance matrix.
3. Calculating the transformed instance y_{n+1} of the new data point x_{n+1}.

FINDINGS & CONCLUSION

We applied incremental L-Isomap on the NSL-KDD dataset and on some UCI datasets to transform data points from a high-dimensional space to a low-dimensional one. NSL-KDD is a dataset suggested to solve the inherent problems of the KDD'99 dataset mentioned in [9, 10]. This new version of the KDD dataset may not be a perfect representative of existing real networks; however, given the lack of public datasets for network-based IDS, we believe it can still be considered a suitable benchmark. As mentioned before, using an incremental manifold learning method instead of a classical one handles the problem of data streams. We reduced the number of features for each data point by half; the number of neighbors used to construct the manifold was determined per dataset. After feature reduction with incremental L-Isomap, we classified the data points with a decision tree classifier. The results show higher accuracy for incremental L-Isomap compared with classical L-Isomap. The original datasets, without any manifold learning, were also classified; incremental L-Isomap improves classification accuracy, which shows that some features are irrelevant or noisy and incremental L-Isomap can omit them. The use of incremental L-Isomap improves the overall performance of an IDS for three main reasons: first, it can handle new data points; second, it reduces the dimension, thereby lowering the computational effort; third, it reduces noise in the data, from which we can expect better separation of normal and abnormal data.

REFERENCES

1. Shon T., Moon J. (2007). "A hybrid machine learning approach to network anomaly detection", Information Sciences, p. 3799-3821.
2. Anderson J. P. (1980). "Computer Security Threat Monitoring and Surveillance", James P. Anderson Co., Fort Washington, PA, Tech. Rep. 79F296400, Apr. 1980, p. 1-56.
3. Jolliffe I. T. (1986). "Principal Component Analysis", Springer-Verlag, New York.
4. Cox T. F. and Cox M. A. A. (1994). "Multidimensional Scaling", Chapman & Hall, London.
5. Roweis S. T., Saul L. K. (2000). "Nonlinear Dimensionality Reduction by Locally Linear Embedding", Science, vol. 290, December 2000, p. 2323-2326.
6. Tenenbaum J. B., de Silva V., and Langford J. C. (2000). "A Global Geometric Framework for Nonlinear Dimensionality Reduction", Science, vol. 290, p. 2319-2323.
7. De Silva V. and Tenenbaum J. B. (2003). "Global Versus Local Methods in Nonlinear Dimensionality Reduction", In Advances in Neural Information Processing Systems 15, MIT Press, p. 705-712.
8. Law M., Zhang N., Jain A. (2004). "Nonlinear manifold learning for data stream", In Berry M., Dayal U., Kamath C., Skillicorn D. (eds.): Proc. of the 4th SIAM International Conference on Data Mining, Lake Buena Vista, Florida, USA, p. 33-44.
9. Tavallaee M., Bagheri E., Lu W., Ghorbani A. A. (2009). "A Detailed Analysis of the KDD CUP 99 Data Set", In Proceedings of Computational Intelligence in Security and Defense Applications (CISDA 2009).
10. McHugh J. (2000). "Testing intrusion detection systems: a critique of the 1998 and 1999 DARPA intrusion detection system evaluations as performed by Lincoln Laboratory", ACM Transactions on Information and System Security, vol. 3, p. 262-294.


Proceeding Number: 100/18

Prime Numbers for Secure Cryptosystems and Primality Testing on Multi-Core Architectures
Mustafa ALİOĞLU, Izmir Institute of Technology, Department of Computer Engineering, Izmir, Turkey, [email protected]
Uğur GÖĞEBAKAN, Izmir Institute of Technology, Department of Computer Engineering, Izmir, Turkey, [email protected]
Asst. Prof. Serap ŞAHİN, Ph.D., Izmir Institute of Technology, Department of Computer Engineering, Izmir, Turkey, [email protected]

Keywords :

Primality testing, multicore architectures, parallel and distributed algorithms

INTRODUCTION & LITERATURE REVIEW

Cryptography (or cryptology; from Greek kryptos, "hidden, secret", and graphein, "writing", or -logia, "study") is the practice and study of hiding information. Modern cryptography intersects the disciplines of mathematics, computer science, and electrical engineering. Applications of cryptography include the security of ATM cards, computer passwords, and electronic commerce. In cryptology, prime numbers are used to obtain security: cryptosystems use finite group structures to establish one of the domain parameters, which is built upon a prime number. For instance, two prime numbers are used to create a key in the RSA cryptosystem, and a prime number is used as the base of the Diffie-Hellman key exchange algorithm. Crypto algorithms achieve higher security levels by using larger prime numbers; producing and using a big prime ensures that breaking the system becomes harder. Therefore the use of big, strong primes and their primality testing is an important study area for cryptosystems. Primality testing requires processor power, time and memory, and traditionally it was done on high-performance computers. Nevertheless, contemporary desktop PCs and many types of mobile equipment have processor speeds and memory capacities equivalent to, or even surpassing, those of these old systems. Therefore, these old problems can now be solved by the new computer architectures collectively and generically named "multicore architectures". This provides a new opportunity to evaluate primality testing methods and their hybrid modes in those new environments. The purpose of this project is to perform primality testing for big numbers on these new computer architectures through parallelization, with the main goal of finding the lowest time complexity.


In the study, multi-precision numbers are one of our targets. Due to the architecture of contemporary computers, it is already hard to work with numbers bigger than 64 bits, and this is where multi-precision libraries come into use. Although they are easy to use, they require a lot of CPU time and hence increase running time. Therefore, multi-threaded algorithms should be used for parallelization. Another aim of this project is to find ways to calculate big prime numbers with lower time complexity.

METHODS

In the article, the deterministic Lucas-Lehmer-Riesel test, the probabilistic Chinese Hypothesis and Miller-Rabin primality testing methods, and their hybrid implementations will be presented with their execution speeds for multi-precision numbers on multi-core processor architectures. The Chinese Hypothesis test was selected because it is faster than other non-deterministic algorithms. The Miller-Rabin test was selected because (i) it is a strictly strong test in the sense that it works for every composite number, (ii) it is a strong pseudoprime test, (iii) it is a further development of the Solovay-Strassen method, and (iv) it does not need to calculate the Jacobi symbol. Since the fastest known deterministic algorithm for numbers of the form N = k·2^n − 1 is the Lucas-Lehmer-Riesel test, it was also selected to fulfill the aforementioned purposes. Additionally, another algorithm using elliptic curve approaches will be compared with a hybrid code. Finally, by comparing the execution times of all the algorithms above, the one most suitable for parallelism will be selected and implemented on distributed-memory architectures with MPI.
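As a concrete illustration of one of the probabilistic methods named above, a minimal single-threaded Miller-Rabin sketch in Python follows (the study's actual multi-precision, multi-core implementation is not shown here):

```python
import random

def is_probable_prime(n, rounds=20):
    """Miller-Rabin probabilistic primality test. A composite n survives one
    random round with probability at most 1/4, so `rounds` trials make a
    false 'prime' answer extremely unlikely."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13):       # trial division by small primes
        if n % p == 0:
            return n == p
    d, s = n - 1, 0
    while d % 2 == 0:                    # write n - 1 = d * 2^s with d odd
        d //= 2
        s += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False                 # a witnesses that n is composite
    return True                          # probably prime
```

The independent rounds (or independent candidate numbers) are naturally parallelizable, which is part of what makes tests of this kind attractive on multi-core and MPI environments.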

FINDINGS & CONCLUSION

The main purpose of this project is to decrease the time needed to find a big prime to an acceptable interval. By doing so, creating instant keys, for example for use in RSA systems, becomes much easier, and cryptosystems therefore become more secure. If the time spent to create a key, that is, the cost of creating a key, becomes lower, the lifetime of a key can be made shorter: when and if a key is cracked, thanks to this system it would already have been changed. The article will present the results of this study.

REFERENCES

[1] R. Crandall, C. Pomerance, "Prime Numbers: A Computational Perspective", 2001.
[2] D. Bressoud, S. Wagon, "A Course in Computational Number Theory", 2000.
[3] S. S. Wagstaff, Jr., "Cryptanalysis of Number Theoretic Ciphers", 2003.
[4] http://www.madsci.org/posts/archives/1998-05/893442660.Cs.r.html
[5] http://www-groups.dcs.st-and.ac.uk/~history/HistTopics/Prime_numbers.html
[6] http://www.factmonster.com/math/numbers/prime.html
[7] http://www.math.unipd.it/~languasc/lavoripdf/R8eng.pdf
[8] http://mathworld.wolfram.com/ChineseHypothesis.html
[9] http://en.wikipedia.org/wiki/Chinese_hypothesis
[10] http://msdn.microsoft.com/tr-tr/library/dd831853.aspx
[11] http://en.wikipedia.org/wiki/Microsoft_Visual_Studio
[12] http://openmp.org/wp/about-openmp/
[13] http://en.wikipedia.org/wiki/OpenMP
[14] http://en.wikipedia.org/wiki/Arbitrary-precision_arithmetic
[15] http://www.di-mgt.com.au/bigdigits.html
[16] http://gmplib.org/#WHAT
[17] https://www.cosic.esat.kuleuven.be/nessie/call/mplibs.html
[18] T. Rauber, G. Rünger, "Parallel Programming for Multicore and Cluster Systems", 2010.


Proceeding Number: 100/19

Rule Base Representation in XML
Didem ÖKTEM, Ege University, Tire Kutsan Vocational School, İzmir, Turkey, [email protected]
Şen ÇAKIR, Dokuz Eylül University, Department of Computer Engineering, İzmir, Turkey, [email protected]

Keywords :

Genetic Algorithms, Rule Base, XML

INTRODUCTION

Expert Systems (ES) are intelligent computer programs which use knowledge and inference procedures to solve problems in the way a human expert would [1]. These systems are composed of three main components mutually attached to each other: the "user interface" (UI), the "inference engine" and the "rule base", which usually takes the form of a set of IF-THEN rules [2]. The inference engine uses the IF-THEN statements of the rule base to make decisions. Some ES shells prescribe a certain rule base syntax, but when an ES is prepared from scratch the software developer is free to choose how to represent rule base statements. Correspondingly, an independent rule base representation can be used not only in an ES but also in other problem-solving methods such as Genetic Algorithms (GA). GAs were introduced in the 1970s by John Holland [3]; they are considered stochastic search algorithms in which probable solutions to a problem are represented as chromosome-like strings. In this study, Extensible Markup Language (XML) is used as the rule base representation technique in software that finds optimum solutions to the curriculum planning problem of in-service corporate education by using genetic algorithm and expert system mechanisms together. The education is composed of different "modules", and each module may have prerequisite modules. The rules of the system clarify the prerequisite relations of the modules with each other.

LITERATURE REVIEW

Timetabling problems are among the most popular optimization problems, for which researchers have been seeking various methods of efficient solution. Evolutionary algorithms are an efficient and convenient way to solve these kinds of problems in short running times. Optimization arises in many application areas, such as education, flight traffic control in airports, and nurse rostering or operating room timetabling in hospitals [4]. There are several studies of the timetabling problem which seek optimum solutions, especially for educational institutions. Researchers have studied high school timetables and obtained quite effective


results when compared to timetables prepared manually. Petrovic and Burke [15] and Qu et al. [16] are among the most popular studies in this application area. Pillay and Banzhaf [17] studied the examination timetabling problem and implemented a heuristic system to handle it. In optimization problems more generally, there are many studies in which ES methods and GAs are used together, as in the work of Tseng [5] on land-cover classification, Choi [6] on managing auction mechanisms, and Nguyen [7] on compressor scheduling. The problem-solving techniques of evolutionary algorithms meet the requirements of expert system software in many application areas: product design [8], cost management [9], unmanned air vehicles [10], decision-making tools in fashion [11], composite laminate design [12], and problems like determining substation locations [13] can all be mentioned as areas where GAs and ESs are used together.

METHODS

In the study, the IF-THEN statements of a rule base are converted into the chromosomes of the initial population of the GA. In order to represent the IF-THEN statements as permutation-encoded chromosomes, the rule base data is first converted to a sparse matrix in which each row corresponds to one of the rules: the 1s in the ith row represent the prerequisite modules of the ith module. The rules of the modules are inserted into the system via the UI of the software; the number of prerequisite modules depends on the importance of the module in the in-service education package. When the rules about the modules are inserted into the system, an XML file encoding the rules among the modules is prepared. The XML file has tags holding the rule information of each module, and the prerequisite module numbers for each module are represented between a dedicated pair of tags. In the optimization phase of the software, the initial population of the GA is constructed from the XML file. The XML file format provides a generic and platform-independent data storage environment for the rule base data.
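A minimal Python sketch of this conversion; the tag and attribute names (`module`, `prereq`, `id`) are hypothetical, since the abstract does not give the actual ones:

```python
import xml.etree.ElementTree as ET

# hypothetical rule-base layout: module 3 requires modules 1 and 2, etc.
RULES_XML = """
<rules>
  <module id="3"><prereq>1</prereq><prereq>2</prereq></module>
  <module id="4"><prereq>3</prereq></module>
</rules>
"""

def rules_to_matrix(xml_text, n_modules):
    """Build the 0/1 sparse matrix: row i holds 1s at the prerequisite
    modules of module i+1 (module numbering is 1-based in the XML)."""
    matrix = [[0] * n_modules for _ in range(n_modules)]
    for module in ET.fromstring(xml_text).iter("module"):
        i = int(module.get("id")) - 1
        for prereq in module.iter("prereq"):
            matrix[i][int(prereq.text) - 1] = 1
    return matrix
```

Each row of the resulting matrix can then be used to constrain the permutation-encoded chromosomes of the GA's initial population.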

FINDINGS & CONCLUSION

The GA developed in this study uses the rule base statements of the education modules saved in XML file format. XML is a simple text-based format for representing structured information: documents, data, configuration, books or transactions [14]. It is not only for Web use; it can also be used to transfer data between applications on different platforms. In the software, the rule base statements are first saved in the database and then transformed into XML format. The GA uses the rules as the input to the optimization phase: the chromosomes of the initial population are produced by reading the XML file. XML is easier to convert to a mathematical representation than IF-THEN sentences. Since the GA in this study uses permutation encoding to prepare the initial population, parsing the XML tags is much more effective than using logical statements directly to obtain the GA chromosomes.


REFERENCES

[1] Giarratano J., "Expert Systems: Principles and Programming", Riley, Fourth Edition (2004).
[2] Sutrisna M., Potts K. and Buckley K. (2003). "An Expert System As a Potential Decision-Making Tool In the Valuation of Variations", Proceedings of the RICS Foundation Construction and Building Research Conference, COBRA Press, ISBN: 1-84219-148-9, pp: 314-326.
[3] Holland J. H., "Adaptation in Natural and Artificial Systems", Cambridge, MIT Press (1975).
[4] Cardoen B. et al., "Operating Room Planning and Scheduling: A Literature Review", European Journal of Operational Research 201, Elsevier, pp: 921-932 (2010).
[5] Tseng M-H. et al., "A Genetic Algorithm Rule-Based Approach for Land-Cover Classification", ISPRS Journal of Photogrammetry & Remote Sensing 63, Elsevier, pp: 202-212 (2008).
[6] Choi J. H. et al., "Utility Based Double Auction Mechanism Using Genetic Algorithms", Expert Systems with Applications 34, Elsevier, pp: 150-158 (2008).
[7] Nguyen H. H. et al., "A Comparison of Automation Techniques for Optimization of Compressor Scheduling", Advances in Engineering Software 39, Elsevier, pp: 178-188 (2008).
[8] Chaoan L., "The Expert System of Product Design Based on CBR and GA", International Conference on Computational Intelligence and Security Workshops, IEEE, pp: 144-147 (2007).
[9] Chou J-S., "Generalised Linear Model-Based Expert System for Estimating the Cost of Transportation Projects", Expert Systems with Applications 36, Elsevier, pp: 4253-4267 (2009).
[10] Kuroki Y. et al., "UAV Navigation by an Expert System for Contaminant Mapping with a Genetic Algorithm", Expert Systems with Applications, Elsevier (in press) (2010).
[11] Wong W. K. et al., "A Decision Support Tool for Apparel Coordination Through Integrating the Knowledge-Based Attribute Evaluation Expert System and the T-S Fuzzy Neural Network", Expert Systems with Applications 36, Elsevier, pp: 2377-2390 (2009).
[12] Kim J-S., "Development of a User-Friendly Expert System for Composite Laminate Design", Composite Structures 79, Elsevier, pp: 76-83 (2007).
[13] Chakravorty S. and Thukral M., "Choosing Distribution Sub Station Location Using Soft Computing Technique", International Conference on Advances in Computing, Communication and Control (ICAC3'09), ACM, pp: 53-55 (2009).
[14] Online: http://www.w3.org/standards/xml/core
[15] Petrovic S. and Burke E. K., "University Timetabling", in Handbook of Scheduling: Algorithms, Models and Performance Analysis, CRC Press, Boca Raton, Chapter 45 (2004).
[16] Qu R. et al., "A Survey of Search Methodologies and Automated Approaches for Examination Timetabling", Journal of Scheduling 12 (1), pp: 55-89 (2009).
[17] Pillay N. and Banzhaf W., "An Informed Genetic Algorithm for the Examination Timetabling Problem", Applied Soft Computing 10, Elsevier, pp: 457-467 (2010).

June 1 -4, Kusadasi, Aydin, Turkey http://iscse2011.gediz.edu.tr


2 nd International Symposium on Computing in Science & Engineering

Proceeding Number: 100/21

Effectiveness of Standard Deviation of Fundamental Frequency for the Diagnosis of Parkinson's Disease

Yasemin SAHIN, Fatih University, Computer Engineering, Istanbul, Turkey, [email protected]
Nahit EMANET, Fatih University, Computer Engineering, Istanbul, Turkey, [email protected]

Keywords: Pattern Recognition, Parkinson's Disease, Diagnosis, ANN, SVM

INTRODUCTION

Neurological diseases are very common all over the world. Parkinson's disease (PD) is one of the most common neurological diseases [1]; it impairs motor skills and speech and affects other functions such as mood and behaviour [2, 3, 4]. Statistics show that approximately 50,000 Americans are diagnosed with Parkinson's disease each year. Studies consistently show that age is the main risk factor for PD, whose incidence increases steeply after age 50 [5, 6]. Although there is currently no cure for PD, medication can improve patients' quality of life if the disease is diagnosed at an early stage [1, 7]. Most people with Parkinson's disease have some vocal impairment, such as dysphonia [8, 9], and vocal impairment may be one of the earliest indicators of the onset of the illness. Since voice measurement is noninvasive and simple to administer, it is often used to detect Parkinson's disease. The purpose of this study is to develop a decision support system that diagnoses PD from patients' voice records. We used pattern recognition methods for classification based on Artificial Neural Networks (ANN) and Support Vector Machines (SVM), both widely used in pattern recognition systems. The two classifiers have been tested and their performance compared.

LITERATURE REVIEW

There are extensive studies of general voice disorders [10-16] that use traditional speech measurements, including the fundamental frequency F0, the absolute sound pressure level, and the cycle-to-cycle variations of the periodic signal known as jitter and shimmer. All of these measurements have been used to discriminate healthy people from people with Parkinson's disease. More recently, studies have concentrated on machine learning tools applied to acoustic measurements for detecting voice disorders [2, 10]. Many interdependent vocal features can be extracted using both traditional and novel methods such as linear discriminant analysis (LDA), wavelet decomposition, fast Fourier transforms, and


entropy measures. All these features have been used to diagnose PD at an early stage with Artificial Neural Networks (ANN) and Support Vector Machines (SVM) [2, 17, 18].

METHODS

The dataset for this study consists of voice recordings obtained from 31 male and female subjects, 23 of whom have Parkinson's disease. The subjects' ages ranged from 46 to 85 years. The recordings were sampled at 44.1 kHz with 16-bit resolution. The dataset is available at the UCI Machine Learning Repository. Since we have the actual raw voice recordings of this dataset in our lab, we added a new feature, the standard deviation of the fundamental frequency (stdevF0), to increase the accuracy of the whole classification system. We used MATLAB and Praat to analyze the voice records.

Artificial Neural Networks (ANNs) and Support Vector Machines (SVMs) are powerful tools for pattern recognition and are used in a wide range of areas, including medicine. As a data analysis tool, neural networks are more robust than classical statistical methods because of their ability to handle small parameter variations and noise. In this study we used the backpropagation algorithm, a supervised, multilayer artificial neural network. Support Vector Machines are a related set of supervised learning methods: an SVM is a classification tool that learns from a set of labelled input data in order to classify new samples. Our choice of kernel for the SVM is the radial basis function (RBF), which is commonly recommended as a first choice for modeling nonlinear relationships.
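The contributed feature can be illustrated with a short sketch. The following is a minimal, hypothetical Python version of the stdevF0 computation, assuming a pitch tracker such as Praat has already produced a per-frame F0 contour with 0.0 marking unvoiced frames; the function name and the voicing floor are illustrative choices, not part of the original study.

```python
import statistics

def stdev_f0(f0_track, voiced_floor=50.0):
    """Standard deviation of fundamental frequency over voiced frames.

    f0_track holds per-frame F0 estimates in Hz; pitch trackers commonly
    report 0.0 for unvoiced frames, so frames at or below voiced_floor are
    discarded before computing the sample standard deviation.
    """
    voiced = [f for f in f0_track if f > voiced_floor]
    if len(voiced) < 2:
        raise ValueError("not enough voiced frames to estimate stdevF0")
    return statistics.stdev(voiced)

# A jittery sustained vowel around 120 Hz with unvoiced gaps
track = [0.0, 118.2, 121.5, 119.8, 0.0, 122.4, 117.9, 120.6, 0.0]
print(round(stdev_f0(track), 2))
```

In practice the resulting scalar would simply be appended to the existing feature vector of each recording before training the ANN or SVM.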

CONCLUSION

In this study we have evaluated the performance of ANN and SVM algorithms for the classification of PD. Both are efficient tools for pattern recognition and can be applied successfully in neurological applications; the goal is therefore not only to compare the two but also to benefit from the highest accuracy of each classifier. The classification accuracies of the ANN and SVM are high and show a high degree of certainty, and with the addition of the new feature both classifiers better discriminate Parkinson's disease. Our proposed system can be beneficial for physicians who do not have much experience with Parkinson's disease.

Finally, our classification system can be implemented as an application for early diagnosis of Parkinson's disease in the medical field. In future work we will apply the random forest method, not only to increase the classification rate but also to find the most effective parameters for classifying Parkinson's disease.


REFERENCES

1. de Rijk, M.C. et al. Prevalence of Parkinson's disease in Europe: a collaborative study of population-based cohorts. Neurology 54:21-23, 2000.
2. Little, M.A., McSharry, P.E., Hunter, E.J., Ramig, L.O. Suitability of dysphonia measurements for telemonitoring of Parkinson's disease. IEEE Trans. Biomed. Eng., 2009. doi:10.1109/TBME.2008.2005954.
3. Ishihara, L., and Brayne, C. A systematic review of depression and mental illness preceding Parkinson's disease. Acta Neurol. Scand. 113(4):211-220, 2006. doi:10.1111/j.1600-0404.2006.00579.x.
4. Jankovic, J. Parkinson's disease: clinical features and diagnosis. J. Neurol. Neurosurg. Psychiatry 79:368-376, 2008. doi:10.1136/jnnp.2007.131045.
5. Huse, D.M., Schulman, K., Orsini, L., Castelli-Haley, J., Kennedy, S., and Lenhart, G. Burden of illness in Parkinson's disease. Mov. Disord. 20:1449-1454, 2005. doi:10.1002/mds.20609.
6. Elbaz, A. et al. Risk tables for parkinsonism and Parkinson's disease. Journal of Clinical Epidemiology 55:25-31, 2002.
7. Singh, N., Pillay, V., and Choonara, Y.E. Advances in the treatment of Parkinson's disease. Progr. Neurobiol. 81:29-44, 2007.
8. Hanson, D., Gerratt, B., and Ward, P. Cinegraphic observations of laryngeal function in Parkinson's disease. Laryngoscope 94:348-353, 1984.
9. Ho, A., Iansek, R., Marigliani, C., Bradshaw, J., and Gates, S. Speech impairment in a large sample of patients with Parkinson's disease. Behavioral Neurology 11:131-137, 1998.
10. Little, M.A., McSharry, P.E., Roberts, S.J., Costello, D.A.E., and Moroz, I.M. Exploiting nonlinear recurrence and fractal scaling properties for voice disorder detection. BioMedical Engineering OnLine 6:23, 2007.
11. Alonso, J., de Leon, J., Alonso, I., and Ferrer, M. Automatic detection of pathologies in the voice by HOS based parameters. EURASIP J. Appl. Sig. Proc. 4:275-284, 2001.
12. Little, M., McSharry, P., Moroz, I., and Roberts, S. Nonlinear, biophysically-informed speech pathology detection. In Proc. ICASSP 2006. New York: IEEE, 2006.
13. Godino-Llorente, J.I., and Gomez-Vilda, P. Automatic detection of voice impairments by means of short-term cepstral parameters and neural network based detectors. IEEE Trans. Biomed. Eng. 51:380-384, 2004.
14. Hadjitodorov, S., Boyanov, B., and Teston, B. Laryngeal pathology detection by means of class-specific neural maps. IEEE Trans. Inf. Technol. Biomed. 4:68-73, 2000.
15. Boyanov, B., and Hadjitodorov, S. Acoustic analysis of pathological voices. IEEE Eng. Med. Biol. Mag. 16:74-82, 1997.
16. Hansen, J.H.L., Gavidia-Ceballos, L., and Kaiser, J.F. A nonlinear operator-based speech feature analysis method with application to vocal fold pathology assessment. IEEE Trans. Biomed. Eng. 45:300-313, 1998.
17. Sakar, C.O., and Kursun, O. Telediagnosis of Parkinson's disease using measurements of dysphonia. Journal of Med. Syst. 34:591-599, 2009.
18. Gil, D., and Johnson, M. Diagnosing Parkinson by using artificial neural networks and support vector machines. Global Journal of Computer Science and Technology 9(4), 2009.
19. Asuncion, A., and Newman, D.J. UCI Machine Learning Repository, 2010. URL http://archive.ics.uci.edu/ml/machine-learning-databases/parkinsons/parkinsons.data
20. Boersma, P. Praat, a system for doing phonetics by computer. Glot Int 5:341-345, 2001.
21. Boersma, P., and Weenink, D. Praat: doing phonetics by computer (version 4.3.14) [computer program]. Amsterdam, Institute of Phonetic Sciences. http://www.praat.org, 2010.


Proceeding Number: 100/22

A New Modified Modular Exponentiation Algorithm

Abdalhossein Rezai, Semnan University, Electrical and Computer Engineering Faculty, Semnan, Iran, [email protected]
Parviz Keshavarzi, Semnan University, Electrical and Computer Engineering Faculty, Semnan, Iran, [email protected]

Keywords: Public key cryptography, modular exponentiation, modular multiplication, common-multiplicand-multiplication method, signed-digit representation

INTRODUCTION

Security plays a major role in computer networks and electronic communication, and the core technology used for system security is cryptography. Public-key cryptography is an important component of cryptography [1]. The modular exponentiation of large integers is a crucial operation in public-key cryptosystems (PKCs) such as RSA [2][3]. This operation is implemented by repeated modular multiplication, so the efficiency of many PKCs is determined by the efficiency of the modular multiplication algorithm and by the number of modular multiplications required in the modular exponentiation algorithm [2][3]. The Montgomery modular multiplication (M3) algorithm [4] is the most popular modular multiplication algorithm because it replaces the trial division by the modulus with a simple right shift [5][6]. In this algorithm, however, the full precision of the multiplicand and the modulus is processed while the multiplier is handled bit by bit. Consequently, both the Montgomery modular multiplication algorithm and modular exponentiation algorithms built on it have simple hardware implementations but are time-consuming [5].
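The right-shift idea behind Montgomery's method can be sketched briefly. The following is an illustrative bit-serial Python version (not the authors' hardware algorithm): each iteration consumes one multiplier bit, and adding the modulus whenever the accumulator is odd keeps it even, so a single right shift replaces the trial division.

```python
def mont_mul(a, b, n, n_bits):
    """Bit-serial Montgomery multiplication: returns a*b*2^(-n_bits) mod n.

    Requires n odd and a, b < n. The result carries an extra factor
    2^(-n_bits) mod n, the hallmark of the Montgomery domain.
    """
    assert n % 2 == 1 and a < n and b < n
    t = 0
    for i in range(n_bits):
        t += ((a >> i) & 1) * b   # add the multiplicand if bit i of a is set
        if t & 1:                 # make t even before halving
            t += n
        t >>= 1                   # the right shift that replaces division by n
    return t - n if t >= n else t

# Check against the definition: result equals a*b*R^-1 mod n with R = 2^n_bits
n, k, a, b = 101, 8, 57, 88
r_inv = pow(1 << k, -1, n)        # modular inverse of R (Python 3.8+)
print(mont_mul(a, b, n, k) == (a * b * r_inv) % n)
```

In a real exponentiation, operands are first converted into the Montgomery domain so the extra 2^(-n_bits) factors cancel across repeated multiplications.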

LITERATURE REVIEW

There have been many research efforts to increase the efficiency of Montgomery modular multiplication: Blum and Paar [7] used a high-radix modular multiplication algorithm; Keshavarzi and Harrison [8] used parallel calculation of the quotient and partial result; Tenca and Koc [9] and Shieh and Lin [5] used scalable modular multiplication algorithms; Tawalbeh et al. [10] and Pinckney and Harris [11] combined high-radix and scalable techniques; and Pinckney et al. [12] and Lou et al. [13] used signed-digit recoding (SDR).


There have also been many research efforts to reduce the number of modular multiplications required in the modular exponentiation algorithm. Nedjah and Mourelle [2][14] used the sliding window method to reduce the Hamming weight of the exponent. Ha and Moon [15] proposed that the common part of the modular multiplications in modular exponentiation can be computed once rather than twice, and called this the common-multiplicand multiplication (CMM) method. Wu et al. [16] proposed using the canonical recoding technique to recode the exponent; this reduces the probability of nonzero digits and therefore increases the efficiency of the modular exponentiation, and [16] uses the CMM method of [15] in the multiplication phase. Wu [3] proposed dividing the signed-digit exponent into three equal-length parts and using the CMM technique to compute the common part of the multiplications once rather than several times. In our previous work [17], we improved Wu's exponentiation algorithm [3] by using a new modified modular multiplication algorithm.

METHODS

This paper presents a new modular exponentiation algorithm based on both a new Montgomery modular multiplication algorithm and the CMM-SDR method. In the new modified Montgomery modular multiplication algorithm, the constant-length nonzero (CLNZ) sliding window method is applied to the signed-digit multiplier to reduce its Hamming weight, and a multiple-shift technique shifts the partial result in one cycle instead of several cycles. In the proposed exponentiation algorithm, canonical signed-digit recoding is applied to the exponent to reduce its Hamming weight, and the common-multiplicand-multiplication (CMM) method computes the common part of the modular multiplications once rather than several times.
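The canonical signed-digit recoding step can be sketched as follows. This illustrative Python function (the name is ours) computes the non-adjacent form of an exponent; its average nonzero-digit density of about 1/3 is what reduces the number of multiplications.

```python
def csd_recode(e):
    """Canonical signed-digit (non-adjacent form) recoding of e >= 0.

    Returns digits in {-1, 0, 1}, least significant first. No two adjacent
    digits are nonzero, so on average only one digit in three is nonzero.
    """
    digits = []
    while e > 0:
        if e & 1:
            d = 2 - (e & 3)   # choose +1 or -1 so that (e - d) % 4 == 0
            digits.append(d)
            e -= d
        else:
            digits.append(0)
        e >>= 1
    return digits

# 15 = 1111b has Hamming weight 4; its CSD form 1000(-1) has only 2 nonzeros
digits = csd_recode(0b1111)
value = sum(d * (1 << i) for i, d in enumerate(digits))
print(digits, value)
```

With digits -1 as well as +1 available, an exponentiation routine needs a precomputed modular inverse of the base, but performs far fewer multiplications on a dense exponent.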

FINDINGS & CONCLUSION

Cryptosystems that use the proposed modular exponentiation algorithm resist timing analysis attacks, simple power analysis (SPA) attacks, and electromagnetic analysis (EMA) attacks. The results show that the average number of multiplication steps in the proposed CMM-SDR Montgomery exponentiation algorithm is reduced by about 53.3%-86.3%, 44%-83.5%, and 23.6%-77.5% compared with the Dusse-Kaliski Montgomery exponentiation algorithm, the Ha-Moon improved Montgomery exponentiation algorithm, and Wu's CMM-MSD Montgomery exponentiation algorithm, respectively, for d=2-10. In addition, the complexity of the new modular exponentiation algorithm is reduced compared with Wu's CMM-MSD Montgomery exponentiation algorithm and our previous work. Using the new algorithm therefore considerably increases not only the security but also the efficiency of the cryptosystem.


REFERENCES

[1] F. Xiangyan, Z. Jiahang, X. Tinggang and Y. Youguang, "The researcher and implement of high-speed modular multiplication algorithm basing on parallel pipelining," Proceedings of the 2009 Asia-Pacific Conference on Information Processing, vol. 1, pp. 398-403, 2009.
[2] N. Nedjah and L.M. Mourelle, "High-performance hardware of the sliding-window method for parallel computation of modular exponentiations," International Journal of Parallel Programming, Springer Netherlands, vol. 37, pp. 537-555, 2009.
[3] C. Wu, "An efficient common-multiplicand-multiplication method to the Montgomery algorithm for speeding up exponentiation," Information Sciences, vol. 179, pp. 410-421, 2009.
[4] P.L. Montgomery, "Modular multiplication without trial division," Mathematics of Computation, vol. 44, no. 170, pp. 519-521, 1985.
[5] M.D. Shieh and W.C. Lin, "Word-based Montgomery modular multiplication algorithm for low-latency scalable architectures," IEEE Transactions on Computers, vol. 59, no. 8, 2010.
[6] A. Rezai and P. Keshavarzi, "Improvement of high-speed modular exponentiation algorithm by optimum using smart methods," 18th Iranian Conference on Electrical Engineering, Iran, pp. 2104-2109, May 2010.
[7] T. Blum and C. Paar, "High-radix Montgomery multiplication on reconfigurable hardware," IEEE Transactions on Computers, vol. 50, no. 7, pp. 759-764, 2001.
[8] P. Keshavarzi and C. Harrison, "A new modular multiplication algorithm for VLSI implementation of public-key cryptography," Proceedings of the First International Symposium on Communication Systems and Digital Signal Processing, pp. 516-519, 1998.
[9] A.F. Tenca and C.K. Koc, "A scalable architecture for modular multiplication based on Montgomery's algorithm," IEEE Transactions on Computers, vol. 52, no. 9, pp. 1215-1221, 2003.
[10] L.A. Tawalbeh, A.F. Tenca, and C.K. Koc, "A radix-4 scalable design," IEEE Potentials, vol. 24, no. 2, pp. 16-18, 2005.
[11] N. Pinckney and D. Harris, "Parallelized radix-4 scalable Montgomery multipliers," Journal of Integrated Circuits and Systems, vol. 3, no. 1, pp. 39-45, 2008.
[12] N. Pinckney, P. Amberg and D. Harris, "Parallelized Booth-encoded radix-4 Montgomery multipliers," 16th IFIP/IEEE Intl. Conf. on Very Large Scale Integration, Oct. 2008.
[13] D.C. Lou, J.C. Lai, C.L. Wu and T.J. Chang, "An efficient Montgomery exponent algorithm by using signed-digit recoding and folding techniques," Applied Mathematics and Computation, vol. 185, pp. 31-44, 2007.
[14] N. Nedjah and L.M. Mourelle, "A hardware/software co-design versus hardware-only implementation of modular exponentiation using the sliding-window method," Journal of Circuits, Systems and Computers, vol. 18, pp. 295-310, 2009.
[15] J.C. Ha and S.J. Moon, "A common-multiplicand method to the Montgomery algorithm for speeding up exponentiation," Information Processing Letters, vol. 66, no. 2, pp. 105-107, 1998.
[16] C. Wu, D. Lou and T. Chang, "An efficient Montgomery exponentiation algorithm for public-key cryptosystems," IEEE International Conference on Intelligence and Security Informatics, pp. 284-285, June 2008.
[17] A. Rezai and P. Keshavarzi, "High-performance modular exponentiation algorithm by using a new modified modular multiplication algorithm and common-multiplicand-multiplication method," in Proceedings of the IEEE World Congress on Internet Security, pp. 206-211, 201


Proceeding Number: 100/24

Improving PMG Mechanism for SPIT Detection

Fatemeh Safaei, Azad University, Computer Engineering Department, Tehran, Iran, [email protected]
Hassan Asgharian, Iran University of Science and Technology, Computer Engineering Department, Tehran, Iran, [email protected]
Ahmad Akbari, Iran University of Science and Technology, Computer Engineering Department, Tehran, Iran, [email protected]

Keywords: SPIT, Spam, VoIP, anti-SPIT mechanism, PMG, IP-domain correlation

INTRODUCTION

Spam consists of unsolicited messages sent for advertisement, phishing, fraud, or annoyance purposes. Like e-mail spam on the Internet, voice spam has appeared in VoIP systems with the development of this technology. Spam over Internet Telephony (SPIT) is an unwanted call sent to VoIP users. The SPIT threat is greater than that of e-mail spam because it is real-time and has a voice nature; anti-SPIT mechanisms must therefore be considered in any VoIP system. In this paper we present a mechanism for SPIT detection in VoIP based on the previously proposed Progressive Multi Gray-leveling (PMG) mechanism, improved with IP-domain correlation. Our proposed mechanism considers the intruder's identity in addition to his call rate, even when he changes his ID, and for this reason it can detect more SPIT than PMG. Simulated scenarios show the performance of the proposed mechanism in comparison with PMG.

LITERATURE REVIEW

Spam on the Internet is unsolicited electronic mail sent to users' inboxes. Reading and deleting spam wastes users' time and in some cases causes them considerable inconvenience. The spam problem has grown rapidly: according to JISC technology reports [1], only 7% of all e-mail received in 2001 was spam, but by March 2004 this had increased dramatically to 63% and was still rising, and according to a Symantec report [2], spam averaged 80% of e-mail in 2008. Although anti-spam techniques have improved, the amount of spam has kept increasing. A major disadvantage of large amounts of spam is the heavy processing load, which decreases server throughput and increases annoyance. Similarly, Spam over Internet Telephony (SPIT) is an unwanted call sent to VoIP users. SPIT is a much bigger threat to users than e-mail spam since it interrupts them immediately [3]. In the near future we expect the SPIT problem to resemble the spam problem, or to be even worse.


SPIT wastes the receiver's time, annoys him, and carries potential risks: users may be exposed to unsolicited religious, political, or pornographic messages, which can be a serious problem for younger users of these services [4]. Table I compares spam and SPIT.

TABLE I. Spam and SPIT comparison [5]

SPIT                                                  | Spam
Real time                                             | Not real time
Wastes time and mood on answering                     | Wastes time and mood on reading and deleting
Must be answered immediately                          | No need to answer immediately
Voice nature                                          | Text or picture nature
High cost for initiating sessions and media transfer  | Low cost per message
Cannot be analyzed before the phone rings             | Can be analyzed before the message is delivered
Data sent after session initiation                    | Data sent with the message
Uses the Internet backbone                            | Uses the Internet backbone
Uses a signaling protocol                             | No signaling protocol

1. Anti-SPIT Mechanisms

In this section we describe a number of anti-SPIT techniques. They can be classified into three broad classes: preventive, detective, and tolerant techniques. Any anti-SPIT mechanism uses one of the following methods.

1- White and black lists. With white lists, each user explicitly states which users may call him; the URI identifiers of allowed persons are added to his contact list. An example of this model is used in Skype. Black lists maintain the URIs of SPITters, whose calls are blocked. White and black lists can be generated manually or automatically; although lists have some disadvantages, they are effective.

2- Content filtering. These filters check the content of messages. Content filtering does not suit SPIT because filtering a real-time phenomenon is very hard and time-consuming, and SPIT must be prevented before call setup.

3- Challenge and response. In this model there are three lists: white, black, and gray. Persons on the gray list must be verified: the system asks the user a question, and if he answers correctly, he can call the person he wants.

4- Reputation systems. Before any new call is set up, the caller's reputation is evaluated. If the reputation score is above a threshold, the sender can connect to the receiver. Reputation systems are complex, and a sender can connect to only a few users.

5- Payment


Free or low cost is the main reason for the spread of SPIT. In this solution every user pays a little money per call; this amount is very small for a legitimate user but, since SPIT senders usually place a great many calls, their total cost becomes high.

6- Feedback from the receiver. When spam is received, the receiver presses a spam key on the phone, sending a spam report to the proxy. When the number of spam reports against one user exceeds a threshold, that sender is blocked in the network.

METHODS

PMG is an anti-SPIT mechanism that determines and allocates a gray level to each caller; the decision to block or pass a call is based on this gray level. If the gray level is above a threshold, the call is blocked and the caller is classified as a SPIT generator; if it is below the threshold, the call is passed. PMG computes two levels, short term and long term, whose sum is the gray level. The short-term level covers a short period of time (e.g., 1 minute); it increases very quickly, protecting the server from intensive bursts of calls, and decreases as soon as the caller stops calling [6]. The long-term level increases slowly and protects against malicious calls over a long range of time (e.g., one hour or one day); if the SPITter does not call for a long time, the long-term value decreases slowly. In the long-term computation, the value is multiplied by the number of times the caller was detected as a spam generator, so it increases much faster for SPITters than for regular users [6].

A professional SPITter usually changes his identity, so his short- and long-term scores do not increase in PMG. We therefore use the identity of users in addition to their call rates to detect SPITters and prevent their calls from entering the network. Our mechanism is made up of two modules: PMG and IP-domain correlation. We use the IP address and the SIP URI (ID and domain) to identify users and assign a weight to SPIT sources; these weights are used in our score computation. For example, when different usernames are associated with the same domain and IP address, it could be a suspicious event, but it is also typical of enterprise traffic where proxies are used; likewise, when different IP addresses are associated with the same SIP identity, it is suspicious but also typical of mobile users who change IP addresses while moving.
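As a rough illustration of the two-level scoring idea (not the published PMG formulas), the sketch below combines exponentially decayed call counts at a short and a long time scale; the decay constants, the simple sum, and the function name are illustrative assumptions.

```python
import math

def pmg_gray_level(call_times, now, short_tau=60.0, long_tau=3600.0):
    """Toy two-level gray score from exponentially decayed call counts.

    The short-term level (decay ~1 minute) reacts to bursts and fades as
    soon as the caller stops; the long-term level (decay ~1 hour) tracks
    sustained calling. Their sum plays the role of PMG's gray level, to be
    compared against a blocking threshold.
    """
    short = sum(math.exp(-(now - t) / short_tau) for t in call_times)
    long_ = sum(math.exp(-(now - t) / long_tau) for t in call_times)
    return short + long_

# A burst of 30 calls in one minute scores far above 30 calls over 30 hours
burst = [2.0 * i for i in range(30)]        # one call every 2 seconds
sparse = [3600.0 * i for i in range(30)]    # one call every hour
print(pmg_gray_level(burst, 60.0) > pmg_gray_level(sparse, 30 * 3600.0))
```

In the identity-aware extension described above, calls sharing an IP address or domain would contribute to one combined score instead of separate per-ID scores, which is what defeats ID changes.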

FINDINGS & CONCLUSION

To evaluate our algorithm, we defined scenarios like those in [6], computed scores with the proposed algorithm, and compared the results with traditional PMG, both with and without the IP-domain correlation module. Some of the results are shown in the following figures.

Figure 1. Detection and false detection rate vs. the number of IDs which SPITter use (spoofed count)


Figure 2. Comparison between PMG, Base Score, and our proposed algorithm

As can be seen in the figures above, the proposed algorithm performs better than the PMG mechanism, which cannot detect the SPITter, especially when he changes his identities.

REFERENCES

[1] S. M. A. Salehin and N. Ventura, "Blocking Unsolicited Voice Calls Using Decoys for the IMS," IEEE Communications Society Conf., pp. 1961-1966, 2007.
[2] D. Bower, D. Harnett, J. Long, and C. Edwards, "The State of Spam: A Monthly Report," July 2008.
[3] J. Quittek, S. Niccolini, S. Tartarelli, and R. Schlegel, "Prevention of Spam over IP Telephony (SPIT)," NEC Technical Journal, vol. 1, 2006.
[4] D. Waiting and N. Ventura, "The Threat of Unsolicited Sessions in the 3GPP IP Multimedia Subsystem," IEEE Communications Magazine, July 2007.
[5] R. E. Sorace, V. S. Reinhardt, and S. A. Vaughn, "High-speed digital-to-RF converter," U.S. Patent 5 668 842, Sept. 16, 1997.
[6] D. Shin, J. Ahn, and C. Shim, "Progressive Multi Gray-Leveling: A Voice Spam Protection Algorithm," IEEE Network, 2006.
[7] F. Menna, "Spam over Internet Telephony Prevention Systems: Design and Enhancements with a Simulative Approach," 2007.


Proceeding Number: 100/27

Prediction of Warp-Weft Densities in Textile Fabrics by Image Processing

Kazım YILDIZ, Marmara University, Electronic Computer Department, İstanbul, Turkey, [email protected]
Volkan Yusuf SENYUREK, Marmara University, Electronic Computer Department, İstanbul, Turkey, [email protected]
Zehra YILDIZ, Marmara University, Textile Education Department, İstanbul, Turkey, [email protected]

Keywords: Warp-weft density, image processing, woven fabric

INTRODUCTION

Warp and weft densities in woven fabrics are very important parameters for textile production measurements. Determining the weft and warp densities of a woven fabric rests on analyzing the vertical and horizontal frequencies of a textile image. These parameters are generally counted by hand with the help of a loupe, so the measured density can change from person to person. This paper presents a system that predicts warp-weft density from a given textile image; to prevent such individual errors, we used image processing to determine the exact weft-warp densities. For the system design, 10 different textile fabrics were scanned at 1200 dpi resolution and imported into MATLAB, and FFT analysis was performed on the vertical and horizontal frequencies of each textile image. The results show that weft-warp densities can be determined from the image alone with 94% accuracy.

LITERATURE REVIEW

Image processing techniques are widely used in the textile sector, specifically to detect structural defects. For instance, a detection algorithm employing simple statistical features (mean, variance, median) was developed to detect textile fabric defects [1]. A system that detects linear patterns in preprocessed images via model-based clustering was constructed [2]. The digital image correlation technique was used to assess macro-scale textile deformations during biaxial tensile and shear tests in order to verify the loading homogeneity and the correspondence between the textile and rig deformation [3]. Tow geometry parameters are measured with the


help of microscopy and image processing techniques in order to determine tow deformations in a textile preform subjected to forming forces and the influence of these tow deformations on composite laminate properties [4]. In another study, an automated searching system based on color image processing was developed for forensic fiber investigations, in order to establish the connection between a crime scene and textile material [5]. An automated visual inspection (AVI) system was designed for real-time detection of foreign fibers in lint using image processing techniques [6].

METHODS

This paper presents a new approach for processing images of woven fabrics to determine warp-weft densities. The approach has three main steps: image transformation, image enhancement, and analysis of the image signals. In the first step, 10 different woven fabrics were scanned at 1200 dpi and converted to gray-scale images in MATLAB. In the second step, the contrast was enhanced and a high-pass filter was applied, eliminating the undesirable low-frequency components; a twentieth-order FIR filter was used. Thirdly, FFT analysis was carried out on horizontal signals taken from 400 different rows of each image; the average of these spectra gives the horizontal frequency components of the image. The same process was applied in the vertical direction to determine the vertical frequency components.
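The frequency-analysis step can be sketched in a few lines. The original work used MATLAB; the following illustrative Python/NumPy version (the function name, the crude high-pass cut, and the row-vs-column to warp-vs-weft mapping are our assumptions) averages per-row spectra and converts the dominant spatial frequency into threads per centimetre.

```python
import numpy as np

def thread_density(gray_image, dpi=1200, axis=1):
    """Estimate thread density (threads/cm) from a grayscale fabric scan.

    Averages the FFT magnitude of every pixel row (axis=1) or column
    (axis=0) and converts the dominant spatial frequency to threads per
    centimetre. Zeroing the lowest bins is a crude stand-in for the FIR
    high-pass step, removing DC and slow illumination components.
    """
    signals = gray_image - gray_image.mean(axis=axis, keepdims=True)
    spectrum = np.abs(np.fft.rfft(signals, axis=axis)).mean(axis=1 - axis)
    spectrum[:3] = 0.0                    # drop DC / low-frequency bins
    n = gray_image.shape[axis]
    cycles_per_pixel = np.argmax(spectrum) / n
    return cycles_per_pixel * dpi / 2.54  # pixels -> inches -> centimetres

# Synthetic "fabric": 40 threads across a 1-inch scan at 1200 dpi
x = np.arange(1200)
img = np.tile(np.sin(2 * np.pi * 40 * x / 1200), (64, 1))
print(round(thread_density(img), 2))
```

For a real scan the peak would be broader, so averaging many rows (as the 400-row scheme above does) is what makes the dominant bin stand out from weave noise.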

FINDINGS & CONCLUSION

In conclusion, as can be seen from figure 6, the highest frequency values in both the vertical and horizontal directions give the exact values of the warp and weft densities. Real and predicted weft-warp densities for 10 different fabric types are listed in Table 1; the designed image processing system predicts the densities with 94% accuracy. As Table 1 also shows, the predicted values for the cashmere and velvet fabrics differ greatly from the real densities. This indicates that fabrics with feathery surfaces are not suitable for this image processing system, because the feathers prevent accurate measurement of the frequencies.

Fabric Type          Real Warp   Real Weft   Predicted Warp   Predicted Weft
1   Printed Cotton       38          30           39.5             30.9
2   Batiste              44          29           45.11            26.74
3   Denim                38          30           37.3             27.9
4   Canvas               32          21           30.8             21.1
5   Cashmere             47          35            0                0
6   Madras               50          36           47.61            32.59
7   Damask               34          26           31.8             24.57
8   Georgette            28          26           30               23.7
9   Cloth                26          22           25.5             20
10  Velvet               26          23            0                0

Table 1 Real and Predicted Values of Warp-Weft Densities in Different Fabric Types

June 1 -4, Kusadasi, Aydin, Turkey http://iscse2011.gediz.edu.tr

REFERENCES

[1]

A. Abouelela, H. M. Abbas, H. Eldeeb et al., “Automated vision system for localizing structural defects

in textile fabrics,” Pattern Recognition Letters, vol. 26, no. 10, pp. 1435-1443, 2005. [2]

J. G. Jampbell, C. Fraley, F. Murtagh et al., “Linear Flow Detection in Woven Fabrics Using Model

Based-Clustering,” Pattern Recognition Letters, vol. 18, pp. 1539-1548, 1997. [3]

A. Willems, S. V. Lomov, I. Verpoest et al., “Drape-ability characterization of textile composite

reinforcements using digital image correlation,” Optics and Lasers in Engineering, vol. 47, no. 3-4, pp. 343-351, 2009/4//. [4]

P. Potluri, I. Parlak, R. Ramgulam et al., “Analysis of tow deformations in textile preforms subjected to

forming forces,” Composites Science and Technology, vol. 66, no. 2, pp. 297-305, 2006. [5]

N. Paulsson, and B. Stocklassa, “A real-time color image processing system for forensic fiber

investigations,” Forensic Science International, vol. 103, no. 1, pp. 37-59, 1999. [6]

W. Yang, D. Li, L. Zhu et al., “A new approach for image processing in foreign fiber detection,”

Computers and Electronics in Agriculture, vol. 68, no. 1, pp. 68-77, 2009.


Proceeding Number: 100/30

Software Automated Testing: Best Technique for Preventing Defects

Maryam Haghshenas, Islamic Azad University, Science & Research Branch, Tehran, Iran, [email protected]
Khadijeh Yousefpour Jeddi, Islamic Azad University, Science & Research Branch, Tehran, Iran, [email protected]

Keywords :

Software automated testing, programming, standards, box testing, reliability, validation

INTRODUCTION

Software testing is a process that assesses the quality of computer software. It includes executing a program to uncover software bugs, but is not limited to that; with this in mind, testing can never prove a piece of software completely correct. An important point is that software testing should be distinguished from software quality assurance, which accompanies all areas of the business process, not only the testing areas. A software failure arises through the following chain: a programmer makes an error that introduces a fault into the software source code; when the code is compiled and run, in certain situations the system produces incorrect results, which can lead to a failure. Not every fault leads to a failure: a fault in code that never runs will never cause a failure, but it may do so if a change in the software environment, in the source data, or in interaction with different software causes the faulty code to be executed. Software reliability testing helps software developers implement tests using an operational profile of the system. The main motivation for this technique is to reduce the development time and costs caused by various failures in systems. In this article, a regression-testing and memory-overflow monitoring method is presented, with implementation and evaluation, aimed at simultaneously reducing the cost and increasing the reliability of software testing.

LITERATURE REVIEW

Software testing is an activity that diagnoses potential non-compliance and failures in the different phases of building a software product; in particular, it plays a major role in verification and validation (V&V), i.e., in ensuring that the implementation is correct and reliable.


To assess software validity, codes and standards are used as a key component to justify the behavior and performance of distributed systems. For example, the international standard ISO 9126 is a model for increasing the quality of software products, defining customer satisfaction as the optimization target; likewise, the IEEE 1012 standard for software verification and validation creates a common framework for all activities and tasks during the software life-cycle processes. Software verification and validation activities make it possible to reduce production costs and simultaneously increase the reliability of the software. Designing tests for software, as for other engineering products, is as difficult as the initial product design. Any engineering product can be tested in two ways: 1) knowing the specific function the software is designed to perform, we can design tests that show whether every function is completely correct; 2) knowing the internal workings of the product and the programming process, we can design tests that show whether the internal operations perform as intended. The first method is called black-box testing and the latter white-box testing. In recent years, an intermediate approach has also been noted, used when some internal components of the software are partially accessible; this method is called gray-box testing. In this paper, a method based on black-box testing is presented.
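The black-box idea above can be illustrated with a toy example (the function and its specification are hypothetical, not from the paper): test cases are chosen purely from the stated specification, including its boundary and exception cases, without looking at the implementation.

```python
# Black-box testing: tests are derived only from the specification of
# the unit under test, never from its internal structure. The function
# below is a hypothetical example used for illustration.

def leap_year(y: int) -> bool:
    """Spec: divisible by 4, except centuries, unless divisible by 400."""
    return y % 4 == 0 and (y % 100 != 0 or y % 400 == 0)

# Spec-driven cases: a typical value plus the boundary/exception cases
# a black-box tester would pick from the specification alone.
cases = {2024: True, 2023: False, 1900: False, 2000: True}
for year, expected in cases.items():
    assert leap_year(year) == expected
print("all black-box cases passed")
```

A white-box tester would instead inspect the boolean expression itself and design cases to exercise each branch of it.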

METHODS

After changing software, whether to improve an operation or to debug an error, a regression test can re-run all of the tests that the software passed successfully before. The regression test ensures that the latest change to the software has not introduced any new fault. The proposed method runs a regression test at the same time. The key step introduced in the proposed method, which allows important information to be obtained, is the simultaneous evaluation of the memory occupied by the system during the execution of the automated tests. Through the process lists of the memory-monitoring step, a potential memory overflow can be diagnosed through the following parameters:

• Private bytes: this parameter displays the number of bytes reserved exclusively for a specific process.

• Working set: represents the current size of the memory area used by the process for its tables, threads, and data. The working set grows and shrinks as the VMM (Virtual Memory Manager) permits. Since it shows the memory that the process has allocated and released during its life, a working set that is too large and does not decrease correctly usually indicates a memory leak.
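The leak heuristic described above, a memory figure that grows run after run and never shrinks, can be sketched in Python. The sketch uses the standard library's tracemalloc, which tracks Python-level allocations rather than the OS-level private bytes and working set the paper monitors, so it is an analogy of the technique, not the authors' tool; the 1024-byte growth threshold is an arbitrary illustration value.

```python
import tracemalloc

def run_with_memory_watch(test_fn, runs=3, growth_limit=1024):
    """Run a test repeatedly and flag a suspected leak when traced
    memory grows by more than growth_limit bytes after every run --
    the analogue of a working set that keeps growing and never shrinks."""
    tracemalloc.start()
    sizes = []
    for _ in range(runs):
        test_fn()
        current, _peak = tracemalloc.get_traced_memory()
        sizes.append(current)
    tracemalloc.stop()
    return all(b - a > growth_limit for a, b in zip(sizes, sizes[1:]))

leaky_cache = []
def leaky_test():
    leaky_cache.append(bytearray(10_000))   # retained: memory never released

def clean_test():
    buf = bytearray(10_000)                 # released when the function returns
    del buf

print(run_with_memory_watch(leaky_test))   # → True  (suspected leak)
print(run_with_memory_watch(clean_test))   # → False
```

In the paper's setting the same comparison would be made on the working-set and private-bytes counters sampled between automated test runs.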


FINDINGS & CONCLUSION

The proposed approach of dynamic automated software testing showed the importance of accelerated automated tests for software debugging and validation in a short time interval, before product distribution, with the aim of increasing quality in use and reliability. The application presented in this paper is able to stimulate the software under test with an established stress level, comparable in its sequence of operations to real use but accelerated four times compared to manual tests. Information concerning memory leaks and fault regression of the new software versions with respect to the old one can be deduced, as the experimental results proved. The proposed method provides a fundamental parameter that allows a system crash due to an overflow to be predicted. At the same time, it allows the software availability to be estimated in order to plan an effective maintenance operation plan, intended to induce not only an increase in software quality in use and customer satisfaction but also a decrease in maintenance costs. Furthermore, the benefits of such an approach based on accelerated automatic testing, compared with the traditional manual one, are reached both at lower cost and with a decrease in the testing time of software verification and validation. In addition, the possibility of replicating old test sequences on future versions (with no testing cost) can also be considered an important benefit from the industrial point of view.

REFERENCES

[1] P. Hamill, Unit Test Frameworks, O'Reilly, 2004.
[2] H.-D. Chu, J. E. Dobson and I-C. Liu, FAST: A Framework for Automating Statistics-based Testing, 2005.
[3] Reliability Analysis Center, Introduction to Software Reliability: A State of the Art Review, Reliability Analysis Center (RAC), 1996.
[4] J. D. Musa, Introduction to software reliability engineering and testing, Proceedings of the 8th International Symposium on Software Reliability Engineering, 1997.
[5] A. Birolini, Reliability Engineering — Theory and Practice, Springer-Verlag, ISBN 3-540-40287-X, 2004.
[6] ANSI/IEEE Std. 829, Standard for Software Test Documentation, 1998.
[7] ISO/IEC 9126: Information Technology — Software Product Evaluation — Quality Characteristics and Guidelines for Their Use, 2001.
[8] ANSI/IEEE Std. 1012, IEEE Standard for Software Verification and Validation Plans, 1986.
[9] E. Diaz, J. Tuya, R. Blanco, Automated software testing using a metaheuristic technique based on tabu search, Proceedings of the 18th IEEE International Conference on Automated Software Engineering, 2003, pp. 310–313.
[10] M. Rinard, C. Cadar, D. Dumitran, D. M. Roy, T. Leu, A dynamic technique for eliminating buffer overflow vulnerabilities (and other memory errors), Proceedings of the 20th Annual Computer Security Applications Conference, Tucson, Arizona, USA, December 6–10, 2004.


[11] H. Güneş Kayacık, A. Nur Zincir-Heywood, M. Heywood, Evolving successful stack overflow attacks for vulnerability testing, Proceedings of the 21st Annual Computer Security Applications Conference, Tucson, Arizona, USA, December 5–9, 2005.
[12] Y. Wiseman, J. Isaacson, E. Lubovsky, Eliminating the threat of kernel stack overflows, IEEE IRI 2008, Las Vegas, Nevada, USA, July 13–15, 2008.


Proceeding Number: 100/31

Comparing Classification Accuracy of Supervised Classification Methods Applied on High-Resolution Satellite Images

Ayşe ÖZTÜRK, Yalova University, Computer Engineering Dept., Yalova, Turkey, [email protected]
Müfit ÇETİN, Yalova University, Computer Engineering Dept., Yalova, Turkey, [email protected]

Keywords :

High-Resolution Satellite Images, Classification, Supervised, Accuracy, Comparison

ABSTRACT

Classification of satellite imagery is very useful for obtaining land-use results. While unsupervised classification is easy to apply to satellite images, supervised classification methods give better accuracy. In this study, a 'traditional' supervised classification method, the Maximum Likelihood Classifier (MLC), and a newer one, the Support Vector Machines (SVM) approach, are compared. SVM showed superior performance, with better accuracy than MLC, a common method based on probabilities. SVM is a good candidate for satellite image classification, although its application to satellite images is a relatively new topic.

INTRODUCTION

High-resolution satellite images provide rich spatial information, which enables advanced applications that require mapping the Earth's surface with precision and detail [1]. In the literature, many techniques for the classification of high-resolution remote sensing images are reported. However, the classification results of existing methods indicate that there is no 'best' classifier yet. In this study, supervised methods are preferred for classification. Accuracy is generally based on the spectral signatures of the various land-cover features and their distinguishability [2]. Therefore, the success of a method, measured here by accuracy, depends on the separability of each individual class as well as on overall accuracy.


Supervised classification means classifying objects into classes with the help of training and test data, by giving instances of each class prior to classification. Supervised training is closely controlled by the supervisor; patterns and features are recognized in advance, and prior knowledge is required for classification [3].

LITERATURE REVIEW

Maximum Likelihood Classification:

The Maximum Likelihood classifier is the most common method in remote sensing applications. It is a parametric statistical method in which sample areas are predetermined as training zones and fed into a computer algorithm that classifies the pixels; a Gaussian distribution is assumed. The probability density functions classify a pixel by assigning it to the most probable class [4]. Maximum Likelihood classification is a reliable technique: pixels are assigned to the class of highest probability, and a threshold can be set for better accuracy. Classification accuracy is dramatically affected by the choice of training areas. Maximum Likelihood classification is often applied to satellite imagery. The method is based on statistical calculations and probabilities, and fuzzy logic is sometimes applied together with it, as in [5]. In that paper, MLC is used to discriminate among four spectrally similar classes, which are hierarchically subdivided into subclasses with a fuzzy classifier.
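As a rough sketch of the statistical idea (not the exact software used in the study), a per-band Gaussian maximum likelihood classifier can be written in a few lines of NumPy: class means and variances are estimated from training pixels, and each pixel is assigned to the class with the highest log-likelihood. A diagonal covariance is assumed here for brevity, whereas full MLC uses the complete covariance matrix; class names and values are invented toy data.

```python
import numpy as np

def fit(train):                      # train: {class: (n_pixels, n_bands)}
    """Estimate per-class, per-band mean and variance from training pixels."""
    return {c: (x.mean(axis=0), x.var(axis=0) + 1e-9) for c, x in train.items()}

def log_likelihood(x, mean, var):
    """Gaussian log-likelihood with diagonal covariance, summed over bands."""
    return -0.5 * np.sum(np.log(2 * np.pi * var) + (x - mean) ** 2 / var, axis=-1)

def classify(pixels, params):
    """Assign each pixel to the class with the highest likelihood."""
    classes = list(params)
    scores = np.stack([log_likelihood(pixels, *params[c]) for c in classes])
    return [classes[i] for i in scores.argmax(axis=0)]

rng = np.random.default_rng(0)
train = {"water": rng.normal([20, 30], 2, (100, 2)),    # two spectral bands
         "forest": rng.normal([60, 80], 5, (100, 2))}
params = fit(train)
print(classify(np.array([[21, 29], [58, 83]]), params))  # → ['water', 'forest']
```

The sensitivity to training areas mentioned above is visible here: the fitted means and variances, and hence every assignment, come entirely from the training pixels.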

Support Vector Machines (SVMs) Classification:

The SVM method does not overfit and generally gives accurate results without problems related to dataset size, which makes it very attractive for the classification of satellite images. The SVM approach tries to find the optimal separating hyperplane between classes with the help of the training cases that lie closest to the boundary, called support vectors; training cases other than the support vectors are discarded. Because an optimal hyperplane is fitted from these cases alone, high classification accuracy is possible even with small training sets [6]. The application of SVM to satellite image classification is a new trend, but results are promising. SVM is reported to be a good candidate for hyperspectral remote sensing classification because the method does not suffer from the high-dimensionality problem [7].
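The separating-hyperplane idea can be illustrated with a minimal linear SVM trained by sub-gradient descent on the regularized hinge loss. This is a didactic sketch with invented toy data; practical remote sensing studies use mature kernel SVM implementations.

```python
import numpy as np

def train_svm(X, y, lam=0.01, lr=0.01, epochs=2000):
    """Sub-gradient descent on the hinge loss: points violating the
    margin (y * (x.w + b) < 1) pull the hyperplane toward them; the
    lam term regularizes the weight vector."""
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):                 # y in {-1, +1}
            if yi * (xi @ w + b) < 1:            # margin violated
                w += lr * (yi * xi - lam * w)
                b += lr * yi
            else:
                w -= lr * lam * w
    return w, b

# Two well-separated toy clusters standing in for two land-cover classes.
X = np.array([[1.0, 1], [2, 1], [1, 2], [6, 5], [7, 6], [6, 7]])
y = np.array([-1, -1, -1, 1, 1, 1])
w, b = train_svm(X, y)
print(np.sign(X @ w + b))   # matches y on this separable toy set
```

Only the points nearest the boundary keep triggering updates near convergence, which is the numerical counterpart of the statement above that the hyperplane is determined by the support vectors alone.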

Comparison of classifiers is very common in the literature; methods are compared for accuracy, performance, and other criteria. A main competitor of SVM is Artificial Neural Networks (ANN), which are reported to give comparable or better accuracy than SVM [8]. An ANN trained by the back-propagation algorithm is reported to give good results [9]. In one study, MLC is reported to be less affected by the training phase and training-set size; SVM gives good results, but the supervision process dramatically affects its overall accuracy [10]. A huge dataset is not a big problem for SVM: training and testing times might be long, but they are tolerable. With a small dataset, however, unexpected results might be observed.

METHODS

Preprocessing is applied to the data prior to classification. The post-processing confusion matrices of the Maximum Likelihood and Support Vector Machines methods provide valuable information about the accuracy of each classification method. Kappa statistics are used to decide on the success of each method. The Kappa coefficient is basically the proportionate reduction in error compared to the error of a completely random classification [11]. It is reported that kappa values greater than 0.75 indicate agreement beyond chance, values below 0.40 a low degree of agreement, and values between 0.40 and 0.75 a fair to good level of agreement.
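The Kappa computation described here is straightforward to reproduce from a confusion matrix; the 2x2 matrix below is an invented toy example, not data from this study.

```python
import numpy as np

def kappa(cm):
    """Cohen's kappa from a confusion matrix: (p_o - p_e) / (1 - p_e),
    the proportionate reduction in error relative to chance agreement."""
    cm = np.asarray(cm, dtype=float)
    n = cm.sum()
    po = np.trace(cm) / n                          # observed agreement
    pe = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / n**2  # chance agreement
    return (po - pe) / (1 - pe)

cm = [[45, 5],
      [10, 40]]
print(round(kappa(cm), 3))  # → 0.7
```

By the thresholds quoted above, a kappa of 0.7 would fall in the "fair to good" band, just below the 0.75 cut-off for agreement beyond chance.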

CONCLUSION

Maximum Likelihood Classification Results:

The water body is clearly distinguishable from the land. However, classes that are mixed into each other may be overwhelmed by one another, and clear-cut squares may indicate a problem caused by not setting a proper threshold value. Overall Accuracy = 78.73%, Kappa Coefficient = 0.7307. The best classification accuracy, 99.36%, belongs to the water body; rural, forest, urban, and mixed-forest follow in order of accuracy. An overall accuracy of 78.73% is acceptable for a supervised classification, but some improvements are needed for better classification accuracy.

SVM Classification Results:

Overall Accuracy = 83.23%, Kappa Coefficient = 0.7825. SVMs, with their good generalization capacity, overcome the over-fitting problem; an overall classification accuracy of 83.23% is achieved. As the classification results indicate, SVM is preferable, with better overall classification accuracy than Maximum Likelihood. The accuracy values of the Maximum Likelihood and Support Vector Machines algorithms were compared, and the results clearly indicate that SVM is the more powerful technique for satellite image classification. The main disadvantage of SVMs is their long training time.

2 nd International Symposium on Computing in Science & Engineering

REFERENCES

[1] J.-T. Hwang, H.-C. Chiang, "The study of high resolution satellite image classification based on Support Vector Machine," Geoinformatics, 17th International Conference, 2009.
[2] J.-F. Mas, "An Artificial Neural Networks Approach to Map Land Use/Cover using Landsat Imagery and Ancillary Data," Geoscience and Remote Sensing Symposium, IGARSS '03 Proceedings, 2003.
[3] ERDAS Field Guide, Fifth Edition.
[4] M. R. Mustapha, H. S. Lim, M. Z. Mat Jafri, "Comparison of Neural Network and Maximum Likelihood Approaches in Image Classification," Journal of Applied Sci., 10: 2847-2854, 2010.
[5] A. K. Shackelford and C. H. Davis, "A hierarchical fuzzy classification approach for high-resolution multispectral data over urban areas," IEEE Trans. Geosci. Remote Sens., vol. 4, no. 9, pp. 1920–1932, Sep. 2003.
[6] D. H. Nghi, L. C. Mai, "An Object-Oriented Classification Technique for High Resolution Satellite Imagery," GeoInformatics for Spatial-Infrastructure Development in Earth and Allied Sciences (GIS-IDEAS) Conference, 2008.
[7] J. A. Gualtieri, R. F. Cromp, "Support Vector Machines for Hyperspectral Remote Sensing Classification," Proceedings of SPIE, Vol. 3584.
[8] Y. Xiong, Z. Zhang, F. Chen, "Comparison of Artificial Neural Network and Support Vector Machine Methods for Urban Land Use/Cover Classifications from Remote Sensing Images," International Conference on Computer Application and System Modeling (ICCASM), 2010.
[9] K. M. Buddhiraju and I. A. Rizvi, "Comparison of CBF, ANN and SVM Classifiers for Object Based Classification of High Resolution Satellite Images," Centre of Studies in Resources Engineering, Indian Institute of Technology Bombay, Mumbai 400076, India.
[10] J. Guo, J. Zhang, "Study on the Comparison of the Land Cover Classification for Multitemporal MODIS Images," International Workshop on Earth Observation and Remote Sensing Applications, 2008.
[11] R. G. Congalton, "A review of assessing the accuracy of classifications of remotely sensed data," Remote Sens. Environ., vol. 37, pp. 35–46, 1991.


Proceeding Number: 100/35

The Relationship Between the Angle of Repose and Shape Properties of Granular Materials Using Image Analysis

Seracettin ARASAN, Ataturk University, Civil Engineering Department, Erzurum, Turkey, [email protected]
Engin YENER, Bayburt University, Civil Engineering Department, Bayburt, Turkey, [email protected]

Keywords :

Angle of repose, granular materials, image analysis, roundness, fractal dimension

ABSTRACT

The importance of the shape of granular materials is well recognized because of its influence on their mechanical behavior. The durability, workability, shear resistance, tensile strength, stiffness, and fatigue response of concrete and asphalt concrete depend heavily on the shape of the aggregate particles. In recent years, image analysis has been widely used to analyze the particle shape characteristics of aggregates. In this research, the angle of repose and shape properties of granular materials were compared using image analysis, by determining shape characteristics such as roundness, sphericity, angularity, convexity, and fractal dimension. Angle of repose values were determined using the tilting box method (TBM). The test results indicated that there is a good correlation between some shape properties of granular materials and the angle of repose.

INTRODUCTION

The angle of repose is defined as the slope angle for a material pile in which the material can rest without collapse. The angle of repose is one of the most important macroscopic parameters in characterizing the behavior of granular materials. It has been found that the angle of repose strongly depends on material properties such as sliding and rolling frictions (Lee and Herrmann, 1993; Hill and Kakalios, 1995) and density of particles (Burkalow, 1945), and particle characteristics such as size (Carstensen and Chan, 1976) and shape (Carrigy, 1970). It is generally reported that the angle of repose increases with increasing sliding and rolling friction coefficients and deviation from spheres, and decreases with increasing particle size and container thickness. However, quantitative description of the dependence that can be used generally in engineering practice is not available (Zhou et al., 2002).


In recent years, image analysis has seen widespread application in many disciplines, such as medicine, biology, geography, meteorology, manufacturing, and materials science, but there have been relatively few applications in civil engineering. Imaging technology has recently been used to quantify aggregate shape characteristics, and several researchers have investigated the role of aggregate shape in concrete and asphalt mixtures.

METHODS

Shape Properties

In this study, roundness, sphericity, angularity, convexity, and fractal dimension are measured for each particle. Additionally, the fractal dimension of the particles is evaluated using the area-perimeter method introduced by Mandelbrot (1983) and later Hyslip and Vallejo (1997).

Sphericity (S): Sphericity is among a number of indexes that have been proposed for measuring the form of particles.

………………………………………………...………………….........……..(1)

Roundness (R): This is a shape factor that has a maximum value of 1 for a circle and smaller values for shapes having a lower ratio of area to perimeter, longer or thinner shapes, or objects having rough edges.

(Cox, 1927) ……………………………………………..…………….……….(2)

Angularity (K): This is a shape factor that has a minimum value of 1 for a circle and higher values for shapes having angular edges.

…………………………………………………...……………..……...…..…..(3)

Convexity (C): Convexity is a measure of the surface roughness of a particle. Convexity is sensitive to changes in surface roughness but not in overall form.

………………………………………………...……………..……...……....(4)

where: A = projected area of particle (mm²); L = projected longest dimension of particle (mm); P = projected perimeter of particle (mm); Pellipse = perimeter of equivalent ellipse (mm); I = projected intermediate dimension of particle (mm); Arectangle = area of bounding rectangle (mm²).
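The equations themselves are not reproduced in this abstract, but the Cox (1927) roundness cited above has the standard form R = 4πA/P², which equals 1 for a perfect circle and decreases for elongated or rough outlines. A small check, assuming that standard form:

```python
import math

# Cox (1927) roundness, R = 4*pi*A / P**2 (standard definition; the
# paper's equations (1)-(4) are not reproduced in this abstract).

def roundness(area, perimeter):
    return 4 * math.pi * area / perimeter ** 2

r = 5.0
circle = roundness(math.pi * r**2, 2 * math.pi * r)   # any circle gives 1
square = roundness(1.0, 4.0)                          # unit square: pi/4
print(round(circle, 3), round(square, 3))  # → 1.0 0.785
```

The same area and perimeter measurements that ImageJ reports for each binarized particle are all this descriptor needs.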


Image Analysis System and Image Processing

The image analysis system used by the authors consists of a Nikon D80 camera with a Micro 60 mm objective manufactured by Nikon. The equipment for taking pictures of particles was set up by mounting the camera on a photographic stand as shown in Figure 3, adjusting its height (30 cm) to obtain a sufficiently large measurement area on the sample tray, and adjusting the light sources so that no object placed on the sample tray is shaded. Whenever a picture was taken, a black cotton cloth was placed in the background to obtain better contrast between the particle and background pixels. The tests were performed in a dark room. Four fluorescent light sources were positioned on the base plane to make the borders of the particles more visible for digital measurements. The camera's flash was not used during image acquisition.

Figure 1. Schematic of the image analysis system

The output of the camera was a 3872 × 2592 pixel, 32-bit RGB digital image. The aggregate particles were identified prior to analysis, with ImageJ used as the image analysis program. A threshold gray intensity was chosen, and the gray intensity measured at a given point was compared to this threshold. The initial gray image was then converted into a binary image in which the aggregate particles, whose gray intensity is lower than the threshold value, were set to black while the background was set to white. Applying a global threshold value to the whole image works well only if the objects of interest (particles) have a uniform interior gray level and rest upon a background of a different, but also uniform, gray level; this was made possible in this study by placing the particles on a black background. The original image (32-bit RGB) (1), the 8-bit 256-level gray-scale image (2), the 1-bit binary image (3), and the output of the ImageJ image analysis program (4) are shown in Figure 2.
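The global-thresholding step can be sketched as follows; this is a NumPy stand-in for the ImageJ operation, with an invented 3×3 "image" and threshold value.

```python
import numpy as np

# Global thresholding as described above: pixels with gray intensity
# below the threshold (the particles) become 0 (black), the rest
# (background) 255 (white). A single global threshold works only when
# particle and background gray levels are each roughly uniform.

def to_binary(gray, threshold):
    return np.where(gray < threshold, 0, 255).astype(np.uint8)

gray = np.array([[200,  40, 210],
                 [ 35,  30, 205],
                 [220,  45, 215]], dtype=np.uint8)
print(to_binary(gray, 128))
# → [[255   0 255]
#    [  0   0 255]
#    [255   0 255]]
```

Particle area and perimeter are then measured on this binary mask, which is why a clean, uniform background matters so much in the setup described above.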

Figure 2. Image processing steps

The Tilting Box Method (Angle of Repose Test)

In this method, the granular material is placed in the prismatic test box in the horizontal starting position. The slope angle is then slowly increased by tilting the box by hand, while the process is recorded on camera. At some tilting angle, the material collapses out of the box. The angle of repose is determined as the maximum angle at which the material can rest without collapse, measured from the camera records. The tilting box test apparatus and the method for determining the angle of repose can be seen in Fig. 3.

Figure 3. Determining the angle of repose by tilting box method.


FINDINGS – CONCLUSION

The test results indicated that there is a good correlation between some shape properties of granular materials and angle of repose.

REFERENCES

1. Arasan S., Yener E., Hattatoglu F., Hınıslıoglu S., Akbulut S., "The Correlation between Shape of Aggregate and Mechanical Properties of Asphalt Concrete: Digital Image Processing Approach," Road Materials and Pavement Design, accepted December 2010.
2. Arasan S., Yener E., Hattatoglu F., Akbulut S., Hinislioglu S., "The Relationship between the Fractal Dimension and Mechanical Properties of Asphalt Concrete," International Journal of Civil and Structural Engineering, ISSN 0976-4399, Vol. 1, No. 2, 2010.
3. Burkalow, A., 1945. Angle of repose and angle of sliding friction: an experimental study. Bulletin of the Geological Society of America 56, 669.
4. Carrigy, M., 1970. Experiments on the angles of repose of granular materials. Sedimentology 14, 147.
5. Carstensen, J., Chan, P., 1976. Relation between particle size and repose angles of powders. Powder Technology 15, 129.
6. Cundall, P.A., Strack, O.D.L., 1979. A discrete numerical model for granular assemblies. Geotechnique 29 (1), 47–65.
7. Hill, K.M., Kakalios, J., 1995. Reversible axial segregation of rotating granular media. Physical Review A: Atomic, Molecular, and Optical Physics 52, 4393.
8. Lee, J., Herrmann, H.J., 1993. Angle of repose and angle of marginal stability: molecular dynamics of granular particles. Journal of Physics A 26, 373.
9. Li Y.J., Xu Y., Thornton C., 2005. A comparison of discrete element simulations and experiments for 'sandpiles' composed of spherical particles. Powder Technology 160, 219–228.
10. Zhou Y.C., Xu B.H., Yu A.B., Zulli P., 2002. An experimental and numerical study of the angle of repose of coarse spheres. Powder Technology 125, 45–54.


Proceeding Number: 100/36

An Approach to Part of Speech Tagging for Turkish

Emir TURNA, Dokuz Eylül University, Computer Engineering, İzmir, Turkey, [email protected]
Çağdaş Can BİRANT, Dokuz Eylül University, Computer Engineering, İzmir, Turkey, [email protected]
Prof. Dr. Yalçın ÇEBİ, Dokuz Eylül University, Computer Engineering, İzmir, Turkey, [email protected]

Keywords :

Natural Language Processing, Part of Speech

INTRODUCTION

Nowadays, Natural Language Processing (NLP) is one of the most valued topics in the field of information technologies. Many researchers have been working on NLP methods in a wide range of studies, from spell checking to natural language generation. Part of Speech (POS) tagging is one of the main topics in NLP research; its main purpose is to resolve ambiguities that occur during text analysis. In this study, an approach to POS tagging for Turkish is presented. The approach has two phases: the first includes the rule definitions describing the relationships between word types, and the second includes the development of software that analyzes a given sentence by applying these rules. This software also uses the pre-developed tools Sentence Boundary Detection and Morphological Analyzer for text analysis. The studies were carried out both with and without n-grams. The analysis showed that more reliable results were obtained when n-grams were applied to the analyzed text.

LITERATURE

There are many POS tagging systems available today, and some of them report great accuracy. These systems are mostly developed for English text and are not suitable for porting to other languages; therefore, researchers have to rebuild their tagging systems for different natural languages. Well-known research on rule-based POS tagging includes Brill's simple rule-based part-of-speech tagger (Brill, 1992), which acquires patches on itself to improve performance, and the ENGTWOL tagger (Voutilainen, 1995), which uses the Constraint Grammar approach of Karlsson et al. (1995). Various POS tagging approaches relying on stochastic methods were published earlier (DeRose, 1988; Church, 1988). Modern stochastic taggers are mostly based on hidden Markov models, like the practical POS tagger of Cutting, Kupiec et al. (1992).


One of the works on POS tagging was carried out at the University of Illinois for English (4). In this work, each word in the text was analyzed by representing it with a key showing its type; the matching of words with their type information was carried out by hand. Another work, called the Stanford Non-Linear Part-of-Speech Tagger, was carried out on the Penn Treebank at Stanford University (1). The CLAWS POS tagger, based on another processed data set of English, was developed at UCREL (University Centre for Computer Corpus Research on Language at Lancaster University) (2). The most famous combination tagging system is Brill's transformation-based tagger (Brill, 1995): it determines ambiguous word classes using rules, like other rule-based taggers, but, like stochastic taggers, it includes a machine learning mechanism that allows rules to be induced from text. POS taggers have also been developed for other languages, such as a POS tagger and chunker for the Tamil language (17) and the TreeTagger for German (5). Unfortunately, little research is available on Turkish tagging. Work on Turkish POS tagging includes a tagging tool for Turkish implemented in the PC-KIMMO environment (Antworth, 1990), published by Oflazer and Kuruoz (1994), and the POS tagger developed by Levent Altunyurt and Zihni Orhan at Bogazici University (3).

METHODS

The proposed POS tagger works on a pre-processed data set. This set includes all roots and suffixes used in Turkish, together with their possible types. The system consists of three main steps:

• Separation of words into their root and suffixes: a previously developed morphological analyzer is used [1]. After the possible roots and suffixes of each word are separated, the type of the root is looked up in the data set. If the analyzed word contains no derivational suffix, the type information is accepted as is; however, some derivational suffixes may change the root type.

• Finding and evaluating derivational suffixes: detailed information about the derivational suffixes is also obtained. In Turkish, a suffix may change the word type. For example, the root "koş" (run, v.) changes when the suffix "ma" is added:

koş (root): run (v.), verb
koş-ma (negation suffix): do not run (v.), verb -> verb
koş-ma (derivational suffix): run (n.), verb -> noun
koş-ma (derivational suffix): running (n.), verb -> noun
koş-ma (derivational suffix): ballad (n.), verb -> noun

• Word analysis depending on the relationships between words: the results of the previous steps are analyzed and updated according to the rules of Turkish grammar.
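As a rough illustration of the first two steps, root/suffix separation followed by type rewriting can be sketched as below. The lexicon, suffix table and function here are hypothetical stand-ins, not the tagger's actual data or code:

```python
# Illustrative sketch of the rule-based tagging idea (hypothetical data,
# not the authors' lexicon): split a word into root + suffixes by
# longest-match lookup, then let derivational suffixes rewrite the type.

ROOTS = {"koş": "verb"}                  # root -> base POS type
DERIVATIONAL = {"ma": ("verb", "noun")}  # suffix -> (from_type, to_type)

def tag(word):
    # find the longest known root that prefixes the word
    for cut in range(len(word), 0, -1):
        root, rest = word[:cut], word[cut:]
        if root in ROOTS:
            pos = ROOTS[root]
            # each derivational suffix may change the type
            while rest:
                for suffix, (src, dst) in DERIVATIONAL.items():
                    if rest.startswith(suffix) and pos == src:
                        pos = dst
                        rest = rest[len(suffix):]
                        break
                else:
                    break  # unknown suffix: stop analysing
            return root, pos
    return word, "unknown"

print(tag("koşma"))  # -> ('koş', 'noun')
```

Note that "koşma" is ambiguous in the abstract's own example (negation keeps the verb reading); a real tagger would return all candidate analyses and disambiguate them in the third step.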

June 1 -4, Kusadasi, Aydin, Turkey http://iscse2011.gediz.edu.tr

72

2 nd International Symposium on Computing in Science & Engineering

By applying these methods to the text to be analyzed, the types of the clauses in a sentence, such as verb clauses or noun clauses, can be obtained. This process is also supported by n-gram analysis.
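The n-gram analysis mentioned above amounts to sliding a fixed-size window over the tagged tokens; a minimal sketch, with illustrative tags rather than the study's data:

```python
# Minimal n-gram extraction over a tagged sentence (tags are illustrative).

def ngrams(tokens, n):
    """Return the list of consecutive n-token windows."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

tags = ["noun", "noun", "verb", "noun", "verb"]
print(ngrams(tags, 2))
# -> [('noun', 'noun'), ('noun', 'verb'), ('verb', 'noun'), ('noun', 'verb')]
```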

CONCLUSION

This study makes it possible to obtain more reliable statistical data by performing n-gram analysis over more meaningful word groups. This helps in resolving ambiguities and in gathering statistical data about the structure of Turkish. Rule-based analysis of Turkish sentences has been carried out before for different purposes. In this study, part-of-speech tagging was carried out using rule-based methods, and these methods were combined with n-gram analysis. By applying them, clauses in a sentence can be discovered, and possible ambiguities arising for an analyzed word type can be resolved. The analysis showed that more reliable results are obtained and that clauses in a sentence are determined more precisely when n-grams are applied to the text being analyzed.

REFERENCES

(1) Kristina Toutanova and Christopher D. Manning. 2000. Enriching the Knowledge Sources Used in a Maximum Entropy Part-of-Speech Tagger. In Proceedings of the Joint SIGDAT Conference on Empirical Methods in Natural Language Processing and Very Large Corpora (EMNLP/VLC-2000), pp. 63-70.
(2) R. Garside, G. Leech and G. Sampson (eds.). 1987. The CLAWS Word-tagging System. In The Computational Analysis of English: A Corpus-based Approach. London: Longman.
(3) Levent Altunyurt and Zihni Orhan. June 2006. Thesis submitted to the Department of Computer Engineering in partial fulfilment of the requirements for the degree of Bachelor of Science in Computer Engineering, Boğaziçi University.
(4) http://cogcomp.cs.illinois.edu/page/software_view/3, University of Illinois.
(5) Helmut Schmid, Institute for Computational Linguistics, University of Stuttgart. http://www.ims.uni-stuttgart.de/projekte/corplex/TreeTagger/
(6) Antworth, E. L. 1990. PC-KIMMO: A Two-level Processor for Morphological Analysis. Summer Institute of Linguistics, Dallas, TX.
(7) Brill, E. 1992. A simple rule-based part-of-speech tagger. In Proceedings of the Third Conference on Applied Computational Linguistics, Trento, Italy.
(8) Brill, E. 1995. Transformation-based error-driven learning and natural language processing: A case study in part-of-speech tagging. Computational Linguistics, 21(4), 543-566.
(9) Church, K. W. 1988. A stochastic parts program and noun phrase parser for unrestricted text. In Second Conference on Applied Natural Language Processing, pp. 136-143. ACL.
(10) Cutting, D., Kupiec, J., Pedersen, J. O., and Sibun, P. 1992. A practical part-of-speech tagger. In Third Conference on Applied Natural Language Processing, pp. 133-140. ACL.


(11) Dandapat, S., Sarkar, S., Basu, A. 2004. A Hybrid Model for Part-of-Speech Tagging and its Application to Bengali. In International Conference on Computational Intelligence, pp. 169-172.
(12) DeRose, S. J. 1988. Grammatical category disambiguation by statistical optimization. Computational Linguistics, 14, 31-39.
(13) Jurafsky, D., Martin, J. H. 2000. Speech and Language Processing: An Introduction to Natural Language Processing, Computational Linguistics, and Speech Recognition. Prentice-Hall, New Jersey.
(14) Karlsson, F., Voutilainen, A., Heikkilä, J., and Anttila, A. 1995. Constraint Grammar: A Language-Independent System for Parsing Unrestricted Text. Mouton de Gruyter, Berlin.
(15) Oflazer, K., Kuruoz, I. 1994. Tagging and morphological disambiguation of Turkish text. In Fourth Conference on Applied Natural Language Processing, pp. 144-149.
(16) Voutilainen, A. 1995. Morphological disambiguation. In Karlsson, F., Voutilainen, A., Heikkilä, J., and Anttila, A. (Eds.), Constraint Grammar: A Language-Independent System for Parsing Unrestricted Text, pp. 165-284. Mouton de Gruyter, Berlin.
(17) Dhanalakshmi V, Anand Kumar M, Rajendran S, Soman K P. POS Tagger and Chunker for Tamil Language. Tamil University, Thanjavur, Tamilnadu, India.


Proceeding Number: 100/37

Face Modeling and Synthesis Using 3-Dimensional Facial Feature Points

Kamil YURTKAN, Cyprus International University, Computer Engineering Department, Lefkosa, TRNC, [email protected]
Hasan DEMİREL, Eastern Mediterranean University, Electrical and Electronic Engineering Department, Gazimağusa, TRNC, [email protected]

Keywords: Face modeling, texture mapping, 3D face, facial synthesis

INTRODUCTION

Facial synthesis methods are widely used in various multimedia applications, including model-based coding and teleconferencing. Model-based video coding systems can achieve very low bit rates by first generating a 3-Dimensional (3D) model from a single face image and then coding only the model parameters for the remaining frames of the video sequence. 3D face models are also used in face and facial expression recognition studies, and accurate face modeling is a key part of all research fields involving face processing. Our first studies developed a 3D face model to be used in face and facial expression synthesis [1]. Using facial feature point locations on the neutral face, we developed a face synthesis algorithm based on adapting the face model; the facial feature point data used were in 2D. With 2D facial feature points, realistic frontal face images can be synthesized, but when the face is rotated by more than 30 degrees, the synthesized faces become very similar to each other and unrealistic. To solve this problem, in this study we have developed a facial synthesis algorithm using 3D facial feature points. Thanks to improvements in digital imaging, 3D facial feature point locations can now be measured, and databases are available for current research. Using 3D facial feature points, rotation-invariant, realistic face images can be synthesized for side views.

LITERATURE REVIEW

Early researchers in 3D face synthesis tried to achieve a parameterized generic face model. In the early 1980s, Forchheimer proposed a model-based videophone system that uses a computer-animated head model for the transmission of head-and-shoulder scenes [2]. This system attracted considerable interest, and many researchers have since worked on the concept. The face model created by Stromberg, called CANDIDE, and its later versions are popular in many research studies [6]. In 2008, Sheng et al. [5] proposed a fully automatic face synthesis system for unsupervised multimedia applications. Facial animation was pioneered by Ekman in the 1970s; later, the Facial Action Coding System (FACS) was developed by Ekman and Friesen to code facial expressions, describing movements of the face by action units [3, 4]. In 1999, the MPEG-4 standard defined a neutral face model with 84 feature points, together with 68 Facial Animation Parameters (FAPs) for animating the face through movements of the feature points. MPEG-4 FAPs are used in most research labs to synthesize or recognize facial movements and expressions [7]. In our first studies, we developed a complete head model fully compatible with the MPEG-4 standard for face synthesis [1]; we then employed MPEG-4 FAPs to model facial animations [8]. Our most recent studies addressed the development of a face mask and the synthesis of the face from a single frontal face image [9, 10]. We also achieved the synthesis of the six basic facial expressions: anger, fear, disgust, happiness, sadness, and surprise. In this study, we model the face using 3D facial feature points and synthesize side images as well as frontal images at an acceptable quality.

METHODS

Our 3D face synthesis algorithm consists of two main parts: model adaptation and texture mapping. First, the facial feature point locations in 3D are processed to calculate the overall width and height of the face. The adaptation algorithm then updates the vertices of the local regions of the face. Unlike in our previous studies, the local face regions (mouth, nose, eyes, chin and forehead) are updated in height, width and depth. By using depth information for the local regions of the face, accurate face models can be generated for side views. The second part of the system is texture mapping, applied after the face model has been adapted to the facial feature point locations of the individual. The texture mapping stage takes the neutral face image of the person and applies the neutral face model data. After texture mapping, the face model has a realistic appearance, and the improved depth of the local regions results in realistic side images.
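A minimal sketch of the per-region adaptation idea, under the assumption that each local region of the generic model is scaled so its bounding box matches the width, height and depth measured from the subject's 3D feature points. The function and data below are illustrative, not the authors' implementation:

```python
import numpy as np

# Hypothetical sketch of the model-adaptation step: scale the vertices of
# one local face region (e.g. the nose) so its extent matches the extent
# measured from the subject's 3D feature points, in width, height and depth.

def adapt_region(model_vertices, target_extent):
    """model_vertices: (N, 3) array; target_extent: (3,) desired x/y/z size."""
    center = model_vertices.mean(axis=0)
    extent = model_vertices.max(axis=0) - model_vertices.min(axis=0)
    scale = target_extent / extent          # per-axis scale factors
    return center + (model_vertices - center) * scale

# toy region: a unit cube of vertices, stretched to a 2 x 1 x 0.5 box
cube = np.array([[x, y, z] for x in (0., 1.) for y in (0., 1.) for z in (0., 1.)])
adapted = adapt_region(cube, np.array([2.0, 1.0, 0.5]))
print(adapted.max(axis=0) - adapted.min(axis=0))  # extent now matches target
```

Scaling about the region's center keeps neighbouring regions aligned; a real system would additionally blend the region boundaries.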

FINDINGS & CONCLUSION

This study introduces a facial synthesis system using 3D facial feature points. By adding the third dimension to the facial feature point data, more accurate face models have been achieved; in particular, for synthesized face images with rotations of more than 30 degrees, the proposed algorithm generates realistic results. Other research using face modeling and synthesis, such as face and facial expression recognition, can also adopt this 3D facial synthesis system for further performance improvements. Our synthesis system was tested on the BU-3DFE database [11], and qualitative and quantitative performance metrics were applied. The 3D facial synthesis system provides high-quality face images under rotation transformations.

REFERENCES

[1] Kamil Yurtkan, Hamit Soyel, Hasan Demirel, Hüseyin Özkaramanlı, Mustafa Uyguroğlu, and Ekrem Varoğlu, "Face Modeling and Adaptive Texture Mapping for Model Based Video Coding", in A. Gagalowicz and W. Philips (Eds.): CAIP 2005, LNCS 3691, pp. 498-505, 2005.
[2] R. Forchheimer, O. Fahlander, and T. Kronander, "Low bit-rate coding through animation", in Proc. Picture Coding Symposium (PCS), pp. 113-114, Davis, California, March 1983.
[3] Ekman, P. and Friesen, W. (1976). Pictures of Facial Affect. Palo Alto, CA: Consulting Psychologists Press.
[4] Ekman, P. and Friesen, W. (1978). The Facial Action Coding System: A Technique for the Measurement of Facial Movement. Consulting Psychologists Press, San Francisco.
[5] Yun Sheng, Abdul H. Sadka and Ahmet M. Kondoz, "Automatic Single View-Based 3D Face Synthesis for Unsupervised Multimedia Applications", IEEE Transactions on Circuits and Systems for Video Technology, vol. 18, no. 7, pp. 961-974, July 2008.
[6] Ahlberg, J., "Candide-3: an updated parameterised face", Report No. LiTH-ISY-R-2326, 2001, Linköping University, Sweden.
[7] G. Abrantes, F. Pereira, "MPEG-4 facial animation technology: survey, implementation and results", IEEE Transactions on Circuits and Systems for Video Technology, vol. 9, no. 2, pp. 290-305, March 1999.
[8] Hamit Soyel, Kamil Yurtkan, Hasan Demirel, Hüseyin Özkaramanlı, Mustafa Uyguroğlu, Ekrem Varoğlu, "Face Modeling and Animation for MPEG-4 Compliant Model Based Video Coding", CGIM 2005.
[9] Kamil Yurtkan, Turgay Çelik and Hasan Demirel, "Automatic Facial Synthesis from Single Frontal Face Image", 13th IASTED International Conference on Computer Graphics and Imaging (CGIM 2010), February 15-17, 2010, Innsbruck, Austria.
[10] Kamil Yurtkan and Hasan Demirel, "Düşük Bit Hızlı Video Kodlama Uygulamaları için MPEG-4 Uyumlu 3B Yüz Modellemesi ve Sentezi" [MPEG-4 Compliant 3D Face Modeling and Synthesis for Low Bit Rate Video Coding Applications], 3. Haberleşme Teknolojileri ve Uygulamaları Sempozyumu (Habtekus 2009), 9-11 Aralık, Yıldız Teknik Üniversitesi (accepted 26 September).
[11] Yin, L., Wei, X., Sun, Y., Wang, J. and Rosato, M. (2006). A 3D facial expression database for facial behavior research. In Proceedings of the International Conference on FGR, pp. 211-216, UK.


Proceeding Number: 100/43

Performance Analysis of Eigenfaces Method in Face Recognition System

Salih GÖRGÜNOĞLU, Karabük University, Computer Engineering, Karabük, Turkey, [email protected]
Kadriye ÖZ, Karabük University, Electronics and Computer Department, Karabük, Turkey, [email protected]
Şafak BAYIR, Karabük University, Educational Sciences, Karabük, Turkey, [email protected]

Keywords: Face recognition, eigenfaces, performance analysis

INTRODUCTION

Biometric data have always been considered in improving security systems. Fingerprint, signature, iris recognition and face recognition are the main methods used in biometric systems. A person's cooperation is essential for fingerprint, signature or iris recognition, and people do not want to be checked with their signature or fingerprint every time they enter a building through identity control. Although people are more willing to use iris recognition, it has a rather high cost. Face recognition, on the other hand, requires only an image of the face, so its cost is quite low compared to the other systems, and cooperation is not needed. In this study, a face recognition system has been developed for security checks in high-rise or special buildings where large numbers of people enter and exit, or at the security check points of private buildings. The min-max normalisation method is used to suppress the effect of different lighting conditions. The well-known eigenfaces method, which has been studied broadly in previous work, is used in this face recognition system, and its time performance was analysed.

LITERATURE REVIEW

The problem of human recognition is generally approached as iris recognition, fingerprint recognition, face recognition, or recognition using hand or facial veins. Since the data set required for face recognition is obtained much more easily than for the other methods, it is widely used for tasks such as recognising staff at the entrance of a workplace and identifying offenders with security cameras [1].


Halid Ergezer applied eigenfaces, neural networks and Gabor wavelet methods, which are commonly used algorithms in this area, to the ARDB and ORL databases in order to test their usability in real-time face recognition systems [2]. Another study was done for effective access to news videos; it includes a systematic evaluation of face detection methods to find the people who are the most important items of news videos [3]. In face detection and recognition, the whole image is scanned to find face regions by sliding reference pixels over candidate pixels; large images therefore take a long time to scan, because scanning also covers the non-face parts of the image. Muhammad et al. aimed to reduce scanning time by using a pre-scan that eliminates regions with little or no similarity to a face before the face detection and recognition algorithms run [4]. İbrahim Saygın Topkaya studied a face recognition system running on video images; the system is trained on videos each containing a single person, without restrictions on pose, angle or rotation, and then tries to recognise the same people in different videos [5]. A new face recognition system based on histogram matching has also been suggested: it uses histograms of face images obtained in different colour spaces as feature vectors in the recognition process, and there are related studies on merging statistical vectors of colour pixels [6,7]. In the study presented by Necla Özkaya and Şeref Sağıroğlu, the existence of relationships between biometric features such as fingerprint, face, iris, retina and hand geometry is discussed, and a new intelligent system based on artificial neural networks that predicts a person's face using only the fingerprint is introduced [8].
Moreover, subspace-based methods have been used to solve the face recognition problem in various studies [9,10].

METHODS

Images of each person with different exposures and lighting conditions are given to the system. First, these images are converted into gray-scale format; the conversion finds the brightness value of each pixel by multiplying its RGB (Red, Green, Blue) values by different coefficients. To minimise the impact of different lighting effects, each image is then normalised individually, using min-max normalisation. The pre-processed images are reduced to lower dimensions with the Principal Component Analysis method: the eigenfaces method is used to find the N eigenfaces that describe the faces in the database, and the weights of these eigenfaces are used to recognise a new face. The second phase of the system determines whether a new face belongs to a known or an unknown person. The threshold value is important in this comparison, which is carried out with the Euclidean distance; the False Match Rate (FMR) and False Acceptance Rate (FAR) are used to select the threshold. In this study, a user interface developed in the C# programming language in the Visual Studio .NET environment is presented, in which the user can create his or her own database for the face recognition system. The average face and the first five eigenfaces are displayed, and the image of the face to be recognised and the image of the recognised face are displayed simultaneously. Selection of the threshold value is offered as an option.

FINDINGS & CONCLUSION

As a result, a face recognition system based on the eigenfaces method has been developed in this study. The eigenfaces method was preferred as a rapid and easily applicable approach to the solution. Good results were obtained under different lighting conditions with the developed system, and pose variations that do not affect the visibility of the face were also tolerated. The pre-processing of the images, the calculation of the eigenfaces and the recognition of a desired facial image were analysed and evaluated in terms of time. Although the timing results were acceptable for a small number of faces, with a large number of face images (for instance 10,000), the system became slow in calculating the eigenvectors and eigenfaces. Because the eigenvectors and eigenfaces must be recalculated for each new person added to the system, the face recognition system presented here still needs to be improved in terms of speed.

REFERENCES

1. Gümüş, E. (2008). "Yüz tanıma problemine karma yöntemlerin uygulanması" [Application of hybrid methods to the face recognition problem]. M.Sc. thesis, İstanbul Üniversitesi.
2. Ergezer, H. (2008). "Yüz tanıma: öz yüzler, yapay sinir ağları, Gabor dalgacık dönüşümü yöntemleri" [Face recognition: eigenfaces, artificial neural networks, Gabor wavelet transform methods]. M.Sc. thesis, Başkent Üniversitesi.
3. Can Acar, Arda Atlas, Koray Cevik, Isa Olmez, Mustafa Unlu, Derya Ozkan, Pinar Duygulu (2007). "Yüz Bulma Yöntemlerinin Haber Videoları için Sistematik Karşılaştırması" [A systematic comparison of face detection methods for news videos]. IEEE 15. Sinyal İşleme ve İletişim Uygulamaları Kurultayı (SIU).
4. İrfan Muhammad, Ziya Telatar, Önder Tüzünalp (2001). "Yüz algılama algoritmalarında tarama zamanının azaltılması için bir hızlı ön tarama algoritması" [A fast pre-scan algorithm for reducing scanning time in face detection algorithms]. SIU.
5. İbrahim Saygın Topkaya (2008). "Video görüntülerinden yüz tanıma" [Face recognition from video images]. M.Sc. thesis, Yıldız Teknik Üniversitesi.
6. Gholamreza Anbarjafari (2008). "A new face recognition system based on colour statistics". M.Sc. thesis, Doğu Akdeniz Üniversitesi.
7. Hasan Demirel, Gholamreza Anbarjafari (2008). "Renkli Histogram Eşleştirme tabanlı yeni bir Yüz Tanıma Sistemi" [A new face recognition system based on colour histogram matching]. SIU.
8. Necla Özkaya, Şeref Sağıroğlu (2008). "Parmak izinden yüz tanıma" [Face recognition from fingerprints]. Gazi Üniv. Müh. Mim. Fak. Der., Cilt 23, No 4, 785-793.
9. Hüseyin Gündüz (2010). "Altuzay Temelli Yaklaşımlar Kullanarak Gerçek Zamanlı Yüz Tanıma" [Real-time face recognition using subspace-based approaches]. M.Sc. thesis, Eskişehir Osmangazi Üniversitesi.
10. D. Kern, H. K. Ekenel, R. Stiefelhagen (2006). "Aydınlanma Alt-uzaylarına dayalı Gürbüz Yüz Tanıma" [Robust face recognition based on illumination subspaces]. IEEE.
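The eigenfaces pipeline described in this abstract's METHODS section can be sketched as follows. This is a Python illustration under simplifying assumptions (tiny random "faces", PCA via SVD), not the authors' C# implementation:

```python
import numpy as np

# Illustrative eigenfaces pipeline: min-max normalisation, PCA to obtain
# eigenfaces, projection weights, and Euclidean-distance matching against
# a threshold (None below means "unknown person").

def minmax(img):
    """Min-max normalisation to [0, 1] to reduce lighting differences."""
    return (img - img.min()) / (img.max() - img.min())

def train(images, n_eigenfaces):
    X = np.stack([minmax(im).ravel() for im in images])   # rows = face vectors
    mean = X.mean(axis=0)
    # principal components of the centred data via SVD
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    eigenfaces = Vt[:n_eigenfaces]
    weights = (X - mean) @ eigenfaces.T                   # training projections
    return mean, eigenfaces, weights

def recognise(img, mean, eigenfaces, weights, threshold):
    w = (minmax(img).ravel() - mean) @ eigenfaces.T
    dists = np.linalg.norm(weights - w, axis=1)           # Euclidean distances
    best = int(np.argmin(dists))
    return best if dists[best] < threshold else None      # None = unknown

# toy 4x4 "faces": recognising a training image returns its own index
faces = [np.random.default_rng(i).random((4, 4)) for i in range(3)]
mean, eig, wts = train(faces, n_eigenfaces=2)
print(recognise(faces[1], mean, eig, wts, threshold=0.5))  # -> 1
```

The threshold plays the role described above: tightening it trades false matches against false rejections (FMR vs. FAR).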


Proceeding Number: 100/46

Support Vector Machine Classification Practice with the COIL-20 Image Library

Baha ŞEN, Karabük University, Computer Engineering, Karabük, Turkey, [email protected]
Ahmet ÇELİK, Karabük University, Computer Engineering, Karabük, Turkey, [email protected]

Keywords: Support Vector Machine, COIL-20, machine learning, data mining, optimization, image classification

INTRODUCTION

In this study, attribute-based machine learning with a Support Vector Machine (SVM) was applied to the COIL-20 (Columbia Object Image Library) picture library. The SVM classification of the data set was implemented in Matlab. For SVM decision making, the attributes of each sample were the values obtained by subtracting the mean. SVM is a classification method based on optimization, and classification with it can be done in two ways, linear and nonlinear; it belongs to the family of machine learning classification methods [9,12]. Linear classification is tried first and, if the problem encountered cannot be separated linearly, nonlinear classification is preferred. In this study, two-class separation was performed using linear classification. The selection of the appropriate separating plane is a central consideration: the most appropriate plane, defined by its weight vector, is the one farthest from the closest points of each class [13]. Using the COIL-20 Image Library [14], which contains gray-scale images of 20 objects and is often preferred in classification studies, the images in the library were linearly classified as toy and non-toy with a success rate of 100%.

LITERATURE REVIEW

Classification is a method commonly used in data mining to reveal hidden patterns in databases (data sets) [1]. The basic purpose is to separate the feature space into two classes, labelled {-1, 1}, using an optimal separating function and exploiting the attributes that distinguish examples of different object classes. The plane distinguishing the two classes is constructed by maximising its distance to the closest points of each class [10]. Although this classification can be done using several methods, in this study the Support Vector Machine (SVM) was used.


A specific process is followed to classify data. First, the class boundaries are established using part of the data as a training set; subsequently, new samples can be classified using these boundaries. The Support Vector Machine tries to find the hyperplane that best separates the known positive and negative examples, and is a very useful learning method for data classification [5,6,10]. SVM classification is performed with the help of a linear or nonlinear function; the method estimates the most appropriate function to separate the data, and is among the preferred machine learning methods in the field of data mining today. In this method, a boundary plane (hyperplane) is created so that the two classes do not overlap [1,10]: data above the boundary plane belong to one class (y = 1), and data below it belong to the other class. The boundary plane is expressed as

w.x + b = 0

where w is the weight vector, b is a constant value, and x holds the attribute values. This expression can also be written as [2,7]

Σ wi.xi + b = 0

Points above the boundary plane satisfy the inequality

w.x + b > 0  ->  y = 1   (class 1)   [10]

and points below the boundary plane satisfy

w.x + b < 0  ->  y = -1   (class 2)   [10]

The decision function is

f(x) = Σ yi αi (x . xi) + b   [1]

where the sign of f(x) gives the class membership of x, and the vectors xi are the support vectors, the points closest to the boundary plane. In sign form, the function is expressed as

f(x) = sign(w.x + b)

METHODS

In this study, images in the COIL-20 Image Library were classified as toy and non-toy using SVM [3]. SVM is among the most appropriate methods for image classification [11]. In the learning stage, the attributes of images with and without toys were taught to the system with the help of the SVM; test images were then classified from their attributes using the SVM. Given the training samples (20 images from the Image Library) with inputs xi, the function f(x) classifies the samples into two clusters with labels {-1, 1}. First, each image, of size M x N, was transformed into an [M*N] x 1 vector. The column means were then subtracted from the attribute matrix x, computed in Matlab as follows:

x = ( x - repmat(mean(x,1),[20 1]) ) / 1000

The class labels of the examples were determined as follows:

y(1) = 1;
y(3) = 1;
y(4) = 1;
y(6) = 1;
y(13) = 1;
y(19) = 1;

The vector y is of size [M, 1] (20 rows, 1 column); all of its values are first set to -1, and the toy images are then assigned the value 1 as above, dividing the images into two classes. The Lagrange multipliers are obtained as [2,4]

alfa = inv(H)*ones(M,1)    % alfa: Lagrange multipliers
Hij = yi*yj*(xi'*xj)       % [8]

and the weight vector, bias and decision function are computed as

w = sum( repmat( alfa.*y, [1 N] ).*x )
b = sum( 1./y - x*w' ) / M
f = sign(x*w' + b)

The command window shows the values of alfa (alfa >= 0) and the results; the resulting output consists of three columns. Performance was measured by comparing the class labels obtained from the f(x) function with the initially assigned labels. Learning success on the example data set was 100%.
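The Matlab steps above can be mirrored in Python on a toy two-point data set. Note that this keeps the abstract's simplified, constraint-free solution for the multipliers; a full SVM would enforce alfa >= 0 via quadratic programming:

```python
import numpy as np

# Python sketch mirroring the Matlab steps in the text (toy 2-D data,
# not the COIL-20 vectors): solve for the multipliers, then w, b and f(x).

X = np.array([[2.0, 0.0], [0.0, 2.0]])       # training samples (rows)
y = np.array([1.0, -1.0])                    # class labels

H = np.outer(y, y) * (X @ X.T)               # Hij = yi*yj*(xi . xj)
alpha = np.linalg.inv(H) @ np.ones(len(y))   # Lagrange multipliers
w = (alpha * y) @ X                          # weight vector
b = np.mean(1.0 / y - X @ w)                 # bias
f = np.sign(X @ w + b)                       # decision on the training samples
print(f)  # -> [ 1. -1.]  (matches y, i.e. 100% on this toy set)
```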

FINDINGS & CONCLUSION

According to the results obtained in this study, the data in the COIL-20 Image Library data set were classified as toy and non-toy. The Support Vector Machine (SVM) is a classification method based on optimization. Using the SVM method with image processing, the classification of the images in the COIL-20 library was performed successfully. The function obtained can be used to classify other data in this data set into the different classes. The basis of the classification function is formed by the weight vector (w), the constant value (b) and the attribute values (x) of the distinguishing points. Obtaining the Lagrange multipliers (α) with the help of the Hessian matrix is an important step in computing the function values. The experimental results show that the SVM classification method yields reliable results.

REFERENCES

[1] Yalçın Özkan, Veri Madenciliği Yöntemleri [Data Mining Methods], Papatya Yayıncılık, May 2008.
[2] http://en.wikipedia.org/wiki/
[3] http://www.cs.columbia.edu/CAVE/software/softlib/coil-20.php
[4] Christopher J. C. Burges, "A Tutorial on Support Vector Machines for Pattern Recognition", Data Mining and Knowledge Discovery 2:121-167, 1998.


[5] Chih-Wei Hsu, Chih-Chung Chang, and Chih-Jen Lin, "A Practical Guide to Support Vector Classification", Department of Computer Science, National Taiwan University, Taipei 106, Taiwan, April 15, 2010.
[6] Florian Markowetz, "Classification by Support Vector Machines", Max-Planck-Institute for Molecular Genetics, Computational Molecular Biology, Berlin, Practical DNA Microarray Analysis, 2003.
[7] http://nlp.stanford.edu/IR-book/
[8] Tristan Fletcher, "Support Vector Machines Explained", March 2009.
[9] http://www.support-vector-machines.org/
[10] Kristin P. Bennett and Colin Campbell, "Support Vector Machines: Hype or Hallelujah?", SIGKDD Explorations, 2(2), 2000, 1-13.
[11] V. N. Pawar, Sanjay N. Talbar, "An Investigation of Significant Object Recognition Techniques", IJCSNS International Journal of Computer Science and Network Security, vol. 9, no. 5, May 2009.
[12] O. G. Sezer, A. Ercil and M. Keskinoz, "Subspace Based Object Recognition Using Support Vector Machines", EUSIPCO 2005.
[13] Ch. Srinivasa Rao, S. Srinivas Kumar, B. Chandra Mohan, "Content Based Image Retrieval Using Exact Legendre Moments and Support Vector Machine", JIMA, vol. 2, no. 2, May 2010.
[14] S. A. Nene, S. K. Nayar and H. Murase, "Columbia Object Image Library (COIL-20)", Technical Report CUCS-005-96, February 1996.


Proceeding Number: 100/50

Learning Management System Design And Network Access Security With RFID

Dr. Ali Buldu, Marmara University, Faculty of Technical Education, Istanbul, Turkey, [email protected] Nagihan Bekdaş, Marmara University, Faculty of Technical Education, Istanbul, Turkey, [email protected] Serdar Sert, Marmara University, Faculty of Technical Education, Istanbul, Turkey, [email protected]

Keywords: E-learning, web-based education, RFID, radio frequency identification, network security, web access

INTRODUCTION

Today, students at the undergraduate level need education that is independent of time and space, because they combine their studies with work and daily life. This situation has created a need for students to use technology, so web-based education, whether supporting formal education or serving as an alternative to it, has grown steadily in importance. This study develops a software solution that provides training over the internet. The software addresses all stages of the educational process: student registration, instruction, and assessment of student achievement through a variety of test modules. It allows students to receive education synchronously or asynchronously and aims to overcome the limitations of the formal training environment. In addition, educational institutions will issue students identification cards with RFID capability. With these cards, students can access the software from any computer with an RFID reader, inside or outside the institution, without the username-and-password security measures otherwise required.

NEW METHODS IN EDUCATION: E- LEARNING MANAGEMENT SYSTEM

The e-learning management system, which is used to deliver e-learning, manages learning activities. It provides functions such as supplying, editing, and sharing learning materials; hosting discussions; assigning homework and administering exams; providing feedback on those assignments and examinations; keeping records of students, teachers, and the system; and producing reports.


ALL MODULES IN THE E-LEARNING MANAGEMENT SYSTEM

• Membership Module — Lets instructors and students become members of the system and limits their access authority.
• Messaging Module — Lets instructors send public or private messages and announcements to students, and lets students send messages to each other.
• Categories Module — Groups lesson plans and content into nested categories, through which the training content is served to students.
• Document Module — Allows course content to be uploaded to the system on a per-category basis and downloaded and used by students, and reports students' use of the documents.
• Video Module — Allows study content in video format to be uploaded to the system and downloaded and used by students, and reports students' use of the videos.
• Homework Module — Allows students to upload their papers, and trainers to download and examine them.
• Quiz Module — Provides quizzes in test form during e-learning.
• Examination Module — Examines students on the web site; trainers enter exam questions, graded by difficulty, through an administration panel.
• Assessment Module — Lets trainers evaluate homework, quiz, and exam entries through a management panel.
• Tag Module — Associates training content with one or more categories beyond its own, so content can be related to a subject and filtered.
• Voting Module — Lets students vote on content; the results are used to detect and report how useful the content is to students.
• Comment Module — Lets students and educators comment on the training content and share information about it.

E-LEARNING APPLICATION

The application can deliver real-time education through the institutional web site over the internet.

• Video Conference — Real-time speech and video over the web site let trainers give their students oral practice.
• Synchronous Education — Training is supported with real-time video conferencing over the web site.


THE NEW FACE OF SECURITY: RFID

As a result of improvements in information and communication technologies, a more accurate, more detailed, faster, and more secure flow of data has become a necessity. For this reason, RFID is one of the most promising technologies for controlling and monitoring business systems in many fields. RFID systems are designed to compose, collect, and manage dynamic information without human interaction, and their usage grows rapidly every year. RFID is a radio-frequency technology that tracks a microprocessor-equipped object carrying a label, using the information carried by that label. With RFID, objects can be identified and tracked throughout their entire life cycle, across the chain of operations from production to distribution. With this technological substructure, data collection, service delivery, and system management are carried out without human intervention, and the quality and speed of service increase while the error rate falls.

ACCESS TO RFID

Basically, all smart cards and RFID readers are hardware products. To acquire digital data from these card readers, software must access the hardware and process the acquired results into an appropriate format, so software suitable for each card reader has to be developed. Hardware access is provided through unmanaged code, which is grouped into libraries; these libraries give software developed in high-level languages access to the hardware. On Windows operating systems, access to RFID readers is provided by the unmanaged DLL (dynamic link library) named "winscard.dll". Software developed for Windows can use this DLL to reach the binary data produced by the RFID hardware. The results are then converted into digital data formats and shown to the application's end users.
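The final conversion step, from the reader's raw binary response to a user-facing identifier, can be sketched as below. The function names, byte layout, and account table are hypothetical, and the preceding calls into winscard.dll are assumed to have already returned the raw bytes.

```python
def format_card_uid(raw: bytes) -> str:
    # Convert the raw byte response from the RFID reader into a printable
    # card ID, e.g. b"\x04\xa2\x1f" -> "04-A2-1F".
    return "-".join(f"{b:02X}" for b in raw)

def login_by_card(raw: bytes, accounts: dict):
    # Look up the user account bound to this card; returns None for an
    # unknown card, so no username or password is ever typed.
    return accounts.get(format_card_uid(raw))
```

This is only the post-acquisition formatting step; the hardware access itself stays in the unmanaged library described above.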

Figure 2: Application Block Diagram

PROJECT ARCHITECTURE

The project was developed with Microsoft .NET Framework 4.0, Entity Framework, Flash ActionScript 3.0, and Microsoft SQL Server 2008 technologies. If necessary, data can be transferred to other platforms using XML and XML Web Services technology.
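As an illustration of such platform-independent transfer, the sketch below (written in Python rather than the project's .NET stack, with hypothetical element and field names) serializes one student record as XML that any other platform can parse.

```python
import xml.etree.ElementTree as ET

def student_to_xml(student: dict) -> str:
    # Wrap one student record in a hypothetical <student> element so that
    # other platforms can consume it independently of the data layer.
    root = ET.Element("student", id=str(student["id"]))
    for field in ("name", "course", "grade"):
        ET.SubElement(root, field).text = str(student[field])
    return ET.tostring(root, encoding="unicode")
```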




Proceeding Number: 100/51

PSO based Trajectory Tracking PID Controller for Unchattering Control of Triga Mark-II Nuclear Reactor Power Level

Gürcan LOKMAN, Haliç University, Department of Electric-Electronics Engineering, Istanbul, Turkey, [email protected]
Vedat TOPUZ, Marmara University, Vocational School of Technical Science, Istanbul, Turkey, [email protected]
A. Fevzi BABA, Marmara University, Department of Electronics and Computer Education, Istanbul, Turkey, [email protected]

Keywords: Optimization of PID parameters, PSO, PID control, trajectory tracking, nuclear reactor control

INTRODUCTION

The Proportional-Integral-Derivative (PID) controller is widely used in industry thanks to its simple structure, high stability, and ease of implementation. Although there are many conventional tuning methods, such as Ziegler-Nichols, Cohen-Coon, Tyreus-Luyben, and relay-feedback auto-tuning, all of them depend heavily on a plant model and produce fixed tuning results. Neural-network, fuzzy, neuro-fuzzy, and evolutionary computational techniques have also been reported in the literature for tuning PID controller parameters. In this study, a Trajectory Tracking PID Controller (TTPIDC) is designed to control the Triga Mark-II nuclear research reactor at Istanbul Technical University (ITU), and Particle Swarm Optimization (PSO) is used to optimize its parameters. This method provides a proper and easy design procedure that requires fewer control variables of the reactor. In this approach, the controller uses a trajectory, calculated from the periods and the initial and desired reactor power levels, as the reference value of power. The aim of this study is to control the reactor using a PID controller with the given trajectory.


LITERATURE REVIEW

Particle Swarm Optimization (PSO) is a comparatively recent high-performance optimizer with several highly desirable attributes, notably the ease with which the basic algorithm can be understood and implemented. PSO is a swarm-intelligence technique developed by Kennedy and Eberhart in 1995 [11], who were inspired by the natural flocking and swarming behavior of birds and insects. The technique has been applied to many optimization and engineering problems [14].

A great deal of research has been carried out on power control of the Triga Mark-II research reactor [14-17]. The use of trajectory tracking to control a nuclear reactor, which reduces the number of controller variables, was proposed by Can et al. [17]. In 2004, Baba used this method in a fuzzy logic controller for reactor control; however, that study left many parameters to be determined by the user. To simplify the design of the controller, a PID controller can be preferred instead.

METHODS

The trajectory tracking method is used in this study. The trajectory path has three segments: the first and third are defined as third-order functions, and the second as a second-order function. The designed controller was modified as required to work together with the PSO algorithm, so that PSO takes over the difficulty of selecting suitable P, I, and D parameters. The Integral of Time-weighted Absolute Error (ITAE) was selected as the performance criterion, and the objective of the PSO algorithm is to find the optimum PID parameters that minimize this criterion.

FINDINGS & CONCLUSION

This paper introduced the PSO-based TTPID controller for the Triga Mark-II research reactor established at Istanbul Technical University. The parameters of the PID controller were determined using the PSO algorithm, which successfully optimized the criterion value. To demonstrate the effectiveness of the controller, a series of simulations was carried out on the ITU Triga Mark-II research reactor. The results demonstrate that the TTPIDC provides a simple and easy way to control the system: PSO could optimize the PID parameters, and the PSO-TTPIDC could make the system track the trajectory successfully under various working conditions within the acceptable error tolerance.


REFERENCES

1. Bernard, J.A., Use of a rule-based system for process control, IEEE Control System Magazine, 1988.
2. Akin, H.L., Altin, V., Rule-based fuzzy logic controller for a PWR-type nuclear power plant, IEEE Trans. Nucl. Sci. 38(2) (1991), pp. 883-890.
3. Lin, C., Yangh, D.H., Design of a fuzzy logic controller for water level control in an advanced boiling water reactor based on input-output data, Nuclear Technology, June 1998, vol. 122, pp. 318-329.
4. Ruan, D., Wal, A., Controlling the power output of a nuclear reactor with fuzzy logic, Information Science, 1998, pp. 151-177.
5. Ruan, D., On-line experiments of controlling nuclear reactor power with fuzzy logic, IEEE International Fuzzy Systems Conference Proceedings, August 22-25, 1999, Seoul, Korea.
6. Bernard, J. A. (1988). Use of a rule-based system for process control. IEEE Control System Magazine, 8(5), 3-13.
7. Ruan, D., & Van der Wal, A. J. (1998). Controlling the power of a nuclear reactor with fuzzy logic. Information Sciences, 110, 151-177.
8. Baba, A. F. (2004). Fuzzy logic controller. Nuclear Engineering International, 49, 36-38.
9. Instrumentation System (1976). Operation and maintenance manual. USA: General Atomic Co.; Omatu, S., Khalid, M., & Yusof, R. (1996). Neuro-control and its applications: Advances in industrial control. London: Springer-Verlag.
10. Kennedy, J., Eberhart, R.C., "Particle swarm optimization", Proc. IEEE Int. Conf. on Neural Networks, Vol. IV, pp. 1942-1948, Piscataway, NJ, 1995.
11. Y.H. Shi, R.C. Eberhart, A modified particle swarm optimizer, in: IEEE International Conference on Evolutionary Computation, 1998, pp. 69-73.
12. J. Kennedy, R.C. Eberhart, Y. Shi, Swarm Intelligence, Morgan Kaufmann Publishers, San Francisco, 2001.
13. H. Yoshida, K. Kawata, Y. Fukuyama, Y. Nakanishi, A particle swarm optimization for reactive power and voltage control considering voltage security assessment, IEEE Transactions on Power Systems 15 (2000), 1232-1239.
14. Erkan, K., Butun, E., Can, B., Triga Mark-II reactor controller design using genetic algorithm, Fuzzy Logic and Intelligent Technologies for Nuclear Science and Industry, World Scientific, Singapore, 1998.
15. Can, B., The optimal control of ITU Triga Mark-II reactor, The Twelfth European Triga Users' Conference, NRI Bucuresti-Pitesti, Romania, September 22-25, 1992.
16. Erkan, K., Can, B., Self-tuning control of ITU Triga Mark-II reactor, First Trabzon International Energy and Environment Symposium, Karadeniz Technical University, Turkey, July 29-31, 1996.
17. Can, B., Baba, A.F., Erdal, H., Optimal controller design by using three zones trajectory for ITU Triga Mark-II reactor, Journal of Marmara University for Pure and Applied Sciences, 1999, Vol. 15, pp. 187-196, Istanbul, Turkey.


Proceeding Number: 100/54

Person Dependent Model Based Facial Expression Recognition

Kamil YURTKAN, Cyprus International University, Computer Engineering Department, Lefkosa, TRNC, [email protected]
Hasan DEMİREL, Eastern Mediterranean University, Electrical and Electronic Engineering Department, Gazimağusa, TRNC, [email protected]

Keywords: Facial expression recognition, facial expression analysis, face modeling, facial expression synthesis

INTRODUCTION

Faces and facial expressions play an important role in human-computer interaction systems, and recognizing facial expressions helps support natural dialogue with a computer. Over the last ten years, developments in multimedia signal processing and 3D imaging have drawn researchers' attention to finding a robust solution to the facial expression recognition problem; such a solution would enable a number of applications in human-computer interaction. Facial expression recognition can be person dependent or person independent. Our study focuses on person-dependent recognition, and its aim is to recognize the facial expressions of a known person at acceptable recognition rates. Early research in the field classified basic human facial expressions into seven categories: anger, fear, disgust, happiness, neutral, sadness, and surprise [1]. Later, facial movements were defined on the face in order to code facial expressions [2]. Most current methods classify facial expressions from 2D facial feature point locations. Our proposed algorithm uses a 3D wire-frame model and a model-based image synthesis technique compatible with the MPEG-4 standard to synthesize the basic facial expressions of the face.
Like most previously proposed methods, ours also uses facial feature point locations, in this case to fit a 3D wire-frame model to the face. A face model and expression models are created, and artificial face images are then generated for each facial expression. Using these artificial images, we classify an input face image as the closest artificial facial expression image.

LITERATURE REVIEW

Ekman's early studies in the 1970s are accepted as the reference for the classification of basic facial expressions [1]: anger, fear, disgust, happiness, neutral, sadness, and surprise. Later, the Facial Action Coding System (FACS) was developed by Ekman and Friesen to code facial expressions, describing the movements on the face by action units [2]. Many researchers have since built on these concepts, most analyzing facial expressions in 2D by tracking facial movements through facial feature point locations in images and video. In 1999, the MPEG-4 standard defined a model for the neutral face with 84 feature points, together with 68 Facial Animation Parameters (FAPs) for animating the face through the movements of the feature points; MPEG-4 FAPs are used in many research labs to synthesize or recognize facial movements and expressions [3]. In the face modeling field, one of the most important contributions is the face model created by Stromberg, named CANDIDE, which is popular in many research studies [4]. Some methods use two orthogonal face images to generate the facial texture, others a single frontal face image. In our previous studies, we developed a face model at three resolutions (high, medium, and low) and applied two


orthogonal face images to obtain the face texture, using our rotation-adaptive texture mapping method [5]. Later, we developed a face mask to synthesize a face from a single frontal face image [6, 7]. Face modeling is also used in face recognition systems [10]. In this study, we use the artificial face and facial expression images generated by our previously proposed model-based facial expression synthesis system [9] to recognize the six basic expressions of a known individual's face.

METHODS

Our proposed method builds on the facial expression synthesis system we propose in [6]. The method uses the 3D wire-frame model and 20 facial feature points on the face, as shown in [6], to synthesize the neutral face [6, 7]. The generated wire-frame models are then deformed using MPEG-4 FAPs to derive basic facial expression models of the known face. The face and facial expression models are textured with the known person's neutral face image to synthesize the neutral face and the six basic facial expressions. An unknown face image of the known person is then classified by the nearest artificial face image. The similarity of the test image and the artificial face images is measured with a direct subtraction metric: the test face image is subtracted from each artificial face image in the set, and it is classified as the nearest facial expression image.

FINDINGS & CONCLUSION

In this study, a person-dependent, model-based facial expression recognition system has been proposed. To test system performance, we used the BU-3DFE facial expression database [8], which contains 100 subjects with the 6 basic facial expressions and the neutral face; each facial expression has 4 intensity levels, so each subject has one neutral face and 24 expression faces. Our algorithm produces high recognition rates on the BU-3DFE database: the overall recognition rate is around 83%. The proposed model-based algorithm produces promising results using only a single face image per individual for training, whereas other recognition systems use many face images in the training phase. With improvements in the training phase, higher recognition rates can be achieved with the proposed model-based system.

REFERENCES

[1] Ekman, P. & Friesen, W. (1976). Pictures of Facial Affect. Palo Alto, CA: Consulting Psychologists Press.
[2] Ekman, P. & Friesen, W. (1978). The Facial Action Coding System: A Technique for the Measurement of Facial Movement. San Francisco: Consulting Psychologists Press.
[3] G. Abrantes, F. Pereira, "MPEG-4 facial animation technology: survey, implementation and results", IEEE Transactions on Circuits and Systems for Video Technology, vol. 9, no. 2, pp. 290-305, March 1999.
[4] Ahlberg, J.: "Candide-3: an updated parameterised face". Report No. LiTH-ISY-R-2326, 2001 (Linkoping University, Sweden).
[5] Kamil Yurtkan, Hamit Soyel, Hasan Demirel, Hüseyin Özkaramanlı, Mustafa Uyguroğlu, and Ekrem Varoğlu, "Face Modeling and Adaptive Texture Mapping for Model Based Video Coding", A. Gagalowicz and W. Philips (Eds.): CAIP 2005, LNCS 3691, pp. 498-505, 2005.
[6] Kamil Yurtkan, Turgay Çelik and Hasan Demirel, "Automatic Facial Synthesis from Single Frontal Face Image", The 13th IASTED International Conference on Computer Graphics and Imaging, CGIM 2010, February 15-17, 2010, Innsbruck, Austria.
[7] Kamil Yurtkan and Hasan Demirel, "Düşük Bit Hızlı Video Kodlama Uygulamaları için MPEG-4 Uyumlu 3B Yüz Modellemesi ve Sentezi" [MPEG-4 Compatible 3D Face Modeling and Synthesis for Low-Bit-Rate Video Coding Applications], 3. Haberleşme Teknolojileri ve Uygulamaları Sempozyumu, Habtekus 2009, 9-11 December 2009, Yıldız Teknik Üniversitesi.
[8] Yin, L., Wei, X., Sun, Y., Wang, J. & Rosato, M. (2006). A 3D facial expression database for facial behavior research. In Proceedings of the International Conference on FGR, pp. 211-216, UK.
[9] Kamil Yurtkan and Hasan Demirel, "Facial Expression Synthesis from Single Frontal Face Image", 6th International Symposium on Electrical and Computer Systems, 25-26 November 2010, European University of Lefke, Gemikonağı, TRNC.
[10] Kamil Yurtkan and Hasan Demirel, "Model Based Face Recognition Under Varying Facial Expressions", International Science and Technology Conference, ISTEC-2010, 27-29 October 2010, Gazimağusa, TRNC.
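The direct-subtraction classification step of the method can be sketched as below. Images are simplified here to flat grayscale sequences, and the function names are illustrative rather than taken from the system itself.

```python
def difference(img_a, img_b):
    # Direct subtraction metric: sum of absolute pixel-wise differences
    # between the test image and one synthesized expression image.
    return sum(abs(a - b) for a, b in zip(img_a, img_b))

def classify(test_img, templates):
    # templates maps an expression name to the artificial image synthesized
    # for that expression; the label with the smallest difference wins.
    return min(templates, key=lambda name: difference(test_img, templates[name]))
```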


Proceeding Number: 100/55

Privacy Aspects of Newly Emerging Networking Environments: An Evaluation Mehmet Cudi OKUR,Yasar University,Department of Computer Engineering,Izmir,Turkey,[email protected]

Keywords: Privacy protection, anonymization, social networks, internetworking, deep packet inspection

INTRODUCTION

Hundreds of millions of people connect to the Internet to perform tasks as diverse as talking to others, video sharing, playing games, gambling, shopping, online banking, and conducting business. All these activities are essentially products of large-scale developments in hardware, software, and networking technologies. Unfortunately, these developments have also created serious security and privacy risks for individuals and organizations. The risks are especially considerable for newer networking and data processing environments, including social networks, search engines, and mobile and cloud computing. It is well known that the active and passive profiles of most entities contain private information collected with very little control or involvement on their part, and the purposes and areas in which the available information is used can be harmful to the entities concerned. This paper investigates the privacy risks involved in currently popular internetworking activities. The contributions of the major hardware and software components and of user behaviors in these environments are discussed, and the effectiveness of recommended techniques for improving privacy is assessed. It is pointed out that legal and ethical safeguards are far from sufficient to protect individuals from the privacy risks of their online activities.

LITERATURE REVIEW

Research on privacy and anonymization of public databases and other online data has attracted researchers from areas as diverse as computer science, computer engineering, psychology, and sociology. Earlier privacy protection rules, defined in [12] and similar documents, were later criticized as insufficient and outdated [13], [15]. The series of studies started by Sweeney [9] introduced and improved the privacy preservation methods known as k-anonymity, l-diversity, and t-closeness [11]. However, none of these and similar methods has been found fully effective for social networking [2], [3], cloud computing, mobile computing, or industry implementations such as deep packet inspection [4]. Popular social networking sites, including Facebook [6] and Twitter [8], have been criticized for their inadequate privacy preservation policies [7], and the resulting risks for users have mostly been reported as difficult to handle and remove [10], [5]. Legal protection in the face of deep packet inspection has also been criticized as insufficient [1].
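As a minimal illustration of the first of these methods: a released table satisfies k-anonymity when every combination of quasi-identifier values is shared by at least k records. The sketch below (field names hypothetical) checks that condition.

```python
from collections import Counter

def is_k_anonymous(records, quasi_ids, k):
    # Group records by their quasi-identifier tuple and require every
    # group to contain at least k records, per Sweeney's definition [9].
    groups = Counter(tuple(rec[q] for q in quasi_ids) for rec in records)
    return all(count >= k for count in groups.values())
```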

METHODS

In this work, the privacy implications and consequences of the internetworking expansion of the last decade are evaluated by analysing the privacy policies of social network operators and the other participants. The other important source of threats to privacy, deep packet inspection, is examined by considering both the hardware and software aspects of current technologies. Potential privacy threats from network control and security tools are examined based on their implementation methodologies, and the potentially harmful role of deep packet inspection in newer internetworking environments is identified using the hardware and software component specifications of major producers. The analyses and their results are also contrasted with the available findings in the literature. The data and specifications considered were obtained essentially from resources made publicly available by the major companies and social network operators involved.

FINDINGS & CONCLUSION

An overall evaluation of social networking environments and deep packet inspection practices indicates that privacy risks in virtual worlds keep increasing. Despite all contrary claims and the privacy preservation options offered by the companies involved, this trend will not slow down without additional legal and technical safeguards. Current practice also indicates that inconsiderate technological and legal preventive solutions have the potential to reduce the positive contributions of online activities to most aspects of daily life. It is further concluded that the speed and fast expansion of internetworking technologies render static legal frameworks ineffective for protecting the privacy rights of individuals in their online activities. For the foreseeable future, the best protection for participants still appears to be conscientious use of internetworking environments.

REFERENCES

[1] A. Daly. The legality of deep packet inspection. First Interdisciplinary Workshop on Communications Policy and Regulation 'Communications and Competition Law and Policy - Challenges of the New Decade', University of Glasgow, 2010.
[2] A. Mislove, M. Marcon, P. Gummadi, P. Druschel, and B. Bhattacharjee. "Measurement and analysis of online social networks," Internet Measurement Conference, Proceedings of the Seventh ACM SIGCOMM, 2007.


[3] A. Narayanan and V. Shmatikov. De-anonymizing social networks. In IEEE Symposium on Security and Privacy, 2009.
[4] Bivio Networks and Solera Networks (2008). White Paper: Complete Network Visibility through Deep Packet Inspection and Deep Packet Capture. Solera Networks. www.soleranetworks.com/products/documents/dpi_dpc_bivio_solera.pdf
[5] B. Zhou, J. Pei, and W. Luk. A brief survey on anonymization techniques for privacy preserving publishing of social network data. SIGKDD Explorations, 2008.
[6] H. Jones and J. H. Soltren. Facebook: Threats to privacy. Technical report, Massachusetts Institute of Technology, 2005.
[7] J. He, W. Chu, and V. Liu. Inferring privacy information from social networks. In Mehrotra, editor, Proceedings of Intelligence and Security Informatics, volume LNCS 3975, 2006.
[8] L. Humphrey, P. Gill, and B. Krishnamurthy, 2010. How much is too much? Privacy issues on Twitter. http://www2.research.att.com/~bala/papers/ica10.pdf
[9] L. Sweeney. K-anonymity: A model for protecting privacy. Int. J. Uncertain. Fuzz., 10(5), 2002.
[10] M. Hay, G. Miklau, D. Jensen, P. Weis, and S. Srivastava. Anonymizing Social Networks. University of Massachusetts Amherst Technical Report No. 07-19, 2007.
[11] N. Li, T. Li, and S. Venkatasubramanian, "t-closeness: Privacy beyond k-anonymity and l-diversity," in Data Engineering, 2007.
[12] OECD (1980). Guidelines on the Protection of Privacy and Transborder Flows of Personal Data.
[13] R. Gross, A. Acquisti, and J. H. Heinz. Information revelation and privacy in online social networks. In WPES '05: Proceedings of the ACM Workshop on Privacy in the Electronic Society, USA, 2005.
[14] R. Heatherly, M. Kantarcioglu, J. Lindamood, and B. Thuraisingham. Preventing private information inference attacks on social networks. Technical Report UTDCS-03-09, University of Texas at Dallas, 2009.
[15] R. Kumar, J. Novak, and A. Tomkins. Structure and evolution of online social networks. In KDD '06: Proceedings of the 12th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 2006.


Proceeding Number: 100/56

Realization of Campus Automation Web Information System in Context of Service Unity Architecture

Özkan CANAY, Sakarya University, Computer Research and Application Center, Sakarya, Turkey, [email protected]
Selim MERİÇ, Sakarya University, Institute of Natural Sciences, Sakarya, Turkey, [email protected]
Hayrettin EVİRGEN, Sakarya University, Computer Research and Application Center, Sakarya, Turkey, [email protected]
Metin VARAN, Sakarya University, Computer Research and Application Center, Sakarya, Turkey, [email protected]

Keywords: Service Unity, Web Based Architecture, One-Single Point Control, Campus Automation

INTRODUCTION

Information system applications are basically designed to administer business processes faster and more comprehensively. Today, institutions tend to use predominantly web-based architectures in their information system projects. These projects must provide certain qualified features, including authentication, authorization, logging, error handling/reporting, a user-friendly interface and sound account management. In practice, implementing all of these features separately in each developed application makes the application development process longer, disrupts the integrity of business systems and forces users to log in at separate addresses to access applications. In the light of these findings, we should highlight the importance of web portals. There have been many different definitions of the term web portal. Like so many of the terms that we encounter in our industry, the word portal has come to mean nearly anything; in fact, it is probably the most overused term on the web today. But what does it mean? What do you do when your boss comes to you and says he wants you to build a portal? Most people will tell you that a portal is some kind of entry page that represents a larger section of content [1]. Wikipedia, for example, defines it as follows: a web portal is a website that provides a starting point or gateway to other resources on the internet or an intranet [2]. So, given a definition as loose as that, what isn't a portal? Each of the following examples would fit the definition of a portal quite nicely:


• http://Google.com: as a search engine, it could easily be considered the starting point for the entire Internet.
• The front page of your company's intranet: it has links that lead to all of your company's content.

Specific application portals are typically built on common sets of core services, so the reusability of these services is a key problem in portal development [3]. In this study we address the reusability problem by presenting a comprehensive view of an interoperable portal architecture, beginning with a set of core services built using the web services model, together with application metadata services that can be used to build campus automation front ends out of these core services, which in turn may be aggregated. This study also presents a proposal for managing different servers and server services under a single framework. The proposed single framework might be regarded as a portal architecture. This proposal raises the security level by allowing login control operations at one single point and thus prevents recurrence of the same operations. In this study, besides gathering web-based projects into a service unity, it is also aimed to provide commonly needed services such as authentication, authorization, logging, error handling/reporting and sound account management through a user-friendly interface.

This study was supported by Sakarya University Scientific Research Projects Foundation (Project number: 2007.01.10.001)

LITERATURE REVIEW

With the introduction of web portals, the Web is in the process of reinventing itself once again. This change may prove to be more far-reaching than any other change to hit the Web, and it will change the way that university and corporate web pages are built, the organizational structures used to build them, and the fundamental way that people use the Web. Portals are not a fad or a new name for something that we have been doing all along. They will turn the Web from an institution-centric repository of information and applications into a dynamic user-centric collection of everything useful to a particular person in a particular role. Instead of a single home page that proclaims identically to all who visit how grand the institution is, portals will give nearly every user a customized, personalized, unique web page.

With the appearance of the internet and the development of advanced internet portal technology [4], the design of internet-based information systems that could serve common sector/chain information interests seems to be within reach. There are many confusing and often contradictory definitions of portals. It is useful to divide portals into two groups: horizontal portals, or HEPs (Horizontal Enterprise Portals, also called mega-portals), and vertical portals, or VEPs (Vertical Enterprise Portals). A horizontal portal is a public web site that attempts to provide its users with all the services they might need [3].

HEPs combine a wide range of services, information, etc., and building them is generally simple, because the main principle in this process is to make the site present something for every type of user [5], [6]. A VEP is a portal that delivers organization-specific information in a user-centric way. A university VEP should also deliver all the information a HEP delivers. Whereas a HEP looks the same to all who first enter it, a VEP looks quite different. Unlike a HEP, a VEP requires authentication for access. When a user logs on to a VEP, it produces a customized portal page, tailored to the user who logged on. It knows a great deal about the user because the user is a member of the organization that produced the VEP [7], [8]. It knows, for example, what cohort a user belongs to (for example, student, faculty, staff), what role a user plays (for example, help desk manager, department chair, lacrosse team member), what projects a user is involved with, how many vacation days a user has taken this year, and much more.

The present situation in European countries as well as in the US is characterized by the emergence of horizontal portals, some of them highly publicized and with strong backing by interested groups [9]. However, the majority of them have a narrow focus on one or a few sectors (products). Vertical portals are still rare and at an experimental stage. The experimental vertical portals are usually managed by a single member (enterprise) of a chain and might only be accessible to members of that chain.

Examples of horizontal portals at farm level in different countries include machinery trade, plant production, commodity trade, meat production, etc. [10]. Examples of vertical portals include a portal in meat production where farms are linked with meat processors, restaurants and consumers, and a chain-internal portal in grain/flour production where farms are linked with mills and bakeries [11]. The automation architecture proposed in this study falls into the vertical portal category, since a vertical portal delivers organization-specific information in a user-centric way. Many universities are considering what they call student portals, course portals or financial information portals. Although starting with a portal that has a limited constituency may make sense, the goal of a university should be to move as quickly as possible to a single portal that serves everyone: students, faculty members, staff members, alumni, parents of students, prospective students, trustees, donors, and anyone else who would access a university home page. This work goes beyond these necessities: the proposed comprehensive framework also combines all other campus automations under a single framework named CAWIS.

METHODS

Design Preferences

Throughout the development of this automation system, the latest web technologies were followed and investigated closely. Early in the development process, the requirements of the university were laid out clearly. Comparative performance tests were conducted among alternative technologies and the results were reported. Among different programming languages, database systems and operating systems, the following choices were made.

3.1.1 Operating System

Owing to the satisfactory results of our tests, especially in the performance and security categories, Linux was selected as the main operating system.


3.1.2 Web Server

Apache was chosen as the web server because of its close harmony with the Linux operating system [12].

3.1.3 Programming Language

Today, PHP [13], ASP [14], CFM [15], JSP [16], etc. are used as server-side scripting languages. Our tests showed that PHP is faster and more flexible than the other languages, so PHP was chosen as the programming language.

3.1.4 Database Management System

MySQL was chosen as the database system because of its close harmony with the Linux operating system and with Apache and PHP [17]. MySQL is the most commonly preferred database system alongside the Apache web server and the PHP programming language, for security and performance reasons.

3.1.5 Standardization

To avoid programming complexity and to define quality rules, a "Web Software Standards" document was adopted early in the development process.

3.1.6 Database Design

The databases currently used in other departments are a major concern of this portal. The created service unity architecture comprises a flexible database structure covering all projects used in university automation. This database model provides one-point control over all projects. In particular, it shortens transaction durations and makes the one-point security model easy to implement.

3.1.7 Security

128-bit SSL encryption was used to protect each web page [18]. User-specific information was also protected with the MD5 hash algorithm [19]. Every input was saved together with an IP number and a detailed transaction type. The authorization mechanism assigns each user a different access level for each service.
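The MD5-based protection of user-specific information described above can be sketched roughly as follows. The paper does not give field names or a salting policy, and the system itself was written in PHP, so this Python sketch only illustrates the stated idea of storing and checking an MD5 digest rather than the plaintext credential:

```python
import hashlib

def hash_password(password: str) -> str:
    """Digest a user credential with MD5, as the paper describes.

    Note: this mirrors the described 2011-era design; MD5 is no
    longer recommended for password storage today.
    """
    return hashlib.md5(password.encode("utf-8")).hexdigest()

def verify_password(password: str, stored_digest: str) -> bool:
    # One-point login control: compare the digest of the supplied
    # password against the stored digest.
    return hash_password(password) == stored_digest

digest = hash_password("s3cret")
assert verify_password("s3cret", digest)
assert not verify_password("wrong", digest)
```

With such a scheme, the portal's single control point only ever stores digests, so a leaked user table does not directly expose plaintext passwords.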

Service Unity Architecture and Services

Services running under CAWIS are structured as a one-point service framework, in other words a service unity architecture. CAWIS comprises 9 different services: WebGate, WebMail, WebObis, WebAbis, WebPbis, WebMenu, WebRehber, WebForm and WebAnket.

Figure 1. Services running under CAWIS

WebGate (User Information Service) handles access control, security logs, password changes and personal preference settings. WebMail (E-mail Service) provides mail receiving/sending with rich text support, plus file and address management. WebObis (Student Information Service) displays course schedules, selected courses, grade results and transcripts, and handles course enrollments and related edits.


WebAbis (Academic Information Service) displays course information, activity workload definitions, scores and attendance lists, and handles relative evaluation and score entry for academic staff. WebPbis (Staff Information Service) displays salary details, additional payments, charge lists, telephone bills and personal preferences, with PDF printing options for all staff. WebMenu (Monthly Meal Information Service) displays normal and diet meal lists, meal calorie values and meal ingredients. WebRehber (Telephone and E-mail Search Service) allows searching the electronic staff guidebook by name, surname, telephone, username and department. WebForm (Form Transfer Service) collects satisfaction reports, critiques and suggestions about any university matter. WebAnket (Survey Service) handles survey creation, management and reporting.

FINDINGS & CONCLUSION

Architectural Achievements:

Owing to this one-point service framework architecture, user account and session management, error handling, logging, and screen and navigation design templates are served quickly. The developed framework provides a flexible and easy procedure for implementing a new service in the portal, which also meets the need for short service implementation times.

All services in this portal use one-point username and password control. User access levels for all services are distributed after this control point. Using CGIs, it is possible to manage LDAP, mail and web server accounts on other Linux-based servers, which is another satisfying benefit of this framework.

Performance Results:

100 test users attempted many transactions on the server; the transaction durations are given as follows:

• Account creation: 0.5 s
• Password change: 0.23 s
• Account deletion: 0.41 s
• Writing to temp table: 0.012 s


REFERENCES

[1] Zabir, O.A. (2008). Building a Web 2.0 Portal with ASP.NET 3.5. O'Reilly Media, 1st edition, ISBN-13: 9780596510503.
[2] http://en.wikipedia.org/wiki/Web_portal
[3] Young, C. (2003). "Web Services Based Architecture in Computational Web Portals". Doctoral dissertation, Syracuse University, December 2003.
[4] Detlor, B. (2000). The Corporate Portal as Information Infrastructure: Towards a Framework for Portal Design. International Journal of Information Management 20(2): 91-101.
[5] Barnick, D.; Smith, D.; Phifer, G. "Q&A Trends in Internet and Enterprise Portals". Gartner Group. Bartels, A. (1999). "What's a Portal and Why It Matters". Giga Information Group.
[6] Milutinovic, V. (2001). Infrastructure for Electronic Business on the Internet.
[7] Kashyap, V.; Sheth, A. (2000). Information Brokering Across Heterogeneous Digital Data.
[8] http://www.dkms.com/ html.
[9] Fritz, M.; Kreuder, A.C.; Schiefer, G. (eds.) (2001). "Information Portals and Information Agents for Sector and Chain Information Services". Report A-01/4. University of Bonn-ILB.
[10] Schiefer, G.; Helbig, R.; Rickert, U. (eds.) (1999). Perspectives of Modern Information and Communication Systems in Agriculture, Food Production and Environmental Control. Proceedings of the 2nd European Conference of the European Federation for Information Technology in Agriculture, Food and the Environment. University of Bonn-ILB, Bonn, ISBN 3-932887-07-7.
[11] Boeve, A.D. (1999). Integrated Veal Information. In: Schiefer et al. (1999): 835-844.
[12] http://www.apache.org/
[13] http://php.net/
[14] http://www.asp.net/
[15] http://www.adobe.com/products/coldfusion
[16] http://java.sun.com/products/jsp/
[17] http://www.mysql.com/
[18] http://www.openssl.org/
[19] http://en.wikipedia.org/wiki/MD5


Proceeding Number: 100/57

Application Specific Cluster-Based Architecture for Wireless Sensor Networks

Taner CEVIK, Fatih University, Department of Computer Engineering, Turkey, [email protected]
A. Halim ZAIM, Istanbul Commerce University, Institute of Science and Engineering, Turkey, [email protected]

Keywords: Wireless sensor networks, energy conservation, routing

INTRODUCTION

Recent developments in wireless communications, signal processing, and microprocessors have made it possible to develop very small, low-cost devices called sensors. In spite of their very small dimensions, these tiny devices have their own processor, data storage, a sensing mechanism to collect physical data from the environment, and a radio for communicating with the outside. The biggest disadvantage of being small-sized is having a limited amount of energy. Therefore, much research has been carried out to solve, or contribute to solving, this energy shortage problem. Most of it has focused on the energy consumption of the communication unit, the major energy-dissipating component of a sensor node. In this paper, we present a complete energy-efficient architecture based on a clustering scheme as in LEACH. In the scheme we propose, cluster heads are selected in a round-robin manner. Data aggregated in clusters are transmitted by cluster heads to the sinks via the cluster heads of clusters that are closer to, and on the way to, the sinks. Other cluster-based schemes mainly focus on intra-cluster organization and communication; however, much more energy is dissipated during inter-cluster communication than during intra-cluster communication. The architecture we propose here does not only deal with intra-cluster communication; it also takes into account data aggregation, multi-hop data transmission and best-effort next-hop selection according to a cost factor. Simulations show that our proposed architecture achieves a considerable amount of energy savings and thereby prolongs the network lifetime.

LITERATURE REVIEW

Today, computers and electronics take up more space in our lives and take over manpower tasks that are impossible, dangerous or time-consuming for human beings to perform. Previously, using computerized electronic devices instead of human beings to collect physical data from a dangerous environment was beyond imagination. Recently, wireless sensor networks have been taking over the role of human beings in many areas, such as performing control functions in offices and factories; early diagnosis of faults in ships, trains, cars and airplanes; forest fire prevention through early detection; efficient agricultural irrigation systems; and instant notification of attacks at security-critical places [1, 2, 3, 4]. It has been observed that the amount of energy expended to transmit a single bit between two wireless sensor nodes equals the amount of energy spent by a sensor node performing a thousand transactions [5]. Another important energy-consuming unit is the sensing unit used for gathering physical data from the environment. The amount of energy spent by this unit varies with the sensor type used, but it is still negligible relative to the energy spent during data transmission [6]. Low-Energy Adaptive Clustering Hierarchy (LEACH) [7, 8] is an energy-efficient clustering-based protocol that has created a base for many studies. In LEACH, the topology is divided into clusters. Each sensor node belongs to a cluster with a cluster head (CH) selected randomly at the beginning of each round. Nodes sleep except during the transmission time interval dedicated to them by the cluster head. During the transmission period, if they have packets to send, they transmit them to their cluster heads. After the cluster heads get all the packets from the nodes in their clusters, they compress and reduce the size of the data to be sent to the sink. PEGASIS (Power-Efficient Gathering in Sensor Information Systems) [9] is an improvement on LEACH [7, 8] and builds on its fundamental logic.
However, in PEGASIS, a chain structure is applied instead of a clustering scheme. Each node in the chain sends data to and receives data from its closest neighbor. Chains are formed either by applying a greedy algorithm or are calculated by the sink and then broadcast to all nodes in the topology. HEED (Hybrid Energy-Efficient Distributed Clustering) [10] is another energy-efficient approach to clustering nodes in sensor networks. In HEED, cluster heads are selected periodically, but not at each round, according to a hybrid of their residual energies and a parameter called average minimum reachability power (AMRP). The architecture proposed in [11] is built over the ZigBee/802.15.4 protocol [12]. The tree structure of the clusters is assumed to be created using the tree addressing scheme of ZigBee, and the aim is to find a time-division cluster schedule that prevents inter-cluster collisions.

METHODS

Cluster formation and notification of sensor nodes are performed by a central controller mechanism at the setup stage. Sinks are deployed around the network area so as to surround the whole simulation area. Each cluster has a single cluster head, as in the structure described in LEACH [7, 8]. However, there are some differences with regard to the intra-cluster communication scheme of LEACH. In LEACH, a frame is defined in the time domain and divided into small sub-frames, each belonging to a sensor node. Every node sends its in-node-processed data to its cluster head in its pre-allocated sub-frame, an obvious application of Time Division Multiplexing. In our approach, we applied the Code Division Multiple Access (CDMA) scheme for intra-cluster communication, as used in cellular technology.
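The CDMA idea described above can be sketched with orthogonal (Walsh/Hadamard) spreading codes. The paper does not specify which code family the controller assigns, so this is an illustrative assumption: each plain node spreads its bit with its own orthogonal code, the transmissions superpose on the channel, and the CH recovers each node's bit by correlation:

```python
def walsh_codes(n):
    """Rows of a Hadamard matrix of order n (n a power of two);
    the rows are mutually orthogonal spreading codes."""
    H = [[1]]
    while len(H) < n:
        H = [row + row for row in H] + [row + [-x for x in row] for row in H]
    return H

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

# Hypothetical sketch: 4 plain nodes, each assigned one orthogonal
# code at setup, transmit one bit (+1 or -1) simultaneously to the CH.
codes = walsh_codes(4)
bits = [1, -1, -1, 1]
channel = [sum(b * c[k] for b, c in zip(bits, codes)) for k in range(4)]

# The CH recovers each node's bit by correlating the superposed
# signal with that node's code.
decoded = [1 if dot(channel, c) > 0 else -1 for c in codes]
assert decoded == bits
```

Because the codes are mutually orthogonal, the cross-correlation terms cancel exactly, which is what lets all nodes in a cluster transmit in the same time slot without collisions.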


After the packet is generated, nodes encode it with the orthogonal code assigned at the setup stage by the controller. After sending their data to the CH, plain nodes do not need to stay in the active state, because all communication after that point is maintained by the CHs. Thus, until the start of the next round, which is the next data collection period, plain nodes remain in the sleep state and do not consume energy unnecessarily. Nodes take turns as Cluster Head (CH) every round in a round-robin manner, in order to maintain load balance and fairness. After decoding the data coming from the plain nodes in the cluster, the CH applies data aggregation: instead of sending the data of every node, the minimum and maximum values are selected and sent together with the owner IDs of those packets. Next-hop calculation is performed locally, that is, by considering only the attributes of the next-layer cluster heads. The calculation is based on the formula below:

CostFactor = RsdEnCHi · d³  (1)

The next-layer CH in the coverage area with the minimum cost value is selected as the next hop. Another challenge has to be dealt with while selecting the next hop: the CHs of two adjacent clusters in the same layer can select the same next CH at the same time. Therefore, although it is possible to select two distinct CHs as next hops, load balance and fairness may not be achieved without a control mechanism. In our project, a token mechanism keeps such next-hop selection clashes under control. Every sink knows the geographical position of each node in the topology. Therefore, by looking at the related fields of the packets, the sinks can calculate the energy spent in send and receive operations by the nodes on the full path. Each time a packet arrives at a sink, the sink records the energy changes of the nodes on the path. All packets emerging from clusters at different layers arrive at the sink closest to that cluster, so every sink must inform the others about incoming packets. A notification mechanism is applied to achieve this: one sink is charged with the notification process and permanently employed with this task. At the beginning of each round, every sink starts a timer. When the timer expires, all sinks transmit their data to the sink charged with the notification process. To prevent collisions, a CDMA mechanism is employed here as well: each sink is assigned its own code, orthogonal to the codes of the other sinks, so all sinks can send information about incoming packets to the informer sink at the same time without any collision. As soon as the informer sink gets the messages from the other sinks and decodes them, it generates the final message and broadcasts it to the whole network. During the broadcast process, not all nodes in the network need to get the message: the broadcast notification message is only needed by the cluster heads of the following round, so the other nodes do not spend energy redundantly during this broadcast stage.
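A literal reading of the next-hop rule above (Eq. (1), minimum cost wins) can be sketched as follows. The paper gives no concrete data structures, so the (x, y, residual-energy) tuples and function names here are hypothetical:

```python
import math

def cost_factor(residual_energy: float, distance: float) -> float:
    # Cost factor as written in Eq. (1): the residual energy of the
    # candidate CH multiplied by the cube of the distance to it.
    return residual_energy * distance ** 3

def select_next_hop(sender, candidates):
    """Pick the next-layer CH with the minimum cost factor.

    `sender` and each candidate are hypothetical (x, y, energy)
    tuples; real coordinates and energy readings would come from
    the setup-stage controller.
    """
    def cost(ch):
        x, y, energy = ch
        sx, sy, _ = sender
        d = math.hypot(x - sx, y - sy)
        return cost_factor(energy, d)
    return min(candidates, key=cost)

# Example: a closer candidate wins because d enters cubed.
sender = (0.0, 0.0, 1.0)
candidates = [(3.0, 4.0, 0.5), (6.0, 8.0, 0.5)]
assert select_next_hop(sender, candidates) == (3.0, 4.0, 0.5)
```

The cubic distance term makes the selection strongly favor nearby cluster heads, matching the paper's observation that long-range radio links dominate energy dissipation.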

CONCLUSION

In this paper, we presented an energy-efficient cluster-based architecture containing various energy-efficient techniques to decrease the energy dissipation of the nodes and thereby prolong the network lifetime. Cluster-based structures have previously been proven more energy-efficient than others, and we observed this in our simulations with brief measurements. As discussed in previous sections, the hot-spot problem emerges in wireless sensor networks because sensor nodes located near the sink have to relay the data of sensor nodes that are far from the sink and cannot transmit directly to it. Thus, a multi-layer structure is employed together with the clustering mechanism. In this approach, clusters located near the sinks are sized larger and contain more nodes than the ones far from the sinks, because nodes in the nearby clusters transmit more data, and so their turn at being cluster head must come less frequently in order to prevent energy depletion. Another method improving energy efficiency is the multi-sink idea. With a single sink located at one side of the topology, all data traffic flows over the nodes located on the same side as the sink; those nodes quickly deplete their energy and the network lifetime is reduced. However, with multiple sinks surrounding the whole topology, instead of relaying data to a sink on the other side of the topology, CHs can forward their data to the closest possible sink. Furthermore, another challenge discussed in this study is finding a cost factor to be applied during next-hop selection. The cost factor proposed in our architecture helps nodes select the optimal next hop by means of the residual energy of the next node and the distance between sender and receiver. By employing cost-factor-based routing, the network lifetime is prolonged by a factor of 50%. By using multi-channel communication, collisions are prevented, and the undesirable energy consumption caused by packet retransmissions is precluded. Besides, after physical data are collected from the environment and transmitted to the CHs, it is unnecessary for the plain nodes to stay awake until the start of the next round. Thus, a periodic sleep/wake-up schedule is employed with the aim of avoiding redundant energy consumption. In conclusion, none of the methods described above is alone enough to minimize the energy consumption in the network; the best approach is to use the above-mentioned methods blended together, as we did in our study.

REFERENCES

1. Sohraby, K.; Minoli, D.; Znati, T. (2007). Wireless Sensor Networks. Wiley.
2. Seminar, University of Massachusetts, USA. (2003).
3. Whitehouse, K.; Sharp, C.; Brewer, E.; Culler, D. (2004). Hood: A Neighborhood Abstraction for Sensor Networks. In Proceedings of the ACM International Conference on Mobile Systems, Applications, and Services (MobiSys'04).
4. Whitehouse, K.; Karlof, C.; Culler, D. (2004). Getting Ad-Hoc Signal Strength Localization to Work. Technical Report, University of California-Berkeley.
5. Pottie, G.; Kaiser, W. (2000). Wireless Integrated Network Sensors. Communications of the ACM.
6. Anastasi, G.; Conti, M.; Francesco, M. D.; Passarella, A. (2009). Energy Conservation in Wireless Sensor Networks: A Survey. Elsevier Ad Hoc Networks.


7. Heinzelman, W.; Chandrakasan, A.; Balakrishnan, H. (2000). Energy-Efficient Communication Protocol for Wireless Microsensor Networks. In Proceedings of the Hawaii International Conference on System Sciences, Island of Maui, Hawaii.
8. Heinzelman, W.; Chandrakasan, A.; Balakrishnan, H. (2002). An Application-Specific Protocol Architecture for Wireless Microsensor Networks. IEEE Transactions on Wireless Communications 1(4): 660-670.
9. Lindsey, S.; Raghavendra, C. S. (2002). PEGASIS: Power-Efficient Gathering in Sensor Information Systems. In Proceedings of the IEEE Aerospace Conference.
10. Younis, O.; Fahmy, S. (2004). HEED: A Hybrid, Energy-Efficient, Distributed Clustering Approach for Ad Hoc Sensor Networks. IEEE Transactions on Mobile Computing 3(4): 366-379.
11. Hanzalek, Z.; Jurcik, P. (2010). Energy Efficient Scheduling for Cluster-Tree Wireless Sensor Networks with Time-Bounded Data Flows: Application to IEEE 802.15.4/ZigBee. IEEE Transactions on Industrial Informatics.
12. IEEE P802.15 Wireless Personal Area Networks: Proposal for Factory Automation, Working Draft Proposed Standard. (2009).


Proceeding Number: 100/58

Scalability Evaluation of Wireless Ad-Hoc Routing Protocols in the ns-2 Network Simulator

Zafer Albayrak, Ahmet Zengin, Fatih Çelik, Mehmet Recep Bozkurt

Keywords: MANETs, Ns-2, Scalability

INTRODUCTION

In recent years, progress in communication technology has made wireless devices smaller, less expensive and more powerful. This rapid technological advance has driven great growth in mobile devices connected to the Internet [1]. There are two types of wireless networks:

- infrastructure networks (Fig. 1)
- ad-hoc networks (Fig. 2)

Fig. 1 [1]  Fig. 2 [1]

In an infrastructure wireless network, there exists a base station (BS) or an access point (AP) serving as the portal for wireless devices. An ad-hoc network [2, 3, 4] is a self-organized, dynamically changing multi-hop network. All mobile nodes in an ad-hoc network are capable of communicating with each other without any established centralized controller. The mobility of the wireless nodes causes the network topology to change. Many routing protocols have been proposed for ad-hoc networks [1, 5, 6, 7, 8, 9, 11, 12]. These routing protocols can be divided roughly into two types: table-driven and on-demand [9]. Table-driven routing protocols, such as Destination-Sequenced Distance-Vector routing (DSDV) [8], attempt to keep a global picture of the network topology and respond to topological changes by propagating update messages throughout the wireless network. One or more tables are required to maintain consistent, up-to-date routing information for each node in the wireless network. In a highly mobile network environment, keeping the routing information fresh causes heavy overhead.
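The table-driven rule at the core of DSDV can be sketched as follows. The concrete packet formats and table fields are not given here, so this is a simplified assumption (each table entry is a (metric, sequence-number, next-hop) triple): an advertised route replaces the current one only when it carries a newer destination sequence number, or an equal one with a lower hop count.

```python
def dsdv_update(table, neighbor, advertised):
    """Merge a neighbor's advertised routes into a node's DSDV table.

    table: dest -> (metric, seq_no, next_hop); advertised:
    dest -> (metric, seq_no) as seen by the neighbor. This is a
    schematic illustration of the table-driven update rule, not the
    full protocol (no incremental/full-dump distinction, no broken-
    link handling).
    """
    for dest, (metric, seq) in advertised.items():
        new = (metric + 1, seq, neighbor)      # one extra hop via neighbor
        cur = table.get(dest)
        if (cur is None or seq > cur[1]
                or (seq == cur[1] and metric + 1 < cur[0])):
            table[dest] = new
    return table

# Example: a fresher sequence number displaces an existing route.
table = {"B": (1, 10, "B")}
dsdv_update(table, "C", {"B": (1, 12), "D": (2, 5)})
assert table["B"] == (2, 12, "C")
assert table["D"] == (3, 5, "C")
```

Sequence numbers are what keep this loop-free: a stale advertisement can never overwrite a route learned from a newer one, which is exactly why every topology change must be re-propagated, producing the heavy overhead noted above.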


Recently, on-demand routing protocols for ad-hoc networks have become appealing because of their low routing overheads and their effectiveness when the frequency of route re-establishment and the demand for route queries are not high. In on-demand routing protocols, a route is created only when it is desired by the source node. Many on-demand routing protocols have been proposed [8, 11, 7, 10]. High routing overheads usually have a significant impact on performance over low-bandwidth wireless links. Therefore, reactive on-demand routing algorithms, in which routing paths are established only when required, are the recent trend in ad-hoc networks; an example is the Ad-hoc On-Demand Distance-Vector (AODV) routing protocol. In the AODV protocol, only a single path is established for a transmission. Therefore, when the transmission path fails, data packets are simply dropped by the nodes along the broken path.
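A rough sketch of the on-demand idea behind AODV: route discovery is modelled here as a breadth-first RREQ flood over the current topology, with the RREP unwinding the reverse pointers to yield the single established path. Real AODV additionally uses destination sequence numbers, route tables and timeouts, which are omitted from this illustration:

```python
from collections import deque

def aodv_route_discovery(adj, src, dst):
    """Schematic AODV-style discovery over an adjacency dict.

    Floods an RREQ hop by hop (BFS) until the destination is
    reached, then returns the single route the RREP would set up,
    or None if no route exists (in which case packets are dropped).
    """
    parent = {src: None}
    queue = deque([src])
    while queue:
        node = queue.popleft()
        if node == dst:                      # destination answers with an RREP
            path, hop = [], dst
            while hop is not None:           # unwind reverse pointers
                path.append(hop)
                hop = parent[hop]
            return path[::-1]
        for nbr in adj.get(node, ()):        # rebroadcast the RREQ
            if nbr not in parent:            # drop duplicate RREQs
                parent[nbr] = node
                queue.append(nbr)
    return None                              # no route to the destination

# Example: a small hypothetical topology.
adj = {"S": ["A", "B"], "A": ["D"], "B": ["A"], "D": []}
assert aodv_route_discovery(adj, "S", "D") == ["S", "A", "D"]
```

Note that only one path is returned, mirroring the single-path property of AODV noted above: if a link on that path later breaks, data packets are dropped until a new discovery is triggered.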

The BeeAdHoc [14] algorithm has been included in the comparison as an energy-efficient routing algorithm inspired by the foraging principles of honey bees. The bee behavior was instrumental in designing efficient mobile agents, scouts and foragers, for routing in mobile ad-hoc networks. Bees are often forced to travel long distances to find food. When a foraging bee finds a food source, it returns to the hive and, after a while, starts to fly around the other bees to notify the members of the colony. Honey bees are deaf and therefore cannot communicate with each other vocally; instead, they communicate through movement patterns called dances. A dance conveys the distance to and direction of the food source, as well as information about its quality and quantity [13].

In this paper, the AODV, DSDV and BeeAdHoc algorithms are empirically compared to investigate their large-scale behavior. The results are presented as graphs and show that AODV performs very well under almost all conditions.

REFERENCES

[1] Wei K. Lai, S. Hsiao, Y. Lin, Adaptive backup routing for ad-hoc networks, 2006.
[2] Internet Engineering Task Force (IETF) Mobile Ad Hoc Networks (MANET) Working Group Charter, chaired by Joseph Macker and M. Scott Corson, http://www.ietf.org/html.charters/manet-charter.html.
[3] J. Jubin, J.D. Tornow, The DARPA packet radio network protocols, Proceedings of the IEEE 75 (1) (1987) 21–32.
[4] C.E. Perkins, Ad Hoc Networking, Addison Wesley (2001). and RSSI information, IEICE E88-B (9) (2005) 3588–3597.
[5] S. Corson, J. Macker, Mobile ad hoc networking (MANET): routing protocol performance issues and evaluation considerations, RFC 2501 (1999).
[6] D.B. Johnson, D.A. Maltz, Dynamic source routing in ad hoc wireless networks, in: Mobile Computing, Kluwer Academic Publishers, 1996, pp. 153–181.
[7] C.E. Perkins, E. Royer, Ad hoc on-demand distance vector routing, Proceedings of IEEE WMCSA (1999).


[8] C.E. Perkins, P. Bhagwat, Highly dynamic destination-sequenced distance-vector routing (DSDV) for mobile computers, Proceedings of the ACM SIGCOMM (1994) 234–244.
[9] E.M. Royer, C.-K. Toh, A review of current routing protocols for ad-hoc mobile networks, IEEE Personal Communications 6 (2) (1999) 46–55.
[10] R. Dube, C.D. Rais, K. Wang, S.K. Tripathi, Signal stability based adaptive routing (SSR alt SSA) for ad hoc mobile networks, IEEE Personal Communications (1997).
[11] C.E. Perkins, E. Royer, and S. Das, "Ad Hoc On-Demand Distance Vector (AODV) Routing," Internet Draft, draft-ietf-manet-aodv-13.txt, February 2003.
[12] H.F. Wedde and M. Farooq, "The wisdom of the hive applied to mobile ad-hoc networks," University of Dortmund, 2005.
[13] K. von Frisch, The Dance Language and Orientation of Bees, Harvard University Press, Cambridge, 1967.


Proceeding Number: 100/59

A Design for Practical Fault Tolerance in Java Accessing Native Code

Ebubekir Temizkan, TUBITAK Space Technologies Research Institute, Ankara, Turkey, [email protected]
Kerem Önal, TUBITAK Space Technologies Research Institute, Ankara, Turkey, [email protected]

Keywords: Java, JNI, Fault Tolerance

INTRODUCTION

Fault tolerance has always been a necessary part of software systems, and it becomes more important as those systems evolve. Nowadays, many software systems are implemented in high-level programming languages in order to avoid fatal errors. One of these languages is Java. It does not allow pointer arithmetic, which can cause memory errors, and its null-pointer and array-bound checking mechanisms make it easier to write robust software. But serious errors that crash the whole Java Virtual Machine (JVM) may occur when it comes to accessing native code (C/C++). We need native code to operate hardware or to run mathematically intensive algorithms, and it is much harder to free these parts from errors that may lead to system failures. Java code must remain stable and responsive in case of faults in the underlying native part and be tolerant to those faults. Unfortunately, Java's proposed mechanism, the Java Native Interface (JNI) [1], has no means of preventing the JVM from crashing in case of memory errors in native code. Separation of operating system processes brings us the fault tolerance we need through memory address space protection. There are several ways to implement this approach in Java. In this paper we analyze the details of this process separation approach and propose a few architectures that isolate the faults. We have already conducted experiments on a ballistic imaging system with one of the appropriate architectures and prevented 41 failures in 1504 cycles.


LITERATURE REVIEW

Software reliability is the probability that software will not cause the failure of a product for a specified time under specified conditions [2]. It is also an important factor affecting system reliability. Software fault tolerance is a necessary aspect of a system with high reliability: it is a way of handling unknown and unpredictable software system failures [3]. With the development of complex software systems, fault tolerance has become unavoidably significant, and researchers have studied this subject intensively. Golden G. Richard III and Tu [4] presented their experiences with Java as a portable solution for developing completely fault tolerant programs. They also discussed Java-based fault tolerant programming patterns that allow the development of fault tolerant Java solutions for a large class of sequential and distributed applications. Xin et al. [5] designed an object-oriented development framework for fault tolerant systems in Java. They used Java RMI as the communication mechanism and applied object-oriented design methods to improve software usability. Pleisch and Schiper [6] presented a Java-based fault tolerant mobile agent system called FATOMAS. Liang et al. [7] provided a fault tolerance mechanism based on Java exception handling, Java thread status capturing technology and mobile agents to handle Grid faults on the Linux platform. Kumar [8] proposed a fault tolerant system design and implementation for CORBA that employs object replication, using Java serialization to capture object state; the main objective of that study was to improve the availability of Java objects in CORBA 2. Li et al. [9] proposed a fault tolerance sandbox to support reconfigurable fault tolerant mechanisms on application servers, improving the availability and reliability of reconfigurable applications. Lastly, Yang et al. [10] implemented a tool named McC++/Java for enabling monitoring and fault tolerance in multi-core C++/Java applications.

METHODS

In our ballistic imaging system [11], we first need to operate a camera that comes with closed-source firmware. This is a risk: in our case, the last two firmware updates of the camera resolved two important bugs that caused segmentation faults and deadlocks. Second, we have custom-made hardware and its controller software, developed by our team, whose tests were done solely during this project. Finally, we sped up some mathematical algorithms by implementing them in C, using extensive pointer arithmetic that may lead to memory problems. These problems are the main cause of JVM crashes. We analyzed JNI, a separate native executable, and a child Java process as techniques for interoperating with the native libraries. For the last one, we have to solve the communication problem between the main and child Java processes. We propose four ways to overcome this problem: the two parts can communicate over a disk drive, a software RAM drive, a socket, or other advanced inter-process communication techniques (e.g. named pipes, shared memory) that are highly dependent on the operating system.
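The process-separation idea can be sketched in a few lines. This is a language-neutral illustration in Python, not the authors' Java/JNI implementation: a child process runs the risky call, and the parent classifies crashes and hangs without itself failing. The `risky_native_call` workload is hypothetical, standing in for a native C/C++ routine.

```python
import multiprocessing as mp

def risky_native_call(conn, data):
    """Stand-in for a JNI-style native call that may crash or hang.
    (Hypothetical workload; the paper's real system calls C/C++ code.)"""
    if data == "segfault":
        import os
        os._exit(139)           # simulate a segmentation-fault crash
    conn.send(data.upper())     # normal result sent back over the pipe
    conn.close()

def call_isolated(data, timeout=5.0):
    """Run the risky call in a child process so crashes and hangs are
    contained by address-space protection: the parent only observes
    an error result and keeps running."""
    parent, child = mp.Pipe()
    proc = mp.Process(target=risky_native_call, args=(child, data))
    proc.start()
    proc.join(timeout)
    if proc.is_alive():         # hang: kill the child, parent survives
        proc.terminate()
        proc.join()
        return ("hang", None)
    if proc.exitcode != 0:      # crash: contained in the child
        return ("crash", None)
    return ("ok", parent.recv())

if __name__ == "__main__":
    print(call_isolated("hello"))     # ('ok', 'HELLO')
    print(call_isolated("segfault"))  # ('crash', None) -- the parent survives
```

The pipe here plays the role of the paper's inter-process channel; a socket, disk file, or RAM drive file would fill the same slot with different performance trade-offs.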


FINDINGS & CONCLUSION

In this paper, we proposed a fault tolerance design for Java accessing native code on a single computer, and we explained methodologies for fault tolerant software systems that interoperate Java and C/C++. The main concept of our design is separating processes to provide address space protection. In our case, we used a child Java process to interoperate with the native libraries and a software RAM drive to overcome the communication problem. We implemented the proposed design on our ballistic imaging system and conducted experiments. The ballistic imaging system was operated by two ballistic experts on 1504 cartridge cases, producing a log file of about 6.5 MB. In total, 72,192 native methods were called by Java in the child process; 18 faults crashed the child process and 23 faults caused it to hang. We succeeded in encapsulating the crashes and hangs within the child process: crashes and hangs caused by the native code do not lead to failure of the whole system. Thus, our system is reliable and continues to operate without any data loss.

REFERENCES

1. Liang, S. (1999). The Java Native Interface: Programmer's Guide and Specification. Addison-Wesley.
2. Krajcuskova, Z. (2007). Software Reliability Models. Radioelektronika International Conference, Brno, 24-25 April.
3. Pan, J. (1999). Software Reliability. Carnegie Mellon. From the World Wide Web: http://www.ece.cmu.edu/~koopman/des_s99/sw_reliability.
4. Richard III, G. G. and Tu, S. (1998). On Patterns for Practical Fault Tolerant Software in Java. Proceedings of IEEE Reliable Distributed Systems Symposium, West Lafayette, IN, 20-23 Oct.
5. Xin, Z., Daoxu, C. and Li, X. (1998). An object-oriented developing framework of fault-tolerant system. In Proc. of the 31st International Conference on Technologies of Object-Oriented Language and Systems.
6. Pleisch, S. and Schiper, A. (2001). FATOMAS: A Fault-Tolerant Mobile Agent System Based on the Agent-Dependent Approach. Proc. Int'l Conf. Dependable Systems and Networks.
7. Liang, J., WeiQin, T., JianQuan, T. and Bo, W. (2003). A Fault Tolerance Mechanism in Grid. Proceedings of Industrial Informatics IEEE International Conference.
8. Kumar, A. (2008). A fault-tolerant system for Java/CORBA objects. Parallel and Distributed Processing IEEE International Symposium.
9. Li, J., Huang, G., Chen, X., Chauvel, F., Mei, H. (2009). Supporting Reconfigurable Fault Tolerance on Application Servers. Parallel and Distributed Processing with Applications, IEEE International Symposium.
10. Yang, L., Yu, L., Tang, J., Wang, L., Zhao, J. and Li, X. (2010). McC++/Java: Enabling Multi-core Based Monitoring and Fault Tolerance in C++/Java. Engineering of Complex Computer Systems (ICECCS) IEEE International Conference.
11. Sakarya, U., Es, S., Leloglu, U., Tunali, E. and Birgul, O. (2010). A Case Study of an Automated Firearms Identification System: BALISTIKA 2010. 1st Annual World Congress of Forensics 2010.


Proceeding Number: 100/60

On the Cache Performance of Time-Efficient Sorting Algorithms

Ilker KORKMAZ, Izmir University of Economics, Department of Computer Engineering, Izmir, Turkey, [email protected]
Senem KUMOVA METIN, Izmir University of Economics, Department of Software Engineering, Izmir, Turkey, [email protected]
Orhan DAGDEVIREN, Izmir University, Department of Computer Engineering, Izmir, Turkey, [email protected]
Fatih TEKBACAK, Izmir Institute of Technology, Department of Computer Engineering, Izmir, Turkey, [email protected]

Keywords: Merge Sort, Quick Sort, Cache Effected Sorting, Simulation, Complexity

INTRODUCTION

Sorting [1] is a common operation in both mathematics and computer science. Sorting functions are generally implemented as top-level definitions to be called whenever required in any software program or mathematical application; they are therefore ready-to-use, reusable tools for any science and engineering application. There are two main performance issues for a sorting algorithm: time and space complexity. Although there is a theoretical trade-off between these two issues, practical sorting implementations try to balance their effects and provide efficient running times. Some hardware configurations allow sorting code to execute in a more performance-effective manner; in this respect, the cache configuration is an important concern affecting the performance of sorting algorithms.

In this paper, we will introduce a comprehensive performance evaluation of cache-tuned, time-efficient sorting algorithms. We will cover the Level 1 and Level 2 cache performance of memory-tuned versions of both merge sort [2] and quicksort [3] running on data sets of various sizes and probability distributions. Unlike the previous studies in [5-9], these algorithms are implemented on a Pentium 4 processor [4]. The data and instruction miss counts on the Level 1 and Level 2 caches, together with the time consumption of each algorithm, are measured using the Valgrind simulator [10]. In addition, we give the design of a new merge sort algorithm that aims to decrease run time by reducing Level 1 misses.


LITERATURE REVIEW

Sorting algorithms affect the runtime and efficiency of the operations in which they are used; well-designed algorithms can yield decreased running times. The traditional method accepted as the most effective way of reducing an algorithm's run time is decreasing its number of instructions, and most researchers rearrange algorithms so as to reduce the instruction count. Although reducing the instruction count of an algorithm has a positive effect on running time, it does not by itself supply enough improvement for sorting algorithms. The approach of considering cache usage in sorting algorithms to reduce the running time of sorting operations was first stated by LaMarca and Ladner [5-7]. Subsequently, Xiao et al. [8] contributed to the literature on cache-effected algorithms with their tiled merge sort with padding and multi merge sort with TLB padding algorithms. Sorting algorithms such as multi merge sort and tiled merge sort take account of cache properties and cache size, and reduce running time by reducing memory access rates or cache miss rates. The basic merge sort algorithm, designed in the divide-and-conquer paradigm, is a comparison sort with O(nlogn) complexity. In the tiled merge sort algorithm, the data is divided into subarrays that are sorted individually. The multi merge sort algorithm merges all subarrays in one operation. Tiled merge sort with padding organizes data locations to decrease the collision miss rate, while multi merge sort with TLB padding aims to reduce the TLB misses created by the multi merge sort algorithm. Other examples of cache-effected algorithms are different versions of the quicksort algorithm, such as memory-tuned quicksort and multi-quicksort. Cache-effected algorithms can be simulated with Valgrind [10] and Cachegrind, a Valgrind tool that detects miss rates by simulating the Level 1 and Level 2 caches.

METHODS

In this study, in order to investigate the cache performance of sorting algorithms and the effect of cache usage on them, the basic variations of merge-based algorithms (base merge sort, tiled merge sort, multi merge sort, and tiled merge sort with padding) are examined in the Valgrind simulator [10]. In the experimentation step, the two-level cache structure of the Northwood-core Pentium 4 processor architecture [4] is simulated. The Level 1 cache is configured as 4-way associative with a total capacity of 8 K and a line size of 64 bytes; the Level 2 cache is 8-way associative with a capacity of 256 K and a line size of 128 bytes. Several data sets with different distributions (random, Poisson, etc.) ranging from 1 K to 4096 K are used as input sets. Each experiment is repeated 5 times, and the average value over the repetitions is used to evaluate the corresponding algorithm. The data miss rates of the Level 1 and Level 2 caches are used to observe the cache performance of each algorithm, and the running times of the implementations are also measured. It is observed that although all merge sort implementations have the same time complexity of O(nlogn), the algorithms that have more misses in the last level of the cache, and hence more accesses to memory, result in higher running times.
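A rough sketch of the tiled idea follows; this is illustrative only, not the evaluated implementation (tile size and merge order are simplified, and Python's built-in sort stands in for the in-cache sorting pass). Sorting cache-sized tiles first keeps each tile's working set in cache; the sorted tiles are then merged pairwise.

```python
def merge(left, right):
    """Standard two-way merge of sorted lists."""
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            out.append(left[i]); i += 1
        else:
            out.append(right[j]); j += 1
    out.extend(left[i:]); out.extend(right[j:])
    return out

def tiled_merge_sort(data, tile=1024):
    """Sort cache-sized tiles independently (good locality), then merge
    the sorted tiles pairwise until a single sorted run remains."""
    runs = [sorted(data[i:i + tile]) for i in range(0, len(data), tile)]
    while len(runs) > 1:
        runs = [merge(runs[i], runs[i + 1]) if i + 1 < len(runs) else runs[i]
                for i in range(0, len(runs), 2)]
    return runs[0] if runs else []

import random
xs = [random.randrange(10**6) for _ in range(5000)]
assert tiled_merge_sort(xs) == sorted(xs)
```

The asymptotic complexity is unchanged at O(n log n); only the memory access pattern differs, which is exactly why the paper measures cache misses rather than instruction counts.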


FINDINGS & CONCLUSION

The results of the experiments comparing different variations of the merge sort algorithm show that the cache-effective ones achieve lower miss rates in both the Level 1 and Level 2 caches at large data sizes. Moreover, the tiled merge sort and tiled merge sort with padding variations reduce run time at large data sizes compared to the other algorithms. Initial comparison results for the different merge sort algorithms were given previously in [9]; further details of the running time and cache performance results for different sorting algorithms, including quicksort, will be given in the full paper. Using these findings, we will show the design of a new merge sort algorithm that aims to reduce the Level 1 miss rate in order to decrease run time. As a result, it is possible to state that although cache-effective algorithms have the same theoretical time complexities as the other algorithms, they have an advantage in cache miss rates that reduces running time in practice.

REFERENCES

[1] H. Demuth, "Electronic Data Sorting". PhD thesis, Stanford University, 1956.
[2] D. E. Knuth, "The Art of Computer Programming", Addison Wesley, Reading, MA, 1973.
[3] C. A. R. Hoare, "Quicksort", The Computer Journal 5(1) (1962), pp. 10-16.
[4] http://www.intel.com/products/desktop/processors/pentium.htm
[5] A. LaMarca and R. E. Ladner, "The Influence of Caches on the Performance of Sorting", The 8th Annual ACM Symposium on Discrete Algorithms, SODA 97.
[6] A. LaMarca and R. E. Ladner, "The Influence of Caches on the Performance of Heaps", Journal of Experimental Algorithms 1 (1996).
[7] A. LaMarca and R. E. Ladner, "The Influence of Caches on the Performance of Sorting", Journal of Algorithms 31(1) (1999), pp. 66-104.
[8] L. Xiao, X. Zhang, and S. A. Kubricht, "Improving Memory Performance of Sorting Algorithms", ACM Journal on Experimental Algorithmics 5(3) (2000), pp. 1-22.
[9] F. Tekbacak, I. Korkmaz, O. Dagdeviren, S. K. Metin, "Application and Performance Analysis of Cache Effected Merge Sort Algorithms", The 4th International Student Conference on Advanced Science and Technology, ICAST 10, Izmir, May 2010.
[10] http://valgrind.org/


Proceeding Number: 100/61

Performance Comparison of A Homogeneous Linux Cluster and A Heterogeneous Windows Cluster

Deniz DAL, Ataturk University, Engineering Faculty, Computer Engineering Department, Erzurum, TURKEY, [email protected]
Tolga AYDIN, Ataturk University, Engineering Faculty, Computer Engineering Department, Erzurum, TURKEY, [email protected]

Keywords: Parallel Computing, Cluster Computing, Linux Cluster, Windows Cluster, Homogeneous Cluster, Heterogeneous Cluster

INTRODUCTION

A cluster is a type of parallel or distributed processing system consisting of a collection of interconnected stand-alone computers cooperatively working together as a single, integrated computing resource [1]. Clusters serve a number of purposes. A single system with several computers can be used to meet the high-performance needs of a scientific application; another cluster may be recruited to provide high availability for a web-serving environment. Using commodity computers and connecting them over a switching device enables us to obtain processing power that once only supercomputers could offer. The flexibility of including a new processing unit when one of the computing nodes fails makes clusters a highly reliable and scalable solution for system administrators. Because of these advantages, there are many computer clusters on the world's top 500 supercomputers list, alongside SMP-style supercomputers. Linux is the top operating system choice for the computing nodes of a cluster, since it is open-source software and requires no licensing cost. In the context of this paper, a homogeneous Linux cluster refers to a cluster in which each node has Linux installed (the same version of the Linux distribution, kernel and software packages) and has the same CPU and memory configuration. On the other hand, a heterogeneous Windows cluster is one in which every member of the cluster has some Windows operating system installed and has a different CPU and memory configuration. The scope of this paper is to compare the performance of these two cluster options.


LITERATURE REVIEW

Some real-life problems scientists face today need to be solved as fast as possible. Even the fastest single-processor computer is not capable of forecasting tomorrow's weather in less than 24 hours; by using parallel computing, people can predict the weather and climate changes two or three weeks in advance. Another problem requiring a huge amount of processing power is the human genome project [3]. Parallel computing is simply using more than one processor sharing the same memory bank (supercomputing), or one computer with distributed memory (a cluster), to achieve higher processing power. Clusters offer high performance, high reliability and flexibility at very little cost compared to their supercomputer counterparts. The Message Passing Interface (MPI) is used for inter-node communication when needed [4]. Clusters can be either homogeneous or heterogeneous. A heterogeneous parallel and distributed computing environment using PVM is introduced in [5], and load balancing for heterogeneous clusters of PCs is examined in [6]. Our work will compare a homogeneous Linux cluster with a heterogeneous Windows cluster in terms of performance.

METHODS

This paper was inspired by a discussion held in a graduate-level parallel programming course. During the academic semester, students form a heterogeneous cluster by connecting their personal laptops (all with Windows installed) over the faculty's WLAN to run parallel programs written in C++ and the MPICH2 [2] implementation of MPI. "Does the operating system the compute nodes of the cluster run affect performance?", "Linux or Windows: which is better for clusters?", "Clusters with very homogeneous software and hardware versus clusters with very heterogeneous software and hardware: which is better in terms of performance and load balancing?" All these questions needed to be answered, and we wanted to share the answers once the results were available. For this purpose, a homogeneous Linux cluster with 10 nodes will be designed. The compute nodes of the cluster have a 1.8 GHz P4 processor, 512 MB of RAM and a 40 GB hard disk. Fedora 14 will be installed on every node. One of the nodes will serve as the administrative node, sharing the same home and opt directories (for the MPICH2 binaries and libraries) with the cluster nodes through NFS. On the heterogeneous Windows cluster side, we will use the students' personal laptops with different versions of Windows installed; the average CPU speed and RAM of the selected computers will match those of the homogeneous counterpart. These two options will be compared in terms of performance, especially on embarrassingly parallel benchmarks.
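As a hedged sketch of what an embarrassingly parallel benchmark looks like (using Python's multiprocessing on one machine purely for illustration; the clusters in the paper run C++ with MPICH2): independent work units are distributed with no communication between tasks, and partial results are combined only once at the end.

```python
import math
from multiprocessing import Pool

def work_unit(chunk):
    """One independent task: numerically integrate sqrt(1 - x^2) over a
    sub-interval of [0, 1] with the midpoint rule. Summing all chunks
    approximates pi / 4. No task needs data from any other task."""
    start, end, steps = chunk
    h = (end - start) / steps
    return sum(math.sqrt(1.0 - (start + (i + 0.5) * h) ** 2) * h
               for i in range(steps))

if __name__ == "__main__":
    # Split [0, 1] into 8 equal sub-intervals; the absence of inter-task
    # communication is what makes the problem embarrassingly parallel.
    chunks = [(i / 8, (i + 1) / 8, 100_000) for i in range(8)]
    with Pool(processes=4) as pool:
        pi_estimate = 4.0 * sum(pool.map(work_unit, chunks))
    print(round(pi_estimate, 4))  # 3.1416
```

Because tasks are independent, such benchmarks are especially sensitive to load balancing on a heterogeneous cluster: a slow laptop holding one chunk delays the final reduction, which is one of the effects the comparison above is designed to expose.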

FINDINGS & CONCLUSION

Connecting commodity computers over a switching device and making them act as a single processing unit offers a cost-effective and highly reliable solution for parallel computing needs. This is called clustering, and it is widely recruited to find solutions to various kinds of problems within a reasonable amount of time. Clusters can be composed of either homogeneous or heterogeneous compute nodes. Comparing the performance of these two different implementations of clusters is the main goal of this work.

REFERENCES

1. Cluster Computing Tools, Applications, and Australian Initiatives for Low Cost Supercomputing, Hai Jin, Rajkumar Buyya and Mark Baker, Monitor, The Institution of Engineers Australia (IEAust), Volume 25, No 4, December 2000.
2. http://www.mcs.anl.gov/research/projects/mpich2/
3. Parallel Computation in Biological Sequence Analysis, Yap, T.K., Frieder, O., Martino, R.L., IEEE Transactions on Parallel and Distributed Systems, Volume 9, Issue 3, 1998.
4. http://www.mcs.anl.gov/research/projects/mpi/
5. Heterogeneous Parallel and Distributed Computing, V. S. Sunderam and G. A. Geist, Journal of Parallel Computing, Volume 25, Issue 13-14, 1999.
6. Load Balancing for Heterogeneous Clusters of PCs, Christopher A. Bohn and Gary B. Lamont, Future Generation Computer Systems, Volume 18, Issue 3, 2002.


Proceeding Number: 100/62

Author Identification Feature Selection by Genetic Algorithm

Feristah ÖRÜCÜ, Dokuz Eylul University, Computer Engineering Department, Izmir, Turkey, [email protected]
Gökhan DALKILIÇ, Dokuz Eylul University, Computer Engineering Department, Izmir, Turkey, [email protected]

Keywords: Author Identification, Genetic Algorithms, Feature Selection

INTRODUCTION

Author identification deals with the task of identifying the author of an anonymous text from a given set of predefined author candidates. The aim is to automatically determine the author of a given text with a failure rate approaching 0%. The main idea behind computer-based author identification is defining characteristics of documents that capture the writing style of authors. Multivariate analysis techniques, which combine lexical and syntactic analysis, can achieve desirable success ratios. In our study, 24 style markers (attributes) are formed to characterize 16 columnists. However, not all of these 24 attributes affect the author attribution process positively, and a genetic algorithm can be used for attribute selection. In recent studies, feature selection by genetic algorithm has been applied to handwriting author identification and source code author identification. In our study, we deal with identifying a newspaper article's author among 16 possible columnists using the 24 style markers formed to characterize these authors. A genetic algorithm is used to discard useless attributes and explore the optimal feature subset.

LITERATURE REVIEW

In the past fifty years, there have been many studies in the author identification area. Two of the most significant were Morton's study, which focused on sentence lengths [1], and Brinegar's study, which focused on word lengths [2]. Examples of more closely related studies include Stamatatos et al., who applied Multiple Regression and Discriminant Analysis using 22 style markers [3], and Corney, who gathered and used a large set of stylometric characteristics of text, including features derived from word and character distributions and frequencies of function words [4].


Genetic algorithms have been applied to a broad range of subjects, such as pipeline flow control, pattern recognition, classification and structural optimization, and have been used to solve a broad variety of problems in an extremely diverse array of fields. Some examples of these fields are acoustics, aerospace engineering, electrical engineering, chemistry, financial markets, mathematics, military and law enforcement, astronomy and astrophysics, geophysics, materials engineering, molecular biology, data mining and pattern recognition, routing and scheduling, and author identification [5, 6, 7, 8, 9, 10]. One of the genetic algorithm studies in the area of author identification was made by Schlapbach et al., who extracted 100 features from a handwriting sample to identify its author from a set of writers. By applying a genetic algorithm as a feature selection and extraction method on this set of features, subsets of lower dimensionality were obtained; they show that significantly better writer identification rates can be achieved if smaller feature subsets are used [11]. In 2006, Gazzah and Amara used a genetic algorithm for feature subset selection, eliminating redundant and irrelevant features in a study of identifying the writer of off-line Arabic handwriting. Experiments showed that writer identification accuracy reaches acceptable performance levels, with an average rate of 94.73%, using the optimal feature subset [12]. In 2007, Lange and Mancoridis developed a technique to characterize software developers' styles using a set of source code metrics, in order to identify the likely author of a piece of code from a pool of candidates; they used a genetic algorithm to find metric combinations that effectively distinguish developer styles [13]. In 2009, Shevertalov et al. presented a genetic algorithm to discretize metrics to improve source code author classification, evaluating the approach with a case study involving 20 open source developers and over 750,000 lines of Java source code [14].

METHODS

At the beginning of this study, 24 style markers were collected for 16 columnists of two newspapers. Using all 24 attributes is not efficient for the author identification process, and some attributes have a negative effect, so a genetic algorithm was developed to discard the inefficient ones. A genetic algorithm (GA) is a programming technique that mimics biological evolution as a problem-solving strategy. In this study, a potential solution is a 24-bit chromosome that encodes which attributes will be used in the author identification process; each bit of a chromosome represents the corresponding attribute. At the beginning, an initial population containing a parameterized number of chromosomes is generated randomly. The fitness value used to quantitatively evaluate a chromosome is the author identification success ratio obtained when attribute selection is made according to the chromosome's encoding. Linear ranking selection is used to select the chromosomes used for reproduction, which consists of crossover and mutation operations. The termination condition of the evolution process is a predetermined number of generations, or the point at which a satisfactory solution has been achieved.
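The GA loop described above can be sketched as follows. This is an illustrative toy, not the paper's system: the fitness function is a synthetic stand-in (the real fitness is the author identification success ratio over the 16 columnists), with linear ranking selection, single-point crossover and bit-flip mutation as in the description.

```python
import random

N_ATTRS = 24
random.seed(7)

# Stand-in fitness: hypothetically, each attribute either helps (positive
# weight) or hurts (negative weight) identification. The real system would
# run the classifier with the selected attributes and return its success ratio.
WEIGHTS = [random.uniform(-1.0, 1.0) for _ in range(N_ATTRS)]

def fitness(chromosome):
    return sum(w for bit, w in zip(chromosome, WEIGHTS) if bit)

def linear_ranking_select(population):
    """Linear ranking selection: pick two parents with probability
    proportional to rank, not raw fitness."""
    ranked = sorted(population, key=fitness)
    ranks = list(range(1, len(ranked) + 1))
    return random.choices(ranked, weights=ranks, k=2)

def crossover(a, b):
    cut = random.randrange(1, N_ATTRS)        # single-point crossover
    return a[:cut] + b[cut:]

def mutate(chromosome, rate=0.02):
    return [bit ^ (random.random() < rate) for bit in chromosome]

def run_ga(pop_size=30, generations=60):
    population = [[random.randint(0, 1) for _ in range(N_ATTRS)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        population = [mutate(crossover(*linear_ranking_select(population)))
                      for _ in range(pop_size)]
    return max(population, key=fitness)       # best chromosome found

best = run_ga()
ideal = [1 if w > 0 else 0 for w in WEIGHTS]  # optimum of this toy fitness
print(sum(b == i for b, i in zip(best, ideal)), "of 24 attributes chosen correctly")
```

Each 1-bit in the returned chromosome corresponds to a style marker kept in the final feature subset; under the toy fitness the GA converges toward keeping exactly the positive-weight attributes.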


FINDINGS – CONCLUSION

The genetic algorithm keeps candidate solutions and allows them to reproduce a new population of candidate solutions. As a result, the average fitness of the population increases each round, and by repeating this process for hundreds or thousands of rounds, the correct author of a given anonymous text is found with acceptable success ratios. This study shows that a genetic algorithm is an effective and useful attribute selection method for obtaining an optimal feature subset: the author identification success ratio is increased by using the optimal feature subset found with the GA.

REFERENCES

[1] Morton, A. Q. (1965). The Authorship of Greek Prose. Journal of the Royal Statistical Society, Series A, 128, pp. 169-233.
[2] Brinegar, C. (1963). Mark Twain and the Quintus Curtius Snodgrass Letters: A Statistical Test of Authorship. Journal of the American Statistical Association, Vol. 58, pp. 85-96.
[3] Stamatatos, E., Fakotakis, N., Kokkinakis, G. (2001). Computer-Based Authorship Attribution without Lexical Measures. Computers and the Humanities, pp. 193-214.
[4] Corney, M. (2003). Analysing E-mail Text Authorship for Forensic Purposes. Master of Information Technology (Research) thesis.
[5] Tang, K. S., Man, K. F., Kwong, S., He, Q. (1996). Genetic algorithms and their applications. IEEE Signal Processing Magazine, vol. 13, no. 6, pp. 22-37.
[6] Haupt, R. L., Haupt, S. E. (1998). Practical Genetic Algorithms. John Wiley & Sons.
[7] Koza, J., Bennett, F. H., Andre, D., Keane, M. A. (1999). Genetic Programming III: Darwinian Invention and Problem Solving. Morgan Kaufmann Publishers.
[8] Au, W. H., Chan, K. C. C. (2003). A novel evolutionary data mining algorithm with applications to churn prediction. IEEE Transactions on Evolutionary Computation, vol. 7, no. 6, pp. 532-545.
[9] Rizki, M., Zmuda, M., Tamburino, L. (2002). Evolving pattern recognition systems. IEEE Transactions on Evolutionary Computation, vol. 6, no. 6, pp. 594-609.
[10] Burke, E. K., Newall, J. P. (1999). A multistage evolutionary algorithm for the timetable problem. IEEE Transactions on Evolutionary Computation, vol. 3, no. 1, pp. 63-74.
[11] Schlapbach, A., Kilchherr, V., Bunke, H. (2005). Improving Writer Identification by Means of Feature Selection and Extraction. Proceedings of the Eighth International Conference on Document Analysis and Recognition (ICDAR'05).
[12] Gazzah, S., Amara, N. E. B. (2006). Writer Identification Using Modular MLP Classifier and Genetic Algorithm for Optimal Features Selection. Lecture Notes in Computer Science, Volume 3972/2006, pp. 271-276.


[13] Lange, R., Mancoridis, S. (2007). Using Code Metric Histograms and Genetic Algorithms to Perform Author Identification for Software Forensics. In GECCO '07: Proceedings of the 9th Annual Conference on Genetic and Evolutionary Computation, ACM, New York, USA.
[14] Shevertalov, M., Kothari, J., Stehle, E., Mancoridis, S. (2009). On the Use of Discretized Source Code Metrics for Author Identification. 1st International Symposium on Search Based Software Engineering, pp. 69-78.


Proceeding Number: 100/64

A Real-Time TTL based Downlink Scheduler for IEEE 802.16 WiMAX
Melek Oktay, Fatih University / Gebze Institute of Technology, Computer Engineering, 34500 Buyukcekmece, Istanbul, Turkey, [email protected]
Haci Ali Mantar, Gebze Institute of Technology, Computer Engineering, 41400 Gebze, Kocaeli, Turkey, [email protected]

Keywords: TTL, Scheduling, WiMAX, Downlink, IEEE 802.16, EDF

INTRODUCTION

WiMAX is an emerging broadband wireless network technology that supports Quality of Service (QoS) at the MAC level [6]. It provides class-based QoS for applications and has several components that support QoS, such as admission control, scheduling and classification. The scheduling algorithm, one of the most important components in WiMAX networks, is not standardized by IEEE 802.16. Many scheduling algorithms and schemes have been proposed in the literature, but most of them target uplink schedulers; downlink schedulers have received comparatively little attention. The aim of this study is to improve real-time application performance by designing a downlink scheduler for WiMAX networks: we propose a new TTL-based downlink scheduler that decreases packet delay. To the best of the authors' knowledge, almost nothing has been published on this subject.

LITERATURE REVIEW

Real-time applications, such as Voice over IP (VoIP) and video streaming, generate symmetric traffic in the network. For this reason, when a new scheme is proposed for these applications, both the uplink and the downlink scheduler must be taken into account. A real-time application can compensate for a certain amount of packet loss, but it is sensitive to jitter and packet delay, so appropriate algorithms should be used to improve its performance. The Earliest Deadline First (EDF) [1] algorithm is one of them, and it performs well for real-time applications [2, 3]. The EDF scheduling algorithm marks each incoming packet with a
deadline, and the calculation of this deadline is a critical issue. In a WiMAX network, the subscriber station (SS) informs the base station (BS) about the QoS parameters of a flow during the admission control phase; latency and jitter are among these parameters. The classical deadline calculation in EDF is as follows: the latency of the flow is added to the arrival time of the packet. Although EDF performs well for real-time applications, it can be improved for the downlink scheduler in WiMAX networks. Classical EDF does not differentiate between packets that come from far away (more hops) and those that come from nearby; it takes only arrival time and latency into account. However, packets that come from far away generally experience more delay than nearby ones. Therefore, when a far-travelled packet arrives at the BS, it should be sent to the SS as soon as possible, even if its deadline is later than that of a packet from a nearby source. The main contribution of this study is giving higher precedence to packets that have travelled farther.

METHODS

There is a correlation between hop count and delay on the Internet [5]: as the hop count of a packet increases, the delay it experiences also increases. The Internet Protocol (IP) helps us determine a packet's hop count: the IP header contains an 8-bit Time to Live (TTL) field, and each router (hop) the packet passes through decrements this value by one. At the destination, the number of hops the packet travelled from source to destination can therefore be recovered from the final TTL value. However, different operating systems (OS) assign different default TTL values, so there is no single initial TTL value for every sender. Today's operating systems select values such as 30, 32, 60, 64, 128 and 255 as initial TTLs, and observed final TTL values cluster in the corresponding ranges [4]. This study adopts that assumption; for example, if the final TTL value is 110, the initial TTL value was evidently 128. Packets are then differentiated by the BS according to their final TTL values, and packets that come from farther away receive higher precedence than nearby ones. In this way, packet delay in the WiMAX network is balanced.
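The inference described above can be sketched as follows. This is a minimal illustration under stated assumptions: the `bonus_per_hop` weight and the exact form of the deadline adjustment are ours, chosen for demonstration, not the paper's formula.

```python
# Common default initial TTL values used by operating systems.
INITIAL_TTLS = (30, 32, 60, 64, 128, 255)

def infer_initial_ttl(final_ttl):
    """Smallest common default >= the observed final TTL. Real hop counts
    are far smaller than the gaps between defaults, so this is unambiguous
    in practice (e.g. a final TTL of 110 implies an initial TTL of 128)."""
    for candidate in INITIAL_TTLS:
        if candidate >= final_ttl:
            return candidate
    return 255

def hop_count(final_ttl):
    """Hops travelled = inferred initial TTL minus the observed final TTL."""
    return infer_initial_ttl(final_ttl) - final_ttl

def adjusted_deadline(arrival, flow_latency, final_ttl, bonus_per_hop=0.0005):
    """Classical EDF deadline (arrival + latency), pulled earlier in
    proportion to the hops already travelled, so far-travelled packets
    gain precedence at the BS. bonus_per_hop (seconds) is an assumed
    tuning weight."""
    return arrival + flow_latency - bonus_per_hop * hop_count(final_ttl)
```

A packet arriving with final TTL 110 is thus treated as having travelled 18 hops and is scheduled ahead of a same-deadline packet that arrived with final TTL 62 (2 hops).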

FINDINGS & CONCLUSION

WiMAX is a broadband wireless technology that supplies ubiquitous high-speed connectivity in the last mile. The packet scheduler allocates resources to SSs according to their QoS requirements, and scheduler selection is critical for the performance of the network. In this study, a new scheme, the TTL-based downlink scheduler, is developed. It is implemented in the NS-2 simulator [7] with the WiMAX Forum WiMAX add-on [8] on the Linux operating system. The developed scheme outperforms the classical EDF scheme in the downlink scheduler, and the simulation results are promising.


REFERENCES

1. C. L. Liu and J. W. Layland. "Scheduling algorithms for multiprogramming in a hard-real-time environment". J. ACM, 20(1):46-61, 1973.
2. M. Oktay, H. A. Mantar. "A Real-Time Scheduling Architecture for IEEE 802.16 WiMAX Systems". 9th IEEE International Symposium on Applied Machine Intelligence and Informatics (SAMI 2011), Smolenice, Slovakia, 2011.
3. N. A. Ali, P. Dhrona, and H. Hassanein. "A performance study of uplink scheduling algorithms in point-to-multipoint WiMAX networks". Comput. Commun., 32(3):511-521, 2009.
4. H. Wang, C. Jin, and K. G. Shin. "Defense against spoofed IP traffic using hop-count filtering". IEEE/ACM Trans. Netw., 15(1):40-53, February 2007.
5. A. Fei, G. Pei, R. Liu and L. Zhang. "Measurements on delay and hop-count of the Internet". Proceedings of IEEE GLOBECOM'98, 1998.
6. M. S. Kuran and T. Tugcu. "A Survey on Emerging Broadband Wireless Access Technologies". Computer Networks, Vol. 51, No. 11, pp. 3013-3046, August 2007.
7. http://www.isi.edu/nsnam/ns/
8. http://code.google.com/p/ns2-wimax-awg/


Proceeding Number: 100/65

Morphological Disambiguation via Conditional Random Fields
Hayri Volkan AGUN, Trakya University, Department of Computer Engineering, Edirne, Turkey, [email protected]
Yilmaz KILIÇASLAN, Trakya University, Department of Computer Engineering, Edirne, Turkey, [email protected]

Keywords: Natural Language Processing, Morphological Disambiguation, Conditional Random Fields

INTRODUCTION

Most rule-based NLP tasks require a robust morphological parsing method. Although there are several rule-based methods for the morphological analysis of Turkish (e.g. [9]), none of them always offers a unique result for a single word. For instance, the word "masali" has three different morphological analyses: "masal+Noun+A3SG+Pnon+Acc", "masal+Noun+A3sg+P3sg+Nom", and "masa+Noun+A3sg+Pnon+Nom^DB+Adj+With". We propose a method for resolving such morphological ambiguities in Turkish. The method consists of a language model, a rule-based morphological parser and a supervised machine learning model. We use Conditional Random Fields (CRF) [5][11] as the learning model to filter out faulty results generated by the parser. We particularly emphasize that not only the word to be analyzed but also its neighbors should be given due attention for better performance results.

LITERATURE REVIEW

Morphological disambiguation is an important task in syntactic parsing, word sense disambiguation, spelling correction and semantic parsing for all languages. For Turkish, both rule-based and statistical approaches have been proposed to this effect. In rule-based approaches, a large number of hand-crafted rules are organized to obtain the correct morphological analysis [10]. Statistical work on Turkish involves a purely statistical disambiguation model [3] and a hybrid model combining rule-based and classification approaches [12]. For other agglutinative languages, several methods have been proposed: unsupervised learning for Hebrew [6], a maximum entropy model for Czech [4] and a combination of statistical and rule-based disambiguation methods for Basque [2].


The aim of this work is to obtain the correct morphological analysis from the several analyses offered by the rule-based parser. In this scope, our proposal differs from alternative statistical approaches in at least two respects. Firstly, we use n-grams to recognize repeating sequences of tags instead of manually organized rule sets. Secondly, rather than confining ourselves to a single word, we also take the neighboring words into consideration in the learning process. In this process, we use LingPipe's language models and the Mallet framework to build the CRF models. Combining these two methods, we achieve satisfactory accuracy results on a dataset obtained from the METU-SABANCI Treebank [1][8].

METHODS

Our method comprises two steps. First, we compile a dataset containing the alternative analysis results for given words: we analyze the words in the METU-SABANCI Treebank using the Zemberek morphological analyzer toolkit, and the analysis results are checked against the treebank to detect the correct ones. Second, once the dataset is formed, we train and test a CRF model on samples of the prepared dataset: we apply the n-gram language model to the dataset to obtain groups of frequently co-occurring morphological tags, and these groups of tags are then used as features in the CRF model, where the correct result is signaled by an appropriate label.
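The n-gram grouping step can be sketched as follows. This is a simplified, self-contained illustration of the idea only; the study itself relies on LingPipe's language models and Mallet's CRF implementation, and the function names here are our own.

```python
from collections import Counter

def tag_ngrams(tags, n):
    """All length-n windows over a sequence of morphological tags."""
    return [tuple(tags[i:i + n]) for i in range(len(tags) - n + 1)]

def frequent_tag_groups(analyses, n=2, min_count=2):
    """Count tag n-grams across many analyses; the frequently
    co-occurring groups become CRF feature templates."""
    counts = Counter()
    for tags in analyses:
        counts.update(tag_ngrams(tags, n))
    return {group for group, c in counts.items() if c >= min_count}

def features(tags, groups, n=2):
    """Binary features: which frequent tag groups fire in this analysis."""
    present = set(tag_ngrams(tags, n))
    return {"|".join(group): 1 for group in groups & present}
```

For the tag sequence of "masal+Noun+A3sg+Pnon+Acc" (stem excluded, as in the POS experiments), `features(["Noun", "A3sg", "Pnon", "Acc"], groups)` yields the frequent tag bigrams occurring in that analysis.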

FINDINGS & CONCLUSION

With a 95% confidence rate in 10-fold cross-validation tests, we observe 92% accuracy for morphological part-of-speech (POS) tagging and 88% accuracy for morphological disambiguation of Turkish words. In these experiments we confined ourselves to 500 words with 986 different morphological analysis results (a ratio of 1.972 analyses per word). The results for the morphological disambiguation of Turkish with several combinations of CRF models and corresponding n-gram lengths are given in the following table:

                  1st Order CRF   2nd Order CRF
n-gram length 1   78.86%          81.88%
n-gram length 2   89.90%          92.52%
n-gram length 3   92.01%          92.85%

Table 1: Accuracy of Morphological Disambiguation

In the calculation of morphological part-of-speech tagging, we do not include any stem as a feature. Instead, we predict the POS from the morphological analysis alone, without using any lexicon. These results are given below:

                  1st Order CRF   2nd Order CRF
n-gram length 1   91.86%          89.13%
n-gram length 2   92.90%          92.52%
n-gram length 3   92.01%          92.45%

Table 2: Accuracy of POS Tagging

In conclusion, we train and test our system with different CRF models along with language models of differing n-gram lengths. We show that tuning the n-gram length helps achieve good accuracy by decreasing the number of feature combinations in the sample space.

REFERENCES

1. Atalay, N. B., Oflazer, K. and Say, B. (2003). The Annotation Process in the Turkish Treebank. In Proceedings of the EACL Workshop on Linguistically Interpreted Corpora - LINC, April 13-14, Budapest, Hungary.
2. Ezeiza, N. et al. (1998). Combining stochastic and rule-based methods for disambiguation in agglutinative languages. In Proceedings of the 36th Annual Meeting of the Association for Computational Linguistics (COLING/ACL98), pages 379-384.
3. Hakkani-Tur, D. Z., Oflazer, K. and Tur, G. (2002). Statistical morphological disambiguation for agglutinative languages. Computers and the Humanities, 36:381-410.
4. Hajic, J. and Hladka, B. (1998). Tagging inflective languages: Prediction of morphological categories for a rich, structured tagset. In Proceedings of the 36th Annual Meeting of the Association for Computational Linguistics (COLING/ACL98), pages 483-490, Montreal, Canada.
5. Lafferty, J., McCallum, A. and Pereira, F. (2001). Conditional Random Fields: Probabilistic Models for Segmenting and Labeling Sequence Data. In Proceedings of the Eighteenth International Conference on Machine Learning.
6. Levinger, M., Ornan, U. and Itai, A. (1995). Learning morpho-lexical probabilities from an untagged corpus with an application to Hebrew. Computational Linguistics, 21(3):383-404.
7. McCallum, A. K. (2002). MALLET: A Machine Learning for Language Toolkit. http://mallet.cs.umass.edu.
8. Oflazer, K., Say, B., Hakkani-Tür, D. Z. and Tür, G. (2003). Building a Turkish Treebank. Invited chapter in Building and Exploiting Syntactically-Annotated Corpora, Anne Abeille (Ed.), Kluwer Academic Publishers.
9. Oflazer, K. (1994). Two-level description of Turkish morphology. Literary and Linguistic Computing, 9(2):137-148.
10. Oflazer, K. and Tur, G. (1997). Morphological disambiguation by voting constraints. In Proceedings of the 35th Annual Meeting of the Association for Computational Linguistics (ACL97/EACL97), Madrid, Spain.
11. Dietterich, T. G. (2002). Machine Learning for Sequential Data: A Review. In Structural, Syntactic, and Statistical Pattern Recognition; Lecture Notes in Computer Science, Vol. 2396, T. Caelli (Ed.), pp. 15-30, Springer-Verlag.
12. Yuret, D. and Ture, F. (2006). Learning morphological disambiguation rules for Turkish. In Proceedings of the Human Language Technology Conference of the North American Chapter of the Association for Computational Linguistics (HLT-NAACL '06), Association for Computational Linguistics, Stroudsburg, PA, USA, pp. 328-334.


Proceeding Number: 100/67

Classification of Alcoholic Subjects using Multi Channel ERPs based on Channel Optimization Algorithm
Mehmet ÇOKYILMAZ, Fatih University, Computer Engineering Department, Istanbul, Turkey, [email protected]
Nahit EMANET, Fatih University, Computer Engineering Department, Istanbul, Turkey, [email protected]

Keywords: Event Related Potentials, Channel Optimization Algorithm, Probabilistic Neural Network, Support Vector Machine, Random Forest Classifier

INTRODUCTION

Alcoholism is defined as alcohol addiction: uncontrolled consumption of alcohol causing social, physical, psychiatric and neurological damage to individuals. Drinking large amounts of alcohol over a very long period results in serious and persistent changes in the brain. Researchers therefore use advanced technologies to investigate the effects of alcoholism on the brain; magnetic resonance imaging (MRI), positron emission tomography (PET) and electrophysiological brain signals are a few of them. The main deficit of alcoholism in the human brain manifests as cognitive impairments, which result in irregularities in the brain's electrophysiological signals. To detect these irregularities, electroencephalogram (EEG) signals are recorded by measuring the electrical activity of the brain through electrodes placed on the scalp. To further investigate the cognitive ability of the patients, event-related potentials (ERPs), which consist of negative and positive electrical voltage changes in response to a given stimulus, are derived from the ongoing EEG [1] [2]. In this paper, alcoholic and non-alcoholic subjects are classified with three different classifiers, a Probabilistic Neural Network (PNN), a Support Vector Machine (SVM) and a Random Forest (RF), together with a channel optimization algorithm developed to find a subset of channels carrying the most valuable information for classification. The purpose of this study is two-fold: to find the regions of the brain which play the most important role in alcoholism, and to distinguish alcoholic patients from healthy subjects. Additionally, the performance of each classifier is analyzed.


LITERATURE REVIEW

In the ERP literature of alcoholism, the P300 (or P3) component of ERPs is the most commonly used feature for revealing the deficits of alcoholism. The significant maximal positive peak between 300 and 900 ms, generated over the parietal/central area of the brain, is characterized as P300. Although some studies use P300 components directly to investigate deficits in the brain, most recent studies are based on state-of-the-art signal processing and classification methods. Wavelet transformation is used to extract features from both ERP and EEG signals by investigating them in different frequency bands. The wavelet coefficients of EEG and ERP signals are thus utilized as features for the automatic classification of alcoholic patients, using not only Artificial Neural Networks (ANN) but also other classification mechanisms such as SVM and Learning Vector Quantization (LVQ) networks [4] [5] [6] [7].

METHODS

The ERP data used in this work were collected in a large study at the Neurodynamics Laboratory of the State University of New York Health Center at Brooklyn that tried to find a link between genetic predisposition and alcoholism [3]. The ERP data were recorded in three different experiment environments using a 61-channel electrode cap following the Standard Electrode Position Nomenclature. 35 alcoholic and 35 non-alcoholic subjects were selected from the data set for this work. Global field synchronization (GFS) of the multi-channel ERPs in the Delta, Theta, Alpha, Beta and Gamma frequency bands is used as the feature extraction method, yielding five GFS values per subject as the feature vector. The GFS value is considered to reflect the functional connectivity of brain channels in response to a given stimulus [8] [9] [10]. After the features are obtained from the ERPs, PNN, SVM and RF classifiers are used to classify alcoholic and control subjects.
Additionally, a channel optimization algorithm is applied both to improve the classification accuracies and to find the regions of the brain which carry the most important information for alcoholism [12] [13] [14] [15] [16].

FINDINGS & CONCLUSION

Initially, the three classifiers are applied to the 61-channel ERPs using the GFS feature extraction method; their accuracies fall between 50% and 60%. The channel optimization algorithm is then combined with the initially proposed system to improve the accuracies of the classification systems and to find the brain regions carrying the most valuable information for classification [11]. The classification accuracies improve to nearly 80% for all three classifier systems. Moreover, statistical measures such as ROC analysis, specificity, sensitivity and accuracy are used to test the effectiveness and performance of each classifier. Applying the channel optimization algorithm reveals the brain channels that improve the classification accuracies: nearly one-quarter of all channels are very
effective in the classification. The reason for the inverse correlation between classification accuracy and the number of channels is that alcoholism affects some brain regions while leaving others unimpaired; the affected regions are found and used in classification, and the classification accuracies are thereby improved.
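The abstract does not spell out the channel optimization algorithm itself; a common way to realize such a channel-subset search is greedy forward selection, sketched below with a hypothetical `accuracy` callback standing in for classifier training and evaluation.

```python
def greedy_channel_selection(channels, accuracy, max_channels=16):
    """Forward selection over EEG channels.

    accuracy(subset) is a caller-supplied callback assumed to train and
    cross-validate a classifier on the GFS features of `subset` and return
    its accuracy. Channels are added one at a time, always picking the one
    that raises accuracy the most, until no channel helps.
    """
    selected = []
    best = accuracy(selected)
    remaining = list(channels)
    while remaining and len(selected) < max_channels:
        # Score every candidate extension of the current subset.
        top_acc, top_ch = max((accuracy(selected + [ch]), ch) for ch in remaining)
        if top_acc <= best:
            break  # no remaining channel improves accuracy
        selected.append(top_ch)
        remaining.remove(top_ch)
        best = top_acc
    return selected, best
```

The returned subset plays the role of the "most effective" channels the study reports: a small fraction of the 61 electrodes that drives most of the accuracy.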

REFERENCES

1. M. G. H. Coles, M. D. Rugg, Electrophysiology of Mind: Event-Related Brain Potentials and Cognition, Oxford University Press, 1995, pp. 1-23.
2. B. Porjesz, H. Begleiter, "Event-related potentials in individuals at risk for alcoholism," Alcohol, vol. 7, 1990, pp. 465-469.
3. L. Ingber, http://kdd.ics.uci.edu/database/eeg/eeg-full.tar
4. X. L. Zhang, H. Begleiter, B. Porjesz, W. Wang and A. Litke, "Event Related Potentials During Object Recognition Tasks," Brain Research Bulletin, vol. 38, no. 6, 1995, pp. 531-538.
5. C. D. Lopes, J. O. Mainardi, M. A. Zaro, and A. A. Susin, "Classification of event-related potentials in individuals at risk for alcoholism using wavelet transform and artificial neural network," Computational Intelligence in Bioinformatics and Computational Biology, 2004, pp. 123-128.
6. M. R. N. Kousarrizi, A. A. Ghanbari, A. Gharaviri, M. Teshnehlab, M. Aliyari, "Classification of Alcoholics and Non-Alcoholics via EEG Using SVM and Neural Networks," Bioinformatics and Biomedical Engineering, 2009, pp. 1-4.
7. C. D. Lopes, E. Schuler, P. M. Engel, A. A. Susin, "ERP signal identification of individuals at risk for alcoholism using Learning Vector Quantization Network," Computational Intelligence in Bioinformatics and Computational Biology, 2005, pp. 1-5.
8. T. Koenig, D. Lehmann, N. Saito, T. Kuginuki, T. Kinoshita and M. Koukkou, "Decreased functional connectivity of EEG theta-frequency activity in first-episode, neuroleptic-naive patients with schizophrenia: preliminary results," Schizophr. Res., 1-2(50), 2001, pp. 55-60.
9. T. Koenig, L. Prichep, T. Dierks, D. Hubl, L. O. Wahlund, E. R. John, and V. Jelic, "Decreased EEG synchronization in Alzheimer's disease and mild cognitive impairment," Neurobiol. Aging, 26, 2005, pp. 165-171.
10. M. Kikuchi, T. Koenig, Y. Wada, M. Higashima, Y. Koshino, W. Strik and T. Dierks, "Native EEG and treatment effects in neuroleptic-naïve schizophrenic patients: time and frequency domain approaches," Schizophrenia Research, 97, 2007, pp. 163-172.
11. M. Çokyilmaz, N. Emanet, "Classification of Alcoholic Subjects using Multi Channel ERPs based on Channel Optimization and Probabilistic Neural Network," 9th IEEE International Symposium on Applied Machine Intelligence and Informatics, 2010.
12. D. F. Specht, "Probabilistic neural networks," Neural Networks, 3, 1990, pp. 109-118.
13. M. H. Hammond, C. J. Riedel, S. L. Rose-Pehrsson, F. W. Williams, "Training set optimization methods for a probabilistic neural network," Chemometrics and Intelligent Laboratory Systems, vol. 78, 2004, pp. 73-78.
14. N. Emanet, H. R. Öz, N. Bayram, "Pulmonary Diagnostic System to Identify Asthma Patients by Using Random Forest Algorithm," Computing in Science and Engineering, 2010.
15. M. A. Hearst, S. T. Dumais, E. Osman, J. Platt, B. Scholkopf, "Support Vector Machines," IEEE Intelligent Systems and their Applications, vol. 13, no. 4, 1998, pp. 18-28.
16. C. J. C. Burges, "A Tutorial on Support Vector Machines for Pattern Recognition," Data Mining and Knowledge Discovery, vol. 2, 1998, pp. 121-167.


Proceeding Number: 100/68

E-Learning Content Authoring Tools and Introducing a Standard Content Constructor Engine
Hossein Keynejad, Islamic Azad University, South Tehran Branch, Tehran, Iran, [email protected]
Maryam Khademi, Islamic Azad University, South Tehran Branch, Tehran, Iran, [email protected]
Maryam Haghshenas, Islamic Azad University, Science & Research Branch, Tehran, Iran, [email protected]
Hoda Kabir, Islamic Azad University, South Tehran Branch, Tehran, Iran, [email protected]

Keywords: Course Authoring Tools, Metadata, Learning Management System, Learning Content Management System, Knowledge Management

INTRODUCTION

The E-learning industry continues to evolve and advance every day, and common methods and tools for creating and maintaining content and infrastructure applications are critical; hence, standards are needed. To build E-learning content with convenient features, one must first examine the standards for E-learning content, since they make it possible to produce appropriate content under the available constraints (for example, limited bandwidth for online delivery, or packaging content for offline delivery). For this purpose, standards such as SCORM, IMS and IEEE are used to create a suitable architecture for the software components needed in E-learning. This paper is the final outcome of studies and investigations on choosing an appropriate authoring tool for constructing standard learning content which can be delivered offline or even online. Furthermore, it introduces a content constructor engine designed and produced by our academic group; this engine renders standard content and includes some new features.


LITERATURE REVIEW

E-learning includes all forms of electronically supported learning and teaching. Information and communication systems, whether networked or not, serve as the media that implement the learning process. In other words, E-learning is a system involving education, knowledge, management science and communication. An E-learning content authoring tool is software designed to create E-learning modules and present E-learning lessons. Authoring tools develop sheets comprised of text, audio, images, video and animation. By organizing the sheets, the produced content helps learners track the learning process and appraise their own progression, which apparently leads to self-study. The contents are generally written to conform to international standards, such as IMS or SCORM (Shareable Content Object Reference Model). The objective of these standards is to provide fixed data structures and communication protocols for E-learning content, which enables interoperability between applications. SCORM is the collection of standards and specifications used for web-based E-learning; it defines communication between client-side content and a host system, commonly a learning management system. IMS, in this context, refers to the content packaging and interoperability specifications of the IMS Global Learning Consortium.

METHODS

This article explains the standards specified for authoring tools and clarifies their importance. Authoring tools produce E-learning courses by creating pages of text and graphics, adding other media to those pages, and providing a framework to organize the pages into lessons so that users can search among the academic subjects. The newly created engine produces E-learning content and includes some new features. The produced content, supporting the IMS and SCORM standards, can be used in any Learning Management System (LMS) and any Learning Content Management System (LCMS). When making interactive multimedia content for an E-learning environment, the graphical user interface should be designed so that the user has maximum control over the components in the learning content, called assets.

FINDINGS & CONCLUSION

Although many tools have been developed to produce content, only some of them comply with the standards. Because of this, we present the criteria and properties for choosing the best tools. In addition, we introduce a content constructor engine designed and produced by our academic group; it renders standard content and includes some new features. The most significant characteristic of this software is that it can display the produced content both offline and online. When
the required bandwidth is not available to the user (learner or teacher), the offline mode is recommended for the learning process, and for higher performance a web service is employed to transfer the user's information. Other attributes of the engine are: book simulation, adding notes, an interactive environment, search facilities and reporting of user actions. It must be mentioned that the engine we built is also capable of using knowledge management in the content.



Proceeding Number: 100/69

Investigating Optimum Resource Allocation in University Course Timetabling Using Tabu Search: An Incremental Strategy
Mehtap KOSE ULUKOK, Cyprus International University, Computer Engineering Department, Haspolat, KKTC, [email protected]
Yasemin ERCAN, Cyprus International University, Computer Engineering Department, Haspolat, KKTC, [email protected]

Keywords: Timetabling Problem, Tabu Search Algorithm, Metaheuristics

INTRODUCTION

In this study, possible solutions of the Tabu Search algorithm are investigated for the University Course Timetabling Problem (UCTP). The UCTP has been studied by many researchers since 1965. It is an NP-hard problem and a real-world instance of combinatorial optimization. The difficulty of the UCTP lies in coordinating lectures, classrooms, and teachers according to a set of constraints. Assigning courses and lecturers to periods and classrooms is the main task that has to be performed each academic semester. New solutions are required every semester because of part-time lecturers, changes in lecturers' workloads, and changes in the number of students in each course. Therefore, the UCTP must be solved anew every semester. The main aim of this study is to solve the UCTP using the optimum number of classrooms by reducing the unused periods in classroom timetables. Assigning courses and lecturers to periods and classrooms is a difficult task when the number of available classrooms is limited. The Tabu Search incremental strategy is studied within the Faculty of Engineering at Cyprus International University (CIU).

LITERATURE REVIEW

Timetabling problems arise mainly in high schools and universities and are usually solved manually. Depending on the search space of the problem, these manual solutions often take a few weeks or more, because changes are constantly made during the construction of a timetable. After some


number of modifications, the result is mostly not optimal. Over the past few years, researchers have investigated new algorithms and techniques to find a general solution to the timetabling problem. UCTP solutions may vary from university to university because each has its own curriculum, so individual solutions are required.

Timetabling problems have been studied with several techniques for more than four decades. Some recent studies include Constraint-Based Reasoning [1], Integer Programming [2], the Tabu Search Algorithm [3], Memetic Algorithms [4], and Genetic Algorithms [5],[7]-[9]. Other popular techniques include Graph Coloring Algorithms [6] and Linear Algorithms [10].

A mathematical model of university timetables was introduced as a linear program [10]. The timetabling problem is a type of multi-objective constraint satisfaction problem. The constraints are mainly classified as hard and soft constraints, and the violation of these constraints indicates the quality of the developed solution. The solution space of the problem is therefore huge, and finding the optimal solution is a complex task. Studies show that Genetic Algorithms find optimal solutions [5],[7]-[9], but they are costly in time and memory. Moreover, a computer tool was recently developed to solve the UCTP with the tabu search algorithm, and efficient solutions were reported [3]. Another computer tool, called TEDI, was developed for the generalized solution of the UCTP at Yeditepe University [7].

METHODS

A tabu search algorithm with an incremental strategy is proposed to solve the timetabling problem. The constraints are considered as hard constraints, which every timetable must satisfy, and soft constraints, which are not essential for a timetable solution but improve it when satisfied. Similar hard and soft constraints are listed in almost all past studies [1]-[3], though they may vary depending on a university's limitations. In this study, the number of assigned classrooms is minimized in order to reduce the university's resource usage cost. The proposed algorithm starts with randomly generated solutions to the UCTP; these initial solutions are improved using an incremental strategy in which the best move within a classroom timetable is chosen subject to the hard constraints. Solutions that violate any hard constraint receive worse fitness values. All periods in a classroom timetable are searched first in order to fill that classroom's periods; only if no suitable period is available is a new classroom opened for search.
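The move-selection loop described above (random initial solution, best admissible move, penalized hard-constraint violations, forbidden recent moves) can be sketched generically. The skeleton below is an illustrative sketch, not the authors' implementation: `neighbors`, `fitness`, and the tenure value are placeholders to be supplied by the concrete timetabling model.

```python
def tabu_search(initial, neighbors, fitness, iterations=500, tenure=10):
    """Generic tabu search skeleton. `neighbors(s)` yields (move, candidate)
    pairs; `fitness` is minimized, with hard-constraint violations assumed
    to be encoded as large penalty values."""
    current = best = initial
    best_cost = fitness(best)
    tabu = {}  # move -> iteration index until which the move stays tabu
    for it in range(iterations):
        # admit non-tabu moves, plus tabu moves that beat the best (aspiration)
        candidates = [(move, cand) for move, cand in neighbors(current)
                      if tabu.get(move, -1) < it or fitness(cand) < best_cost]
        if not candidates:
            continue
        move, current = min(candidates, key=lambda mc: fitness(mc[1]))
        tabu[move] = it + tenure          # forbid reversing this move for a while
        cost = fitness(current)
        if cost < best_cost:
            best, best_cost = current, cost
    return best, best_cost
```

On a toy problem (minimize x² with ±1 moves), the skeleton walks from the random start down to the optimum while the tabu list prevents immediate cycling.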

FINDINGS & CONCLUSION

The proposed tabu search algorithm is investigated for solving the timetabling problem in the Faculty of Engineering at Cyprus International University (CIU). The developed computer software will be used in all faculties of CIU. At CIU, the first two years of lectures are the same across all departments of the Faculty of Engineering, and it


consists of seven departments. The last two years of each department differ from one another. Thereby, the whole search space of the problem is reduced to the courses of a consecutive two-year curriculum. One aim of this study is to develop automated computer software that solves the UCTP for CIU with optimum classroom usage. A user-friendly graphical user interface will be developed to solve the timetabling problems of the other faculties at CIU.

REFERENCES

1. Ho Sheau Fen, Safaai-Deris, Siti Zaiton-Mohd Hashim, Investigating Constraint-Based Reasoning for University Timetabling Problem, Proc. of the International MultiConference of Engineers and Computer Scientists 2009, Vol. I, March 18-20, Hong Kong, 2009.
2. M. Akif Bakir, Cihan Aksop, A 0-1 Integer Programming Approach to a University Timetabling Problem, Hacettepe Journal of Mathematics and Statistics, Vol. 37(1), pp. 41-55, 2008.
3. Çagdas Alkan Aladag, Gülsüm Hocaoglu, A Tabu Search Algorithm to Solve a Course Timetabling Problem, Hacettepe Journal of Mathematics and Statistics, Vol. 36(1), pp. 53-64, 2007.
4. Ender Özcan, Alpay Alkan, A Memetic Algorithm for Solving a Timetabling Problem: An Incremental Strategy, Proc. of the 3rd Multidisciplinary Int. Conf. on Scheduling: Theory and Applications, P. Baptiste, G. Kendall, A. M. Kordon, F. Sourd (eds.), pp. 394-401, 28-31 August 2007, Paris, France, 2007.
5. Nawat Nuntasen, Supachate Innet, A Novel Approach of Genetic Algorithm for Solving University Timetabling Problems: A Case Study of Thai Universities, 7th WSEAS International Conference on Applied Computer Science, Venice, Italy, November 21-23, pp. 246-252, 2007.
6. Özgür Ülker, Ender Özcan, Emin Erkan Korkmaz, Linear Linkage Encoding in Grouping Problems: Applications on Graph Coloring and Timetabling, Proc. of the 6th International Conference on the Practice and Theory of Automated Timetabling, pp. 303-319, 2006.
7. Ender Özcan, Alpay Alkan, Timetabling Using a Steady State Genetic Algorithm, Proc. of the 4th International Conference on the Practice and Theory of Automated Timetabling, pp. 104-107, August 2002.
8. Edmund Burke, David Elliman, Rupert Weare, A Genetic Based University Timetabling System, 2nd East-West International Conference on Computer Technologies in Education, 1994.
9. Alberto Colorni, Marco Dorigo, Vittorio Maniezzo, Genetic Algorithms and Highly Constrained Problems: The Time-Table, 1st International Workshop on Parallel Problem Solving from Nature, 1990.
10. Eralp A. Akkoyunlu, A Linear Algorithm for Computing the Optimum University Timetable, The Computer Journal, Vol. 16(4), 1973.


Proceeding Number: 100/70

PSOMDM: Faster Parallel Self Organizing Map by Using Division Method
Yunus DOGAN, Derya BIRANT, Alp KUT

Keywords: Self Organizing Map, Clustering, Machine Learning, Neural Networks

INTRODUCTION

This research addresses faster clustering using the Self-Organizing Map (SOM). The complexity of the SOM algorithm is O(NC), where N is the input vector size and C is the number of dataset presentation cycles. N contains the factor n²w, the product of the map size n² and the number of weights w, and C contains the factor n²a, the product of the map size n² and the number of attributes a. Since the number of attributes equals the number of weights, the complexity of the SOM algorithm becomes O(N²). Because of this O(N²) complexity, for a large dataset the SOM algorithm cannot deliver a map solution within reasonable time. Using a parallel SOM yields lower complexity than O(N²): depending on the number of parallel threads or processor cores, a parallel SOM accelerates the process. However, the threads or cores of a parallel SOM operate over the same neurons, so the solutions produced by a parallel SOM risk lower accuracy. The new approach proposed here, a parallel SOM using a division method, achieves both higher accuracy and higher speed.

LITERATURE REVIEW

SOM is an unsupervised neural network that clusters data according to similarity and learns the patterns within the data itself, without external supervision or preliminary knowledge of the process. A SOM is composed of multiple units, called cells or neurons, which can be further grouped into clusters using similarity measures. The standard SOM of Kohonen, which has complexity O(N²), has been shown to benefit from speed improvements, and the simple SOM has been used in many applications [1]. Extended versions of SOM have also been proposed, such as FSOM (Fast SOM) [2], ABSOM (Ant-Based SOM) [3], and ESOM (Emergent SOM) [4], and there are many parallel SOM applications [5]. The study of M. Takatsuka and M. Bui presents a parallel implementation of SOMs, particularly the batch-map variant, using Graphics Processing Units (GPUs) through the Open Computing Language (OpenCL) [6]. Another parallel SOM study concerns mining massive datasets by unsupervised parallel clustering on a GRID [7]. The


studies in all branches can use SOM and its derivatives. For example, a study on the dynamics of soil organic matter and mineral nitrogen in soil uses a parallel SOM for geological research [8]. Parallel SOM can be applied in computer security, healthcare, ecological modeling, the financial sector, and any other area that needs clustering. Another example from a different domain is the study about buildings, "Changing the Way We Build" by J. Nasvik [9].

METHODS

The major goal of SOM is to determine suitable weight values for the neurons according to the dataset. The number of weight values equals the number of attributes in the dataset, and each weight value corresponds to one attribute. At the beginning of the SOM algorithm, the weight values of all neurons are initialized randomly. Next, the best matching unit, the winner neuron, is found by calculating the Euclidean distance between the chosen sample vector and the weight set of each neuron. After the best matching unit is found, all weight vectors of the SOM are updated using a Gaussian function. These steps are repeated for a fixed number of iterations, with the loop running over all neurons. The new approach, however, first divides the map area into four parts and runs the standard SOM on each small area in parallel. Thus, the dataset is divided among the processes and the complexity becomes lower. The approach was validated on four different datasets from the UCI machine learning repository [10], and successful results were observed.
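The standard training loop described above (random weight initialization, best-matching-unit search by Euclidean distance, Gaussian neighborhood update) can be sketched as follows. This is a minimal illustrative SOM, not the PSOMDM code itself; the grid size, learning-rate schedule, and neighborhood-width schedule are assumed values.

```python
import numpy as np

def train_som(data, grid=(10, 10), epochs=100, lr0=0.5, sigma0=3.0, seed=0):
    """Minimal SOM sketch: random init, BMU by Euclidean distance,
    Gaussian neighborhood update over all neurons."""
    rng = np.random.default_rng(seed)
    rows, cols = grid
    weights = rng.random((rows, cols, data.shape[1]))       # random init
    coords = np.stack(np.meshgrid(np.arange(rows), np.arange(cols),
                                  indexing="ij"), axis=-1)  # neuron grid positions
    for t in range(epochs):
        lr = lr0 * (1 - t / epochs)                   # decaying learning rate
        sigma = max(sigma0 * (1 - t / epochs), 0.5)   # shrinking neighborhood
        for x in rng.permutation(data):
            d = np.linalg.norm(weights - x, axis=-1)
            bmu = np.unravel_index(np.argmin(d), d.shape)  # best matching unit
            # Gaussian neighborhood centered on the BMU, over the whole map
            g = np.exp(-np.sum((coords - bmu) ** 2, axis=-1) / (2 * sigma ** 2))
            weights += lr * g[..., None] * (x - weights)
    return weights
```

The division method would apply this same loop independently to each quarter of the map, each thread seeing only its share of the data.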

FINDINGS & CONCLUSION

This paper introduces a new clustering approach, PSOMDM. The significant difference between PSOMDM and both the standard SOM and the parallel SOM is that PSOMDM continually divides the map area, so that smaller amounts of data are clustered on different neurons in parallel. PSOMDM has many advantages over conventional SOM-based methods. The most remarkable is the training time saved when clustering large and complicated datasets with the division method. Furthermore, the rate of unstable data points decreases and the internal error decreases. In future work, PSOMDM can be applied to computer security, healthcare, ecological modeling, the financial sector, and any other area that needs to cluster large data on a map with good accuracy, consistency, and speed.

REFERENCES

[1] T. Kohonen, Self-Organizing Maps, Springer-Verlag New York, Inc., Secaucus, NJ, USA, 2001.
[2] A. El Sagheer, N. Tsuruta, S. Maeda, R. Taniguchi, and D. Arita, "Fast Competition Approach using Self Organizing Map for Lip-Reading Applications", IEEE Proc. International Joint Conference on Neural Networks (IJCNN), pp. 3775-3782, doi: 10.1109/IJCNN.2006.1716618, 2006.


[3] S.-C. Chi and C.C. Yang, "Integration of Ant Colony SOM and K-Means for Clustering Analysis", Proc. KES 2006, Lecture Notes in Computer Science, vol. 4251, pp. 1-8, 2006.
[4] J. Poelmans, P. Elzinga, S. Viaene, M.M. Van Hulle, and G. Dedene, "How Emergent Self Organizing Maps can Help Counter Domestic Violence", IEEE Proc. 2009 WRI World Congress on Computer Science and Information Engineering (CSIE), vol. 4, Los Angeles (USA), pp. 126-136, doi: 10.1109/CSIE.2009.299, 2009.
[5] H. Guan, C. Li, T. Cheung, S. Yu, Advances in Parallel and Distributed Computing 1997, Proceedings, pp. 26-31, Shanghai, China, ISBN: 0-8186-7876-3.
[6] M. Takatsuka, M. Bui, "Parallel Batch Training of the Self-Organizing Map Using OpenCL", Lecture Notes in Computer Science, vol. 6444, pp. 470-476, doi: 10.1007/978-3-642-17534-3_58, 2010.
[7] A. Faro, D. Giordano, and F. Maiorana, "Mining massive datasets by an unsupervised parallel clustering on a GRID: Novel algorithms and case study", Elsevier, doi: 10.1016/j.future.2011.01.002, 2011.
[8] P. Semaoune, M. Sebilo, S. Derenne, C. Anquetil, V. Vaury, L. Ruiz, T. Morvan, and J. Templier, "Dynamics of soil organic matter and mineral nitrogen in soil: investigation into a complex relationship", EGU General Assembly 2011, Vol. 13, EGU2011-11740, 2011.
[9] J. Nasvik, "Changing the Way We Build", Concrete Construction, http://www.concreteconstruction.net/BIM/changing-the-way-we-build.aspx, 2010.
[10] UCI Machine Learning Repository, http://archive.ics.uci.edu/ml, 2011.


Proceeding Number: 100/71

A Real-Time Generation and Modification of GIS-based Height Maps
Serpil EROGLU, Karabuk University, Computer Engineering, Karabük, Turkey, [email protected]
Baha SEN, Karabuk University, Computer Engineering, Karabük, Turkey, [email protected]
Salih GÖRGÜNOGLU, Karabuk University, Computer Engineering, Karabük, Turkey, [email protected]

Keywords: GIS, DEM, DTED, Height Map, Modification

INTRODUCTION

Digital elevation maps are structures used to represent the topographic surface of the earth. The creation and use of these maps have increased with the development of geographic information systems. Moreover, height maps are data structures used in 3D applications such as military or civilian simulators, games, and other applications aiming to create virtual worlds [1].

The aim of this study is to create height maps for use in 3-dimensional applications. Height maps are created in two ways. The first method creates height maps from random data; the random algorithms provide various controls for creating a model similar to the real world. In the second method, height maps are created from real-world data. For this purpose, models based on real-world data were built using the most commonly used geographic map file formats, DEM and DTED. In addition, users are allowed to make various real-time modifications to the model: options such as increasing or decreasing the height value of any point on the model and saving the changes in the same format are provided to users.

LITERATURE REVIEW

Kamal and Udin have presented a parametrically controlled technique for producing height-map models on demand. This technique generates terrain by creating continuous random polygons and, on examination, provides random performance [2].


Lechner et al. have described a method for procedurally generating typical patterns of urban land use using agent-based simulation. Users give a terrain description, and their system returns a map of residential, commercial, industrial, recreational-park, and road land uses, including the age and density of development; however, users have to supply realistic geomorphologic data with the terrain description [3]. Doran et al. have explored a more controllable system that uses intelligent agents to generate terrain elevation height maps according to designer-defined constraints, which allows the designer to create procedural terrain with specific properties [4]; however, designers cannot make any changes after the terrain is created. Li et al. provide a semi-automatic method of terrain generation that uses a four-process genetic algorithm to produce a variety of terrain types from intuitive user inputs alone [5]. A sample terrain dataset is given to the system, which then produces a new height map. The problem with this method is that the only way to create a desired height map is through a similar height-map dataset; users must supply a similar dataset to the system before they can obtain a new height map. Brosz et al. present a terrain-synthesis technique that uses an existing terrain to synthesize new terrain; they use multi-resolution analysis to extract high-resolution details from existing models and apply them to increase the resolution of terrain [6], but users cannot make changes on the resulting terrain. Göktas et al. have devised an algorithmic system that automatically generates virtual cities and supports several different map formats (i.e., DTED, HTF, SRTM, and DEM). Depending on the submitted conditions, the system generates different layouts for the cities even on the same geographical site [7-8], but users cannot make any real-time changes to the generated city.
Wiggenhagen has produced software for the inspection and quality control of digital terrain models and orthoimages, using data standards such as USGS and the BMP format [9]. Errors found can be corrected through interpolation methods.

METHODS

Height maps are kept in a grid structure; the value of each grid point gives the corresponding height value in the height map. When a height map is modelled in 3D, all points are drawn as triangles. When a user selects a point on this model, the position of that point within a triangle must first be determined. For this purpose, the 3-dimensional coordinates of all points in the height map are converted into the OpenGL unit-volume reference system. After the conversion, the selected point is passed through an intersection test against all triangles on the model. At this stage, an equation based on the barycentric coordinate plane is set up to determine whether the point lies on a given triangle. The equation is written in matrix form and solved by Gaussian elimination; if the solution is consistent with the barycentric coordinate plane, the position of the selected point on the relevant triangle is found [10-12].
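The barycentric intersection test described above can be sketched as follows. This is an illustrative version, not the authors' code: the point and triangle are assumed already projected to 2D, and `numpy.linalg.solve` performs the Gaussian-elimination step for the matrix form of the barycentric equations.

```python
import numpy as np

def barycentric(p, a, b, c):
    """Barycentric coordinates (u, v, w) of point p in triangle abc,
    found by solving  u*a + v*b + w*c = p  with  u + v + w = 1."""
    m = np.array([[a[0], b[0], c[0]],
                  [a[1], b[1], c[1]],
                  [1.0,  1.0,  1.0]])
    rhs = np.array([p[0], p[1], 1.0])
    return np.linalg.solve(m, rhs)   # Gaussian elimination under the hood

def point_in_triangle(p, a, b, c, eps=1e-9):
    """p lies inside (or on) triangle abc iff all barycentric coords are >= 0."""
    u, v, w = barycentric(p, a, b, c)
    return u >= -eps and v >= -eps and w >= -eps
```

Running the test over every triangle of the mesh, as the text describes, identifies which triangle the selected point falls on.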


FINDINGS AND CONCLUSION

In this study, height maps were created with two methods. The first method creates height maps with random values, using the Fault algorithm and the Circle algorithm. The Fault algorithm creates random lines that divide the height map into two parts; it increases the height values in one part and decreases them in the other. The Circle algorithm is similar to the Fault algorithm, but it creates circles with specific radii on the height map. The second method models real-world data, using geographic files prepared in various formats. Today, USGS geographic map files with the extensions DEM and DTED are the most commonly used geographic map files; they are provided by USGS for free via its web page. These files have low resolution, which is a disadvantage, but in this study users are able to create high-resolution height maps from low-resolution ones. The files are modelled in 3D after being read. Unlike other studies, users are allowed to make real-time modifications to this model: they can increase or decrease the height of any point. Besides, users can save their changes in the same file format and reuse them whenever they want.
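The Fault algorithm described above (a random line splits the grid; one side is raised, the other lowered) can be sketched as below. The grid size, iteration count, and step size are illustrative assumptions, not values from the paper.

```python
import numpy as np

def fault_heightmap(size=129, iterations=200, step=1.0, seed=0):
    """Fault-algorithm sketch: each iteration picks a random line through
    the grid, raises heights on one side and lowers them on the other."""
    rng = np.random.default_rng(seed)
    h = np.zeros((size, size))
    ys, xs = np.mgrid[0:size, 0:size]
    for _ in range(iterations):
        # random line through a random interior point at a random angle
        theta = rng.uniform(0, 2 * np.pi)
        a, b = np.cos(theta), np.sin(theta)
        px, py = rng.uniform(0, size, 2)
        side = a * (xs - px) + b * (ys - py) > 0
        h[side] += step      # uplift one half-plane
        h[~side] -= step     # subside the other
    return h
```

Decreasing `step` over the iterations, a common refinement, would give smoother terrain; the Circle algorithm would replace the half-plane mask with a disc of random radius.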

REFERENCES

[1] GÜNDOGDU, K. S. (2003) "Sayisal Yükseklik Modellerinin Arazi Boy Kesitlerinin Çikarilmasinda Kullanimi", Uludag Üniversitesi, Ziraat Fakültesi Dergisi, 17(1): 149-157, Bursa.
[2] Kamal, K. R. and Udin, Y. S. (2007) "Parametrically Controlled Terrain Generation", GRAPHITE '07: Proceedings of the 5th International Conference on Computer Graphics and Interactive Techniques in Australia and Southeast Asia, pp. 17-23, New York, NY, USA, ACM.
[3] Lechner, T.; Ren, P.; Watson, B.; Brozefski, C. and Wilenski, U. (2006) "Procedural Modeling of Urban Land Use", International Conference on Computer Graphics and Interactive Techniques, ACM, New York, NY, USA.
[4] Doran, J. and Parberry, I. (2010) "Controlled Procedural Terrain Generation Using Software Agents", Dept. of Computer Science & Engineering, University of North Texas, January 19, 2010.
[5] Li, Q.; Wang, Q.; Zhou, F.; Tang, X. and Yang, K. (2006) "Example-based Realistic Terrain Generation", Lecture Notes in Computer Science, 4282:811.
[6] Brosz, J.; Samavati, F.F. and Sousa, M.C. (2006) "Terrain Synthesis By-Example", Proc. First Int'l Conf. Computer Graphics Theory and Applications (GRAPP '06).
[7] GÖKTAS, H.H.; ÇAVUSOGLU, A.; SEN, B.; GÖRGÜNOGLU, S. (2006) "Simülasyon Sistemleri Için 3 Boyutlu Sanal Sehirlerin GIS Haritalari Üzerinde Olusturulmasi", Teknoloji / Z.K.Ü. Karabük Teknik Egitim Fakültesi Dergisi, 9(1): 27-38.
[8] GÖKTAS, H.; ÇAVUSOGLU, A.; SEN, B. (2009) "AUTOCITY: A System for Generating 3D Virtual Cities for Simulation Systems on GIS Map", AutoSoft - Intelligent Automation and Soft Computing, Vol. 15, No. 1, pp. 29-39, USA.


[9] Wiggenhagen, M. (2000) "Development of Real-Time Visualization Tools for the Quality Control of Digital Terrain Models and Orthoimages", International Archives of Photogrammetry and Remote Sensing, Vol. XXXIII, Part B3, Amsterdam.
[10] Christopher J. Bradley, "The Algebra of Geometry: Cartesian, Areal and Projective Co-ordinates", ISBN 978-1-906338-00-8.
[11] http://mathworld.wolfram.com/BarycentricCoordinates.html
[12] http://rockyweb.cr.usgs.gov/nmpstds/demstds.html
[13] http://www.usgs.gov/
[14] http://www.opengl.org
[15] http://www.lighthouse3d.com/


Proceeding Number: 100/73

GIS Application Design for Public Health Care System Using Dijkstra Algorithms
Fatma PATLAR, Istanbul Kültür University, Computer Engineering, Istanbul, Turkey, [email protected]
Akhan AKBULUT, Istanbul Kültür University, Computer Engineering, Istanbul, Turkey, [email protected]

Keywords: GIS Application, CAD, Dijkstra Algorithms, Public Health Care, Navigation, Fastest Path, Shortest Path

ABSTRACT

The aim of this project is to integrate a system into a command-and-control center and ambulances in order to automate the managerial functions of ambulance redirection, using Geographic Information Systems (GIS) supported with Computer-Aided Design (CAD). To achieve this, the execution phases of the entire model were first determined. Service-oriented architecture (SOA) was chosen for integrating the systems, and a web service was implemented to ensure communication over GPRS. When an on-site assignment is made from the command-and-control center, this web service reports the coordinates of the event to the GIS application. The application determines the ambulance's geographical position from GPS information and draws the shortest path to the destination. The selected two-dimensional map is composed of nodes and links, and the main idea is to determine the shortest or fastest path between these nodes.

INTRODUCTION

In 1736, Euler used a graph to solve the Königsberg seven bridges problem, and graph theory came into being. However, it was not until the middle and later decades of the 20th century that mathematicians and computer scientists attached much importance to graph theory, helped by the appearance and development of the computer [1][2]. Nowadays, many applications are being developed using graph theory, which has been applied successfully to fields such as GIS networks. GIS can be used to analyze, process, capture, store, handle, and geographically integrate large amounts of information from different sources, programs, and sectors, including epidemiological surveillance, censuses, the environment, and others [3][4].


With the popularity of the computer and the development of geographic information science, GIS has found increasingly extensive and in-depth applications thanks to its powerful functions [5], including use in health care systems. Public health policy covers decisions in any sector or at any level of government and is characterized by an explicit concern for health and accountability for health impact [6]. It is an established fact that most developing countries spend no more than 2% of their gross national product (GNP) on health, resulting in poor coverage of public health services [7]. In this paper, several key technologies for establishing the road topology structure are discussed, along with an algorithm for finding the shortest and fastest paths for a public health care system based on GIS and integrated with a communication service. Many tests were then conducted to show the performance of the proposed system.

LITERATURE REVIEW

Nowadays, GIS-aided systems are used in all areas, including the health sector. Using computer-aided systems, instead of performing all the processes of a health center on paper, saves labor and minimizes the possibility of mistakes. In addition, it is much easier and quicker to access information and data held on computers. Because of such advantages, computer-aided systems are widely used in the health sector. Health care processes are a race against time and call for adapting computers, rather than traditional methods, into this work and our lives. Based on this view, a GIS application integrated with a control center was implemented for ambulances.

METHODS

The proposed system uses Dijkstra's algorithm with two different path criteria: shortest path and fastest path. The shortest-path variant takes into account only the length between any two nodes, while the fastest-path variant also introduces current speeds on the links between nodes. A local road network covering several different areas of Istanbul was selected on the electronic map. To check the performance of the system, the two algorithms were examined for running time and accuracy under different traffic conditions.
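The two criteria can share a single Dijkstra implementation, differing only in the edge weights supplied: road length for the shortest path, or length divided by current speed (travel time) for the fastest path. The sketch below assumes a simple adjacency-dict graph representation, not the authors' actual data model.

```python
import heapq

def dijkstra(graph, source):
    """Classic Dijkstra over an adjacency dict {node: [(neighbor, weight), ...]}.
    Pass edge lengths for shortest path, or travel times for fastest path."""
    dist = {source: 0.0}
    pq = [(0.0, source)]           # min-heap of (tentative distance, node)
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue               # stale heap entry, already improved
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist
```

Running it twice on the same road network, once with length weights and once with time weights, yields exactly the shortest-path and fastest-path comparisons the study describes.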

FINDINGS & CONCLUSION

As a result, since Istanbul, the chosen test subject, has a traffic problem, the fastest-path algorithm proved more efficient. We observed that the fastest-path algorithm mostly selects highways; although the total distance lengthens, the duration of transport decreases. During high-traffic hours, it will be the most optimal solution


to follow the route drawn by the shortest-path algorithm. The reason is that the shortest-path algorithm decides the route by road distance, so the most direct route is selected. A user-friendly interface was designed so that anyone can use the system. All processes are carried out automatically, without the user's intervention, which provides both performance and usability.

REFERENCES

[1] Wang Yuan-yuan, Li Shang-fen, 1984, "Discrete Mathematics", Beijing: Science Press.
[2] Chai Deng-feng, Zhang Deng-rong, 2001, "Algorithm and Its Application of N Shortest Paths Problem", 0-7803-7010-4/01, IEEE.
[3] Mubushar Hussain, Mudassar Hassan Arsalan & Mohammed Raza Mehdi, 2008, "Role of GIS in Public Health Management in Pakistan", IEEE A&E Systems Magazine.
[4] Yin Xuri, 2010, "GIS-based Simulation System for Wartime Military Highway Transportation", Computer Science and Education (ICCSE), 2010 5th International Conference on, 978-1-4244-6002-1.
[5] Zhang Fuhao, Liu Jiping, 2009, "An Algorithm of Shortest Path Based on Dijkstra for Huge Data", IEEE Computer Society.
[6] Ottawa Charter for Health Promotion. Charter adopted at the First International Conference on Health Promotion: The move towards a new public health, November 17-21, 1986, Ottawa, Ontario, Canada. Geneva: World Health Organization, 1986 (WHO/HPR/HEP/95.1).
[7] WHO, 1986, Ottawa Charter for Health Promotion, Charter adopted at the First International Conference on Health Promotion: The move towards a new public health, November 17-21, 1986, Ottawa, Ontario, Canada, World Health Organization (WHO/HPR/HEP/95.1).


Proceeding Number: 100/74

A Case Study about Being a CMMI-Level 3 Awarded Organization in One-Year Time Yasemin KARAGÜL, Dogus University, Department of Computer Engineering, Istanbul, Turkey, [email protected] Semih BILGEN, Middle East Technical University, Electrical and Electronics Engineering Department, Ankara, Turkey, [email protected]

Keywords: Software Process Improvement Duration, CMMI, Success Factors

ABSTRACT

Starting from a review of the available literature, the effects of various factors on the duration of software process improvement (SPI) programs are investigated. The hypotheses formulated from the experiences reported in the literature provide a baseline for the qualitative research carried out on a CMMI Level 3 organization. Within the framework of this study, it has been observed that Staff Involvement, Awareness, Management Commitment, and Management Involvement have had a shortening effect on SPI duration.

INTRODUCTION

The last decade has seen many organizations striving to achieve software development process maturity through certification within the Capability Maturity Model (CMM) framework. The resources required for such improvement have been studied extensively in the literature (see e.g. [1,2,3]), but the duration of process improvement and the factors that affect the time span for reaching the next level still seem to be relatively less investigated. There are some successful cases that have managed to drastically decrease the time needed to move up from one CMM level to another [4,5,6]. Analysis of these success stories may be helpful in identifying a relationship between various factors and SPI duration. Such a relationship may help managers in planning their SPI effort; strengths and weaknesses of the organization may be determined, and resource allocation for the program can be improved. There have been a number of studies that try to identify factors that affect SPI success. However, these studies do not provide answers to questions about the effect of these factors on SPI program duration. This study aims to


study the factors that affect SPI duration. The results of this study may provide guidelines to organizations that wish to accelerate their CMMI work. This paper is organized as follows. The relevant literature is reviewed and the hypotheses derived from it are presented in Section 2. Details of the case study design, such as the sample profile and the data collection and analysis methods, are explained in Section 3. Findings of the case study carried out to examine these hypotheses are presented in Section 4. Section 5 concludes the paper.

LITERATURE REVIEW

Wilson et al. [7] propose a framework for the evaluation of SPI success. They adapt and apply to SPI a framework that was previously developed for the evaluation of metrics programs. They identify the success factors as management commitment, respect, initial process definition, and explanations. Niazi et al. [8] propose a maturity model for the implementation of SPI programs. In a follow-up study, Niazi et al. [9] propose a framework that provides companies with an effective SPI implementation strategy. In the maturity model and framework proposed in these two studies, the critical success factors are identified as management commitment, training, staff involvement, awareness, experienced staff, formal methodology, and reviews. Petterson et al. [10] have developed a light-weight process assessment framework; among the critical success factors mentioned in the studies they reviewed, the ones most relevant here are the SPI initiation threshold and commitment and involvement. Cares et al. [11] propose an agent-oriented process model and identify the thirteen most frequently cited critical success factors, whereas Berander and Wohlin [12] name the key factors for successful management and evolution of the software process as baselining, synchronization, user involvement, management commitment, change management, and documentation. Dyba [13] has proposed a conceptual research model to investigate the relationship between SPI success and the factors defined in the model. Based on that model, it is concluded that employee participation, business orientation, concern for measurement, exploitation of existing knowledge, involved leadership, and exploration of new knowledge affect the success of SPI. In addition to the studies discussed above, experience reports about CMM/CMMI studies also provide detailed information about the settings and conditions in which various SPI exercises have been carried out.
Six success and two failure stories reported in those studies are examined in detail [4,5,6,14,15,16,17,18]. The common points reported in these papers are a quality environment, management commitment, SPI awareness, participation, training, and the involvement of people experienced with process improvement endeavors. On the other hand, the reasons that lead to failure were identified as lack of management commitment, lack of a quality environment, lack of SPI awareness, and lack of training. The aim of the present study is to investigate the relationship between these factors and SPI duration, so the dependent variable of our model is selected as "time to complete SPI successfully". Based on the literature review, the independent variables were selected as Management Commitment, Awareness, Staff Involvement, Training, Experienced Staff, Metrics and Measurement, Process Documentation, and Quality Environment.


METHODS

The case was selected from among the CMMI Level 3 awarded companies in Turkey. Organization A, a global firm that provides solutions and services for information and communication technologies, was awarded CMMI Level 3 in 12 months' time. Data were collected in two phases. The first phase consisted of formulating the initial hypotheses based on the literature. In the second phase of the research, data were collected through a semi-structured interview that lasted around 45 minutes. There was at least one question in the interview about the existence in the organization of each identified factor: Quality Environment, Experienced Staff, Management Commitment, Awareness, Staff Involvement, Process Documentation, Training, and Metrics, as selected based on the literature review. In addition to these factors, sub-factors were identified to analyze the reasons for the acceleration of CMMI; for example, Quality Environment is associated with the sub-factors Parallelism between Standards, Frequency of Assessments, Gap Analysis, and Class-B Appraisal. Sub-factors were defined based on the literature review and the interviews. During the analysis of the interview, each factor and sub-factor was assigned a score taking one of three values (none-low, medium, or high) according to the answers to the interview questions.
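The scoring step just described can be sketched as a mapping from interview ratings to an ordinal scale. The factor names come from the paper; the example answers, the numeric encoding, and the "high means accelerator" rule are our illustrative assumptions, since the paper's actual analysis is qualitative rather than a fixed numeric procedure.

```python
# Ordinal scale used in the paper's analysis.
SCORES = {"none-low": 0, "medium": 1, "high": 2}

def score_factors(answers):
    """Map each factor's interview rating to an ordinal score."""
    return {factor: SCORES[rating] for factor, rating in answers.items()}

def accelerating_factors(scores, threshold=2):
    # Hypothetical rule for illustration: treat factors rated 'high'
    # as candidate accelerators of SPI duration.
    return sorted(f for f, s in scores.items() if s >= threshold)

# Invented example ratings, not the study's actual interview data.
answers = {
    "Management Commitment": "high",
    "Awareness": "high",
    "Staff Involvement": "high",
    "Training": "medium",
    "Experienced Staff": "none-low",
}
```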

FINDINGS – CONCLUSION

In this paper, factors that affect the duration of CMMI certification programs were investigated. First, factors of SPI success were identified from the literature. In the second phase, an interview with a CMMI Level 3 company was held in order to identify the factors that affect CMMI program duration. The findings of the analysis of the interview are discussed below. Organization A had no previous CMM/CMMI experience. However, the interviewee from Organization A mentioned that even though they were not experienced with CMM/CMMI, they had the advantage of being enthusiastic about SPI and knowledgeable about software development processes. As for Management Involvement, in Organization A management actively participated in CMMI meetings and provided the necessary feedback. It can be concluded that when management takes on responsibility, an accelerated CMMI program can be achieved. When it comes to Management Commitment, Organization A stated that management provided strong leadership and commitment for the SPI program and approved changes in the budget when necessary. Within the framework of this study it has been observed that Staff Involvement, Awareness, Management Commitment, and Management Involvement have had a shortening effect on SPI duration. A qualitative research method has been applied throughout the study. Further validation of the effect of these factors on CMMI duration would be possible through extensive quantitative research, which would provide valuable guidelines to the industry, but that is outside the scope of the current study.


REFERENCES

[1] M. Diaz and J. Sligo, "How Software Process Improvement Helped Motorola", IEEE Software, vol. 14, 1997, pp. 75-81.
[2] J. Herbsleb, A. Carleton, J. Rozum, J. Seigel, and D. Zubrow, "Benefits of CMM-Based Software Process Improvement: Initial Results", Technical Report, Carnegie Mellon University, Pittsburgh, Pennsylvania, 1994.
[3] J. Herbsleb, D. Zubrow, D. Goldenson, W. Hayes, and M. Paulk, "Software Quality and the Capability Maturity Model", Communications of the ACM, vol. 40, 1997, pp. 30-40.
[4] F. Akmenek and A. Tarhan, "The Leap to Level 3: Can It Be a Short Journey?", SEI-ESEPG Conference, June 2003, London, England.
[5] Z. Tufail, J. Kellum, and T. Olson, "Rapidly Defining a Lean CMMI Maturity Level 3 Process", 6th Annual CMMI Technology Conference & User Group, 2006, Denver, Colorado.
[6] S. S. Zeid, "Moving from CMM level-2 to CMM level-3", Egypt-SPIN Newsletter, Issue 6, 2004, pp. 3-8. Retrieved April 17, 2008 from http://www.secc.org.eg/SPIN%20Newsletter.asp
[7] D. H. Wilson, T. Hall, and N. Baddoo, "A framework for evaluation and prediction of software process improvement success", The Journal of Systems and Software, vol. 59, 2001, pp. 135-142.
[8] M. Niazi, D. Wilson, and D. Zowghi, "A maturity model for the implementation of software process improvement: an empirical study", The Journal of Systems and Software, vol. 74, 2005, pp. 155-172.
[9] M. Niazi, D. Wilson, and D. Zowghi, "A framework for assisting the design of effective software process improvement implementation strategies", The Journal of Systems and Software, vol. 78, 2005, pp. 204-222.
[10] F. Petterson, M. Ivarsson, T. Gorschek, and P. Ohman, "A practitioner's guide to light weight software process assessment", The Journal of Systems and Software, vol. 81, 2007, pp. 972-995.
[11] C. Cares, X. Franch, E. Mayol, and E. Alvarez, "Goal-Driven Agent-Oriented Software Processes", Proceedings of the 32nd EUROMICRO Conference on Software Engineering and Advanced Applications, Cavtat/Dubrovnik, Croatia, 2006, pp. 336-342.
[12] P. Berander and C. Wohlin, "Identification of Key Factors in Software Process Management - A Case Study", Proceedings of the 2003 International Symposium on Empirical Software Engineering (ISESE'03), 2003, pp. 316-325.
[13] T. Dyba, "An Empirical Investigation of the Key Factors for Success in Software Process Improvement", IEEE Transactions on Software Engineering, vol. 31, 2005, pp. 410-424.
[14] F. Guerrero and Y. Eterovic, "Adopting the SW-CMM in a Small IT Organisation", IEEE Software, vol. 21, 2004, pp. 29-35.
[15] T. G. Olson and M. Sachlis, "Aggressively Achieving CMM Level 3 in One Year", Presentation, SEPG 2002, Phoenix, AZ, 2002.
[16] G. Jackelen, "CMMI Level 2 Within Six Months? No Way!", CrossTalk: The Journal of Defense Software Engineering, Feb. 2007, pp. 13-16.
[17] J. Iversen and O. Ngwenyama, "Problems in measuring effectiveness in software process improvement: A longitudinal study of organizational change at Danske Data", International Journal of Information Management, vol. 26, 2006, pp. 30-43.
[18] K. Balla, T. Bemelmans, R. Kusters, and J. Trienekens, "Quality Through Managed Development and Implementation of a Quality Management System for a Software Company", Software Quality Journal, vol. 9, 2001, pp. 177-193.


Proceeding Number: 100/77

A Comparative Analysis of Evolution of Neural Network Topologies for the Double Pole Balancing Problem Asil ALKAYA, Celal Bayar University, Department of Business Administration, Manisa, Turkey, [email protected]

Keywords: Evolutionary Algorithms, Neural Networks

INTRODUCTION

Controlling unstable nonlinear systems with neural networks can be problematic. The successful application of classic control design techniques usually requires extensive knowledge of the system to be controlled, including an accurate model of its dynamics. In some situations, this information may be difficult or even impossible to obtain. The challenge is to control a system without a priori information about its dynamics. One way to achieve this is to evolve neural networks that control the system using only sparse feedback from it. The classic pole balancing problem is no longer difficult enough to serve as a measure of the learning efficiency of these systems. The double-pole case, where two poles connected to the cart must be balanced simultaneously, is much more difficult, especially when velocity information is not available. In this article, NeuroEvolution of Augmenting Topologies (NEAT) is used to evolve a controller for the standard double-pole task and for a much harder, non-Markovian version obtained by withholding the velocity information. NEAT provides a principled methodology for implementing a complexifying search from a minimal starting point in any such structure. NEAT is designed to take advantage of structure as a way of minimizing the dimensionality of the search space of connection weights. If structure is evolved such that topologies are minimized and grown incrementally, significant performance gains result.

LITERATURE REVIEW

Most neuroevolution (NE) systems that have been tested on pole balancing evolve connection weights on networks with a fixed topology (Gomez and Miikkulainen 1999; Moriarty and Miikkulainen 1996; Saravanan and Fogel 1995; Whitley et al. 1993; Wieland 1991). On the other hand, NE systems that evolve both network topologies and connection weights simultaneously have also been proposed (Angeline et al. 1993; Gruau et al. 1996; Yao 1999). A major question in NE is whether such Topology and Weight Evolving Artificial Neural


Networks (TWEANNs) can enhance the performance of NE. On one hand, evolving topology along with weights might make the search more difficult. On the other, evolving topologies can save the time of having to find the right number of hidden neurons for a particular problem (Gruau et al. 1996). In a recent study, a topology-evolving method called Cellular Encoding (CE; Gruau et al. 1996) was compared to a fixed-network method called Enforced Subpopulations (ESP) on the double-pole balancing task without velocity inputs (Gomez and Miikkulainen 1999). Since ESP had no a priori knowledge of the correct number of hidden nodes for solving the task, each time it failed it was restarted with a new random number of hidden nodes. However, even then, ESP was five times faster than CE. This study aims to demonstrate the opposite conclusion: if done right, evolving structure along with connection weights can significantly enhance the performance of NE (Stanley and Miikkulainen, 2002).

METHODS

NEAT is a system for evolving both the connection weights and topology of ANNs simultaneously. It does so by means of crossover and three types of mutation:

• Modify connection weight mutation
• Add connection mutation
• Add neuron mutation

The first type of mutation uniformly perturbs the weight of an existing connection. The second type adds connections between unconnected neurons (or self-connections to neurons in the case of recurrent networks). The third replaces an existing connection with a neuron and a single incoming and outgoing connection. To implement simplification, a fourth mutation was added to the NEAT framework:

• Delete connection mutation

The delete connection mutation rate determines the percentage of existing connections to be removed. Connection genes are sorted, and then deleted, in ascending order. Neurons and associated substructures stranded due to this mutation are removed from the topology. This methodology was chosen under the assumption that connections with weights closer to zero are less influential, and thus better candidates for deletion.
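The four operators above can be sketched on a deliberately minimal genome encoding (a flat list of connection genes). This is an illustrative toy under our own representation assumptions, not the NEAT reference implementation; in particular, real NEAT also tracks innovation numbers and speciation, which are omitted here.

```python
import random

# A connection gene is a tuple: (src_neuron, dst_neuron, weight, enabled).

def mutate_weight(genome, rng, power=0.5):
    """Uniformly perturb the weight of a random existing connection."""
    i = rng.randrange(len(genome))
    s, d, w, e = genome[i]
    genome[i] = (s, d, w + rng.uniform(-power, power), e)

def add_connection(genome, neurons, rng):
    """Connect two previously unconnected neurons (self-connections allowed)."""
    existing = {(s, d) for s, d, _, _ in genome}
    candidates = [(s, d) for s in neurons for d in neurons if (s, d) not in existing]
    if candidates:
        s, d = rng.choice(candidates)
        genome.append((s, d, rng.uniform(-1, 1), True))

def add_neuron(genome, next_id, rng):
    """Split a connection: disable it and insert a neuron with a single
    incoming and outgoing connection."""
    i = rng.randrange(len(genome))
    s, d, w, _ = genome[i]
    genome[i] = (s, d, w, False)            # old connection disabled
    genome.append((s, next_id, 1.0, True))  # incoming connection, weight 1
    genome.append((next_id, d, w, True))    # outgoing keeps the old weight
    return next_id + 1

def delete_connections(genome, rate):
    """Remove the given fraction of connections, smallest |weight| first,
    following the assumption that near-zero weights are least influential."""
    genome.sort(key=lambda g: abs(g[2]))
    del genome[: int(len(genome) * rate)]
```

`delete_connections` implements the simplification rule quoted above: genes are sorted by weight magnitude and the smallest fraction is removed.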

FINDINGS & CONCLUSION

The state of this system is defined by six state variables: the angle of each pole from vertical, the angular velocity of each pole, the position of the cart on the track, and the velocity of the cart. The long pole is always set to 1 meter. Three different experiments were conducted using this configuration with the following two goal tasks:

1. with velocity information;
2. without velocity information.


All of the pole balancing experiments were implemented using the Runge-Kutta fourth-order method with a step size of 0.01 s. The state variables were scaled to [-1.0, 1.0] before being input to the network. During simulation the networks output a force value every 0.02 seconds in the range [-10, 10] N. For both tasks, the initial angle of the long pole was set to 1 degree (so that the networks could not control the system by simply outputting values close to zero), and fitness was determined by the number of time steps a network could keep both poles within [-28, 28] degrees from vertical and keep the cart between the ends of a 4.8 meter track. A task was considered solved if a network could balance the poles for 10,000 time steps. Neuron chromosomes were encoded as strings of floating point numbers. Arithmetic crossover was used to generate new neurons. Each chromosome was mutated with probability 0.2, replacing a randomly chosen weight value with a random value within the range [-7.0, 7.0]. The techniques and parameters were found effective experimentally; small deviations from them produce roughly equivalent results.
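The classical fourth-order Runge-Kutta step used for the simulation is easy to state generically. As a sketch it is applied below to a simple harmonic oscillator rather than the full double-pole dynamics (whose equations of motion are not given in this abstract), using the quoted step size of 0.01 s.

```python
import math

def rk4_step(f, state, t, h):
    """One classical fourth-order Runge-Kutta step of size h."""
    k1 = f(t, state)
    k2 = f(t + h / 2, [x + h / 2 * k for x, k in zip(state, k1)])
    k3 = f(t + h / 2, [x + h / 2 * k for x, k in zip(state, k2)])
    k4 = f(t + h, [x + h * k for x, k in zip(state, k3)])
    return [x + h / 6 * (a + 2 * b + 2 * c + d)
            for x, a, b, c, d in zip(state, k1, k2, k3, k4)]

def oscillator(t, state):
    # Stand-in dynamics: x'' = -x, written as a first-order system [x, v].
    x, v = state
    return [v, -x]

# Integrate 100 steps of 0.01 s, i.e. up to t = 1.0 s.
state, t, h = [1.0, 0.0], 0.0, 0.01
for _ in range(100):
    state = rk4_step(oscillator, state, t, h)
    t += h
```

For this system the exact solution is x(t) = cos(t), and after 100 steps the numerical state agrees with cos(1.0) to well within 1e-6, illustrating why a fixed-step RK4 with h = 0.01 s is adequate for the cart-pole simulation.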

REFERENCES

Gomez, F. J., and R. Miikkulainen (1999). Solving non-Markovian tasks with neuroevolution. In T. Dean (Ed.), Proceedings of the Sixteenth International Joint Conference on Artificial Intelligence (IJCAI), Stockholm, Sweden, pp. 1356-1361. Morgan Kaufmann.
Moriarty, D. E., and R. Miikkulainen (1996). Efficient reinforcement learning through symbiotic evolution. Machine Learning 22, 11-32.
Stanley, K. O., and R. Miikkulainen (2002). Evolving neural networks through augmenting topologies. Evolutionary Computation 10(2), 99-127.
Xu, X., H. He, and D. Hu (2002). Efficient reinforcement learning using recursive least-squares methods. Journal of Artificial Intelligence Research 16, 259-292.
Gomez, F., and R. Miikkulainen (1997). Incremental evolution of complex general behavior. Adaptive Behavior 5, 317-342.
Stanley, K. O., and R. Miikkulainen (2004). Competitive coevolution through evolutionary complexification. Journal of Artificial Intelligence Research 21, 63-100.
Gomez, F., and R. Miikkulainen (2001). Learning robust non-linear control with neuroevolution. Technical Report AI01-292, Department of Computer Sciences, The University of Texas at Austin.
Moriarty, D. E., A. C. Schultz, and J. J. Grefenstette (1999). Evolutionary algorithms for reinforcement learning. Journal of Artificial Intelligence Research 11, 199-229.
Mandischer, M. (2002). A comparison of evolution strategies and backpropagation for neural network training. Neurocomputing 42(1-4), 87-117.
Beyer, H.-G., and H.-P. Schwefel (2002). Evolution strategies: A comprehensive introduction. Natural Computing 1(1), 3-52.


Proceeding Number: 100/80

Metrics Threshold Values vs. Machine Learners: A Preliminary Study of Cross-Company Data in Detecting Defective Modules Cagatay CATAL, TUBITAK, Information Technologies Institute, Kocaeli, Turkey, [email protected] Kerime BALKAN, TUBITAK, Information Technologies Institute, Kocaeli, Turkey, [email protected] Ipek TÜRCAN, TUBITAK, Information Technologies Institute, Kocaeli, Turkey, [email protected]

Keywords: Software Engineering, Software Quality, Cross-company Prediction Models, Receiver Operating Characteristic (ROC) Curve, Software Metrics Threshold Values, Software Fault Prediction, Software Quality Assurance

INTRODUCTION

Software fault prediction is one of the quality assurance activities, such as formal verification, testing, and inspection, within the Software Quality Engineering discipline [3]. Most studies have used software metrics and fault data together to build the model. If no fault data are available, supervised learning algorithms cannot be used. Researchers have used data from other companies when no local data exist in the cost estimation problem [8]. Even though cost estimation is a different software engineering problem than software fault prediction, a similar scenario should work for this problem too. In this study, we aimed to design and implement a fault prediction technique based on software metrics threshold values instead of machine-learner-based predictive models. Our analyses aim at answering the following research questions:

• Can we use cross-company (CC) data to calculate metrics threshold values and apply these threshold values in a comprehensive software fault prediction technique instead of complex machine learning-based approaches?
• Is our threshold-based approach better than the popular Naive Bayes based fault predictors that use CC data?

According to our technique, a module is predicted as fault-prone if the majority of its software metric values exceed their corresponding threshold values.
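The majority-vote decision rule just stated is straightforward to express in code. The metric names and threshold numbers below are invented placeholders for illustration, not the thresholds derived in the study.

```python
def predict_fault_prone(module_metrics, thresholds):
    """Flag a module as fault-prone when the majority of its metric
    values exceed their corresponding threshold values."""
    exceeded = sum(
        1 for name, value in module_metrics.items() if value > thresholds[name]
    )
    return exceeded > len(module_metrics) / 2

# Hypothetical thresholds (e.g. cyclomatic complexity, lines of code, ...).
thresholds = {"v(g)": 10, "loc": 100, "ev(g)": 4}
risky = {"v(g)": 14, "loc": 180, "ev(g)": 3}   # 2 of 3 metrics exceeded
clean = {"v(g)": 3, "loc": 40, "ev(g)": 1}     # 0 of 3 metrics exceeded
```

With these placeholder values, `risky` is flagged (two of three metrics over threshold) while `clean` is not.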


The main contribution of our paper is twofold: 1. A novel high-performance fault prediction technique based on metrics threshold values. 2. New evidence that CC data is useful for building fault prediction models.

LITERATURE REVIEW

Nearly all software fault prediction studies (supervised learning approaches and statistical models) have used within-company data as previous fault data. Although most companies do not collect within-company data in practice, there are only a few studies that explain how to predict the fault-proneness of modules when no previous fault data exist. One approach is to use CC data as previous fault data and run supervised learning algorithms on the CC data. Turhan et al. [22] analyzed seven NASA projects and regarded these datasets as data of different companies that work as contractors for NASA. For each dataset, they used the rest of the datasets as CC data and ran a supervised learning algorithm, Naive Bayes, on them. Zimmermann et al. [27] stated that the use of CC data is a serious problem and explained that it is questionable to consider NASA projects as CC data, because all NASA projects had to follow stringent ISO-9001 industrial practices. For that reason, they used data from both the open source community and commercial applications of Microsoft Corporation. Most of the authors of Zimmermann et al.'s [27] paper work at Microsoft Research and could therefore easily access the data of applications developed by Microsoft Corporation.

METHODS

This paper involves case studies of six public NASA datasets that were collected from NASA contractors across the United States. The NASA projects located in the PROMISE repository were used for our experiments. We used six public datasets called KC2, KC3, CM1, MW1, MC2, and PC1. The NASA datasets include 21 method-level metrics; however, some researchers use only thirteen metrics from these datasets [20], and we eliminated four metrics from those thirteen. Probability of false alarm (PF), probability of detection (PD), and area under the ROC curve (AUC) were used for benchmarking. Our benchmarking compared two techniques: Naive Bayes with the logNum filter and our threshold-based fault prediction approach. For each dataset, the other five datasets were combined and used as the training dataset. For the Naive Bayes approach, the training and test datasets were filtered with the logNum function; after filtering, training was performed with Naive Bayes, and after prediction the performance evaluation parameters were calculated. For threshold-based fault prediction, the training dataset was used to calculate the threshold values; after the corresponding thresholds were calculated by our threshold calculating method, prediction was done on the test dataset. The performance evaluation parameters (PD, PF, AUC) were calculated for both techniques, and the results were compared.
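The evaluation measures PD and PF can be computed directly from the confusion counts; a sketch with made-up labels follows (AUC is omitted, since it requires ranked prediction scores rather than binary labels).

```python
def pd_pf(actual, predicted):
    """Probability of detection and probability of false alarm.

    PD = TP / (TP + FN): fraction of truly faulty modules caught.
    PF = FP / (FP + TN): fraction of clean modules wrongly flagged.
    """
    tp = sum(1 for a, p in zip(actual, predicted) if a and p)
    fn = sum(1 for a, p in zip(actual, predicted) if a and not p)
    fp = sum(1 for a, p in zip(actual, predicted) if not a and p)
    tn = sum(1 for a, p in zip(actual, predicted) if not a and not p)
    return tp / (tp + fn), fp / (fp + tn)

# Invented labels: 3 faulty modules, 5 clean ones.
actual    = [1, 1, 1, 0, 0, 0, 0, 0]
predicted = [1, 1, 0, 1, 0, 0, 0, 0]
pd, pf = pd_pf(actual, predicted)   # PD = 2/3, PF = 1/5
```

The trade-off discussed in the conclusion is visible here: flagging more modules raises PD but also raises PF, which is why the paper weighs PD more heavily for mission-critical software.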


FINDINGS & CONCLUSION

Our threshold-based fault prediction technique achieved a larger PD value than the Naive Bayes based approach. For mission-critical applications, PD values are more important than PF values because all faults should be removed before deployment. Therefore, we suggest that companies use our threshold-based approach for mission-critical applications in addition to the Naive Bayes based approach. The findings in this study support the findings in Turhan et al.'s study [22], which suggests using CC data. Our contribution in this study is twofold: first, we showed new evidence that CC data is useful for building fault prediction models; second, we proposed a new high-performance fault prediction technique based on metrics threshold values. For future work, we will try to improve our threshold-based approach to achieve smaller PF values and higher PD values. In addition, we will investigate the effect of projects from different domains. In this study, all of the projects were from the aerospace domain, and finding project data from a similar domain is not an easy task. Using projects from different domains and different processes may not lead to accurate prediction models, and we will consider the effect of these issues on our models.

REFERENCES

[1] X. B. Cao, Y. W. Xu, D. Chen, and H. Qiao, Associated evolution of a support vector machine-based classifier for pedestrian detection, Inf. Sci. 179 (8) (2009) 1070-1077.
[2] C. Catal, B. Diri, A systematic review of software fault prediction studies, Expert Systems with Applications 36 (4) (2009) 7346-7354.
[3] C. Catal, B. Diri, Investigating the effect of dataset size, metrics sets, and feature selection techniques on software fault prediction problem, Information Sciences 179 (8) (2009) 1040-1058.
[4] K. El Emam, S. Benlarbi, N. Goel, W. Melo, H. Lounis, S. Rai, The optimal class size for object-oriented software, IEEE Transactions on Software Engineering 28 (5) (2002) 494-509.
[5] D. Elworthy, Does Baum-Welch re-estimation help taggers?, Proceedings of the 4th Conference on Applied Natural Language Processing, Stuttgart, Germany, 1994, pp. 53-58.
[6] K. Erni, C. Lewerentz, Applying design-metrics to object-oriented frameworks, Proceedings of the Third International Symposium on Software Metrics: From Measurement to Empirical Results, 1996, pp. 64-74.
[7] M. Halstead, Elements of Software Science, Elsevier, New York, 1977.
[8] B. A. Kitchenham, E. Mendes, and G. H. Travassos, Cross versus within-company cost estimation studies: A systematic review, IEEE Trans. Softw. Eng. 33 (5) (2007) 316-329.
[9] S. Lessmann, B. Baesens, C. Mues, and S. Pietsch, Benchmarking classification models for software defect prediction: A proposed framework and novel findings, IEEE Trans. Software Eng. 34 (4) (2008) 485-495.
[10] J. Liu, Q. Hu, and D. Yu, A weighted rough set based method developed for class imbalance learning, Inf. Sci. 178 (4) (2008) 1235-1256.
[11] T. McCabe, A complexity measure, IEEE Transactions on Software Engineering 2 (4) (1976) 308-320.
[12] E. Menahem, L. Rokach, and Y. Elovici, Troika - An improved stacking schema for classification tasks, Inf. Sci. 179 (24) (2009) 4097-4122.
[13] T. Menzies, B. Turhan, A. Bener, G. Gay, B. Cukic, and Y. Jiang, Implications of ceiling effects in defect predictors, 4th International Workshop on Predictor Models in Software Engineering, Leipzig, Germany, 2008, pp. 47-54.
[14] T. Menzies, J. Greenwald, and A. Frank, Data mining static code attributes to learn defect predictors, IEEE Transactions on Software Engineering 33 (1) (2007) 2-13.
[15] K. Nigam, A. K. McCallum, S. Thrun, and T. Mitchell, Text classification from labeled and unlabeled documents using EM, Machine Learning 39 (2000) 103-144.
[16] L. Rosenberg, Applying and interpreting object oriented metrics, Software Technology Conference, Salt Lake City, Utah, 1998.
[17] N. Seliya, Software quality analysis with limited prior knowledge of faults, Graduate Seminar, Wayne State University, Dept. of Computer Science, www.cs.wayne.edu/graduateseminars/gradsem_f06/Slides/seliya_wsu_talk.ppt, 2006.
[18] N. Seliya, T. M. Khoshgoftaar, Software quality analysis of unlabeled program modules with semisupervised clustering, IEEE Transactions on Systems, Man and Cybernetics - Part A: Systems and Humans 37 (2) (2007) 201-211.
[19] N. Seliya, T. M. Khoshgoftaar, S. Zhong, Semi-supervised learning for software quality estimation, Proc. 16th IEEE Intl. Conf. on Tools with Artificial Intelligence, Boca Raton, FL, 2004, pp. 183-190.
[20] N. Seliya, T. M. Khoshgoftaar, Software quality estimation with limited fault data: a semi-supervised learning perspective, Software Quality Journal 15 (3) (2007) 327-344.
[21] R. Shatnawi, W. Li, J. Swain, and T. Newman, Finding software metrics threshold values using ROC curves, Journal of Software Maintenance and Evolution: Research and Practice, 2009.
[22] B. Turhan, T. Menzies, A. B. Bener, and J. D. Stefano, On the relative value of cross-company and within-company data for defect prediction, Empirical Software Engineering 14 (2009) 540-578.
[23] K. Ulm, A statistical method for assessing a threshold in epidemiological studies, Statistics in Medicine 10 (1991) 341-349.
[24] M. Zhang, J. M. Peña, and V. Robles, Feature selection for multi-label naive Bayes classification, Information Sciences 179 (19) (2009) 3218-3229.
[25] S. Zhong, T. M. Khoshgoftaar, and N. Seliya, Unsupervised learning for expert-based software quality estimation, Proc. of the 8th Intl. Symp. on High Assurance Systems Eng., Tampa, FL, 2004, pp. 149-155.
[26] S. Zhong, T. M. Khoshgoftaar, N. Seliya, Analyzing software measurement data with clustering techniques, IEEE Intelligent Systems 19 (2) (2004) 20-27.
[27] T. Zimmermann, N. Nagappan, H. Gall, E. Giger, and B. Murphy, Cross-project defect prediction: a large scale experiment on data vs. domain vs. process, Proceedings of the 7th Joint Meeting of the European Software Engineering Conference and the ACM SIGSOFT Symposium on the Foundations of Software Engineering, ACM, New York, NY, 2009, pp. 91-100.

June 1 -4, Kusadasi, Aydin, Turkey http://iscse2011.gediz.edu.tr

160

2 nd International Symposium on Computing in Science & Engineering

Proceeding Number: 100/82

Complexity of Extremal Set Problem
Mustafa ATICI, Western Kentucky University, Department of Mathematics and Computer Science, Bowling Green, KY, USA, [email protected]

Keywords: Extremal Set, Hash Function, NP-complete, Complexity

INTRODUCTION

Let the set [n] = {1, 2, ..., n} be given. Find the minimum cardinality of a collection S of subsets of [n] such that, for any two distinct elements x, y ∈ [n], there exist subsets A, B ∈ S with x ∈ A, y ∈ B, and A ∩ B = ∅. If S may be chosen freely from 2^[n], the power set of [n], then such a set S can be found in polynomial time. If S must be chosen from a strict subset of 2^[n], then finding such a set S is an NP-complete problem.

LITERATURE REVIEW

The extremal set problem was originally motivated by a problem in graph theory [1, 2]. In a simple graph G, the distance between two vertices u, v of G is the least number of edges in a path joining u and v; any such shortest path is called a geodesic. A set U of vertices of G is called a geodetic set if the union of all the geodesics joining pairs of points of U is the whole graph G. Let g(G) denote the minimum number of vertices in a geodetic set for G, and call g(G) the geodetic number of G. Similarly, the edge geodetic number g'(G) of a graph is defined in [1, 2]. The problem concerning geodetic sets in graphs is directly related to the extremal set problem above, and lower bounds on the geodetic number and the edge geodetic number are given in those two papers by using the extremal set problem. The following theorem gives a lower bound on the


geodetic number g(G) of a graph G.

Theorem: If ω(G) is the clique number and g(G) the geodetic number of G, then g(G) ≥ 3 log(ω(G)).

METHODS

Here we first give the algorithm to find the extremal set S when we may pick any elements from the power set of [n]; this algorithm is polynomial. But if we must pick S from a strict subset of the power set of [n], the problem becomes hard: it has been shown to be NP-complete, so no polynomial algorithm is known.

FINDINGS & CONCLUSION

Algorithm to find S for ES([n], k):

1. Determine m (there is a theorem for this).
2. Set N = 3^m, 4·3^(m−1), or 2·3^m; i.e., N is the product of q_1, q_2, ..., q_k.
3. Construct the matrix M = (m_ij) of size N × k. Each row of M is (α_1, α_2, ..., α_k), where 1 ≤ α_i ≤ q_i.
4. Define A_ij = {r : m_ri = j}, for 1 ≤ i ≤ k, 1 ≤ j ≤ q_i, 1 ≤ r ≤ N, and define S(N) = {A_ij | 1 ≤ j ≤ q_i, 1 ≤ i ≤ k}.
5. Define S = {B | B = A \ {x : n+1 ≤ x ≤ N}, A ∈ S(N)}.
6. If |S| < k, return "NO"; else return "YES".
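The construction steps can be sketched concretely. The following is an illustrative Python rendering (not from the paper), assuming n = 17 with radices q = (2, 3, 3) as in the example below; it builds the mixed-radix matrix M, forms the sets A_ij, restricts them to [n], and checks the separation property:

```python
from itertools import product

n = 17
q = (2, 3, 3)                      # radices; N = 2*3*3 = 18 >= n (step 2)
N = 18

# Step 3: row r of M is the tuple (alpha_1, alpha_2, alpha_3), 1 <= alpha_i <= q_i.
rows = list(product(*(range(1, qi + 1) for qi in q)))

# Step 4: A_ij = { r : m_ri = j }.
A = {(i, j): {r for r in range(1, N + 1) if rows[r - 1][i - 1] == j}
     for i in range(1, len(q) + 1) for j in range(1, q[i - 1] + 1)}

# Step 5: restrict every set to [n] by dropping the padding elements n+1..N.
S = [frozenset(x for x in A[key] if x <= n) for key in sorted(A)]

def separated(x, y):
    """True if some disjoint pair B1, B2 in S has x in B1 and y in B2."""
    return any(x in B1 and y in B2 and not (B1 & B2) for B1 in S for B2 in S)

# Every pair of distinct elements of [17] is separated by the resulting sets.
assert all(separated(x, y) for x in range(1, n + 1)
           for y in range(1, n + 1) if x != y)
print(len(S))   # 8
```

The eight sets produced here correspond to B1–B8 in the worked example.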

Example: n = 17, so [17] = {1, 2, ..., 17}. N = 2×3² = 18 = 2×3×3, so q_1 = 2, q_2 = q_3 = 3. The matrix M (row index followed by the row entries) is:
1 1 1 1
2 1 1 2
3 1 1 3
4 1 2 1


5 1 2 2
6 1 2 3
7 1 3 1
8 1 3 2
9 1 3 3
10 2 1 1
11 2 1 2
12 2 1 3
13 2 2 1
14 2 2 2
15 2 2 3
16 2 3 1
17 2 3 2
18 2 3 3

so
A_11 = {1,2,3,4,5,6,7,8,9}
A_12 = {10,11,12,13,14,15,16,17,18}
A_21 = {1,2,3,10,11,12}
A_22 = {4,5,6,13,14,15}
A_23 = {7,8,9,16,17,18}
A_31 = {1,4,7,10,13,16}
A_32 = {2,5,8,11,14,17}
A_33 = {3,6,9,12,15,18}

Hence S = {B1, B2, B3, B4, B5, B6, B7, B8}, where
B1 = {1,2,3,4,5,6,7,8,9}
B2 = {10,11,12,13,14,15,16,17}
B3 = {1,2,3,10,11,12}
B4 = {4,5,6,13,14,15}
B5 = {7,8,9,16,17}
B6 = {1,4,7,10,13,16}
B7 = {2,5,8,11,14,17}
B8 = {3,6,9,12,15}

Now let us pick two different elements of [17]={1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17}


Say 3 and 8. Then we can find two elements of S containing 3 and 8, respectively, whose intersection is empty; B3 and B5 are two such sets. Whichever two distinct elements of [17] we pick, we can always find two such elements of S satisfying the property. If S must instead be picked from a strict subset of the power set of [n], then we have the following theorem.

Theorem: If S must be chosen from a strict subset of the power set 2^[n], then the decision problem ES is NP-complete. Determining such an extremal set S has several applications: as indicated above, it gives a lower bound on the geodetic number of a given graph G, and it also has applications in the construction of hash functions for data retrieval as well as in determining the reliability of computer networks.

REFERENCES

[1] Atici, M., and Vince, A.: Geodetics in Graphs, an Extremal Set Problem, and Perfect Hash Families, Graphs and Combinatorics 18, 403-413 (2002)
[2] Atici, M.: On the Edge Geodetic Number of a Graph, Intern. J. Computer Math., Vol. 80, No. 7, 853-861 (2003)
[3] Garey, M. R. and Johnson, D. S.: Computers and Intractability: A Guide to the Theory of NP-completeness. Freeman (1979)


Proceeding Number: 100/83

Simulated Annealing Based Parameter Selection in Watermarking Algorithms
Dr. Ersin ELBASI, TÜBITAK

Keywords: Watermarking, Simulated Annealing, Wavelet

INTRODUCTION

A recent DWT image watermarking paper embeds a PRN sequence as a watermark in three bands, excluding the low-pass subband, using coefficients that are higher than a given threshold. During watermark detection, all the coefficients higher than another threshold are chosen for correlation with the original watermark. In another paper, we extended the idea to embed the same watermark in two bands (LL and HH). There are two parameters in all of these algorithms: the scaling factor and the threshold. These parameters have previously been determined by a "try and use" (trial-and-error) method. In this work we use the simulated annealing method to determine the best scaling factor and threshold values for a high degree of robustness and invisibility. In the end, automated selection gives better results than the try-and-use method in semi-blind PRN embedding algorithms for gray-scale and color image watermarking.

LITERATURE REVIEW

A digital watermark is a pattern of bits inserted into a multimedia element such as a digital image, an audio or video file. The name comes from the barely visible text or graphics imprinted on stationery that identifies the manufacturer of the stationery.

There are several proposed or actual watermarking applications: broadcast

monitoring, owner identification, proof of ownership, transaction tracking, content authentication, copy control, and device control. In applications such as owner identification, copy control, and device control, the most important properties of a watermarking system are robustness, invisibility, data capacity, and security. An embedded watermark should not introduce a significant degree of distortion in the cover image. The perceived degradation of the watermarked image should be imperceptible so as not to affect the viewing experience of the image. Robustness is the resistance of the watermark against normal A/V processes or intentional attacks such as addition of noise, filtering, lossy compression, resampling, scaling, rotation, cropping, and A-to-D and D-to-A conversions. Data capacity refers to the amount of data that can be embedded without affecting perceptual transparency.


METHODS

Watermark embedding process:
1. Apply the simulated annealing method and select optimal scaling factor and threshold values.
2. Compute the DWT of an N×N gray-scale image I.
3. Exclude the low-pass DWT coefficients.
4. Embed the watermark (i.e., a PRN sequence) into the DWT coefficients higher than a given threshold T1: for T = {t_i}, set t'_i = t_i + α|t_i|x_i, where i runs over all DWT coefficients > T1.
5. Replace T = {t_i} with T' = {t'_i} in the DWT domain.
6. Compute the inverse DWT to obtain the watermarked image I'.
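Step 1 can be illustrated with a generic simulated annealing loop over the two parameters. This is a hedged sketch, not the authors' code: the objective `quality` is a hypothetical stand-in (in the real algorithm it would embed the watermark with a candidate (α, T1), apply attacks, and score robustness plus invisibility), and the parameter ranges are assumptions.

```python
import math
import random

random.seed(0)  # deterministic for illustration

def quality(alpha, t1):
    # Hypothetical smooth objective with a single optimum near (0.05, 40.0);
    # the real objective would be measured from embedding/attack/detection runs.
    return -((alpha - 0.05) ** 2 + ((t1 - 40.0) ** 2) / 1000.0)

def anneal(steps=4000, temp=1.0, cooling=0.999):
    state = (0.10, 80.0)               # (scaling factor, threshold) starting guess
    best, best_q = state, quality(*state)
    for _ in range(steps):
        # Propose a clamped Gaussian perturbation of the current parameters.
        cand = (min(max(state[0] + random.gauss(0, 0.01), 0.01), 0.20),
                min(max(state[1] + random.gauss(0, 2.0), 10.0), 100.0))
        dq = quality(*cand) - quality(*state)
        # Metropolis acceptance: always take improvements, sometimes take worse moves.
        if dq > 0 or random.random() < math.exp(dq / temp):
            state = cand
        if quality(*state) > best_q:
            best, best_q = state, quality(*state)
        temp *= cooling                # geometric cooling schedule
    return best

alpha, t1 = anneal()
print(round(alpha, 3), round(t1, 1))
```

The cooling schedule and perturbation widths would in practice be tuned to the cost of one embed/attack/detect evaluation.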

FINDINGS & CONCLUSION

Our experiments show that for one group of attacks (JPEG compression, cropping, and resizing), the extractions are better in the lower bands. For another group of attacks (Gaussian noise, intensity adjustment, sharpening, histogram equalization, and gamma correction), the extractions are better in the higher bands. Detection based on automatically selected parameters is more robust to common attacks.

REFERENCES

[1] R. Dugad, K. Ratakonda and N. Ahuja, "A New Wavelet-Based Scheme for Watermarking Images," Proceedings of the 1998 International Conference on Image Processing (ICIP '98), Vol. 2, Chicago, IL, October 4-7, 1998, pp. 419-423.
[2] C. Hsu, J. Wu, "DCT-Based Watermarking for Video," IEEE Transactions on Consumer Electronics, Vol. 44, No. 1, February 1998, pp. 206-216.
[3] G. Doerr, J. Dugelay, "A Guide Tour of Video Watermarking," Signal Processing: Image Communication, Vol. 18, No. 4, April 2003, pp. 263-282.
[4] F. Hartung, B. Girod, "Digital Watermarking of Raw and Compressed Video," Proc. European EOS/SPIE Symposium on Advanced Imaging and Network Technologies, Berlin, Germany, October 1996.
[5] Pik-Wah Chan, Michael R. Lyu and Roland T. Chin, "Copyright Protection on the Web: A Hybrid Digital Video Watermarking Scheme," Proceedings of the 13th International World Wide Web Conference (WWW '04), New York, May 17-22, 2004, pp. 354-355.
[6] P. Tao and A. M. Eskicioglu, "A Robust Multiple Watermarking Scheme in the DWT Domain," Optics East 2004 Symposium, Internet Multimedia Management Systems V Conference, Philadelphia, PA, October 25-28, 2004, pp. 133-144.


[7] E. Elbasi and A. M. Eskicioglu, "MPEG-1 Video Semi-Blind Watermarking Algorithm in the DWT Domain," IEEE International Symposium on Broadband Multimedia Systems and Broadcasting 2006, Las Vegas, NV, April 6-7, 2006.
[8] H. Wang, Z. Lu, J. Pan, S. Sun, "Robust Blind Video Watermarking with Adaptive Embedding Mechanism," International Journal of Innovative Computing, Information and Control, Vol. 1, No. 2, June 2005.
[9] Pik-Wah Chan and Michael R. Lyu, "Digital Video Watermarking with a Genetic Algorithm," Proceedings of the International Conference on Digital Archives Technologies (ICDAT '05), Taipei, Taiwan, June 16-17, 2005, pp. 139-153.
[10] L. R. Rabiner, "A Tutorial on Hidden Markov Models and Selected Applications in Speech Recognition," Proceedings of the IEEE, Vol. 77, No. 2, February 1989, pp. 257-286.


Proceeding Number: 100/84

M-SVD Based Image and Video Quality Measures
Dr. Ersin ELBASI, TÜBITAK
Dr. Peining Tao, The City University of New York
Prof. Dr. Ahmet M. Eskicioglu, Brooklyn College

Keywords: Multimedia, Quality Measurement, Watermarking

INTRODUCTION

In subjective evaluation of distorted images, human observers usually consider the type of distortion, the amount of distortion, and the distribution of error. We recently proposed an image quality measure, M-SVD, for grayscale images that can express the quality of distorted images numerically or graphically. As a graphical tool, it predicts the distortion based on the three factors used by human observers. As a numerical tool, it evaluates the overall visual quality of the distorted image by computing a single value. It performs better than two state-of-the-art metrics, Q and MSSIM, especially when we compute the correlation with the mean opinion score across different types of noise. Each test image was degraded using six types of noise (JPEG, JPEG 2000, Gaussian blur, Gaussian noise, sharpening, and DC-shifting), each with five different levels of intensity. The measure was later extended to full color images using a color model which decouples the color and gray-scale information in an image.

Our experiments show that using only the luminance component, the measure outperforms Q and

MSSIM. When we also use the two chrominance layers, the performance of M-SVD becomes slightly higher, whereas the performance of Q and MSSIM is degraded. This indicates that the color components may also contribute to the performance of the proposed measure. In the proposed work, we investigate the applicability of M-SVD to watermarked images and video sequences.

LITERATURE REVIEW

Measurement of image quality is a challenging problem in many image processing fields, ranging from lossy compression to printing. The quality measures in the literature can be classified into two groups: subjective and objective. Subjective evaluation is cumbersome, as human observers can be influenced by several critical factors such as environmental conditions, motivation, and mood. The most common objective evaluation tool, the Mean Square Error (MSE), is very unreliable, resulting in poor correlation with the human visual system (HVS). In spite of their complicated algorithms, the more recent HVS-based objective measures do not


appear to be superior to simple pixel-based measures like the MSE, Peak Signal-to-Noise Ratio (PSNR), or Root Mean Squared Error (RMSE). It is argued that an ideal image quality measure should be able to describe the amount of distortion, the type of distortion, and the distribution of error. Undoubtedly, there is a need for an objective measure that provides more information than a single numerical value. Only a few multi-dimensional measures exist in the relevant literature today.

Image quality measures can be classified using a number of criteria, such as the type of domain (pixel or transform), the type of distortion predicted (noise, blur, etc.), and the type of information needed to assess the quality (original image, distorted image, etc.). Table 1 gives a classification based on these three criteria and includes representative examples of recently published papers. Measures that require both the original image and the distorted image are called "full-reference" or "non-blind" methods; measures that do not require the original image are called "no-reference" or "blind" methods; and measures that require the distorted image and partial information about the original image are called "reduced-reference" methods.

Every real matrix A can be decomposed into a product of three matrices, A = USV^T, where U and V are orthogonal matrices (U^T U = I, V^T V = I) and S = diag(s1, s2, ...). The diagonal entries of S are called the singular values of A, the columns of U the left singular vectors of A, and the columns of V the right singular vectors of A. This decomposition is known as the Singular Value Decomposition (SVD) of A and is one of the most useful tools of linear algebra, with several applications to multimedia including image compression and watermarking.
The proposed graphical measure is a bivariate measure that computes the distance between the singular values of the original image block and the singular values of the distorted image block:

D_i = sqrt( Σ_{j=1}^{n} (s_j − ŝ_j)² )

where s_j are the singular values of the original block, ŝ_j are the singular values of the distorted block, and n is the block size. If the image size is k, we have (k/n) × (k/n) blocks. The set of distances, when displayed in a graph, represents a "distortion map." The numerical measure is derived from the graphical measure. It computes the global error expressed as a single numerical value depending on the distortion type:

M-SVD = ( Σ_{i=1}^{(k/n)×(k/n)} | D_i − D_mid | ) / ( (k/n) × (k/n) )

where D_mid represents the midpoint of the sorted D_i values, k is the image size, and n is the block size.
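The block distance and the global measure are straightforward to sketch. The following is an illustrative NumPy rendering (an assumption-laden sketch, not the authors' code), using random 64×64 test data in place of a standard test image:

```python
import numpy as np

def block_distance(orig_block, dist_block):
    """Per-block distance between singular-value vectors of the two blocks."""
    s = np.linalg.svd(orig_block, compute_uv=False)
    s_hat = np.linalg.svd(dist_block, compute_uv=False)
    return np.sqrt(np.sum((s - s_hat) ** 2))

def m_svd(orig, dist, n=8):
    """Global measure: mean absolute deviation of the block distances from
    the midpoint of the sorted distances, over all (k/n) x (k/n) blocks."""
    k = orig.shape[0]
    D = np.array([block_distance(orig[r:r+n, c:c+n], dist[r:r+n, c:c+n])
                  for r in range(0, k, n) for c in range(0, k, n)])
    d_mid = np.sort(D)[len(D) // 2]       # midpoint of the sorted distances
    return np.abs(D - d_mid).sum() / (k / n) ** 2

rng = np.random.default_rng(0)
img = rng.random((64, 64))
noisy = img + 0.05 * rng.standard_normal((64, 64))
print(m_svd(img, img), m_svd(img, noisy) > 0.0)
```

An identical pair yields a zero score, while any distortion produces a positive one; the array D itself is the "distortion map" of the graphical measure.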

METHODS

The measure was applied to the 512×512 gray-scale Lena image, one of the most widely used test images. In our experiments, we used six distortion types (JPEG, JPEG 2000, Gaussian blur, Gaussian noise, sharpening, and DC-shifting) at five distortion levels using 8×8 blocks. The distortion types, the distortion levels, and the associated parameters are shown in Table 1. For each distorted image, the measure has two outputs:
• local error, expressed as a 3-dimensional graph (provides the amount and type of error as well as its distribution in the image);
• global error, expressed as a single numerical value (provides the overall error based on the distortion).


High quality print-outs of distorted images were subjectively evaluated by approximately 15 observers. In the experiments, the observers were chosen among the undergraduate/graduate students and professors from the Department of Computer and Information Science at Brooklyn College.

FINDINGS & CONCLUSION

In this research, we extended the SVD-based image quality measure to watermarked images and video sequences. Watermarked images: we have been working with several wavelet-based watermarking algorithms and use the quality measure M-SVD to compare the watermarked images obtained by these algorithms. Video sequences: the measure was also applied to commonly used watermarked test video clips and its performance was tested. Experimental results show that M-SVD is considerably better than PSNR and similar models.

REFERENCES

1. A. M. Eskicioglu and P. S. Fisher, "Image quality measures and their performance," IEEE Transactions on Communications, Vol. 43, pp. 2959-2965, December 1995.
2. A. M. Eskicioglu, "Quality measurement for monochrome compressed images in the past 25 years," Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing, Vol. 4, pp. 1907-1910, Istanbul, Turkey, June 5-9, 2000.
3. D. Van der Weken, M. Nachtegael and E. E. Kerre, "A new similarity measure for image processing," Journal of Computational Methods in Sciences and Engineering, Vol. 3, No. 2, pp. 209-222, 2003.
4. A. Beghdadi and B. Pesquet-Popescu, "A new image distortion measure based on wavelet decomposition," 7th International Symposium on Signal Processing and Its Applications, Paris, France, July 1-4, 2003.
5. A. C. Bovik and S. Liu, "DCT-domain blind measurement of blocking artifacts in DCT-coded images," Proceedings of the International Conference on Acoustics, Speech, and Signal Processing, Salt Lake City, UT, May 7-11, 2001.
6. Z. Wang, A. C. Bovik and B. L. Evans, "Blind measurement of blocking artifacts in images," Proceedings of the IEEE 2000 International Conference on Image Processing, Vancouver, BC, Canada, September 10-13, 2000.
7. Z. Wang, H. R. Sheikh and A. C. Bovik, "No-reference perceptual quality assessment of JPEG compressed images," Proceedings of the IEEE 2002 International Conference on Image Processing, Rochester, NY, September 22-25, 2002.
8. P. Marziliano, F. Dufaux, S. Winkler and T. Ebrahimi, "A no-reference perceptual blur metric," IEEE 2002 International Conference on Image Processing, Rochester, NY, September 22-25, 2002.
9. E.-P. Ong, W. Lin, Z. Lu, Z. Yang, S. Yao, F. Pan, L. Jiang and F. Moschetti, "A no-reference quality metric for measuring image blur," 7th International Symposium on Signal Processing and Its Applications, Paris, France, July 1-4, 2003.
10. L. Meesters and J.-B. Martens, "A single-ended blockiness measure for JPEG-coded images," Signal Processing, Vol. 82, pp. 369-387, 2002.


Proceeding Number: 100/87

Farsi / Arabic Printed Page Segmentation Using Connected Component and Clustering
Zahra Bani, Islamic Azad University, South Tehran Branch, Young Researchers Club, Tehran, Iran, [email protected]
Ali Broumandnia, Islamic Azad University, South Tehran Branch, Tehran, Iran, [email protected]
Maryam Khademi, Islamic Azad University, South Tehran Branch, Tehran, Iran, [email protected]

Keywords: Segmentation, Connected Component, Clustering, Classification, Bounding Box

INTRODUCTION

Document image analysis has been a topic of research for almost three decades. A large amount of research on this subject has been published for Latin, Chinese, and Japanese scripts, but there is little work on Persian or Arabic documents and scripts because of their complexity; this area of research is therefore still an open field. With the progress in information and communication technology and the increased demand for information, the number of documents containing information has grown steadily. Although the use of electronic documents has increased, the amount of printed or handwritten documents has never decreased, and most people prefer to read printed documents. The growth of printed documents creates many problems in storing and retrieving them, so it is important to find ways to segment, and finally recognize, printed documents and words in order to convert them into electronic documents. There are several approaches, with different algorithms, for the segmentation of printed documents. These methods can be classified into three categories: top-down, bottom-up, and spectral. In this paper we propose a bottom-up method for printed page segmentation using connected components and clustering [1, 2, 3].


LITERATURE REVIEW

Document analysis and understanding are relevant techniques for the automatic processing of paper documents. Extraction of the structure or the layout from a document is referred to as document analysis, and mapping the layout structure into a logical structure is referred to as document understanding. These techniques allow the recognition of document contents and the simplification of a number of complex tasks, such as re-editing, storage, maintenance, retrieval, and transmission. Document image analysis plays an important role in the field of document processing and provides techniques for partitioning a document into a hierarchy of physical components (pages, columns, paragraphs, words, tables, figures, halftones, etc.) [16]. So far, different document image segmentation methods have been proposed in the literature, both for printed pages [4, 5] and for handwritten pages [6]. In our work, connected component analysis first divides the page into bounding boxes, which a clustering algorithm then successively merges into larger blocks, in an iterative fashion, to obtain the final text, graphics, or image segments. In connected component analysis, single pixels are gathered on the basis of a low-level analysis to constitute blocks that can be merged into successively larger blocks [7, 8, 9, 10, 11]. A number of heuristic approaches may be devised to produce region segmentation, and a classical approach is to employ generic rules based on thresholding [12]. The adoption of fuzzy logic has been proposed as a complementary approach to existing methods [13, 14]. In fact, the concept of a fuzzy set allows a gradual transition from membership to non-membership, providing the management of a greater degree of abstraction, thus overcoming the threshold-based dimension of classical approaches. A number of works demonstrate that the introduction of fuzzy techniques can be very successful in the area of document image processing [13, 15].

METHODS

In this paper, we propose a methodology that aims at segmenting a Farsi/Arabic document image into coherent and homogeneous regions containing text, graphics, and background. Following a bottom-up approach based on connected components and clustering, the overall methodology is executed in consecutive steps. Initially, a connected component analysis is applied to the document image to classify each connected component into a bounding box belonging to text, graphics, or background. Subsequently, clustering procedures group the identified bounding boxes into coherent regions using a set of local feature vectors of the connected components of each bounding box. To refine the classification results, an additional region-level analysis is performed, based on shape regularity and the region skew angle. At the end of this stage, a final accurate classification of the document regions is obtained.
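The two core stages, component labeling and box clustering, can be sketched in miniature. The following is an illustrative toy in Python, not the authors' implementation: it assumes an already-binarized page, labels 8-connected foreground components with a BFS, and then greedily merges bounding boxes that lie within a `gap` of each other (the gap threshold and the merging rule are assumptions standing in for the paper's feature-based clustering).

```python
from collections import deque

def connected_components(img):
    """Label 8-connected foreground (1) pixels; return one bounding box per component."""
    h, w = len(img), len(img[0])
    seen = [[False] * w for _ in range(h)]
    boxes = []
    for y in range(h):
        for x in range(w):
            if img[y][x] and not seen[y][x]:
                q = deque([(y, x)])
                seen[y][x] = True
                y0 = y1 = y
                x0 = x1 = x
                while q:                          # BFS over one component
                    cy, cx = q.popleft()
                    y0, y1 = min(y0, cy), max(y1, cy)
                    x0, x1 = min(x0, cx), max(x1, cx)
                    for dy in (-1, 0, 1):
                        for dx in (-1, 0, 1):
                            ny, nx = cy + dy, cx + dx
                            if 0 <= ny < h and 0 <= nx < w and img[ny][nx] and not seen[ny][nx]:
                                seen[ny][nx] = True
                                q.append((ny, nx))
                boxes.append((y0, x0, y1, x1))
    return boxes

def merge_boxes(boxes, gap=1):
    """Greedy clustering: repeatedly merge boxes whose gap is at most `gap` pixels."""
    boxes = list(boxes)
    changed = True
    while changed:
        changed = False
        for i in range(len(boxes)):
            for j in range(i + 1, len(boxes)):
                a, b = boxes[i], boxes[j]
                if (a[0] - gap <= b[2] and b[0] - gap <= a[2] and
                        a[1] - gap <= b[3] and b[1] - gap <= a[3]):
                    boxes[i] = (min(a[0], b[0]), min(a[1], b[1]),
                                max(a[2], b[2]), max(a[3], b[3]))
                    del boxes[j]
                    changed = True
                    break
            if changed:
                break
    return boxes

page = [[0, 1, 1, 0, 0, 0, 1],
        [0, 1, 1, 0, 0, 0, 1],
        [0, 0, 0, 0, 0, 0, 0],
        [1, 0, 0, 0, 0, 0, 0]]
cc = connected_components(page)
print(len(cc), len(merge_boxes(cc, gap=2)))   # 3 components, 2 merged regions
```

In the full pipeline, the merge decision would use the local feature vectors and region-level analysis described above rather than raw pixel proximity.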

FINDINGS & CONCLUSION

In this paper, an effective method for Farsi/Arabic document image segmentation and classification has been proposed. The bottom-up method based on connected components and clustering is exploited to perform


classification tasks on two different levels of detail. The final results, obtained by segmenting pages from journal and newspaper articles, correspond with human perception and compare well with those reported in the literature. Moreover, the segmentation process is completely unaffected by page skew, as the skew angle of the document image is automatically determined during the region analysis step. As guidelines for future work, the overall methodology could be further enhanced by employing a wavelet coefficient analysis as the feature vector for clustering. Moreover, it could be useful to introduce more classes during the connected component classification step (titles, headings, graphs, etc.). This could avoid the misclassification of article headings and titles as graphics instead of text.

REFERENCES

[1] J. Shanbehzadeh, A. Broumandnia, "Segmentation of printed Farsi documents", 4th Conference on Machine Vision and Image Processing (MVIP 2007), Ferdowsi University of Mashhad, Iran (in Persian).
[2] A. Broumandnia, J. Shanbehzadeh, M. Nourani, "Segmentation of Printed Farsi/Arabic Words", IEEE 2007, 761-766.
[3] A. Broumandnia, J. Shanbehzadeh, M. Rezakhah Varnoosfaderani, "Persian/Arabic handwritten word recognition using M-band packet wavelet transform", Image and Vision Computing 26 (2008) 829-842.
[4] R. Kasturi, L. O'Gorman, Document image analysis techniques, Mach. Vis. Appl. 6 (2-3) (1993) 67-68.
[5] Y.Y. Tang, S.W. Lee, C.Y. Suen, Automatic document processing: a survey, Pattern Recognit. 29 (1996) 1931-1952.
[6] Y. Sun, T.S. Butler, A. Shafarenko, R. Adams, M. Loomes, N. Davey, Word segmentation of handwritten text using supervised classification techniques, Appl. Soft Comput. 7 (1) (2007) 71-88.
[7] A. Fletcher, R. Kasturi, A robust algorithm for text string separation from mixed text/graphics images, IEEE Trans. Pattern Anal. Mach. Intell. 10 (1998) 294-308.
[8] A.K. Jain, B. Yu, Document representation and its application to page decomposition, IEEE Trans. Pattern Anal. Mach. Intell. 20 (1998) 294-308.
[9] K. Kise, A. Sato, M. Iwata, Segmentation of page images using the area Voronoi diagram, Comput. Vis. Image Understanding 70 (1998) 370-382.
[10] L. O'Gorman, The document spectrum for page layout analysis, IEEE Trans. Pattern Anal. Mach. Intell. 15 (1993) 1162-1173.
[11] F. Wahl, K. Wong, R. Casey, Block segmentation and text extraction in mixed text/image documents, Graph. Models Image Process. 20 (1982) 375-390.
[12] L. Cinque, L. Lombardi, G. Manzini, A multiresolution approach for page segmentation, Pattern Recognit. Lett. 19 (1998) 217-225.
[13] F. Giorgini, A. Verrini, S. Dellepiane, A fuzzy approach to segment document images, in: Proceedings of the 10th International Conference on Image Analysis and Processing (ICIAP '99), 1999.
[14] Z. Shi, V. Govindaraju, Line separation for complex document images using fuzzy runlength, in: Proceedings of the First International Workshop on Document Image Analysis for Libraries (DIAL '04), 2004.
[15] W.-S. Chou, Classifying image pixels into shaped, smooth and textured points, Pattern Recognit. 32 (1999) 1697-1706.
[16] L. Caponetti, C. Castiello, P. Gorecki, "Document page segmentation using neuro-fuzzy approach", Applied Soft Computing 8 (2008) 118-126.


Proceeding Number: 100/88

Investigation of Several Parameters on Goldbach Partitions
Derya ÖZKAN, Istanbul University, Computer Engineering Department, Istanbul, Turkey, [email protected]
Bahar ILGEN, Istanbul Kültür University, Computer Engineering Department, Istanbul, [email protected]

Keywords: Goldbach Partitions, Goldbach Conjecture

INTRODUCTION

Prime numbers are very important elements in cryptology research, where they are used to improve encoding techniques. The Goldbach conjecture is seen as one of the biggest problems concerning prime numbers and mathematics. Although it appears to be correct, it is still an open problem that has not been proved. The Goldbach conjecture arose in a letter written by Christian Goldbach to Euler on the 7th of June 1742. In this letter, Goldbach asked Euler to prove whether the premise of the conjecture is completely correct or false in at least one case (Richstein, 2000). The conjecture claims: "…every even number which is greater than two is the sum of two prime numbers." Many researchers have tried to solve this conjecture. With the support of computers, the conjecture has been verified for all even numbers up to 1.1×10^18 (Silva, 2008), but it has not been proved, and it has remained one of the biggest problems of mathematics for approximately three centuries. In this work, we examine new patterns related to the Goldbach conjecture by using Hartoka's Goldbach conjecture data (Hartoka, 2004) on numbers up to 10^9. The equivalence classes constituted by the set of even numbers were determined, and various parameters of these equivalence classes were examined to determine whether they match a certain pattern.


LITERATURE REVIEW

Goldbach partitions describe the different ways of obtaining an even number 2N as a sum of primes. The even numbers between 4 and 14 can be expressed as sums of prime numbers as follows.

Example 1:
4 = 2 + 2
6 = 3 + 3
8 = 3 + 5
10 = 3 + 7 = 5 + 5
12 = 5 + 7
14 = 3 + 11 = 7 + 7

The number of Goldbach partitions, also expressed with the symbol G(2N), is formulated as:

G(2N) = #{ (p, q) | 2N = p + q, p ≤ q, p and q prime }

Table 1: Goldbach partition values for several even numbers.

2N            G(2N)
4             1
10            2
100           6
1.000         28
10.000        127
100.000       810
1.000.000     5.402
10.000.000    38.807

Table 2: G(2N) graphic for the range 2N = [4, 100000].

The graphical view of G(2N) for the even numbers in [4, 100000] resembles a stray comet, a characteristic first noticed by Fliegel and Robertson (1989); because of this property, it is called the Goldbach comet. Generally, it is observed that G(2N) values increase with increasing 2N. On the other hand, an idea (Jörg Richstein, 2000) about the relation between 2N and G(2N) is explained below: "…it is very likely the value of G(2N) depends on the prime factors of 2N. For example, 120 = 2·2·2·3·5 gives G(120) = 12, while its close neighbours 118 = 2·59 and 122 = 2·61 give G(118) = 6 and G(122) = 4, respectively."


The formula expressing the relation between G(2N) and the prime factors of 2N was given by Sylvester (Richstein, 2000):

G(2N) ≈ 2 C2 (2N / ln^2(2N)) ∏_{p | 2N, p > 2} (p - 1)/(p - 2),

where the product runs over the odd prime factors p of 2N and C2 = 0.6601618... is the twin prime constant. The factor (p - 1)/(p - 2) is close to 1 when p is large and is largest when p is small.

METHODS

An equivalence class, a mathematical concept, is a subset of a given set induced by an equivalence relation on that set. (If the given set is empty, then the equivalence relation is empty and there are no equivalence classes; otherwise the equivalence relation and its equivalence classes are all nonempty.) Even numbers 2N with the same number of Goldbach partitions form a Goldbach equivalence class, formulated as

Gi = { 2N | G(2N) = i }, i a natural number.

With this definition, the famous Goldbach conjecture can be expressed as G0 = { }. Some of the Goldbach equivalence classes are shown below.

Example 2:
G1 = {4, 6, 8, 12}
G2 = {10, 14, 16, 18, 20, 28, 32, 38, 68}
G3 = {22, 24, 26, 30, 40, 44, 52, 56, 62, 98, 128}
G4 = {34, 36, 42, 46, 50, 58, 80, 88, 92, 122, 152}
G5 = {48, 54, 64, 70, 74, 76, 82, 86, 94, 104, 124, 136, 148, 158, 164, 188}
G6 = {60, 66, 72, 100, 106, 110, 116, 118, 134, 146, 166, 172, 182, 212, 248, 332}
G7 = {78, 96, 112, 130, 140, 176, 178, 194, 206, 208, 218, 224, 226, ..., 326, 398}
G8 = {84, 102, 108, 138, 142, 154, 160, 184, 190, 200, 214, 242, 256, 266, ..., 362, 368}
G9 = {90, 132, 170, 196, 202, 220, 230, 236, 238, 244, 250, 254, 262, 268, ..., 458, 488}
G10 = {114, 126, 162, 260, 290, 304, 316, 328, 344, 352, 358, 374, 382, 416, 542, 632}
G100 = {2700, 2856, 3108, 3708, 3798, 3834, 3954, 4044, 4146, ..., 10796, 11456}
G1000 = {49500, 63252, 77058, 77466, 78072, 78984, 79218, ...,
190516, 190562}

The Goldbach equivalence classes were determined and some of their parameters were examined; tables of the Goldbach equivalence classes were then generated. From these tables it is generally observed that, in passing from Gi-1 to Gi, all examined parameters of the equivalence classes increase. The first exceptional case, in which all parameters of G19 are smaller than those of G18, was observed for i = 19:

min(G19) < min(G18), max(G19) < max(G18), T19 < T18,
max(G19)/min(G19) < max(G18)/min(G18), T19/|G19| < T18/|G18|.
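The small equivalence classes of Example 2 can be recomputed with a short sketch (illustrative code under the assumption that a plain sieve-and-count of G(2N) is redone inline; not the authors' program):

```python
from collections import defaultdict

def sieve(n):
    """Sieve of Eratosthenes: is_prime[k] is 1 iff k is prime."""
    is_prime = bytearray([1]) * (n + 1)
    is_prime[0:2] = b"\x00\x00"
    for p in range(2, int(n ** 0.5) + 1):
        if is_prime[p]:
            is_prime[p * p :: p] = bytearray(len(is_prime[p * p :: p]))
    return is_prime

def goldbach_classes(bound):
    """Group the even numbers 4..bound into classes G_i = {2N : G(2N) = i}."""
    is_prime = sieve(bound)
    classes = defaultdict(list)
    for two_n in range(4, bound + 1, 2):
        g = sum(1 for p in range(2, two_n // 2 + 1)
                if is_prime[p] and is_prime[two_n - p])
        classes[g].append(two_n)
    return classes

classes = goldbach_classes(500)
# Within this range: classes[1] reproduces G1 = {4, 6, 8, 12} of Example 2.
```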


A similar extraordinary case has been observed for G356, G508, G518, G555 and G659.

[...]

The smallest Hamming distance between any two distinct k-subsets is two, while the distance between a k-subset and a (k+1)-subset is at least one. In the proposed order the distance between consecutive subsets is two when they are of the same size and one otherwise; this is the reason the order is called the optimal Banker's order. In the full paper it is proved that this generating algorithm is optimal with respect to the number of changes, and Rank, Unrank and Successor algorithms for the optimal Banker's order are introduced, together with visual results.

FINDINGS & CONCLUSION

For the constraint satisfaction problems considered, finding the smallest subset of a set S that satisfies the constraints requires searching over the subsets of S. It is shown how traditional generating orders such as Gray codes and lexicographic order perform on this task. For Gray codes, lexicographic order and the optimal Banker's order, a graphical comparison is proposed as a 2-D plot in which the X axis shows time and the Y axis shows the number of subset members tested against the condition. It is also shown that, although a Gray code can be generated with the minimum number of changes, for these CSPs the optimal Banker's order works better.

REFERENCES
[1] Patrick Suppes, Axiomatic Set Theory, D. Van Nostrand Company, Inc., 1960.
[2] Gian-Carlo Rota, Studies in Combinatorics, MAA Studies in Mathematics 17, Mathematical Association of America, p. 3, ISBN 0-88385-117-2, 1978.
[3] Thomas Jech, Set Theory, Springer-Verlag, ISBN 3-540-44085-2, 2002.
[4] Donald L. Kreher, Douglas Robert Stinson, Combinatorial Algorithms: Generation, Enumeration, and Search, CRC Press LLC, 1999.
[5] Steven S. Skiena, The Algorithm Design Manual, Second Edition, Springer-Verlag London Limited, 2008.
[6] Donald E. Knuth, The Art of Computer Programming,
Addison-Wesley Publishing Company, Reading, Massachusetts, 1973.
[7] J. Loughry, J. I. van Hemert, L. Schoofs, Efficiently Enumerating the Subsets of a Set, December 2000.
[8] Richard W. Hamming, "Error detecting and error correcting codes", Bell System Technical Journal 29 (2): 147-160, MR0035935, 1950.
[9] Edward Tsang, Foundations of Constraint Satisfaction, Academic Press, http://www.bracil.net/edward/FCS.html, ISBN 0-12-701610-4, 1993.
[10] Peter Eades, Brendan D. McKay, An Algorithm for Generating Subsets of Fixed Size With a Strong Minimal Change Property, Information Processing Letters, Volume 19, Number 3, 19 October 1984.
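The search strategy this abstract argues for, enumerating subsets by increasing size, can be sketched as follows (illustrative code; `predicate` is a hypothetical constraint check, and within each size the subsets are emitted in lexicographic order):

```python
from itertools import combinations

def smallest_satisfying_subset(items, predicate):
    """Enumerate subsets of `items` in Banker's order (increasing size,
    lexicographic within each size) and return the first subset satisfying
    `predicate`; by construction it has minimum cardinality."""
    for k in range(len(items) + 1):
        for subset in combinations(items, k):
            if predicate(subset):
                return subset
    return None  # no subset satisfies the constraint

# Example: smallest subset of {1,...,6} whose sum is at least 10.
result = smallest_satisfying_subset(range(1, 7), lambda s: sum(s) >= 10)
# → (4, 6)
```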


Proceeding Number: 700/21

A Trial Equation Method and Its Applications to Nonlinear Equations

Meryem ODABAŞI, Ege University, Tire Kutsan Technical Vocational School of Higher Education, Tire, Izmir, Turkey, [email protected]
Emine MISIRLI, Ege University, Department of Mathematics, Izmir, Turkey, [email protected]

Keywords: Trial equation method, nonlinear differential equation, traveling wave solution

INTRODUCTION

Mathematical modeling of physics and engineering problems usually results in nonlinear differential equations. Nonlinear phenomena arise in all fields of science and engineering, such as fluid mechanics, plasma physics, optical fibers, biology, solid state physics, chemical kinematics and chemical physics. It is important to find the traveling wave solutions of nonlinear evolution equations, and many methods have been proposed for this purpose, such as the inverse scattering method [1], the tanh method [2-5], Hirota's bilinear transformation [6,7], the sine-cosine method [8,9], the homogeneous balance method [10], the (G′/G)-expansion method [11-13], the exp-function method [14-20], and so on. Ma and Fuchssteiner proposed a powerful approach for finding exact solutions to nonlinear differential equations [21]. Their key idea is to expand solutions of a given differential equation as functions of solutions of solvable differential equations, in particular polynomial and rational functions. Recently, Liu [22-25] proposed the trial equation method for finding exact solutions to nonlinear differential equations. In Liu's method, one considers a differential equation for u and assumes that its exact solution satisfies a solvable equation (u′)^2 = F(u); the task is then to find the function F. Liu obtained abundant exact solutions of many nonlinear differential equations when F(u) is a polynomial or a rational function. Du then took F to be an irrational function and proposed a new trial equation method for such equations [26]. Y. Liu proposed a further version of the trial equation method for nonlinear partial differential equations with variable coefficients [27].
In the present study, we apply the trial equation method to look for exact solutions of nonlinear equations. When equations of this form are transformed into nonlinear ordinary differential equations, a (u′)^2 term is obtained, which is inconvenient for these equations; for this reason we use Y. Liu's approach. Using this method, we obtain some new traveling wave solutions of the Liouville equation and the sine-Gordon equation:

u_xt + e^u = 0,    (1)
u_tt - u_xx + sin u = 0.    (2)

LITERATURE REVIEW

Mathematical modeling of many physical phenomena in various fields of physics and engineering generally leads to nonlinear ordinary or partial differential equations. Investigating and constructing exact solutions of these equations is of great importance in applied mathematics. In recent years, many effective methods [1-20] have therefore been proposed for obtaining exact solutions of nonlinear partial differential equations. In this study we apply the trial equation method [22-25] to seek exact solutions of the Liouville equation [28] and the sine-Gordon equation [29-30].

METHODS

According to Ma and Fuchssteiner's idea and Liu's trial equation method for nonlinear evolution equations, we consider a trial equation method suitable for nonlinear partial differential equations. The main


steps of the trial equation method for a nonlinear differential equation are outlined as follows.

Step 1. Consider the following nonlinear partial differential equation:

N(u, u_t, u_x, u_tt, u_xx, ...) = 0.    (3)

Under the traveling wave transformation

u = u(ξ), ξ = x - ωt,    (4)

equation (3) becomes the following ordinary differential equation:

P(t, x, u, u′, u″, ...) = 0,    (5)

where the prime denotes differentiation with respect to ξ.

Step 2. Take the trial equation

(u′)^2 = F(u) = Σ_{i=0}^{s} a_i u^i,    (6)

where s and the a_i are constants to be determined. Substituting equation (6) and the derivative terms obtained from it (such as u′ or u″) into equation (5) yields a polynomial G(u) in u. According to the balance principle we can determine the value of s. Setting the coefficients of G(u) to zero, we obtain a system of algebraic equations; solving this system determines ω and the values of a_0, a_1, ..., a_s.

Step 3. Rewrite equation (6) in integral form:

±(ξ - ξ_0) = ∫ du / √F(u).    (7)

According to the complete discrimination system of the polynomial, we classify the roots of F(u) and solve integral (7). Thus we obtain the exact solutions of equation (3).

FINDINGS & CONCLUSION

In this study, the trial equation method is applied to solve nonlinear differential equations. Using the trial equation method, some exact traveling wave solutions of the Liouville equation and the sine-Gordon equation are obtained. We believe that new exact solutions to nonlinear evolution equations will be found by this method. The method can also be extended to other high-dimensional nonlinear evolution equations.

REFERENCES
[1] Ablowitz, M. J. and Clarkson, P. A.: Solitons, Non-linear Evolution Equations and Inverse Scattering Transform, Cambridge University Press, Cambridge (1991)
[2] Malfliet, W.: Am. J. Phys. 60, 650 (1992)
[3] Fan, E. G.: Phys. Lett. A 277, 212 (2000)
[4] Abdou, M. A.: Appl. Math. Comput. 190, 988 (2007)
[5] Wazwaz, A. M.: Appl. Math. Comput. 187, 1131 (2007)
[6] Hirota, R.: J. Math. Phys.
14, 805 (1973)
[7] Hirota, R. and Satsuma, J.: Phys. Lett. A 85, 407 (1981)
[8] Wang, M. L.: Phys. Lett. A 213, 279 (1996)
[9] Wazwaz, A. M.: Math. Comput. Model. 40(5-6), 499 (2004)
[10] Wang, M.: Phys. Lett. A 199, 169 (1995)
[11] Wang, M., Li, X. and Zhang, J.: Phys. Lett. A 372, 417 (2008)
[12] Zhang, S., Dong, L., Ba, J. M. and Sun, Y. N.: Pramana - J. Phys. 74(2), 207 (2010)
[13] Abazari, R.: Math. Comput. Model. 52, 1834 (2010)
[14] He, J. H. and Wu, X. H.: Chaos Soliton. Fract. 30, 700 (2006)
[15] Dai, Z. D., Wang, C. J., Lin, S. Q., Li, D. L. and Mu, G.: Nonl. Sci. Lett. A: Math. Phys. Mech. 1, 77 (2010)
[16] Zhang, S.: Nonl. Sci. Lett. A: Math. Phys. Mech. 1, 143 (2010)
[17] Misirli, E. and Gurefe, Y.: Nonl. Sci. Lett. A: Math. Phys. Mech. 1, 323 (2010)
[18] Misirli, E. and Gurefe, Y.: Appl. Math. Comput. 216, 2623 (2010)
[19] Gurefe, Y. and Misirli, E.: Comput. Math. Appl. doi:10.1016/j.camwa.2010.08.060 (2010)
[20] Misirli, E. and Gurefe, Y.: Math. Comput. Appl. 16, 258 (2011)
[21] Ma, W. X., Wu, H. Y. and He, J. S.: Phys. Lett. A 364, 29 (2007)
[22] Liu, C. S.: Acta Phys. Sin. 54, 2505 (2005) (in Chinese)
[23] Liu, C. S.: Acta Phys. Sin. 54(10), 4506-4510 (2005)
[24] Liu, C. S.: Commun. Theor. Phys. 45(2), 219-223 (2006)
[25] Liu, C. S.: Commun. Theor. Phys. 45(3), 395-397 (2006)
[26] Du, X. H.: Pramana J. Phys. 75(3), 415 (2010)
[27] Liu, Y.: Appl. Math. Comput. 217, 5866 (2011)
[28] Wazwaz, A. M.: Commun. Nonlinear Sci. Numer. Simul. 13, 584-592 (2008)
[29] Wazwaz, A. M.: Appl. Math. Comput. 167, 1196-1210 (2005)
[30] Fabian, A. L., Kohl, R. and Biswas, A.: Commun. Nonlinear Sci. Numer. Simul. 14, 1227-1244 (2009)
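As an illustration of Step 1 of the method above (a standard chain-rule computation, not reproduced from the paper), the traveling wave transformation reduces the Liouville equation (1) to an ordinary differential equation:

```latex
u = u(\xi), \quad \xi = x - \omega t
\;\Longrightarrow\;
u_{x} = u', \qquad u_{t} = -\omega\, u', \qquad u_{xt} = -\omega\, u'',
```

so equation (1), $u_{xt} + e^{u} = 0$, becomes the ODE $-\omega\, u'' + e^{u} = 0$, to which the trial equation $(u')^{2} = F(u)$ of Step 2 is then applied.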


Proceeding Number: 700/22

On the Numerical Solution of Fractional Parabolic Partial Differential Equations with the Dirichlet Condition

Allaberen Ashyralyev, Department of Mathematics, Fatih University, Istanbul, Turkey; Permanent address: Department of Mathematics, ITTU, Ashgabat, Turkmenistan, [email protected]
Zafer Cakir, Department of Mathematical Engineering, Gumushane University, Gumushane, Turkey, [email protected]

Keywords: Fractional parabolic equations, initial boundary value problems, difference schemes, stability.

INTRODUCTION

It is well known that fractional partial differential equations cannot in general be solved analytically, and stability and convergence are important issues for classical numerical methods. One of the useful methods for solving such partial differential equations is the difference method. This work is devoted to the study of difference schemes for fractional multi-dimensional parabolic differential equations; the main characteristics of difference schemes are their accuracy and stability. We consider fractional multi-dimensional parabolic differential equations, and first- and second-order accuracy difference schemes for the approximate solution of the corresponding initial boundary value problem are presented.

LITERATURE REVIEW

Present-day applications of fractional models include fluid flow, diffusive transport, viscoelastic material theory, electromagnetic theory, dynamics of earthquakes, control theory of dynamical systems, optics and signal processing, economics, probability and statistics, and so on. Methods of solution of problems for fractional differential equations have been studied extensively by many researchers. The connection of fractional derivatives with fractional powers of positive operators was presented, and a formula for the fractional difference derivative obtained, by Ashyralyev [12].
Momani and Al-Khaled [10] used the Adomian decomposition method to solve systems of nonlinear fractional differential equations and a linear multi-term fractional differential equation, by reducing the latter to a system of fractional equations each of order at most unity. Moreover, they showed how the method can be applied to a general linear multi-term equation and solved several applied problems. Nonlocal boundary value problems (BVP) for degenerate elliptic differential-operator equations (DOE), defined in Banach-valued function spaces, where the boundary conditions contain a degenerate function and the principal part of the equation has varying coefficients, were investigated by Shakhmurov [17]. The role played by stability inequalities (well-posedness) in the study of boundary value problems for parabolic partial differential equations is well known. In this study, the initial value problem for the fractional parabolic differential equation in a Banach space with a positive operator A is considered, and the stability estimate for the solution of this problem is established. In the present paper, the mixed boundary value problem for the multidimensional fractional parabolic equation is considered; difference schemes of the first and second order of accuracy in t and of the second order of accuracy in the space variables for the approximate solution of this problem are presented. The stability and almost coercive stability estimates for the solution of these difference schemes and for the first-order difference derivative are established.


METHODS

The construction of the difference schemes is based on approximation formulas for the fractional derivative and on first- and second-order difference schemes for parabolic equations. The stability of the difference schemes is based on the theory of positive operators in a Banach space. The implementation of these difference schemes is based on the Gauss elimination method developed by Samarskii and Nikolaev [19].

FINDINGS & CONCLUSION

In the present paper, first- and second-order accuracy stable difference schemes for the numerical solution of the mixed problem for the multidimensional fractional parabolic equation are presented. Stability and almost coercive stability estimates for the solution of these difference schemes and for the first-order difference derivative are obtained. The theoretical statements for the approximate solution of the difference schemes are supported by the results of numerical experiments. We obtain a second-order difference equation with respect to n with matrix coefficients; a modified Gauss elimination procedure is used for solving this difference scheme in the case of one-dimensional fractional parabolic partial differential equations.

REFERENCES
[1] I. Podlubny, Fractional Differential Equations, Academic Press, New York, (1999).
[2] S. G. Samko, A. A. Kilbas and O. I. Marichev, Fractional Integrals and Derivatives, Gordon and Breach Science Publishers, London, (1993).
[3] A. A. Kilbas, H. M. Srivastava and J. J. Trujillo, Theory and Applications of Fractional Differential Equations, North-Holland Mathematics Studies, (2006).
[4] J. L. Lavoie, T. J. Osler, R. Tremblay, Fractional derivatives and special functions, SIAM Review 18(2), 240-268, (1976).
[5] V. E. Tarasov, Fractional derivative as fractional power of derivative, International Journal of Mathematics, 18, 281-299, (2007).
[6] E. M. El-Mesiry, A. M. A. El-Sayed, H. A. A. El-Saka, Numerical methods for multi-term fractional (arbitrary) orders differential equations, Appl. Math. Comput.
160(3), 683-699, (2005).
[7] A. Ashyralyev, Well-posedness of the Basset problem in spaces of smooth functions, Applied Mathematics Letters, Vol. 24, Issue 7-8, (2011).
[8] A. M. A. El-Sayed, E. M. El-Mesiry, H. A. A. El-Saka, Numerical solution for multi-term fractional (arbitrary) orders differential equations, Comput. Appl. Math. 23(1), 33-54, (2004).
[9] A. B. Basset, On the descent of a sphere in a viscous liquid, Quart. J. Math. 42, pp. 369-381, (1910).
[10] Shaher Momani, Kamel Al-Khaled, Numerical solutions for systems of fractional differential equations by the decomposition method, Applied Mathematics and Computation 162(3), 1351-1365, (2005).
[11] A. Ashyralyev, F. Dal, Z. Pinar, On the numerical solution of fractional hyperbolic partial differential equations, Mathematical Problems in Engineering, 2009, Article ID 730465, (2009).
[12] A. Ashyralyev, A note on fractional derivatives and fractional powers of operators, Journal of Mathematical Analysis and Applications, 357(1), 232-236, (2009).
[13] A. Ashyralyev, F. Dal, Z. Pinar, A note on the fractional hyperbolic differential and difference equations, Appl. Math. Comput. 217(9), 4654-4664, (2011).
[14] I. Podlubny, A. M. A. El-Sayed, On Two Definitions of Fractional Calculus, Slovak Academy of Sciences, Institute of Experimental Physics, (1996).
[15] A. Ashyralyev and P. E. Sobolevskii, Well-Posedness of Parabolic Difference Equations, Operator Theory Advances and Applications, Birkhäuser Verlag, Basel, Boston, Berlin, (1994).
[16] Ph. Clement and S. Guerre-Delabriere, On the regularity of abstract Cauchy problems and boundary value problems, Atti Accad. Naz. Lincei Cl. Sci. Fis. Mat. Natur. Rend. Lincei (9) Mat. Appl. 9, no. 4, 245-266, (1999).
[17] V. B. Shakhmurov, Coercive boundary value problems for regular degenerate differential-operator equations, J. Math. Anal. Appl., 292(2), 605-620, (2004).
[18] A. Lunardi, Analytic Semigroups and Optimal Regularity in Parabolic Problems, Operator Theory Advances and Applications, Birkhäuser Verlag, Basel, Boston, Berlin, (1995).
[19] A. A. Samarskii and E. S. Nikolaev, Numerical Methods for Grid Equations, vol. 2 of Iterative Methods, Birkhäuser, Basel, Switzerland, (1989).
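One standard building block for difference schemes of this kind is the Grünwald-Letnikov approximation of the fractional derivative. The sketch below computes its weights; this is a generic construction for illustration, not the specific first- and second-order schemes of the paper:

```python
def gl_weights(alpha, n):
    """Grunwald-Letnikov weights w_j = (-1)**j * C(alpha, j), computed by the
    recurrence w_0 = 1, w_j = w_{j-1} * (1 - (alpha + 1) / j).  The fractional
    derivative of order alpha is approximated on a grid with step tau by
    D^alpha u(t_k) ~ tau**(-alpha) * sum_{j=0}^{k} w_j * u(t_{k-j})."""
    w = [1.0]
    for j in range(1, n + 1):
        w.append(w[-1] * (1.0 - (alpha + 1.0) / j))
    return w

# For alpha = 1 the weights reduce to [1, -1, 0, ...]: the approximation
# collapses to the usual first-order backward difference (u_k - u_{k-1})/tau.
```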


Proceeding Number: 700/24

On the Modified Crank-Nicholson Difference Schemes for Parabolic Equation Arising in Determination of a Control Parameter

Allaberen ASHYRALYEV, Fatih University, Department of Mathematics, Istanbul, Turkey, [email protected]
Öznur DEMIRDAĞ, Fatih University, Department of Mathematics, Istanbul, Turkey, [email protected]

Keywords: Modified Crank-Nicholson difference schemes; parabolic equation; determination of a control parameter

INTRODUCTION

In the present paper, we consider the boundary value problem of determining the parameter for the multidimensional parabolic equation. Our goal is to investigate difference schemes for the approximate solution of this problem. Modified Crank-Nicholson schemes for the approximate solution of such problems have been studied extensively by many researchers (see [20]); they are applicable in cases where difficulties arise with the application of the Crank-Nicholson scheme and the second-order accuracy implicit difference scheme.

LITERATURE REVIEW

Differential equations with parameters play a very important role in many branches of science and engineering. Examples were given in temperature over-specification by Dehghan [2], in chemistry (chromatography) by Kimura and Suzuki [3], and in physics (optical tomography) by Gryazin, Klibanov and Lucas [4]. Differential equations with parameters have been studied extensively ([1], [7]-[19] and the references therein), but such problems have not been well investigated in general. As a result, considerable effort has been expended on formulating numerical solution methods that are both accurate and efficient. Methods for the numerical solution of parabolic problems with parameters have been studied in [5], [6] and the references therein.
METHODS

The construction is based on the modified Crank-Nicholson difference scheme for the approximate solution of the parabolic problem, and the investigation is based on new stability inequalities. The stability and coercive stability estimates in various Banach norms for the solutions of these difference schemes for the multidimensional parabolic equation are obtained. The investigation relies on the spectral theory of self-adjoint positive definite operators in a Hilbert space. The implementation of these difference schemes is based on the Gauss elimination method.

FINDINGS & CONCLUSION

In the present paper, we consider the boundary value problem of determining the parameter for the multidimensional parabolic equation. A stable numerical method is developed using the modified Crank-Nicholson schemes for the approximate solution of this problem. The stability estimates for the solution of these difference schemes are obtained, and the theoretical statements are supported by the results of numerical examples.


REFERENCES [1] V. I. Gorbachuk, M. L. Gorbachuk. Boundary value problems for operator differential equations, Springer, 1990. [2] Dehghan, M., 2001, Determination of a control parameter in the two-dimensional diffusion equation, Appl. Numer. Math., 37, 4, 489-502. [3] Kimura T., and Suzuki, T., 1993, A parabolic inverse problem arising in a mathematical model for chromatography", SIAM J.Appl. Math., 53, 6, 1747-1761. [4] Gryazin, Y.A., Klibanov, M.V., and Lucas, T.R., 1999, Imaging the diffusion coefficient in a parabolic inverse problem in optical tomography, Inverse Probl., 2, 5, 373-397. [5] Chao-rong Ye and Zhi-zhong Sun, 2007, On the stability and convergence of a difference scheme for an one-dimensional parabolic inverse problem, Appl. Math. Comput., 188, 1, 214-225. [6] Dehghan, M., 2003, Finding a control parameter in one-dimensional parabolic equations, Appl. Math. Comput., 135, 1-2, 491-503. [7] Eidelman, Y.S., 1984, Boundary Value Problems for Differential Equations with Parameters, PhD Thesis, Voronezh State University, Voronezh.(Russian). [8] Eidelman, Y.S., 1983, Two-point boundary value problem for differential equations with a parameter, Dopovidi Akademii Nauk Ukrainskoi RSR Seriya A-Fiziko-Matematichni ta Technichni Nauki, 4, 15-18. (Russian). [9] Prilepko, A. I., 1973, Inverse problems of potential theory, Mat. Zametki, 14, 755--767; English transl Math. Notes. [10] Iskenderov, A.D., and Tagiev, R.G., 1979, The inverse problem of determining the right-hand sides of evolution equations in Banach space, Nauchn. Trudy Azerbaidzhan. Gos. Univ, 1, 51--56. (Russian). [11] Rundell, W., 1980, Determination of an unknown nonhomogeneous term in a linear partial differential equation from overspecified boundary data, Applicable Anal., 10, 231--242. [12] Prilepko, A. I., and Vasin, I.A., 1991, Some time-dependent inverse problems of hydrodynamics with final observation, Dokl. Akad. Nauk SSSR, 314 (1990), 1075--1078; English transl Soviet Math. 
Dokl, 42 (1991). [13] Prilepko, A. I., and Kostin, A.B., 1992, On certain inverse problems for parabolic equations with final and integral observation, Mat. Sb., 183, 4, 49--68; English transl Russian Acad. Sci. Sb. Math, 75 (1993). [14] Prilepko A. I., and Tikhonov, I.V., 1992, Uniqueness of the solution of an inverse problem for an evolution equation and applications to the transfer equation, Mat. Zametki, 51, 2, 77--87; English transl Math. Notes, 51 (1992). [15] Orlovskii, D.G., 1990, On a problem of determining the parameter of an evolution equation, Differentsialnye Uravneniya, 26, 1614--1621; English transl. Differential Equations,26 (1990). [16] Eidelman,Yu.S., 1990, Conditions for the solvability of inverse problems for evolution equations, Dokl. Akad. Nauk Ukrain. SSR Ser.A, 7, 28--31. (Russian). [17] Ashyralyev, A, and Sobolevskii, P.E., 1994, Well-Posedness of Parabolic Difference Equations, Operator Theory Advances and Applications, Birkhäuser Verlag, Basel, Boston, Berlin, 1994. [18] Sobolevskii, P.E., 1971, The coercive solvability of difference equations, Dokl. Acad. Nauk SSSR, 201, 5, 1063--1066. (Russian). [19] Ashyralyev, A., 2010, On a problem of determining the parameter of a parabolic equation, Ukrainian Mathematical Journal, 62, 9, 1200-1210. [20] Ashyralyev A., Erdogan A.S. and Arslan N.,2010, On the modified Crank-Nicholson difference schemes for parabolic equation with non-smooth data arising in biomechanics, International Journal for Numerical Methods in Biomedical Engineering, Vol. 26, No. 5, 501-510
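The Gauss elimination procedure used to implement schemes of this kind is, in one space dimension, the classical tridiagonal sweep (Thomas algorithm). A minimal sketch of the generic algorithm follows; it is an illustration, not the authors' code:

```python
def thomas_solve(a, b, c, d):
    """Solve a tridiagonal system: sub-diagonal a (length n-1), diagonal b
    (length n), super-diagonal c (length n-1), right-hand side d (length n).
    Forward sweep eliminates the sub-diagonal, back substitution recovers x."""
    n = len(b)
    cp = [0.0] * n  # modified super-diagonal
    dp = [0.0] * n  # modified right-hand side
    cp[0] = c[0] / b[0] if n > 1 else 0.0
    dp[0] = d[0] / b[0]
    for i in range(1, n):
        denom = b[i] - a[i - 1] * cp[i - 1]
        cp[i] = c[i] / denom if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i - 1] * dp[i - 1]) / denom
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# Example: the system with diagonal 2 and off-diagonals -1 and RHS [1, 0, 1]
# has the solution [1, 1, 1].
```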


Proceeding Number: 700/26

On the Approximate Solution of Ultra Parabolic Equation

Allaberen Ashyralyev, Fatih University, Department of Mathematics, İstanbul, Turkey, [email protected]
Serhat Yılmaz, Fatih University, Department of Mathematics, İstanbul, Turkey, [email protected]

Keywords: Difference schemes; ultra parabolic equations; non-polynomial splines; Matlab implementation; numerical solutions; stability estimates

INTRODUCTION

In the present work we present a fully discrete finite difference method for the approximate solution of initial boundary value problems for multidimensional ultra parabolic equations. Mathematical models for a number of natural phenomena can be formulated in terms of ultra parabolic equations. In particular, problems of this kind arise in the study of the dynamics of a population subject to birth, death and diffusion in a given finite domain. The solutions of these problems are functions representing the density of members as a function of age, time and position; the function space is the natural state space for the age-spatial density of the population, since the norm gives the total population. The death and birth processes are assumed to be given in terms of the mortality rate and the fertility rate. Since the existence of solutions of ultra parabolic equations has been shown only for limited classes of problems, numerical solutions of these equations are of practical importance.
LITERATURE REVIEW

In the present work, the construction and investigation of difference schemes approximating the solutions of local problems for ultra parabolic equations are carried out. The construction is based on an exact difference scheme and on Padé approximation of exponential functions, which makes it possible to extend essentially the class of problems to which the theory of difference methods applies; in particular, differential equations with Dirichlet conditions can be investigated. The investigation is based on new coercivity inequalities. The stability and coercive stability estimates in various Hilbert norms for the solutions of first- and second-order accuracy difference schemes of the local problem for ultra parabolic equations are obtained. Our study is based on the theory of positive operators, the theory of semigroups of operators and the theory of interpolation of linear operators [11], together with the spectral theory of self-adjoint positive definite operators in a Hilbert space; this approach permits us to study the stability of simple difference schemes for ultra parabolic equations. Here the first- and second-order accuracy difference schemes for ultra parabolic equations are considered, and the stability estimates for the solutions of these difference schemes for boundary value problems for multidimensional ultra parabolic equations are established. The results are supported by numerical examples solved by the Gauss elimination method. Moreover, results of a numerical method based on non-polynomial splines in the space direction are given and compared. Existence of mild solutions, as well as other properties, for various classes of age-structured models with diffusion has been investigated by a number of authors [1]-[10].

METHODS

In recent years, a tremendous amount of research has been done on the existence and uniqueness of solutions of various problems. In our study, however, analytic solution and numerical analysis are the essential steps. Numerical analysis is here understood as the part of mathematics that describes and analyzes the numerical schemes used on computers; its objective is to obtain a clear, precise and faithful representation of all the information contained in a mathematical model. As such, it is the natural extension of


more classical tools, such as analytic solutions, special transforms and functional analysis, as well as stability and asymptotic analysis.

FINDINGS & CONCLUSION

This work is devoted to the study of difference schemes for multidimensional ultra parabolic equations. Absolutely stable difference schemes are obtained, and theorems on stability estimates for the solutions of multidimensional ultra parabolic equations with Dirichlet conditions are presented. Moreover, the Matlab implementations of the first- and second-order difference schemes for ultra parabolic equations with Dirichlet conditions in one space dimension are presented. Finally, it is shown that stability estimates for the solutions of high-order accuracy difference schemes of the local problem for ultra parabolic equations can be obtained.

REFERENCES
[1] S. Busenburg and M. Iannelli, A degenerate nonlinear diffusion problem in age-structured population dynamics, Nonlinear Anal., 7 (1983), 1411-1429.
[2] G. Da Prato and P. Grisvard, Sommes d'opérateurs linéaires et équations différentielles opérationnelles, J. Math. Pures et Appl., 54 (1975), 305-387.
[3] Q. Deng and T. G. Hallam, An age structured population model in a spatially heterogeneous environment: Existence and uniqueness theory, Nonlinear Anal., 65 (2006), 379-394.
[4] G. Di Blasio and L. Lamberti, An initial boundary value problem for age-dependent population diffusion, SIAM J. Appl. Math., 35 (1978), 593-615.
[5] G. Di Blasio, Nonlinear age-dependent diffusion, J. Math. Biol., 8 (1979), 265-284.
[6] J. Dyson, E. Sanchez, R. Villella-Bressan and G. F. Webb, An age and spatially structured model of tumor invasion with haptotaxis, Discrete Continuous Dynam. Systems - B, 8 (2007), 45-60.
[7] K. Kunisch, W. Schappacher and G. F. Webb, Nonlinear age-dependent population dynamics with random diffusion, Comput. Math. Appl., 11 (1985), 155-173.
[8] P. Magal and S.
Ruan, On integrated semigroups and age structured models in Lp spaces, Diff. Int. Eqns., 20 (2007), 197–239. [9] P. Marcati and R. Serafini, Asymptotic behaviour in age dependent population dynamics with spatial spread, Boll. Un. Mat. Ital. B, 16 (1979), 734–753. [10] A. Rhandi and R. Schnaubelt, Asymptotic behaviour of a non-autonomous population equation with diffusion in L1, Discrete Continuous Dynam. Systems, 5 (1999), 663–683. [11] A. Ashyralyev and P. E. Sobolevskii, New Difference Schemes for Partial Differential Equations, Operator Theory: Advances and Applications, Vol. 148.
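The Matlab implementations mentioned above are not reproduced in the abstract. As a hedged illustration of the same ingredients, assuming the classical one-dimensional heat equation ut = uxx as a simplifying stand-in for the ultra-parabolic problem, a first order of accuracy absolutely stable implicit scheme with Dirichlet conditions can be sketched as follows (function names and parameters are illustrative, not the authors' code):

```python
import math

def thomas(a, b, c, d):
    """Solve a constant-coefficient tridiagonal system
    a*x[i-1] + b*x[i] + c*x[i+1] = d[i] (with x[-1] = x[n] = 0)."""
    n = len(d)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = c / b, d[0] / b
    for i in range(1, n):
        m = b - a * cp[i - 1]
        cp[i] = c / m
        dp[i] = (d[i] - a * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

def backward_euler(N=50, M=200, T=0.1):
    """First order implicit scheme for u_t = u_xx on [0,1] with Dirichlet
    conditions u(0,t) = u(1,t) = 0 and u(x,0) = sin(pi x); each time step
    solves one tridiagonal system. Exact solution: exp(-pi^2 t) sin(pi x)."""
    h, tau = 1.0 / N, T / M
    r = tau / h ** 2
    u = [math.sin(math.pi * i * h) for i in range(N + 1)]
    for _ in range(M):
        u = [0.0] + thomas(-r, 1 + 2 * r, -r, u[1:N]) + [0.0]
    return u

N, T = 50, 0.1
u = backward_euler(N, 200, T)
exact = [math.exp(-math.pi ** 2 * T) * math.sin(math.pi * i / N)
         for i in range(N + 1)]
err = max(abs(a - b) for a, b in zip(u, exact))
print(err < 1e-2)
```

The implicit scheme remains stable for any ratio tau/h^2, which is the sense in which such schemes are called absolutely stable.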

June 1 -4, Kusadasi, Aydin, Turkey http://iscse2011.gediz.edu.tr


Proceeding Number: 700/27

Some Numerical Methods on Multiplicative Calculus Emine MISIRLI,Ege University,Department of Mathematics,Izmir,Turkey,[email protected] Yusuf GÜREFE,Bozok University,Department of Mathematics,Yozgat,Turkey,[email protected] Keywords: Multiplicative Calculus, Numerical Algorithms, Computational Methods, Stability Analysis INTRODUCTION Differential calculus is used in many problems in which mathematical modelling is required. The mathematical modelling of most phenomena in science and engineering is based on an evolutionary description, so it is natural to model such phenomena through differential equations. Some of these problems may involve difficult approaches when classical concepts are used for the mathematical formulation. For example, Riza et al. [5] show that a problem involving growth rates can be expressed effectively using multiplicative calculus; although it can also be expressed using classical concepts, this requires considerably more effort. M. Grossman and R. Katz defined some alternative new calculi in [1, 2, 3, 4, 5], and it has been shown that each of these calculi can be used effectively in the mathematical treatment of certain problems. We therefore developed some multiplicative algorithms for the numerical approximation of the solutions of multiplicative differential equations. For the numerical solutions of multiplicative initial value problems and boundary value problems, multiplicative Runge–Kutta algorithms were given in [2] and a multiplicative finite difference algorithm in [5], respectively. Multiplicative analogues of the well known classical Adams Bashforth–Moulton algorithms were defined in [15]. LITERATURE REVIEW Differential and integral calculus, the most applicable mathematical theory, was created independently by I. Newton and G. W. Leibniz in the second half of the 17th century. Later L. Euler redirected calculus by giving a central place to the concept of function, and thus founded analysis.
Two operations, differentiation and integration, are basic in calculus and analysis. In fact, they are the infinitesimal versions of the subtraction and addition operations on numbers, respectively. In the period from 1967 to 1970, M. Grossman and R. Katz gave definitions of a new kind of derivative and integral, moving the roles of subtraction and addition to division and multiplication, and thus established a new calculus, called multiplicative calculus. It is sometimes called an alternative or non-Newtonian calculus as well. Unfortunately, multiplicative calculus is not as popular as the calculus of Newton and Leibniz, although it perfectly satisfies all the conditions expected of a theory that can be called a calculus. We, the authors of this paper, think that the reason for this gap is insufficient exposure of multiplicative calculus; we can count only two related papers [1, 2]. Multiplicative calculus has a more restricted area of application than the calculus of Newton and Leibniz; indeed, it covers only positive functions. Certain dynamical systems cannot be described with the common differential calculus. For example, when fractals are employed to model processes and effects occurring in nature, the models contain a labile fractal dimension. The additive derivative of the dimension function does not exist, since it is not possible to define the difference quotient [2]. In this case, the multiplicative calculus introduced by Volterra [11], called Volterra-type multiplicative calculus, can be applied. Some relevant studies have been presented using this multiplicative differential calculus. Aniszewska and Rybaczuk [12] derive a Lyapunov-type stability theory for systems of autonomous multiplicative differential equations; for the multiplicative Lorenz system described with multiplicative derivatives, the largest Lyapunov exponent was obtained. The paper [13] is devoted to a fractal model of fatigue defect growth.
The paper [14] presents other arguments supporting multiplicative calculus. According to experience from field theory and quantum physics, such derivatives give good results (see [14]), as they do in problems involving a multiple-scales-of-length approach. Multiplicative differential calculus therefore becomes very important.


METHODS In numerical analysis, numerous approximation and interpolation methods can be applied using polynomial, rational, trigonometric and exponential functions. In this section, using exponential function methods, we construct the exponential backward division formula, analogous to the backward difference formula in the polynomial case, for the numerical solution of the multiplicative differential equation. Here multiplication, division and power in multiplicative calculus take the roles of summation, difference and multiplication in ordinary calculus. For this reason, the forward difference, backward difference and divided difference formulas of the polynomial case are respectively replaced by forward division, backward division and power division formulas in the exponential case. For more details, we refer the reader to [8]. Following [6, 7], we construct the multiplicative methods for the ordinary multiplicative differential equation. The behaviour of numerical methods on stiff problems can be analyzed by applying them to the standard test problem [9, 10]. An initial value problem is then solved using the developed methods; the obtained results are presented in tables and the solutions are compared with the analytic solution. FINDINGS & CONCLUSION In this study, we presented multiplicative methods to solve first order multiplicative differential equations. These methods are tested on a multiplicative initial value problem. Comparing the obtained approximate numerical solutions with those of other methods, we observe that the presented algorithms give more accurate results than the others. This can be seen clearly from the relative error analysis, since the maximum relative errors of the proposed algorithms are smaller than those of the other algorithms.
Our algorithms for first order multiplicative differential equations are very efficient, effective and unconditionally stable numerically. Numerical results are presented which exhibit the high accuracy of the proposed algorithms. Consequently, many problems in engineering and the sciences might be solved by these methods. REFERENCES [1] Bashirov, A.E., Misirli Kurpinar, E., Ozyapici, A.: Multiplicative calculus and its applications. J. Math. Anal. Appl. 337, 36–48 (2008) [2] Aniszewska, D.: Multiplicative Runge–Kutta methods. Nonlinear Dyn. 50, 265–272 (2007) [3] Grossman, M.: Bigeometric Calculus, a System with a Scale-free Derivative. Archimedes Foundation, Rockport (1983) [4] Grossman, M., Katz, R.: Non-Newtonian Calculus. Lee Press, Pigeon Cove, Massachusetts (1972) [5] Riza, M., Ozyapici, A., Misirli, E.: Multiplicative finite difference methods. Q. Appl. Math. 67, 745–754 (2009) [6] Suli, E., Mayers, D.F.: An Introduction to Numerical Analysis. Cambridge University Press, Cambridge (2003) [7] Butcher, J.C.: Numerical Methods for Ordinary Differential Equations. Wiley, Chichester (2003) [8] Misirli, E., Ozyapici, A.: Exponential approximations on multiplicative calculus. Proc. Jangjeon Math. Soc. 12, 227–236 (2009) [9] Dahlquist, G.G.: A special stability problem for linear multistep methods. BIT 3, 27–43 (1963) [10] Ehle, B.L.: On Padé approximations to the exponential function and A-stable methods for the numerical solution of initial value problems. Report 2010, University of Waterloo (1969) [11] Volterra, V., Hostinsky, B.: Opérations Infinitésimales Linéaires. Herman, Paris (1938) [12] Aniszewska, D., Rybaczuk, M.: Lyapunov type stability and Lyapunov exponent for exemplary multiplicative dynamical systems. Nonlinear Dyn. 54, 345–354 (2008) [13] Rybaczuk, M., Stoppel, P.: The fractal growth of fatigue defects in materials. Int. J. Fract.
103, 71–94 (2000) [14] Nottale, L.: Scale, relativity and fractal spacetime: applications to quantum physics, cosmology and chaotic systems. Chaos Soliton Fract. 7, 877–938 (1996) [15] Misirli, E., Gurefe, Y.: Multiplicative Adams Bashforth–Moulton methods, Numer. Algor. doi:10.1007/s11075-010-9437-2 (In Press)
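The paper's multiplicative Runge–Kutta, finite difference and Adams Bashforth–Moulton algorithms are not reproduced in the abstract. As a hedged sketch of the underlying idea only (not the authors' method), a multiplicative Euler step for y*(x) = f(x, y), where y* = exp(y'/y) denotes the multiplicative derivative, can be written as follows; the test problem and step size are illustrative assumptions:

```python
import math

def multiplicative_euler(f, x0, y0, h, steps):
    """Multiplicative (geometric) Euler method for y*(x) = f(x, y),
    where y* = exp(y'/y) is the multiplicative derivative.
    Update rule: y_{k+1} = y_k * f(x_k, y_k)**h (positive solutions only)."""
    x, y = x0, y0
    for _ in range(steps):
        y *= f(x, y) ** h
        x += h
    return y

# Illustrative test problem: y* = exp(2x), y(0) = 1, exact y(x) = exp(x**2).
f = lambda x, y: math.exp(2 * x)
y = multiplicative_euler(f, 0.0, 1.0, 0.001, 1000)
print(abs(y - math.e) < 0.01)  # y(1) should be close to e
```

Because the update multiplies rather than adds, the iterate stays positive whenever y0 and f are positive, which matches the restriction of multiplicative calculus to positive functions noted above.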


Proceeding Number: 700/30

Cubic B-spline Collocation Method for Space-Splitted One-Dimensional Burgers' Equations Gülnur YEL,Mugla University, Department of Mathematics,Mugla,Turkey, [email protected] Zeynep Fidan KOÇAK,Mugla University, Department of Mathematics,Mugla,Turkey,[email protected] Keywords: B-spline, Burgers' equation, collocation method, space-splitted INTRODUCTION In this study, we consider the one-dimensional Burgers' equation, a non-linear partial differential equation which can be solved analytically for arbitrary initial and boundary conditions. The equation involves a kinematic viscosity coefficient together with first and second order derivatives; the unknown function is the velocity in space x and time t. The initial and boundary conditions are chosen to be homogeneous. The spatial domain [0,1] is partitioned uniformly into N finite elements. Burgers' equation is split in space, and then the cubic B-spline collocation method is applied. For this purpose, we introduce another function in place of the first order derivative of the original function. This technique gives a first order coupled system; the system involves first and second order derivatives, which are computed from the cubic B-spline collocation method at the knots. The equation has been solved both analytically and with many numerical techniques, such as the time-splitting quartic B-spline collocation method, the space-splitting quadratic B-spline collocation method and the time-splitting cubic B-spline method. In addition to those techniques, we use the space-splitting cubic B-spline collocation method here, and we investigate whether it is better than the other methods. LITERATURE REVIEW Burgers' equation was first introduced by Bateman [19]. Burgers carried out extensive work on the equation as a mathematical model for turbulence; for this reason, it is known as Burgers' equation.
The equation is used in a wide range of fields such as heat conduction [1], gas dynamics [2], shock waves [3], number theory [4] and so forth. Burgers' equation has been solved exactly for arbitrary initial and boundary conditions [5, 6, 7]. Spline functions are used in numerical methods for obtaining solutions of differential equations. The term B-spline, short for "basis spline", was coined by Isaac Jacob Schoenberg (1946). B-spline functions are piecewise polynomials whose lower order derivatives are continuous. Numerical methods that use spline functions for the numerical solution of differential equations lead to band matrices; such matrix systems are very easy to solve and convenient for computer programming. Many authors have used a variety of numerical techniques to obtain numerical solutions of Burgers' equation, and in a great number of them spline and B-spline functions are used. For instance, the cubic B-spline collocation method has been suggested for the one-dimensional Burgers' equation [8, 9, 10]. The cubic spline function technique and quasilinearisation have been used for the numerical solution of Burgers' equation in one space variable at low Reynolds numbers [11]. The equation has been solved numerically by the collocation method with cubic B-spline interpolation functions over uniform elements [12]. A finite element solution of Burgers' equation based on the Galerkin method, using B-splines as both element shape and test functions, is developed in [13, 14, 15]. A least-squares formulation using quadratic B-splines as trial functions is given over finite intervals [16]. A Petrov–Galerkin method with quadratic B-spline spatial finite elements is used in [17], and a least-squares technique using linear space-time finite elements is used in [18].


METHODS In this study, the cubic B-spline collocation finite element method is used to calculate numerical solutions of the one-dimensional Burgers' equation; the space-splitted Burgers' equation is solved numerically, and the boundary and initial conditions are presented. Numerical methods with cubic B-spline functions for the numerical solution of the partial differential equation lead to tridiagonal matrices, which are solvable using the Thomas algorithm. In the method, we have a (2N+2)x(2N+2) dimensional matrix system. Before the solution process begins iteratively, the initial parameters must be determined using the initial and boundary conditions. A Fourier stability analysis shows that a numerical scheme based on the Crank–Nicolson approximation is unconditionally stable. Finally, the method is compared with previous methods. FINDINGS & CONCLUSION In this section, we compare the space-splitting technique for the one-dimensional Burgers' equation with the other methods for solving the equation. The comparisons are given in tables and graphics. The interval [0,1] is divided into N uniform finite elements, and we show how the result changes as N increases or decreases. It is shown that when the differential equation involves higher derivatives, the space-splitted scheme, accompanied by low-order polynomials, enables the construction of approximate functions in the numerical techniques. In conclusion, space-splitted numerical methods can be preferable for obtaining numerical solutions of such differential equations, since they provide an easy algorithm. REFERENCES [1] Cole, J.D. (1951). On a quasi-linear parabolic equation occurring in aerodynamics; Q. Appl. Math. 9:225–236. [2] Lighthill, M.J. (1956). Viscosity effects in sound waves of finite amplitude; Surveys in Mechanics (G. K. Batchelor and R. M. Davies, eds.), Cambridge University Press, Cambridge, pp. 250–351 (2 plates). [3] Burgers, J.M. (1948).
A mathematical model illustrating the theory of turbulence; Advances in Applied Mechanics; Academic Press, New York, pp. 171–199. [4] Van der Pol, B. (1951). On a non-linear partial differential equation satisfied by the logarithm of the Jacobian theta-functions, with arithmetical applications, Proc. Acad. Sci. Amsterdam 013. [5] Burgers, J.M. (1948). A mathematical model illustrating the theory of turbulence; Advances in Applied Mechanics. New York: Academic Press; pp. 171–199. [6] Hopf, E. (1950). The partial differential equation ut + uux = μuxx; Commun Pure Appl Math. 3:201–30. [7] Cole, J.D. (1951). On a quasi-linear parabolic equation occurring in aerodynamics; Quart Appl Math. 9:225–36. [8] Rubin, S.G. and Graves, R.A. (1975). Viscous flow solutions with a cubic spline approximation; Comput Fluids. 3:1–36. [9] Rubin, S.G. and Khosla, P.K. (1976). Higher-order numerical solutions using cubic splines; AIAA J. 14:851–8. [10] Caldwell, J. Applications of cubic splines to the nonlinear Burgers' equation; In: Hinton E et al., editors. Numerical Methods for Nonlinear Problems, 3. pp. 253–61. [11] Rubin, S.G. and Graves, R.A. (1975). Cubic spline approximation for problems in fluid mechanics, NASA TR R-436, District of Columbia. [12] Ali, A.H.A.; Gardner, G.A.; and Gardner, L.R.T. (1992). A collocation solution for Burgers' equation using cubic B-spline finite elements; Comput. Method Appl. M. 100, no. 3, 325–337. [13] Ali, A.H.A.; Gardner, L.R.T.; and Gardner, G.A. (1990). A Galerkin Approach to the Solution of Burgers' Equation; University College of North Wales, Bangor, Maths Preprint Series, no. 90.04. [14] Davies, A.M. (1978). Application of the Galerkin method to the solution of the Burgers' equation; Comput. Method Appl. M. 14, 305–321. [15] Gardner, L.R.T.; Gardner, G.A.; and Ali, A.H.A. (1991). A method of lines solutions for Burgers' equation. Proceeding of the Asian Pacific Conference on Computational Mechanics; A.A. Balkema, Rotterdam/Brookfield, Hong Kong.


[16] Kutluay, S.; Esen, A.; and Dağ, İ. (2004). Numerical solutions of the Burgers' equation by the least-squares quadratic B-spline finite element method. J. Comput. Appl. Math. 167, no. 1, 21–33. [17] Gardner, L.R.T.; Gardner, G.A.; and Dogan, A. (1996). A least-squares finite element scheme for Burgers' equation; University of Wales, Bangor, Mathematics, Preprint 96.01. [18] Gardner, L.R.T.; Gardner, G.A.; and Dogan, A. (1997). A Petrov–Galerkin finite element scheme for Burgers' equation; Arab. J. Sci. Engrg. 22, 99–109. [19] Bateman, H. (1915). Some recent researches on the motion of fluids; Monthly Weather Rev. 43:163–70.
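The collocation equations are driven by the nodal values of the cubic B-splines at the knots. A minimal sketch of one such basis function, in the normalization B_i(x_i) = 4, B_i(x_i ± h) = 1 commonly used in B-spline collocation papers (the function name is illustrative, not the authors' code), is:

```python
def cubic_bspline(x, xi, h):
    """Cubic B-spline centered at knot xi with spacing h, normalized so
    that B(xi) = 4 and B(xi +/- h) = 1; zero outside (xi - 2h, xi + 2h)."""
    t = x - xi
    if t <= -2 * h or t >= 2 * h:
        return 0.0
    if t <= -h:
        return (t + 2 * h) ** 3 / h ** 3
    if t <= 0:
        s = t + h
        return (h ** 3 + 3 * h ** 2 * s + 3 * h * s ** 2 - 3 * s ** 3) / h ** 3
    if t <= h:
        s = h - t
        return (h ** 3 + 3 * h ** 2 * s + 3 * h * s ** 2 - 3 * s ** 3) / h ** 3
    return (2 * h - t) ** 3 / h ** 3

h = 0.1
print(abs(cubic_bspline(0.0, 0.0, h) - 4.0) < 1e-12)  # B_i(x_i) = 4
print(abs(cubic_bspline(h, 0.0, h) - 1.0) < 1e-12)    # B_i(x_i + h) = 1
print(cubic_bspline(2 * h, 0.0, h) == 0.0)            # support ends at x_i + 2h
```

With these nodal values, an approximation U(x) = Σ δ_j B_j(x) satisfies U(x_i) = δ_{i-1} + 4δ_i + δ_{i+1} at the knots, which is why the collocation equations assemble into the tridiagonal systems mentioned above.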


Proceeding Number: 700/31

A New Hermite Collocation Method for Solving Differential Equations of Lane-Emden Type Prepared for ISCSE 2011 Hatice YALMAN, Mugla University,Department of Mathematics,Mugla,Turkey,[email protected] Yalçın ÖZTÜRK, Mugla University,Department of Mathematics,Mugla,Turkey,[email protected] Mustafa GÜLSU, Mugla University,Department of Mathematics,Mugla,Turkey,[email protected] Mehmet SEZER, Mugla University,Department of Mathematics,Mugla,Turkey,[email protected]

Keywords: Lane-Emden equation, collocation method, Hermite polynomials INTRODUCTION In recent years, studies of singular initial value problems for second order ordinary differential equations (ODEs) have attracted the attention of many mathematicians and physicists. One class of equations of this type is the Lane–Emden-type equations with initial conditions. Since Lane–Emden type equations have significant applications in many fields of the scientific and technical world, a variety of forms have been investigated by many researchers. A discussion of the formulation of these models and the physical structure of their solutions can be found in the literature. LITERATURE REVIEW The standard Lane–Emden equation was used to model the thermal behavior of a spherical cloud of gas acting under the mutual attraction of its molecules and subject to the classical laws of thermodynamics; a special case is the isothermal gas sphere equation, in which the temperature remains constant. In the literature, several methods have been used to find approximate numerical solutions of this equation, such as the Adomian decomposition method and the homotopy perturbation method. METHODS The purpose of this study is to give a Hermite polynomial approximation for the solution of the Lane–Emden equation. For this purpose, a new Hermite collocation method is introduced, based on the truncated Hermite expansion of the function. We convert the Lane–Emden equation to a matrix equation; using collocation points, the resulting matrix equation can be solved and the unknown Hermite coefficients found approximately. In addition, examples that illustrate the pertinent features of the method are presented, and the results of the study are discussed. FINDINGS & CONCLUSION In this study, a new method for the solution of the Lane–Emden equation has been proposed and investigated. The method is introduced as an alternative, approximate solution technique for this equation.
The method is based on Hermite polynomials, and at the end we find the coefficients of the approximate solution. A considerable advantage of the method is that the Hermite polynomial coefficients of the solution are found very easily using computer programs in Maple 9. Shorter computation time and a lower operation count result in a reduction of cumulative truncation errors and an improvement of the overall accuracy. An interesting feature of this method is that it finds the analytical solution if the equation has an exact solution that is a polynomial function. The suggested approximations make this method very attractive and contributed to the good agreement between approximate and exact values in the numerical example.


REFERENCES [1] D.C. Biles, M.P. Robinson, J.S. Spraker, A generalization of the Lane-Emden equation, J. Math. Anal. Appl. 273:654-666 (2002). [2] N.T. Shawagfeh, Nonperturbative approximate solution for Lane–Emden equation, J. Math. Phys. 34:4364–4369 (1993). [3] H.T. Davis, Introduction to Nonlinear Differential and Integral Equations, Dover, New York, 1962. [4] M. Kumar, N. Singh, Modified Adomian decomposition method and computer implementation for solving singular boundary value problems arising in various physical problems, Comp. Chem. Eng., In Press (2010). [5] S.K. Varani, A. Aminataei, On the numerical solution of differential equations of Lane-Emden type, Comp. Math. Appl. 59:2815-2820 (2010). [6] K. Parand, M. Dehghan, A.R. Rezaei, S. Ghaderi, An approximation algorithm for the solution of the nonlinear Lane–Emden type equations arising in astrophysics using Hermite functions collocation method, Comp. Phys. Comm. 181:1096-1108 (2010). [7] O.P. Singh, R.K. Pandey, V.K. Singh, An analytic algorithm of Lane–Emden type equations arising in astrophysics using modified Homotopy analysis method, Comp. Phys. Comm. 180:1116-1124 (2009). [8] A. Yıldırım, T. Öziş, Solutions of singular IVPs of Lane–Emden type by the variational iteration method, Nonlinear Analysis 70:2480-2484 (2009). [9] Y.Q. Hasan, L.M. Zhu, Solving singular boundary value problems of higher-order ordinary differential equations by modified Adomian decomposition method, Commun. Nonlinear Sci. Numer. Simulat. 14:2592-2596 (2009). [10] D. Benko, D.C. Biles, M.P. Robinson, J.S. Spraker, Numerical approximation for singular second order differential equations, 49:1109-1114 (2009). [11] A. Aslanov, A generalization of the Lane–Emden equation, Int. Jour. Comp. Math. 85:1709-1725 (2008). [12] A. Yıldırım, T. Öziş, Solutions of singular IVPs of Lane–Emden type by homotopy perturbation method, Phys. Letters A 369:70-76 (2007). [13] A. Wazwaz, A new algorithm for solving differential equations of Lane-Emden type, Appl.
Math. Comp. 118:287-310 (2001).
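The authors' Maple implementation is not reproduced here. As a hedged illustration of Hermite collocation on the linear Lane–Emden case y'' + (2/x)y' + y = 0, y(0) = 1, y'(0) = 0, whose exact solution is sin(x)/x, the following sketch builds and solves the collocation system; the truncation degree and collocation points are assumptions chosen purely for illustration:

```python
import math

def hermite_vals(N, x):
    """Values, first and second derivatives of the physicists' Hermite
    polynomials H_0..H_N at x, via H_{n+1} = 2x H_n - 2n H_{n-1}."""
    H = [1.0, 2.0 * x]
    for n in range(1, N):
        H.append(2.0 * x * H[n] - 2.0 * n * H[n - 1])
    dH = [0.0] + [2.0 * n * H[n - 1] for n in range(1, N + 1)]    # H_n' = 2n H_{n-1}
    d2H = [0.0, 0.0] + [4.0 * n * (n - 1) * H[n - 2] for n in range(2, N + 1)]
    return H, dH, d2H

def gauss_solve(A, b):
    """Dense Gaussian elimination with partial pivoting (modifies A, b)."""
    n = len(b)
    for k in range(n):
        p = max(range(k, n), key=lambda i: abs(A[i][k]))
        A[k], A[p], b[k], b[p] = A[p], A[k], b[p], b[k]
        for i in range(k + 1, n):
            m = A[i][k] / A[k][k]
            for j in range(k, n):
                A[i][j] -= m * A[k][j]
            b[i] -= m * b[k]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (b[i] - sum(A[i][j] * x[j] for j in range(i + 1, n))) / A[i][i]
    return x

N = 8                                  # truncation degree (an assumption)
A, b = [], []
H0, dH0, _ = hermite_vals(N, 0.0)
A.append(list(H0)); b.append(1.0)      # initial condition y(0) = 1
A.append(list(dH0)); b.append(0.0)     # initial condition y'(0) = 0
for i in range(1, N):                  # N-1 collocation points in (0, 1]
    x = i / (N - 1)
    H, dH, d2H = hermite_vals(N, x)
    A.append([d2H[n] + (2.0 / x) * dH[n] + H[n] for n in range(N + 1)])
    b.append(0.0)
c = gauss_solve(A, b)                  # Hermite coefficients of the approximation
Hv, _, _ = hermite_vals(N, 1.0)
y1 = sum(c[n] * Hv[n] for n in range(N + 1))
print(abs(y1 - math.sin(1.0)) < 1e-3)  # exact solution sin(x)/x gives y(1) = sin(1)
```

The two initial-condition rows plus the collocation rows give a square linear system for the unknown Hermite coefficients, which is the matrix-equation form described in METHODS.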


Proceeding Number: 700/33

The Analytical and a Higher-Accuracy Numerical Solution of a Free Boundary Problem in a Class of Discontinuous Functions Bahaddin SİNSOYSAL,Beykent University,Department of Mathematics and Computing,Istanbul,Turkey Keywords: Stefan type problem, free boundary problem, effect localization, auxiliary problem for weak solution, numerical solution in a class of discontinuous functions INTRODUCTION It is known that many practical problems, such as the distribution of heat waves, the melting of glaciers, and the filtration of a gas in a porous medium, are described by nonlinear equations of parabolic type, (1) ut = (φ(u))xx in R2+, with the initial condition (2) u(x,0) = u0(x) = 0 in I = [0,∞) and the boundary condition (3) u(0,t) = u1(t) = u0t^n, t > 0, where R2+ = I×[0,T), and u0 and n are known real constants. In order to study the properties of the exact solution of the problem (1)-(3), for the sake of simplicity the case φ(u) = u^σ is considered. Suppose that the function φ(u) is any function satisfying the following conditions: (i) φ(u) ∈ C2(R2+); (ii) φ'(u) ≥ 0 for u ≥ 0 and σ ≥ 0; (iii) for σ ≥ 2, φ''(u) has alternating signs on the domain where u(x,t) ≠ 0. LITERATURE REVIEW In [1], the effect of localization of the solution of the equation describing the motion of a perfect gas in a porous medium was first observed, and the solution was constructed in traveling wave form. The mentioned properties of the solution of the nonlinear parabolic type equation were then studied in [2]. These problems are also called free boundary problems; therefore, it is necessary to obtain the moving unknown boundary together with the solution of the differential problem. This nature raises several difficulties for finding analytical as well as numerical solutions of the problem. In some special cases it is possible to obtain the analytical solution of the problem in traveling wave form.
As can be seen from the solution, its differentiability order is less than the order of differentiability required of the solution by the equation. This property forces us to generalize the concept of a classical solution and to introduce a weak solution of the problem of interest. In addition, the equation may degenerate when the effect of localization exists. METHODS In [3] and [4], it is shown that the problem (1)-(3) has a solution in traveling wave form: u(x,t) = (D(σ-1)/σ)^(1/(σ-1)) (Dt-x)^(1/(σ-1)) if 0 < x < Dt, and u(x,t) = 0 if x ≥ Dt. (4) Via a simple calculation we see that the functions u(x,t) and w(x,t) = -(u^σ)x = Du(x,t) are continuous in Dℓ(t) = {(x,t) | 0 ≤ x ≤ Dt, 0 ≤ t ≤ T}, but ut and ux do not exist when σ > 2. In order to find the weak solution of the problem (1)-(3), following [5] the special auxiliary problem vt = ((vx)^σ)x, (5)


v(x,0) = v0(x), (6) u(0,t) = vx(0,t) = u0t^n (7) is introduced. Here, the function v0(x) is any solution of the equation (8) (v0(x))x = 0. The problem (5)-(7) has the solution in traveling wave form [3]: v(x,t) = -((σ-1)/σ)(D(σ-1)/σ)^(1/(σ-1)) (Dt-x)^(σ/(σ-1)) if x < Dt, and v(x,t) = 0 if x ≥ Dt, (9) which is consistent with (4) in the sense that u(x,t) = vx(x,t). As is seen from (9), the differentiability of the function v(x,t) is higher than that of the solution u(x,t). FINDINGS & CONCLUSION A new method is suggested for obtaining the regular weak solution of the free boundary problem for the nonlinear parabolic type equation. The auxiliary problem, which has some advantages over the main problem, permits us to find the exact solution with its singular properties. The auxiliary problems introduced above allow us to develop a higher resolution method in which the obtained solution correctly describes all physical features of the problem, even if the differentiability order of the solution is less than the order of differentiability required of the solution by the equation. REFERENCES [1] Antoncev, S.N., On the Localization of Solutions of Non-linear Degenerate Elliptic and Parabolic Equations, Soviet Math. Dokl., Vol. 24, pp. 420-424, 1981. [2] Barenblatt, G.I., Vishik, M.I., On the Finite Speed of Propagation in the Problems of Unsteady Filtration of Fluid and Gas in a Porous Medium, Prikladnaya Mat. Mech. (Applied Mathematics and Mechanics), Vol. 20, No. 3, pp. 411-417, 1956. [3] Godunov, S.K., Equations of Mathematical Physics, Moscow, 1979. [4] Rasulov, M.A., A Numerical Method of Solving a Parabolic Equation with Degeneration, Dif. Equations, Vol. 18, No. 8, pp. 1418-1427, 1992. [5] Sinsoysal, B., A New Numerical Method for Stefan-Type Problems in a Class of Unsmooth Functions, Int. J. Contemp. Math. Sciences, Vol. 5, No. 27, pp. 1323-1335, 2010.
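The traveling wave solution (4) can be checked numerically. The sketch below verifies ut = (u^σ)xx by central finite differences at an interior point of 0 < x < Dt; the values σ = 3 and D = 2 and the test point are chosen only for illustration:

```python
# Check that u(x,t) = (D(s-1)/s)**(1/(s-1)) * (D*t - x)**(1/(s-1)), s = sigma,
# satisfies u_t = (u**sigma)_xx inside 0 < x < D*t.
sigma, D = 3.0, 2.0                              # illustrative parameters
A = (D * (sigma - 1) / sigma) ** (1.0 / (sigma - 1))

def u(x, t):
    s = D * t - x
    return A * s ** (1.0 / (sigma - 1)) if s > 0 else 0.0

x0, t0, d = 0.5, 1.0, 1e-4                       # interior test point, step
u_t = (u(x0, t0 + d) - u(x0, t0 - d)) / (2 * d)  # central difference in t
phi = lambda x: u(x, t0) ** sigma
phi_xx = (phi(x0 + d) - 2 * phi(x0) + phi(x0 - d)) / d ** 2  # central in x
print(abs(u_t - phi_xx) < 1e-5)                  # the PDE holds at (x0, t0)
```

The same check applied at points with x > Dt gives 0 = 0, reflecting the localization of the solution behind the free boundary x = Dt.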


Proceeding Number: 700/35

A Note on the Difference Scheme of Multipoint Nonlocal Boundary Value Problems for Elliptic-Parabolic Equations Allaberen Ashyralyev,Fatih University,Mathematics,Istanbul,Turkey,[email protected] Okan Gercek,Fatih University,Mathematics,Istanbul,Turkey,[email protected] Keywords: Elliptic-parabolic equation; Nonlocal boundary value problems; Difference scheme; Stability INTRODUCTION This work is devoted to the study of a second order of accuracy difference scheme, generated by the Crank–Nicolson difference scheme, for the approximate solution of the nonlocal boundary value problem for elliptic-parabolic differential equations. In general, such problems cannot be solved exactly. The elliptic-parabolic problem with nonlocal boundary conditions can be solved by Fourier series, Laplace transform, and Fourier transform methods; however, these analytic methods can be used only in the case of constant coefficients. It is well known that one of the most general methods for solving partial differential equations with coefficients depending on t and on the space variables is the difference method, a numerical method realized on digital computers. Modern computers allow the implementation of various difference schemes. Nevertheless, the stability of the difference schemes used in numerical methods needs to be proved or justified theoretically. The main characteristics of difference schemes are their accuracy and stability; therefore, the construction and investigation of high order of accuracy difference schemes are important in applications. LITERATURE REVIEW In recent years, more and more mathematicians have been studying nonlocal problems for ordinary and partial differential equations because of their occurrence in many problems of the applied sciences.
Several types of problems in fluid mechanics (dynamics of reaction-diffusion equations, modelling processes of exploitation of gas places and applied problems of theoretical gas hydrodynamics), other areas of physics (thermal conductivity, thermoelasticity, heat transfer problems, dynamics of a magnetically confined plasma, flow and transport systems in porous media, combustion theory) and mathematical biology (modelling the growth of a tumor cord) lead to partial differential equations of elliptic-parabolic type. The theory and numerical methods of solution of nonlocal boundary value problems for partial differential equations have been investigated by many researchers (see, e.g., [1]-[15] and the references therein). In the present paper, we are interested in a second order of accuracy difference scheme for the approximate solution of the multipoint nonlocal boundary value problem. The well-posedness of this difference scheme in Hölder spaces is established. In applications, coercivity inequalities for approximate solutions of multipoint nonlocal boundary value problems for mixed type equations are obtained. The method is illustrated by numerical examples. The Matlab implementation of these difference schemes for the elliptic-parabolic equation is based on the method in the paper of A. Bitsadze and A. Samarskii [16]. METHODS The construction of the second order difference scheme is based on the exact difference scheme. The investigation of the stability of this difference scheme is based on the spectral theory of self-adjoint positive definite operators in a Hilbert space. To solve the difference equation, we apply a procedure of the modified Gauss elimination method. This type of system was used by Samarskii and Nikolaev [16] for difference

June 1 -4, Kusadasi, Aydin, Turkey http://iscse2011.gediz.edu.tr

544

2 nd International Symposium on Computing in Science & Engineering

equations. FINDINGS & CONCLUSION We consider the second order of accuracy difference scheme generated by the Crank-Nicolson difference scheme for the approximate solution of the boundary value problem under an assumption. The well-posedness of this difference scheme in Hölder spaces is established. In applications, the stability, almost coercivity, and coercivity inequalities for the solutions of the difference scheme for the approximate solution of this nonlocal boundary value problem for a mixed type equation are obtained. Applications and two theorems are given. The theoretical statements for the solutions of the first and second order of accuracy schemes for the one-dimensional elliptic-parabolic differential equation are supported by numerical examples, and the results show that the second order of accuracy difference scheme is more accurate than the first order of accuracy difference scheme. REFERENCES [1] Salakhitdinov, M. S., Equations of Mixed-Composite Type, Fan: Tashkent, (Russian), 1974. [2] Bazarov, D. and Soltanov, H., Some Local and Nonlocal Boundary Value Problems for Equations of Mixed and Mixed-Composite Types, Ylim: Ashgabat, (Russian), 1995. [3] Glazatov, S. N., "Nonlocal boundary value problems for linear and nonlinear equations of variable type", Sobolev Institute of Mathematics SB RAS, Preprint No. 46, 26p., 1998. [4] Ashyralyev, A., "A note on the nonlocal boundary value problem for elliptic-parabolic equations," Nonlinear Studies, Vol. 13, No. 4, pp. 327-333, 2006. [5] Karatopraklieva, M. G., "On a nonlocal boundary value problem for an equation of mixed type," Differensial'nye Uravneniya, Vol. 27, No. 1, pp. 68-79, (Russian), 1991. [6] Nakhushev, A. M., Equations of Mathematical Biology, Textbook for Universities, Vysshaya Shkola: Moscow, (Russian), 1995. [7] Ashyralyev, A.
and Soltanov, H., "On one difference scheme for an abstract nonlocal problem generated by the investigation of the motion of gas on the homogeneous space", in: Modeling Processes of Exploitation of Gas Places and Applied Problems of Theoretical Gas Hydrodynamics, Ilim, Ashgabat, pp. 147-154, (Russian), 1998. [8] Ewing, R. E., Lazarov, R. D. and Lin, Y., "Finite volume element approximations of nonlocal reactive flows in porous media," Numerical Methods for Partial Differential Equations, Vol. 16, pp. 285-311, 2000. [9] Diaz, J., Lerena, M., Padial, J. and Rakotoson, J., "An elliptic-parabolic equation with a nonlocal term for the transient regime of a plasma in a Stellarator", Journal of Differential Equations, Vol. 198, No. 2, pp. 321-355, 2004. [10] Ashyralyev, A. and Gercek, O., "Nonlocal boundary value problems for elliptic-parabolic differential and difference equations," Discrete Dynamics in Nature and Society, 1-16, 2008. [11] Ashyralyev, A. and Gercek, O., "Numerical solution of nonlocal boundary value problems for elliptic-parabolic equations", Further Progress in Analysis: Proceedings of the 6th International ISAAC Congress, Ankara, Turkey, 13-18 August 2007, World Scientific, pp. 663-670, 2009. [12] Ashyralyev, A. and Gercek, O., "On second order of accuracy of the approximate solution of nonlocal elliptic-parabolic problems," Abstract and Applied Analysis, 2010. [13] Ashyralyev, A. and Gercek, O., "Finite difference method for multipoint nonlocal elliptic-parabolic problems," Computers and Mathematics with Applications, Vol. 2010, No. 7, pp. 2043-2052, 2010. [14] Ashyralyev, A. and Gercek, O., "Well-posedness of multipoint elliptic-parabolic problems," Malaysian Journal of Mathematical Sciences. (under review) [15] Ashyralyev, A. and Gercek, O., "On multipoint nonlocal elliptic-parabolic difference problems," Vestnik of Odessa National University, Mathematics and Mechanics. (accepted) [16] Samarskii, A. A. and Nikolaev, E. S., Numerical Methods for Grid Equations, Vol.
2: Iterative Methods, Birkhauser, Basel, Switzerland, 1989. [17] Sobolevskii, P. E, The theory of semigroups and the stability of difference schemes, Operator Theory in Function Spaces (Proc. School, Novosibirsk, 1975), Nauka, Sibirsk. Otdel. Akad. Nauk SSSR, Novosibirsk, pp. 304337, 1977. [18] Sobolevskii, P. E., "On the stability and convergence of the Crank-Nicolson scheme" in: Variational-Difference Methods in Mathematical Physics, Vychisl.Tsentr Sibirsk. Otdel. Akad. Nauk SSSR, Novosibirsk, pp.146-151, (Russian), 1974. [19] Sobolevskii, P.E., Difference Methods for the Approximate Solution of Differential Equations, Voronezh State University Press, Voronezh, 1975
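For a one-dimensional difference scheme, the "modified Gauss elimination" procedure mentioned in the Methods section amounts to a forward-elimination/back-substitution sweep over a tridiagonal system. The following is a generic sketch of that sweep (function and variable names are our own illustration, not the authors' Matlab code):

```python
import numpy as np

def modified_gauss_elimination(sub, diag, sup, rhs):
    """Solve the tridiagonal system
       sub[i]*x[i-1] + diag[i]*x[i] + sup[i]*x[i+1] = rhs[i]
    (sub[0] and sup[-1] are unused) by a forward sweep that eliminates
    the sub-diagonal, followed by back substitution."""
    n = len(diag)
    c = np.zeros(n)              # modified super-diagonal coefficients
    d = np.zeros(n)              # modified right-hand side
    c[0] = sup[0] / diag[0]
    d[0] = rhs[0] / diag[0]
    for i in range(1, n):
        m = diag[i] - sub[i] * c[i - 1]
        c[i] = sup[i] / m if i < n - 1 else 0.0
        d[i] = (rhs[i] - sub[i] * d[i - 1]) / m
    x = np.zeros(n)
    x[-1] = d[-1]
    for i in range(n - 2, -1, -1):   # back substitution
        x[i] = d[i] - c[i] * x[i + 1]
    return x
```

For a Crank-Nicolson step, the three diagonals would come from the difference operator at the new time level; the stability of the overall scheme is, as the abstract stresses, a separate theoretical question.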


Proceeding Number: 700/38

Testing the Validity of Babinet's Principle in the Realm of Quantum Mechanics with a Numerical Case Study of an Obstacle Mustafa Demirplak, Fatih University, Department of Chemistry, Büyükçekmece, 34500, İstanbul, Turkey, [email protected] Osman Çağlar Akın, Fatih University, Department of Physics, Büyükçekmece, 34500, İstanbul, Turkey, [email protected] Keyword: Numerical analysis, physical optics, Babinet's principle, quantum mechanics ABSTRACT Babinet's principle is a well-known and widely applied principle in physical optics. The principle states that complementary objects in configuration space produce the same diffraction pattern: the diffraction pattern obtained from a circular obstacle is expected to be the same as that of a circular aperture. The same pattern can also be obtained by a first-principles calculation based on Huygens' principle. Yet there is no obvious reason why these principles should be valid for material particles as well, since both Babinet's principle and Huygens' principle were formulated for optics, are not based on electromagnetic theory, and hence remain somewhat axiomatic. In this study we work on a quantum mechanical application to test the validity of Babinet's principle in the realm of quantum mechanics. Although there seems to be no enforcing condition between Huygens' principle in optics and the first principles of quantum mechanics, similar results are obtained. Possible applications in surface growth science and technology are discussed. INTRODUCTION To this day, the diffraction patterns of two-dimensional objects in optics have been calculated by means of the Rayleigh-Sommerfeld formulation of the diffraction integrals, or by the Huygens-Fresnel (sometimes known as Huygens-Kirchhoff) integrals, based on methods stemming from Huygens' principle. This principle is rather axiomatic, in the same sense as Fermat's principle or Hamilton's principle in analytical mechanics.
It is only natural to ask whether the implications of the same principle hold for wave mechanics in quantum mechanics, where the essential players are not electromagnetic waves but "waveicles" (wave-particles), driven by an entirely different differential equation, the Schrödinger equation, rather than by Maxwell's equations, which determine the behavior of electromagnetic waves. It is also worth noting that Huygens' principle is older than both the Schrödinger equation and Maxwell's equations. LITERATURE REVIEW When Fresnel applied Huygens' principle to the diffraction of light by two-dimensional apertures, Poisson applied the same method to a circular obstacle instead of a circular aperture and found that a bright spot appears right behind the obstacle [Lucke, Lipson, Born]. Poisson used this result as a mockery against Fresnel's theory and, maybe for this reason, his name became attached to this seemingly unreasonable phenomenon after the observation of the predicted spot by Arago, as the Poisson spot or the Poisson-Arago spot (Moeller). Poisson's spot may also be considered the result of a different principle in physical optics known as Babinet's principle (Goodman, Balanis, Martinez-Anton). The applications of these principles in optoelectronics are ubiquitous (Lucke, Nussbaum, Juffman, Wang, Frances). Up to this day, there are no known rigorous quantum mechanical calculations for the general theory of a Poisson spot with material particles, although some experiments with molecules have been performed to prove the existence of similar structures. In this paper we perform this previously missing task with rigor.


METHODS The time dependent Schrödinger equation (TDSE) is solved via the split operator technique of Fleck and coworkers. This technique has applications in both rectangular and spherical coordinates. The derivatives required in the propagator are evaluated by the pseudo-spectral fast Fourier transform (FFT) technique. Absorbing boundary conditions are assumed at the extremities of the box, while inside the box, around the barriers, reflecting boundary conditions are implemented. The initial wave packet is assumed to be a Gaussian bell curve with central momentum and spread that mimic the physical conditions of the experiment performed by Reisinger et al. CONCLUSIONS We make the calculations for a realistic case where the incident flux may as well have a certain spread, and we do the numerical calculations for a wide range of possibilities. We see that results similar to the Poisson spot case may be obtained, and we take the discussion even further, to where the application of Huygens' principle is not obvious. The results call for a quantum mechanical version of Huygens' principle. REFERENCES [1] Goodman J., Introduction to Fourier Optics, 2nd edition, McGraw-Hill, 1996. [2] Lucke L. R., Rayleigh-Sommerfeld diffraction and Poisson's spot, Eur. J. Phys. 27, 193-204, 2006. [3] Balanis C., Advanced Engineering Electromagnetics, John Wiley and Sons, 1989. [4] Nussbaum A., Optical System Design, Prentice Hall PTR, NJ, 1998. [5] Lipson S.G., Optical Physics, Cambridge University Press, 3rd edition, 1998. [6] Juffmann T. et al., New prospects for de Broglie interferometry, arXiv:1009.1569v1 [quant-ph], 8 Sep 2010. [7] Wang P. et al., Analytic expression for Fresnel diffraction, J. Opt. Soc. Am. A, Vol. 15, No. 3, p. 684, March 1998.
[8] Frances J., Rigorous interference and diffraction analysis of diffractive optical elements using the finite-difference time-domain method, Computer Physics Communications, 181, 1963-1973, 2010. [9] Born M., Wolf E., Principles of Optics, 7th expanded edition, Cambridge University Press, 1999. [10] Martinez-Anton J.C. et al., On Babinet's principle and a diffraction-interferometric technique to determine the diameter of cylindrical wires, Metrologia, Vol. 38, No. 2, p. 125, 2001. [11] Herman M. and Fleck J.A., Phys. Rev. A, 38, 6000-6012 (1988). [12] Gottlieb D. and Orszag S.A., Numerical Analysis of Spectral Methods: Theory and Applications, SIAM, Pennsylvania (1993). [13] Neuhauser D. and Baer M., J. Chem. Phys. 90, 4351 (1988). [14] Reisinger T. et al., Poisson spot with molecules, Phys. Rev. A 79, 053823 (2009). [15] Moeller K.D., Optics: Learning by Computing, with Examples Using MathCad, Matlab, Mathematica, and Maple, 2nd edition, Springer, 2007.
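The split-operator/FFT propagation described in the Methods section can be illustrated in one dimension. The sketch below propagates a free Gaussian wave packet with a symmetric (Strang) splitting; the grid, time step, and packet parameters are arbitrary illustrative choices (with hbar = m = 1), and the absorbing boundary layer of the actual study is omitted for brevity:

```python
import numpy as np

# 1-D split-operator (split-step Fourier) TDSE sketch, hbar = m = 1.
n, L, dt, steps = 512, 40.0, 0.01, 200
dx = L / n
x = np.linspace(-L / 2, L / 2, n, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(n, d=dx)
V = np.zeros(n)                                        # free particle for simplicity
psi = np.exp(-(x + 5.0) ** 2) * np.exp(1j * 2.0 * x)   # packet at x = -5, momentum 2
psi /= np.sqrt(np.sum(np.abs(psi) ** 2) * dx)          # normalize
half_V = np.exp(-0.5j * V * dt)                        # half step in the potential
kinetic = np.exp(-0.5j * k ** 2 * dt)                  # full kinetic step in k-space
for _ in range(steps):
    psi = half_V * np.fft.ifft(kinetic * np.fft.fft(half_V * psi))
center = np.sum(x * np.abs(psi) ** 2) * dx             # packet has drifted to the right
```

Each factor of the splitting is unitary, so the norm is conserved to round-off; a real obstacle calculation would add a nonzero V and absorb outgoing flux at the box edges.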


Proceeding Number: 700/39

A Computational Study of the Linear and Nonlinear Optical Properties of Aminopyridines, Aminopyrimidines and Aminopyrazines Hamit ALYAR Saliha ALYAR Keyword: Nonlinear optics, B3LYP, hyperpolarizability, heterocyclic aromatic amines ABSTRACT In this study, we investigated the linear and nonlinear optical properties of 22 heterocyclic aromatic amines, including aminopyridines, aminopyrimidines and aminopyrazines. All calculations were performed at the BPV86/6-311++G(3d,3p) and B3LYP/6-311++G(3d,3p) levels of theory using the GAUSSIAN 03W software. The results show that these compounds exhibit very low nonlinear optical properties. We also consider semi-empirical polarizability and molecular volume calculations at the AM1 level of theory, together with QSAR-quality empirical polarizability calculations using Miller's scheme. Least-squares correlations between the various sets of results show that these less costly procedures are reliable predictors for these molecules, but less reliable for the larger molecules. INTRODUCTION The search for nonlinear optical (NLO) materials has been of great interest in recent years because of their potential applications in laser spectroscopy and laser processing [1,2], optical communications, data storage and image processing [3], and terahertz (THz) wave generation technology [4], which is used in the fields of semiconductors, tomographic imaging, label-free genetic analysis, cellular-level imaging, biological sensing and so on [5]. Organic materials are optically more nonlinear than inorganic materials due to weak van der Waals and hydrogen bonds, which possess a high degree of delocalization. Amino acids and their complexes belong to a family of organic materials that have applications in NLO [6,7]. Amino acids are interesting materials for NLO applications as they contain a proton donor carboxyl acid (COO) group and a proton acceptor amine (NH2) group [8,9].
2-Aminopyridines are promising substituted pyridines which have been shown to be biologically active molecules [10,11]. Additionally, because of their chelating abilities, 2-aminopyridines are commonly used as ligands in inorganic and organometallic chemistry [12,13]. If substituted with optically active groups, they could potentially serve as chiral auxiliaries or chiral ligands in asymmetric reactions. For these reasons, 2-aminopyridines are valuable synthetic targets. 4-Aminopyridine is a voltage-gated K+ (Kv) channel blocker that is useful clinically in the treatment of spinal cord injuries [14] and multiple sclerosis [15]. In addition to its therapeutic applications, 4-aminopyridine is routinely used to isolate different types of Kv channels expressed in native tissues based on their affinities for the drug [16,17]. The polarizability and hyperpolarizability of 4-aminopyridine have been studied by Z. Latajka et al. with the semi-empirical PM3 and time-dependent Hartree-Fock (TDHF) methods. Although the fluorescence properties of 22 heterocyclic aromatic amines including aminopyridines, aminopyrimidines and aminopyrazines were studied by K. Yamamoto et al., there do not appear to be any corresponding experimental data in the literature, nor are there any other ab initio calculations for the title molecules. In earlier studies, we calculated the torsional barriers and nonlinear optical properties of phenylpyridines, phenyltriazines and thalidomide. Here, we study the nonlinear optical properties of aminopyridines, aminopyrimidines and aminopyrazines. The studied molecules are presented in Fig. 1.


METHODS In the present study, we utilized three hybrid density functional (B3LYP, BPV86, PBE0) methods with the 6-311++G(3d,3p) basis set to estimate the molecular static polarizability, anisotropy of the polarizability, and first static hyperpolarizability (β) of 22 heterocyclic aromatic amines including aminopyridines, aminopyrimidines and aminopyrazines. All calculations were performed using the Gaussian 03W program package. FINDINGS - CONCLUSION In this study we present values of the static mean polarizability, ⟨α⟩, and the anisotropy of the polarizability, Δα, as defined in the following equations:

⟨α⟩ = (1/3)(αxx + αyy + αzz)

Δα = (1/2)^(1/2) [(αxx − αyy)² + (αxx − αzz)² + (αyy − αzz)²]^(1/2)

The first static hyperpolarizability (βtot) reported here is defined as

βtot = [(βxxx + βxyy + βxzz)² + (βyyy + βyzz + βyxx)² + (βzzz + βzxx + βzyy)²]^(1/2)

In Tables 1-3, the calculated and available theoretical values of the ground state electronic energies, static mean polarizabilities, polarizability anisotropies, first static hyperpolarizabilities, (EHOMO − ELUMO) molecular energy differences and molecular dipole moments are shown.

Table 1. Electronic energy, static mean polarizability, polarizability anisotropy, first static hyperpolarizability and dipole moment values of aminopyridine molecules.

Molecule   E/a.u.          αave/a.u.   Δα/a.u.   βtot/a.u.   EH-L/eV   μ/D
1          -303.73848502   73.90       29.24     22.08       6.16      1.4656
2          -343.06750790   87.20       33.54     98.00       5.98      1.2464
3          -343.08120051   90.16       37.50     265.35      5.39      2.1536
4          -343.08056058   90.41       42.47     244.73      5.12      2.2246
5          -508.32637064   97.24       4 .21     420.40      3.93      3.2807
6          -508.32523026   99.75       61.77     1345.29     4.33      6.6741
7          -492.39715432   98.17       46.91     258.61      4.50      1.8353
8          -492.39835298   100.51      59.12     840.50      5.05      3.0836
9          -547.70943655   113.11      69.31     705.62      5.02      3.4381
10         -379.00055288                                     5.11      2.6988
11         -763.37706562   91.24       40.32     284.39      5.12      0.8245
12         -382.41488983   104.29      40.83     209.90      5.33      2.0324
13         -547.65277735   112.60      62.49     1183.02     4.34      6.5680
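The quantities ⟨α⟩, Δα and βtot defined in the Findings section depend only on the Cartesian tensor components. A small sketch of their evaluation follows; the tensor values used here are made-up illustrative numbers, not results from the tables:

```python
import numpy as np

def nlo_invariants(alpha, beta):
    """Mean polarizability <a>, anisotropy, and beta_tot computed from the
    3x3 polarizability tensor alpha and the 3x3x3 hyperpolarizability beta."""
    a_mean = np.trace(alpha) / 3.0
    diffs = (alpha[0, 0] - alpha[1, 1],
             alpha[0, 0] - alpha[2, 2],
             alpha[1, 1] - alpha[2, 2])
    a_aniso = np.sqrt(sum(d ** 2 for d in diffs) / 2.0)
    # contracted vector components beta_i = beta_iii + beta_ijj + beta_ikk
    b = [beta[i, i, i] + beta[i, j, j] + beta[i, k, k]
         for i, j, k in ((0, 1, 2), (1, 2, 0), (2, 0, 1))]
    b_tot = np.sqrt(sum(t ** 2 for t in b))
    return a_mean, a_aniso, b_tot

alpha = np.diag([80.0, 70.0, 60.0])   # illustrative diagonal tensor, in a.u.
beta = np.zeros((3, 3, 3))
beta[0, 0, 0] = 100.0                 # single illustrative component
mean_a, aniso_a, beta_tot = nlo_invariants(alpha, beta)
```

For the diagonal example above, the mean is 70 a.u. and βtot reduces to the single βxxx component.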


Table 2. Electronic energy, static mean polarizability, polarizability anisotropy, first static hyperpolarizability and dipole moment values of aminopyrimidine molecules.

Molecule   E/a.u.          αave/a.u.   Δα/a.u.   βtot/a.u.   EH-L/eV   μ/D
14         -319.80293907   70.92       32.04     238.66      5.27      0.7969
15         -470.32494911   82.40       37.80     183.78      5.65      2.1574
16         -398.47153399   98.96       8. 0      2 3.24      5.31      0.9020
17         -434.39044688   90.55       37.60     238.50      5.67      1.7154
18         -473.71596307   104.19      41.98     166.39      5.61      1.4685

Table 3. Electronic energy, static mean polarizability, polarizability anisotropy, first static hyperpolarizability and dipole moment values of aminopyrazine molecules.

Molecule   E/a.u.          αave/a.u.   Δα/a.u.   βtot/a.u.   EH-L/eV   μ/D
19         -319.78850511   71.83       33.38     230.14      4.99      1.9456
20         -508.43592109   94.15       46.41     459.81      4.31      1.0725
21         -550.91033414   147.68      74.18     271.32      4.49      2.1176
22         -550.91135775   154.00      101.59    734.17      4.46      2.0702

This study reveals that these molecular systems have large first static hyperpolarizabilities and may have potential applications in the development of NLO materials. REFERENCES [1] D.S. Chemla, J. Zyss (Eds.), Nonlinear Optical Properties of Organic Molecules and Crystals, Academic Press, Orlando, 1987. [2] Y. Shen, The Principles of Nonlinear Optics, J. Wiley, New York, 1984. [3] P.N. Prasad, D.J. Williams, Introduction to Nonlinear Optical Effects in Molecules and Polymers, John Wiley & Sons, New York, 1991. [4] V. Krishnakumar, R. Nagalakshmi, Physica B 403 (2008) 1863-1869. [5] G. Ramesh Kumar, S. Gokul Raj, R. Mohan, R. Jayavel, Cryst. Growth Des. 6 (2006) 1308. [6] D. Xu, M. Jiang, Z. Tan, Acta Chem. Sin. 41 (1983) 570. [7] M. Kitazawa, R. Higuchi, M. Takahashi, Appl. Phys. Lett. 64 (1994) 2477. [8] S.B. Monaco, L.E. Davis, S.P. Velsko, F.T. Wang, D. Eimerl, A.J. Zalkin, J. Cryst. Growth 85 (1987) 252. [9] G. Ramesh Kumar, S. Gokul Raj, R. Mohan, R. Jayavel, Cryst. Growth Des. 6 (2006) 1308. [10] F. Manna, F. Chimenti, A. Bolasco, B. Bizzarri, W. Filippelli, A. Filippelli, L. Gagliardi, Eur. J. Med. Chem. 34 (1999) 245. [11] S.R. Schwid, M.D. Petrie, M.P. McDermott, D.S. Tierney, D.H. Mason, A.D. Goodman, Neurology 48 (1997) 817. [12] R. Kempe, S. Brenner, P. Arndt, Organometallics 15 (1996) 1071. [13] H. Fuhrmann, S. Brenner, P. Arndt, R. Kempe, Inorg. Chem. 35 (1996) 6742. [14] D.L. Wolfe, K.C. Hayes, J.T. Hsieh, P.J. Potter, J. Neurotrauma 18 (2001) 757. [15] C.T. Bever Jr., P.A. Anderson, J. Leslie, H.S. Panitch, S. Dhib-Jalbut, O.A. Khan, R. Milo, J.R. Hebel, K.L. Conway, E. Katz, et al., Neurology 47 (1996) 1457. [16] S. Grissmer, A.N. Nguyen, J. Aiyar, D.C. Hanson, R.J. Mather, G.A. Gutman, M.J. Karmilowicz, D.D. Auperin, K.G. Chandy, Mol. Pharmacol. 45 (1994) 1227. [17] C.C. Shieh, G.E. Kirsch, Biophys. J. 67 (1994) 2316.


Proceeding Number: 700/40

An Application of an Analytical Technique for Solving Nonlinear Evolution Equations Serife Muge Ege, Ege University, Department of Mathematics, Bornova-İzmir, Turkey, [email protected] Emine Misirli, Ege University, Department of Mathematics, Bornova-İzmir, Turkey, [email protected] Keyword: First integral method, Division Theorem, symbolic computation. INTRODUCTION The numerical solution of partial differential equations has seen intense development over the last decades, from both the theoretical and the practical points of view. The investigation of travelling wave solutions for nonlinear evolution equations arising in mathematical physics plays an important role in the study of nonlinear physical phenomena. Nonlinear wave phenomena appear in various scientific and engineering fields, such as fluid mechanics, plasma physics, optical fibers, biology, chemical kinematics, chemical physics and geochemistry. Improvements in numerical techniques, together with the rapid advance in computer technology, mean that many of the partial differential equations arising from engineering and scientific applications which were previously intractable can now be routinely solved. LITERATURE REVIEW In recent years, many powerful methods have been proposed to construct explicit analytical solutions of nonlinear wave equations, such as the Exp-function method, the homogeneous balance method, the tanh method, the extended tanh method, the Jacobi elliptic function expansion method and so on. A feature common to all the above methods is that, when solving for solutions of nonlinear evolution equations, they require the aid of a computer algebra system such as Matlab or Mathematica. The aim of this paper is to extend the first integral method, which was proposed by Feng and developed to study the travelling wave solutions of various nonlinear evolution equations, in order to find exact solutions of some nonlinear evolution equations.
METHODS We consider the nonlinear partial differential equation

(1) F(u, u_t, u_x, u_xx, u_xt, ...) = 0,

where u(x,t) is the solution of the above equation. We use the transformation

(2) u(x,t) = f(ξ), where ξ = x − ct.

Using the chain rule we obtain

(3) u_t = −c f_ξ, u_x = f_ξ, u_xx = f_ξξ, ... .

We use (3) to change the partial differential equation (1) into the ordinary differential equation

(4) G(f, f_ξ, f_ξξ, ...) = 0.

Next, we introduce new dependent variables

(5) X(ξ) = f(ξ), Y(ξ) = f_ξ(ξ),

which lead to a system of ordinary differential equations

(6) X_ξ(ξ) = Y(ξ), Y_ξ(ξ) = F(X(ξ), Y(ξ)).

By the qualitative theory of differential equations, if we can find the first integrals of (6) under the same conditions, then the general solutions of (6) can be obtained directly. However, in general, it is really difficult to realize this even for one first integral, because for a given plane autonomous system there is no systematic theory that can tell us how to find its first integrals, nor is there a logical way of telling us what these first integrals are. We applied the Division Theorem to obtain one first integral of (6), which reduces (4) to a first order integrable


ordinary differential equation. An exact solution to (1) was then obtained by solving this equation.

Division Theorem: Suppose that P(w,z) and Q(w,z) are polynomials in C[w,z] and P(w,z) is irreducible in C[w,z]. If Q(w,z) vanishes at all zero points of P(w,z), then there exists a polynomial G(w,z) in C[w,z] such that Q(w,z) = P(w,z)G(w,z). FINDINGS & CONCLUSION The solution procedure of this method, with the help of symbolic computation in Matlab, Mathematica or the like, is of utter simplicity. The obtained results show that the first integral method is very powerful and convenient for nonlinear evolution equations in science and engineering. REFERENCES [1] Misirli, E.; Gurefe, Y. 2010. Exact solutions of the Drinfel'd-Sokolov-Wilson equation using the Exp-function method. Applied Mathematics and Computation 216(9): 2623-2627. [2] Misirli, E.; Gurefe, Y. 2010. The Exp-function method to solve the generalized Burgers-Fisher equation. Nonlinear Science Letters A: Mathematics, Physics and Mechanics 1(3): 323-328. [3] Gurefe, Y.; Misirli, E. 2010. Exact solutions of the compound KdV-type equation with higher order nonlinearity. Submitted for publication. [4] Gurefe, Y.; Misirli, E. 2010. Exp-function method for solving nonlinear evolution equations with higher order nonlinearity. Computers and Mathematics with Applications, doi:10.1016/j.camwa.2010.08.060. [5] Wazwaz, A.M. 2004. The tanh method for travelling wave solutions of nonlinear equations. Applied Mathematics and Computation 154(3): 713. [6] Wazwaz, A.M. 2007. The extended tanh method for abundant solitary wave solutions of nonlinear wave equations. Applied Mathematics and Computation 187: 1131. [7] Fan, E. 2000. Extended tanh-function method and its applications to nonlinear equations. Physics Letters A 246: 403. [8] Fu, Z.T.; Liu, S.K.; Liu, S.D.; Zhao, Q. 2001. New Jacobi elliptic function expansion and new periodic solutions of nonlinear wave equations. Physics Letters A 290: 72-76. [9] Feng, Z.S. 2002. The first integral method to study the Burgers-KdV equation.
Journal of Physics A: Mathematical and General 35 343 [10] Feng, Z.; Wang, X. 2003.The first integral method to the two-dimensional Burgers–Korteweg–de Vries equation. Physics Letters A 173- 178 [11] Soliman, A.A.; Raslan K.R. 2009. First integral method for the improved KdV equation. International Journal of Nonlinear Science Vol.8 11-18
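The travelling-wave reduction (1)-(4) can be checked with symbolic computation. As an illustration (our own example, using a known kink solution of the viscous Burgers equation rather than an equation treated in the paper), the ansatz u(x,t) = f(ξ) with ξ = x − ct is verified to satisfy the PDE identically:

```python
import sympy as sp

x, t = sp.symbols('x t')
c, A, nu = sp.symbols('c A nu', positive=True)
# Kink solution of Burgers' equation u_t + u*u_x = nu*u_xx,
# written in the wave variable xi = x - c*t (an illustrative example).
xi = x - c * t
u = c - A * sp.tanh(A * xi / (2 * nu))
# Substitute into the PDE; the residual should simplify to zero.
residual = sp.diff(u, t) + u * sp.diff(u, x) - nu * sp.diff(u, x, 2)
print(sp.simplify(residual))  # 0
```

Such a check confirms that the chain-rule relations (3) reduce the PDE to an ODE in ξ alone, which is the starting point of the first integral method.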


Proceeding Number: 700/41

Optimum Design of Open Canals by Using Bees Algorithm İbrahim AYDOĞDU, Akdeniz University, Department of Civil Engineering, Antalya, Turkey, [email protected] Alper AKIN, Middle East Technical University, Department of Engineering Sciences, Ankara, Turkey, [email protected] Keywords: Bees algorithm, open canal design, hydraulics, combinatorial optimization, meta-heuristic algorithms

INTRODUCTION Engineering optimization is used in a wide range of fields and attracts the attention of designers. One such engineering optimization problem is the optimization of open canals, which are widely used as water transfer structures in water resources engineering. The design of open canal structures is a complex task, since the designer has to select suitable section dimensions within a certain range of values, and such a selection should satisfy the flow requirements to convey a specific discharge in the canal. Traditionally, nonlinear programming techniques have been used to find the optimum design of open canals. However, nonlinear programming techniques do not show good performance in many engineering optimization problems. Meta-heuristic search algorithms try to optimize design problems by using strategies inspired by nature. The bees algorithm is one of the newest meta-heuristic search techniques, adopted from the food foraging behavior of swarms of honey bees as they collect nectar or pollen with the least effort. In this study, the dimensions of the cross sections of open canals are optimized by using the bees algorithm in order to investigate the performance of this algorithm in the optimum design of open canal problems. Four design problems are optimized, and the results obtained are compared with those of previous studies. LITERATURE REVIEW Several studies have been carried out under different conditions for the optimum design of open canals. In the majority of these studies, the optimum design of open canal problems is solved with different open canal geometries under uniform flow conditions [1-5]. However, considerably less research has been done on this optimization problem under non-uniform flow conditions [6, 7]. Many optimization methods have been used in engineering optimization problems.
Generally, nonlinear programming techniques such as Lagrange multipliers have been used to solve the optimization of open canal problems. In recent years, meta-heuristic optimization techniques have been developed which are widely used in engineering optimization problems. Meta-heuristic search techniques are generally inspired by nature and explore the design space by following such rules in order to determine optimal or near-optimal solutions. Genetic algorithms, evolutionary strategies, simulated annealing, tabu search, ant colony optimization, particle swarm optimization, differential evolution, the firefly algorithm and the bees algorithm are some of the meta-heuristic techniques that are used to develop engineering optimization algorithms. The bees algorithm is one of the newest meta-heuristic search techniques, first developed by Pham et al. in 2005 [8]. This optimization technique has been applied in many engineering fields, such as training neural networks for pattern recognition, computer vision and image analysis, finding multiple feasible solutions to preliminary design problems, data clustering, scheduling jobs for a production machine, tuning a fuzzy logic controller for a robot gymnast, optimizing the design of mechanical components, multi-objective optimization problems and the optimum design of continuous engineering problems [9-17].


METHODS The bees algorithm is biologically inspired by the natural behavior of bees. The technique is adopted from the food foraging behavior of swarms of honey bees when they collect nectar or pollen with the least effort, as described in the following steps. Step 1: Candidate designs are generated randomly. The candidate designs are then evaluated and sorted in ascending order of the objective function value, and a certain number of the best designs are assigned as the designs of elite bees. Step 2: New designs (υi,j) are generated in the neighborhood of the elite bees (xi,j) by using the formula υi,j = xi,j + Φi,j(xi,j − xk,j) (where k is a solution in the neighborhood of i and Φ is a random number in the range [-1,1]) and are evaluated according to the objective function value. If the new design vector is better than the design vector of the i-th elite bee, the new design vector is assigned as the new design vector of the i-th elite bee. Step 3: The remaining bees in the patch search randomly and generate new designs. The fitness of these designs is calculated. If a new design vector is better than any design vector in the patch, the new design vector replaces the worst design vector in the patch. Steps 2 and 3 are repeated until the maximum number of iterations is reached. FINDINGS & CONCLUSION In this study, the bees algorithm is applied to the optimum design of open canal problems. Four examples are solved with different canal shapes (circular, triangular, rectangular and trapezoidal). The results obtained from these examples are compared to previous studies. It is concluded from these comparisons that the bees algorithm is a reliable, robust and effective algorithm for the optimization of open canal problems. It is noticed that the adjustment of the bees algorithm parameters is important to attain convergence; the method may not converge at all, or may converge to a local optimum, if unsuitable values are assigned to these parameters.
Therefore, this study is also useful for adjusting the parameters for engineering optimization problems. REFERENCES [1] Chow, V.T. 1959. Open-Channel Hydraulics. McGraw-Hill Book Company, New York. [2] Babaeyan-Koopaei, K., Valentine, E.M., and Swailes, D.C. 2000. Optimal design of parabolic-bottomed triangle canals. Journal of Irrigation and Drainage Engineering 126(6), 408-411. [3] Babaeyan-Koopaei, K. 2001. Dimensionless curves for normal depth calculations in canal sections. Journal of Irrigation and Drainage Engineering 127(6), 386-389. [4] Chahar, B.R. 2007. Optimal design of a special class of curvilinear bottomed channel section. Journal of Hydraulic Engineering, ASCE 133(5), 571-576. [5] Turan, M.E., and Yurdusev, M.A. 2011. Optimization of open canal cross sections by differential evolution algorithm. Mathematical and Computational Applications 16(1), 77-86. [6] Swamee, P.K. 1995. Optimal irrigation canal sections. Journal of Irrigation and Drainage Engineering 121, 467-469. [7] Pham, D.T., Afify, A.A., Koc, E. 2007. Manufacturing cell formation using the Bees Algorithm. IPROMS 2007 Innovative Production Machines and Systems Virtual Conference, Cardiff, UK. [8] Pham, D.T., Ghanbarzadeh, A., Koc, E., Otri, S., Rahim, S. and Zaidi, M. 2005. The Bees Algorithm. Technical Note, Manufacturing Engineering Centre, Cardiff University, UK. [9] Pham, D.T., Koç, E., Lee, J.Y., and Phrueksanant, J. 2007. Using the Bees Algorithm to schedule jobs for a machine, Proc. Eighth International Conference on Laser Metrology, CMM and Machine Tool Performance, LAMDAMAP, Euspen, UK, Cardiff, pp. 430-439. [10] Pham, D.T., Ghanbarzadeh, A., Koç, E. and Otri, S. 2006. Application of the Bees Algorithm to the training of radial basis function networks for control chart pattern recognition, Proc. 5th CIRP International Seminar on Intelligent Computation in Manufacturing Engineering (CIRP ICME '06), Ischia, Italy.
[11] Yang, X.S. 2005. Engineering Optimizations via Nature-Inspired Virtual Bee Algorithms. Artificial Intelligence and Knowledge Engineering Applications: A Bioinspired Approach, Lecture Notes in Computer Science 3562, 317-323, Springer Berlin/Heidelberg.
[12] Pham, D.T., Castellani, M., and Ghanbarzadeh, A. 2007. Preliminary design using the Bees Algorithm. Proc.

June 1 -4, Kusadasi, Aydin, Turkey http://iscse2011.gediz.edu.tr

554

2 nd International Symposium on Computing in Science & Engineering

Eighth International Conference on Laser Metrology, CMM and Machine Tool Performance, LAMDAMAP, Euspen, Cardiff, UK, p. 420-429.
[13] Pham, D.T., Otri, S., Afify, A.A., Mahmuddin, M., and Al-Jabbouli, H. 2007. Data clustering using the Bees Algorithm. Proc. 40th CIRP Int. Manufacturing Systems Seminar, Liverpool.
[14] Pham, D.T., Soroka, A.J., Koç, E., Ghanbarzadeh, A., and Otri, S. 2007. Some applications of the Bees Algorithm in engineering design and manufacture. Proc. Int. Conference on Manufacturing Automation (ICMA 2007), Singapore.
[15] Pham, D.T. and Ghanbarzadeh, A. 2007. Multi-Objective Optimisation using the Bees Algorithm. Proceedings of IPROMS Conference.
[16] Pham, D.T., Darwish, A.H., Eldukhri, E.E., and Otri, S. 2007. Using the Bees Algorithm to tune a fuzzy logic controller for a robot gymnast. Proceedings of IPROMS Conference.
[17] Olague, G. and Puente, C. 2006. Parisian evolution with honeybees for three-dimensional reconstruction. In Proceedings of the 8th Annual Conference on Genetic and Evolutionary Computation (Seattle, Washington, USA, July 8-12, 2006). GECCO '06. ACM, New York, NY, 191-198. DOI: http://doi.acm.org/10.1145/1143997.1144030


Proceeding Number: 700/43

NBVP with Two Integral Conditions for Hyperbolic Equations
Necmettin AĞGEZ, Fatih University, Mathematics, Istanbul, Turkey, [email protected]
Allaberen ASHYRALYEV, Fatih University, Mathematics, Istanbul, Turkey, [email protected]

Keywords: Nonlocal Boundary Value Problem, Difference Schemes, Stability, Integral Condition, Hyperbolic Equation

INTRODUCTION
It is known that the method of difference schemes is widely used for approximating the solutions of problems of mathematical physics. Modern computer programs allow the implementation of highly accurate difference schemes. Hence, the construction of highly accurate difference schemes for various types of boundary value problems has received much attention in recent years. The present work is devoted to the construction of first and second order of accuracy difference schemes for hyperbolic boundary value problems.

LITERATURE REVIEW
In recent years, important progress has been made in the study of high order of accuracy difference schemes for partial differential equations, from the viewpoint of applications of an exact difference scheme. It is now possible to investigate differential equations with variable coefficients that cannot be solved analytically. Numerical methods and the theory of solutions of nonlocal boundary value problems for partial differential equations of variable type have been developed in, e.g., [1-5] and the references therein. Nonlocal boundary value problems with integral conditions are widely used in thermo-elasticity, chemical engineering, heat conduction, and plasma physics [6-9]. Some problems arising in the dynamics of ground water are formulated as hyperbolic equations with nonlocal conditions [10-13]. In [14], several finite difference schemes were developed to solve second order hyperbolic partial differential equations with a nonlocal boundary integral condition.
The solution of one-dimensional second order hyperbolic partial differential equations with given initial conditions and an integral condition, based on a spectral approach combined with finite difference schemes, was investigated in [15]. The method of operators as a tool for investigating the solution of hyperbolic equations in Hilbert and Banach spaces has been studied extensively (see, e.g., [16-18]).

METHOD
It is known that various nonlocal boundary value problems for hyperbolic equations can be reduced to a boundary value problem for a differential equation in a Hilbert space with a self-adjoint positive definite operator A. Absolutely stable difference schemes of high order of accuracy for hyperbolic partial differential equations have been studied, in which stability was established under the assumption that the magnitudes of the grid steps τ and h with respect to the time and space variables are connected. Here, the first and second order of accuracy difference schemes for the approximate solution of this nonlocal boundary value problem for hyperbolic equations in a Hilbert space are presented. Applying the operator approach, the stability estimates for the solutions of these difference schemes are obtained. Investigation of the stability estimates for the solutions of the high order of accuracy difference schemes for hyperbolic equations is based on the spectral theory of self-adjoint positive definite operators in a Hilbert space. This approach permits us to study the stability of difference schemes for various partial differential equations.
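As an illustration of the abstract operator setting (a standard textbook scheme from the difference-schemes literature, not necessarily the authors' exact construction for the integral conditions), the problem u''(t) + Au(t) = f(t), 0 < t < T, in a Hilbert space with a self-adjoint positive definite operator A can be discretized on the uniform grid t_k = kτ by the first order of accuracy difference scheme

```latex
\tau^{-2}\left(u_{k+1} - 2u_k + u_{k-1}\right) + A\,u_{k+1} = f(t_k),
\qquad t_k = k\tau,\quad 1 \le k \le N-1,\quad N\tau = T,
```

with u_0 and u_1 supplied by the initial conditions and, in the present problem, the nonlocal integral conditions; the stability estimates are then derived from the spectral representation of A.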


CONCLUSION
In the present work, a nonlocal boundary value problem with two integral conditions for hyperbolic equations with space-dependent variable coefficients is considered. The first and second order of accuracy stable difference schemes for the approximate solution of this nonlocal boundary value problem are presented. The stability of these difference schemes is established. The theoretical statements for the solutions of these difference schemes for one-dimensional hyperbolic equations are supported by numerical examples. In the examples, the second order of accuracy difference scheme is more accurate than the first order of accuracy difference scheme. Of course, the techniques of this paper can be applied to the study of high order of accuracy difference schemes for nonlocal boundary value problems.

REFERENCES
[1] N. Gordeziani, P. Natalini, and P. E. Ricci, Finite-difference methods for solution of nonlocal boundary value problems, Computers and Mathematics with Applications, vol. 50, no. 8-9, 1333-1344, 2005.
[2] D. Gordeziani and G. Avalishvili, Time-nonlocal problems for Schrödinger type equations: I. Problems in abstract spaces, Differential Equations, vol. 41, no. 5, 703-711, 2005.
[3] A. Ashyralyev and O. Gercek, Nonlocal boundary value problems for elliptic-parabolic differential and difference equations, Discrete Dynamics in Nature and Society, vol. 2008, Art. ID 904824, 16 pp., 2008.
[4] A. Ashyralyev, I. Karatay and P.E. Sobolevskii, On well-posedness of the nonlocal boundary value problem for parabolic difference equations, Discrete Dynamics in Nature and Society, vol. 2004, no. 2, 273-286, 2004.
[5] A. Ashyralyev and A. H. Yurtsever, The stability of difference schemes of second-order of accuracy for hyperbolic-parabolic equations, Computers and Mathematics with Applications, vol. 52, no. 3-4, 259-268, 2006.
[6] P. Shi, Weak solution to an evolution problem with a nonlocal constraint, SIAM J. Math. Anal. 24 (1993) 46-58.
[7] Y.S.
Choi, K.Y. Chan, A parabolic equation with nonlocal boundary conditions arising from electrochemistry, Nonlinear Anal. 18 (1992) 317-331.
[8] B. Cahlon, D.M. Kulkarni, P. Shi, Stepwise stability for the heat equation with a nonlocal constraint, SIAM J. Numer. Anal. 32 (1995) 571-593.
[9] A.A. Samarskii, Some problems in the modern theory of differential equations, Differ. Uravn. 16 (1980) 1221-1228.
[10] S. A. Beilin, Existence of solutions for one-dimensional wave equations with non-local conditions, Electron. J. Differential Eq. 76 (2001), 1-8.
[11] S. Mesloub and A. Bouziani, On a class of singular hyperbolic equations with a weighted integral condition, Intern. J. Math. Math. Sci. 22 (1999), 511-520.
[12] L. S. Pulkina, A nonlocal problem with integral conditions for hyperbolic equations, Electron. J. Differential Eq. 45 (1999), 1-6.
[13] L. S. Pulkina, On solvability in L2 of a nonlocal problem with integral conditions for hyperbolic equations, Differents. Uravn. 2 (2000), 1-6.
[14] M. Dehghan, On the solution of an initial-boundary value problem that combines Neumann and integral condition for the wave equation, Numerical Methods for Partial Differential Equations, vol. 21, 24-40, 2004.
[15] M. Ramezani, M. Dehghan, and M. Razzaghi, Combined finite difference and spectral methods for the numerical solution of hyperbolic equation with an integral condition, Numerical Methods for Partial Differential Equations, vol. 24, 1-8, 2008.
[16] A. Ashyralyev and N. Aggez, A note on the difference schemes of the nonlocal boundary value problems for hyperbolic equations, Numerical Functional Analysis and Optimization, vol. 25, no. 5-6, 439-462, 2004.
[17] H. O. Fattorini, Second Order Linear Differential Equations in Banach Spaces, North-Holland Notas de Matemática, 1985.
[18] S. Piskarev and Y. Shaw, On certain operator families related to cosine operator functions, Taiwanese Journal of Mathematics 1, no. 4, 3585-3592, 1997.


Proceeding Number: 700/44

Reflection Investigation in Metamaterial Slab Waveguides
Ercan UÇGUN, Dumlupınar University, Department of Physics, Kütahya, TURKEY, [email protected]
Ali ÇETİN, Eskişehir Osmangazi University, Department of Physics, Eskişehir, TURKEY, [email protected]

Keywords: Metamaterial, periodic slab, reflectance

ABSTRACT
In this study, reflection from single-negative metamaterial slab waveguides is presented. Reflectance versus wavelength is plotted for different numbers of bilayers. It is shown that the reflectance approaches one hundred percent as the number of bilayers increases.

INTRODUCTION
Metamaterials are engineered materials that are not found in nature. Due to their unique properties, these materials attract scientists, especially physicists and electronics engineers. In the last decade, artificially produced single-negative metamaterials with negative electrical permittivity have become important at optical frequencies. Recently, a considerable amount of work has been reported on metamaterials [1-7].

LITERATURE REVIEW
In 1968, the Russian physicist Victor Veselago proposed the existence of electromagnetic materials, now called metamaterials, with negative permittivity and permeability [8]. Single-negative (SNG) metamaterials are designed with either a negative permittivity or a negative permeability, while double-negative (DNG) metamaterials are designed to have both. Metamaterials are produced periodically, with periods much less than a wavelength. Negative permittivity and/or permeability cause electromagnetic waves travelling in these media to exhibit unusual properties of interest to scientists. Waves travelling in these media are refracted at negative angles compared with conventional media. Moreover, due to the negative refractive index, the directions of the phase and group velocities are opposite.
These unusual characteristics are used to design new devices such as optical filters and antireflection and high-reflection coatings. Metamaterials are periodic structures designed to control the propagation of electromagnetic waves. Therefore, metamaterials have the potential to manipulate wave propagation in a manner that eludes conventional materials, owing to a periodic nature that can be either small-scale or resonant [9-17]. The practical applications of metamaterials are currently limited by their operational bandwidths. These periodic structures are produced from two dielectric materials with different refractive indexes: one with a low refractive index and the other with a high refractive index. The simplest such periodic structure takes [18]

n(x) = n1 in layer one and n(x) = n2 in layer two (1)

where a1 and a2 are the thicknesses of layers one and two, respectively, and d is the sum of a1 and a2. The refractive index of the 1D medium can be written as [19]

n(x) = n(x + l d) (2)

where l is an integer known as the period number and d is the periodic length in the x-direction, called the lattice constant of the structure. The ratio between the intensity of the reflected wave and that of the incident wave is called the reflectance R; it represents the percentage of energy reflected by the periodic slab and is expressed as


R = |Er|² / |Ei|² (3)

where Er is the amplitude of the reflected wave and Ei is that of the incident wave at the interfaces of the media.

METHODS
We designed bilayer structures with two different refractive indexes (n1 and n2) and different numbers of bilayers (2, 4, 6, and 8). For these structures, we studied the reflectance versus wavelength for each number of bilayers.

FINDINGS & CONCLUSION
We conclude that increasing the number of bilayers changes the reflectance percentage and the wavelength range for the dielectric/dielectric bilayer. For the dielectric/metamaterial bilayer, full reflection from the structure can be achieved over the full wavelength region of 300 to 800 nm. Therefore, dielectric/metamaterial bilayers can be used for antireflection coatings in different areas of scientific application.

REFERENCES
[1] V. Veselago, L. Braginsky, V. Shklover, and C. Hafner, Negative Refractive Index Materials, Journal of Computational and Theoretical Nanoscience, Vol. 3, No. 2, 189-218, 2006.
[2] W. J. Padilla, D. N. Basov, D. R. Smith, Negative Refractive Index Metamaterials, Materials Today, Vol. 9, No. 7-8, 2006.
[3] A. Boltasseva, V. M. Shalaev, Fabrication of Optical Negative-Index Metamaterials: Recent Advances and Outlook, Metamaterials, Vol. 2, No. 1, 1-17, 2008.
[4] R. B. Greegor, C. G. Parazzoli, K. Li, B. E. C. Koltenbah and M. Tanielian, Experimental Determination and Numerical Simulation of the Properties of Negative Index of Refraction Materials, Optics Express, Vol. 11, No. 7, 688-695, 2003.
[5] A. Diaz, J. H. Park, and I. C. Khoo, Design and Transmission-Reflection Properties of Liquid Crystalline Optical Metamaterials with Large Birefringence and Sub-Unity or Negative Refractive Index, Journal of Nonlinear Optical Physics & Materials, Vol. 16, No. 4, 533-549, 2007.
[6] G. X. Yu, Y. T. Fang, T. J. Cui, Goos-Hänchen Shift from an Anisotropic Metamaterial Slab, Central European Journal of Physics, Vol. 8, No. 3, 415-421, 2010.
[7] N. C. Panoiu, R. M.
Osgood Jr., Numerical Investigation of Negative Refractive Index Metamaterials at Infrared and Optical Frequencies, Optics Communications, Vol. 223, No. 4-6, 331-337, 2003.
[8] V. Veselago, The Electrodynamics of Substances with Simultaneously Negative Values of ε and µ, Soviet Physics Uspekhi, Vol. 10, No. 4, 509-514, 1968.
[9] P. Kolinko and D. R. Smith, Numerical Study of Electromagnetic Waves Interacting with Negative Index Materials, Optics Express, Vol. 11, No. 7, 640-648, 2003.
[10] J. Li, Y. Chen and V. Elander, Mathematical and Numerical Study of Wave Propagation in Negative-Index Materials, Computer Methods in Applied Mechanics and Engineering, Vol. 197, No. 45-48, 3976-3987, 2008.
[11] S. M. Vukovic, N. B. Aleksic, D. V. Timotijevic, Guided Modes in Left-Handed Waveguides, Optics Communications, Vol. 281, No. 6, 1500-1509, 2008.
[12] Y. He, Z. Cao and Q. Shen, Guided Optical Modes in Asymmetric Left-Handed Waveguides, Optics Communications, Vol. 245, No. 1-6, 125-135, 2005.
[13] M. Cheng, Y. Zhou, S. Feng, J. Lin and R. Chen, Lowest Oscillating Mode in a Nanoscale Planar Waveguide with Double-Negative Material, Journal of Nanophotonics, Vol. 3, No. 039504, 1-5, 2009.
[14] Z. H. Wang, Z. Y. Xiao and S. P. Li, Guided Modes in Slab Waveguides with a Left Handed Material Cover or Substrate, Optics Communications, Vol. 281, No. 4, 607-613, 2008.
[15] P. Dong and H. W. Yang, Guided Modes in Slab Waveguides with Both Double-Negative and Single-Negative Materials, Optica Applicata, Vol. 40, No. 4, 873-882, 2010.
[16] R. W. Ziolkowski and E. Heyman, Wave Propagation in Media Having Negative Permittivity and Permeability, Physical Review E, Vol. 64, No. 5, 1-15, 2001.
[17] H. Cory and C. Zach, Wave Propagation in Metamaterial Multi-Layered Structures, Microwave and Optical Technology Letters, Vol. 40, No. 6, 460-465, 2004.
[18] A. Yariv, P. Yeh, Optical Waves in Crystals, John Wiley & Sons, 2003.
[19] J. Zheng, Z. Ye, X. Wang, D.
Liu, Analytical Solution for Band-Gap Structures in Photonic Crystal with Sinusoidal Period, Physics Letters A, Vol. 321, No. 2, 120–126, 2004.
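As background to the reflectance trend reported above, the growth of reflectance with bilayer count can be reproduced with the standard characteristic (transfer) matrix method. This is a generic textbook sketch for an ordinary dielectric quarter-wave stack at normal incidence, not the authors' single-negative metamaterial computation; all numerical values (indices, design wavelength) are assumptions:

```python
import numpy as np

def reflectance(n_bilayers, lam, lam0=550.0, n1=2.3, n2=1.38, n0=1.0, ns=1.0):
    """Normal-incidence reflectance of a periodic (n1/n2) bilayer stack,
    computed with the characteristic matrix of each layer.

    Quarter-wave thicknesses at lam0 are assumed; n0 and ns are the
    (assumed) indices of the incidence medium and substrate.
    """
    M = np.eye(2, dtype=complex)
    for n in [n1, n2] * n_bilayers:
        t = lam0 / (4.0 * n)                  # quarter-wave layer thickness
        d = 2.0 * np.pi * n * t / lam         # phase thickness of the layer
        layer = np.array([[np.cos(d), 1j * np.sin(d) / n],
                          [1j * n * np.sin(d), np.cos(d)]])
        M = M @ layer
    num = n0 * (M[0, 0] + M[0, 1] * ns) - (M[1, 0] + M[1, 1] * ns)
    den = n0 * (M[0, 0] + M[0, 1] * ns) + (M[1, 0] + M[1, 1] * ns)
    return abs(num / den) ** 2

# At the design wavelength the reflectance grows toward 100% as the
# number of bilayers increases, mirroring the trend reported above.
R2, R4, R8 = (reflectance(N, lam=550.0) for N in (2, 4, 8))
```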


Proceeding Number: 700/46

Theoretical Implementation of Three Qubit Hadamard Gate for SI (S=3/2, I=1/2) Spin System
Selçuk ÇAKMAK, Department of Physics, Faculty of Science, Ondokuz Mayıs University, Samsun, Turkey, [email protected]
Sevcan ÇORBACI, Department of Physics, Faculty of Science, Ondokuz Mayıs University, Samsun, Turkey, [email protected]
Azmi GENÇTEN, Department of Physics, Faculty of Science, Ondokuz Mayıs University, Samsun, Turkey, [email protected]

Keywords: Hadamard gate, multi qubit Hadamard gate, quantum information theory, NMR quantum computing

INTRODUCTION
One of the aims of quantum information processing is to develop a universal quantum computer. To this end, it uses the quantum mechanical principles of physics. Unlike a classical computer, a quantum computer performs simulations of physics very well [1]. Quantum information theory can be implemented in practice by using spectroscopic methods such as NMR and ENDOR. In this study, using NMR or ENDOR, we aim to find a three qubit Hadamard gate pulse sequence, which is not found in the literature. The Hadamard gate is a very important gate for quantum information processing. It creates superposition states in which all quantum states have equal probability. Because of this property, it is used in quantum circuits and algorithms such as entangled-state preparation, Grover's search algorithm, Shor's factoring algorithm, and the Deutsch-Jozsa algorithm.

LITERATURE REVIEW
In classical computers, information is stored as bits. In quantum computers, the unit of information is called a quantum bit (qubit) [2]. Qubits can be represented by the states of any quantum system, such as the spin states of nuclei or electrons. In magnetic resonance quantum computers, the Zeeman levels of a spin-1/2 nucleus or electron, |↑> and |↓>, form a single qubit [3]. Usually, these two states are denoted |0> and |1>, respectively. Linear superpositions of them are also single-qubit states. In quantum computers, quantum logic gates are represented by unitary matrices.
The matrix representation of the single qubit Hadamard gate is

H = (1/√2) [ [1, 1], [1, -1] ] (1)

When the Hadamard gate is applied to the one-qubit basis states, it creates the superposition states

H|0> = (1/√2)(|0> + |1>) and H|1> = (1/√2)(|0> - |1>) (2)

The Hadamard gate can be applied to n-qubit states, generating a superposition of 2^n possible states:

H⊗n |00...0> = (1/2^(n/2)) Σ_{x=0}^{2^n - 1} |x> (3)

where H⊗n = H ⊗ H ⊗ ... ⊗ H (n factors).

METHODS
The endohedral fullerene molecule 31P@C60 can be considered as an SI (S=3/2, I=1/2) spin system [4]. In this molecule, the electron spin of 31P in the ground state is 3/2 and the nuclear spin of 31P is 1/2, with 100% natural abundance. For the S=3/2 electron spin quantum number there are four magnetic quantum numbers, so the two-qubit states are 3/2=|00>, 1/2=|01>, -1/2=|10>, and -3/2=|11>. The SI (S=3/2, I=1/2) spin system can therefore be described by three qubit states. For this spin system, the eight three qubit states |000>, |001>, |010>, |011>, |100>, |101>, |110>, |111> are found by direct products of the two-qubit and one-qubit states. For example, the |000> state is obtained as follows:

|000> = |00> ⊗ |0> = [1,0,0,0]^T ⊗ [1,0]^T = [1,0,0,0,0,0,0,0]^T (4)
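The identity in Eq. (3) for n = 3 can be checked numerically with matrix Kronecker products. This is an illustration of the linear algebra only, not of the NMR/ENDOR pulse implementation discussed in the paper:

```python
import numpy as np

# Single qubit Hadamard gate, Eq. (1).
H = np.array([[1.0, 1.0],
              [1.0, -1.0]]) / np.sqrt(2.0)

# Three qubit Hadamard gate as a triple Kronecker product, H (x) H (x) H.
H3 = np.kron(np.kron(H, H), H)

# |000> as the direct product in Eq. (4): an 8-vector with a 1 on top.
ket000 = np.zeros(8)
ket000[0] = 1.0

# Applying H3 to |000> gives the uniform superposition of all 8 basis
# states, each with amplitude 1/2^(3/2), as in Eq. (3) for n = 3.
psi = H3 @ ket000
print(psi)  # every entry equals 1/(2*sqrt(2)) ≈ 0.35355
```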


For a single endohedral fullerene molecule 31P@C60, the total Hamiltonian in an external magnetic field is

H = ΩS SZ - ΩI IZ + A SZ IZ (5)

where ΩS = gµB B and ΩI = γI B. For this Hamiltonian, the spin system has (2S+1)(2I+1) = 8 different energy levels. For instance, the three qubit state |001> (MS = 3/2, MI = -1/2) has the energy

E|001> = (3/2)ΩS + (1/2)ΩI - (3/4)A (6)

Quantum logic gates can be implemented with NMR (Nuclear Magnetic Resonance) or ENDOR (Electron-Nuclear Double Resonance) spectroscopy by using selective and non-selective pulse techniques. These pulses provide transitions between energy levels. The two qubit Hadamard gate has been implemented with pulsed NMR techniques using (π/2)-y(π)x pulse sequences [5].

FINDINGS & CONCLUSION
In this work, a three qubit Hadamard gate pulse sequence is found and theoretically implemented. The pulse sequence of the three qubit Hadamard gate can be represented as

H⊗3 = (π/2)2-3RF (π/2)0-1RF (π/2)0-2MW (π/2)6-7RF (π/2)4-5RF (π/2)4-6MW (π)2-4MW (π/2)0-2MW (7)

where n-m denotes the transition from energy level n to energy level m [6]. When this pulse sequence is applied to the pseudo-pure state |000>, the following superposition state is obtained:

H⊗3 |000> = (1/2^(3/2)) (|000> + |001> + |010> + |011> + |100> + |101> + |110> + |111>) (8)

This result can be used in the first step of the three qubit Grover's search algorithm [7].

REFERENCES
[1] R. P. Feynman, Simulating physics with computers, International Journal of Theoretical Physics 21 (1982) 467.
[2] M.A. Nielsen, I.L. Chuang, Quantum Computation and Quantum Information, Cambridge University Press, 2001.
[3] I.S. Oliveira, T.J. Bonagamba, R.S. Sarthour, J.C.C. Freitas and E.R. deAzevedo, NMR Quantum Information Processing, Elsevier, 2007.
[4] W. Harneit, Fullerene-based electron-spin quantum computer, Phys. Rev. B 65 (2002) 032322.
[5] R. Das, T.S. Mahesh and A. Kumar, Experimental implementation of Grover's search algorithm using efficient quantum state tomography, Chem. Phys. Lett. 369 (2002) 8.
[6] W. Scherer and M. Mehring, Entangled electron and nuclear spin states in 15N@C60: density matrix tomography, Journal of Chemical Physics 128 (2008) 052305.
[7] L.K. Grover, Quantum mechanics helps in searching for a needle in a haystack, Phys. Rev. Lett. 79 (1997) 325.


Proceeding Number: 700/47

Data Partitioning Through Piecewise Based Generalized HDMR: Univariate Case
M. Alper TUNGA, Bahcesehir University, Software Engineering Department, Istanbul, Turkey, [email protected]
Metin DEMİRALP, Istanbul Technical University, Informatics Institute, Istanbul, Turkey, [email protected]

Keywords: High dimensional model representation, data partitioning, interpolation, approximation

INTRODUCTION
The High Dimensional Model Representation (HDMR) method is a divide-and-conquer method used to represent multivariate functions in terms of less-variate functions, in order to reduce the complexity of scientific computations in computer based applications. Multivariate data modelling is a research area in which it becomes very difficult to determine analytical models through standard interpolation methods as the multivariance of the problem increases. Real-life data modelling problems usually have a multivariate training data set whose nodes are randomly distributed in the problem domain. The Generalized HDMR method can partition such a training data set and allows us to construct an analytical structure for the given data modelling problem [1]. However, the method requires a linear equation system to be solved for this purpose, and this work aims to bypass this unwanted structure by offering a piecewise algorithm to determine the sought analytical structure of the given problem. To this end, our new method follows the Generalized HDMR philosophy using only the constant term determination, which avoids the disadvantages of a linear equation system in which linearly dependent equations sometimes make the system unsolvable. In addition, using only the constant term in each subinterval and then interpolating all the constant values over the whole problem domain allows us to obtain successful approximations for data modelling problems.
LITERATURE REVIEW
HDMR was first proposed by Sobol in 1993 to estimate the sensitivity of a multivariate function, say f(x1, x2, ..., xN), with respect to different variables or their groups by considering Monte Carlo and quasi-Monte Carlo algorithms [2]. After Sobol's work, Professor H. Rabitz and his group developed various HDMR based methods for different purposes. These algorithms are called ANOVA-HDMR [3], where ANOVA is an acronym for analysis of variance as used in statistics; cut-HDMR [4], which expresses the multivariate function by using knowledge of its values on lines, planes and hyperplanes passing through a cut center, a point in the input space; and RS-HDMR [5], which is based on random sampling. In the same period, Professor M. Demiralp and his group developed several HDMR based algorithms for different areas of engineering problems [6-10]. These methods were applied to algebraic eigenvalue problem modelling, Schrödinger's equation, hyperrotation based applications, optimal control of the harmonic oscillator, the multivariate diffusion equation, exponential matrix evaluation, evolution operators, parametric sensitivity analysis, and so on. Many other scientists have developed HDMR based methods for different research areas such as probabilistic analysis [11], reliability analysis [12], modelling multiple input switching of CMOS gates [13], sensitivity analysis [14], decision making [15], black-box models [16], and so on. In addition, the HDMR method with a Dirac delta type weight [17] and the recently developed Indexing HDMR method [18] are successful algorithms for multivariate data partitioning published by the authors.


METHODS
When a multivariate function is given by its values at a finite number of nodes of a hyperprismatic regular grid, instead of by its global analytical structure, and these nodes are the elements of a cartesian product of the given individual sets of values for each independent variable, the HDMR method is used to approximately partition this multivariate data into less-variate data sets. On the other hand, data need not be given at all nodes of the hyperprismatic regular grid; it can instead be given at certain randomly distributed nodes. A certain level of incompleteness is then encountered in the HDMR method for such data sets. Generalized High Dimensional Model Representation (GHDMR) was developed for this purpose. In this method a general multivariate weight function is used instead of a product type weight function, and the multivariate data is approximately partitioned into less-variate data sets under this general weight [1]. However, the evaluation of the GHDMR components has some mathematical difficulties, such as requiring the solution of a linear equation system. This system sometimes has no solution because of linearly dependent equations, so the construction of even the univariate GHDMR components can fail. This work aims to bypass this disadvantage by using only the constant GHDMR component. The main task is to split the given problem domain into subintervals and then to evaluate the constant term for each subinterval. Finally, interpolating all constant values over the whole problem domain gives an approximation through the constant GHDMR component.

FINDINGS & CONCLUSION
In this work, to bypass the disadvantage of solving a linear equation system, we developed a piecewise based GHDMR algorithm in which we deal only with the constant component by splitting the problem domain into subintervals for the univariate case.
The determination of the constant term in each subinterval and the interpolation of these values give an approximation for the given data modelling problem. Numerical implementations show that our new method works well for the univariate case. The performance of the approximations obtained for various test functions is illustrated via a number of plots. Moreover, the results are very promising for obtaining qualified approximations through constancy in the multivariate case as future work.

REFERENCES
[1] Tunga, M. A. & Demiralp, M. (2003). Data partitioning via generalized high dimensional model representation (GHDMR) and multivariate interpolative applications. Mathematical Research 9, 447-462.
[2] Sobol, I. M. (1993). Sensitivity estimates for nonlinear mathematical models. Mathematical Modelling and Computational Experiments 1, 407-414.
[3] Rabitz, H., Alış, Ö.F., Shorter, J., & Shim, K. (1999). Efficient input-output model representations. Computer Phys. Comm. 117, 11-20.
[4] Li, G., Rosenthal, C., & Rabitz, H. (2001). High Dimensional Model Representations. J. Phys. Chem. A 105, 7765-7777.
[5] Li, G., Wang, S.-W., & Rabitz, H. (2002). Practical Approaches To Construct RS-HDMR Component Functions. J. Phys. Chem. A 106, 8721-8733.
[6] Demiralp, M. (2003). High dimensional model representation and its application varieties. Mathematical Research 9, 146-159.
[7] Tunga, B. & Demiralp, M. (2003). Hybrid high dimensional model representation approximants and their utilization in applications. Mathematical Research 9, 438-446.
[8] Tunga, B. & Demiralp, M. (2009). Constancy maximization based weight optimization in high dimensional model representation. Numerical Algorithms 52, 435-459.
[9] Yaman, İ. & Demiralp, M. (2003). High dimensional model representation applications to exponential matrix evaluation. Mathematical Research 9, 463-474.
[10] Baykara, N.A. & Demiralp, M. (2003).
Hyperspherical or Hyperellipsoidal Coordinates in the Evaluation of High Dimensional Model Representation Approximants. Mathematical Research 9, 48-62.
[11] Rao, B. N. & Chowdhury, R. (2008). Probabilistic analysis using high dimensional model representation and fast Fourier transform. International Journal for Computational Methods in Engineering Science & Mechanics 9, 342-357.
[12] Chowdhury, R. & Rao, B. N. (2009). Hybrid high dimensional model representation for reliability analysis. Computer Methods in Applied Mechanics & Engineering 198, 753-765.
[13] Sridharan, J. & Chen, T. (2006). Modeling multiple input switching of CMOS gates in DSM technology using HDMR. Proceedings of Design Automation and Test in Europe 1-3, 624-629.


[14] Ziehn, T. & Tomlin, A. S. (2008). A global sensitivity study of sulfur chemistry in a premixed methane flame model using HDMR. International Journal of Chemical Kinetics 40, 742-753.
[15] Banerjee, I. & Ierapetritou, M. G. (2004). Model independent parametric decision making. Annals of Operations Research 132, 135-155.
[16] Banerjee, I. & Ierapetritou, M. G. (2002). Design optimization under parameter uncertainty for general black-box models. Ind. Eng. Chem. Res. 41, 6687-6697.
[17] Tunga, M. A. & Demiralp, M. (2008). A new approach for data partitioning through high dimensional model representation. Int. Journal of Computer Mathematics 85, 1779-1792.
[18] Tunga, M. A. (2011). An Approximation Method to Model Multivariate Interpolation Problems: Indexing HDMR. Mathematical and Computer Modelling 53, 1970-1982.
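The piecewise constant-component idea described in the METHODS section above can be sketched in a few lines. This is our own illustration under stated assumptions (a uniform weight, equal-width subintervals, linear interpolation of the constants at subinterval midpoints), not the authors' algorithm:

```python
import numpy as np

def piecewise_constant_model(x, y, n_sub=8):
    """Sketch of the piecewise constant-component idea (univariate case).

    Splits [min(x), max(x)] into n_sub equal subintervals, takes the mean
    of the data falling in each subinterval (the 'constant term' of that
    piece, computed here with a uniform weight, which is an assumption),
    and linearly interpolates those constants over the whole domain.
    """
    edges = np.linspace(x.min(), x.max(), n_sub + 1)
    mids, consts = [], []
    for a, b in zip(edges[:-1], edges[1:]):
        mask = (x >= a) & (x <= b)
        if mask.any():
            mids.append(0.5 * (a + b))
            consts.append(y[mask].mean())
    return lambda t: np.interp(t, mids, consts)

# Demo on randomly scattered nodes of a smooth function (an assumed test
# case, standing in for the randomly distributed training nodes above).
rng = np.random.default_rng(0)
x = np.sort(rng.uniform(0.0, 1.0, 400))
y = np.sin(2.0 * np.pi * x)
model = piecewise_constant_model(x, y, n_sub=20)
err = np.max(np.abs(model(x) - y))
```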


Proceeding Number: 700/48

The Solution of Heat Problem With Collocation Method Using Cubic B-Splines Finite Element
Duygu Dönmez Demir, Celal Bayar University, Department of Mathematics, Manisa, Turkey, [email protected]
Necdet Bildik, Celal Bayar University, Department of Mathematics, Manisa, Turkey, [email protected]
Simge Öztunç, Celal Bayar University, Department of Mathematics, Manisa, Turkey, [email protected]

Keywords: Cubic B-splines finite element method, Collocation method, Cubic B-splines, Finite Element Method

INTRODUCTION
This paper deals with the solution of the one-dimensional heat equation by using the collocation method with cubic B-spline finite elements. The scheme of the method is presented and the stability analysis is carried out by the Fourier stability method. In addition, a comparative study between the numerical and the analytic solution is illustrated by the figure and the tables. The results demonstrate the reliability and the efficiency of the method.
LITERATURE REVIEW
This problem is one of the well-known linear partial differential equations [1-3]. It can be expressed as the heat flow in a rod with diffusion along the rod, where the coefficient is the thermal diffusivity and L is the length of the rod [4]. In this model, the heat flows in one dimension in a rod that is insulated everywhere except at the two end points. Solutions of this equation are functions of the position along the rod and of the time. This problem has been widely studied over a number of years by numerous authors, but it is still an interesting problem, since many physical phenomena can be formulated as PDEs with boundary conditions. The cubic B-spline collocation method was developed for Burgers' equation and has been used for the numerical solution of differential equations in [5,6,7,8]. Recently, spline function theory has been extended and developed to solve differential equations numerically in various papers [9,10,11,12,13,14,15]. Furthermore, some extraordinary problems have been numerically investigated by finite element methods such as the Galerkin method, the least squares method and the collocation method with quadratic, cubic, quintic and septic B-splines [16,17,18].
In this study the cubic B-spline collocation method is used for solving the heat equation (1) subject to the conditions (2) and (3), and the solutions are compared with the exact solution. For constructing the cubic B-spline finite element method, we use collocation techniques, as extensively done in [9,16,19,20,21-25]. In Section 2, the proposed method is presented and it is shown how to apply the collocation method with the cubic B-spline finite element technique. In Section 3, the stability analysis is investigated by means of the Fourier stability method. Finally, the numerical results and the related tables are given in the last section.
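The cubic B-splines used as trial functions in such collocation methods take simple values at the grid points. A minimal sketch, assuming the common uniform-knot convention in which the spline centred at x_j takes the (unnormalized) values 1, 4, 1 at the nodes x_{j-1}, x_j, x_{j+1}:

```python
# Hedged sketch of a uniform cubic B-spline centred at node xj with spacing h.
# The 1-4-1 nodal values are the standard uniform-knot convention; the paper's
# exact normalization may differ.

def cubic_bspline(x, xj, h):
    """Evaluate the cubic B-spline centred at xj on a uniform grid of spacing h.
    Its support is (xj - 2h, xj + 2h); nodal values are 0, 1, 4, 1, 0."""
    a = abs((x - xj) / h)
    if a >= 2:
        return 0.0
    if a >= 1:
        return (2 - a) ** 3
    return (2 - a) ** 3 - 4 * (1 - a) ** 3

h, xj = 0.1, 0.5
vals = [cubic_bspline(xj + k * h, xj, h) for k in (-2, -1, 0, 1, 2)]
# vals is approximately [0, 1, 4, 1, 0]
```

In a collocation scheme the numerical solution is written as a sum of such splines with unknown coefficients, and requiring the PDE to hold at the nodes couples each coefficient only to its two neighbours, giving a tridiagonal system per time level.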


METHODS
Let us consider the domain [0,L], equally divided by the nodal points xj so that 0 = x0 < x1 < ... < xNx = L, where hx > 0 and ht > 0 are the grid parameters. For the numerical solution of the linear parabolic problem (14)-(17), we use the following implicit monotone difference scheme
(vi,j+1 - vi,j)/ht - (a²/hx²)(vi-1,j+1 - 2vi,j+1 + vi+1,j+1) = fi,j+1, i = 1,...,Nx-1, j = 0,...,Nt-1, (18)
vi,0 = ϕi, i = 0,...,Nx; v0,j = vNx,j, vx,0,j = vx,Nx,j, j = 0,...,Nt,
which has the accuracy O(hx² + ht) on the uniform grid wht [12].
FINDINGS & CONCLUSION
In this paper, we studied the quasilinear parabolic problem with a periodic boundary condition. We constructed an iteration algorithm to obtain the numerical solution of the problem by solving the linearized problem (14)-(17) based on the monotone finite difference scheme (18). The presented computational results are consistent with the theoretical results.
REFERENCES
[1] A. Bouziani and S. Mesloub, Mixed problem with a weighted integral condition for a parabolic equation with the Bessel operator, Jour. of Appl. Math. and Stoc. Anal. 15(3), 277-286 (2002). [2] Changchun Liu, Weak solutions for a viscous p-Laplacian equation, Elec. Jour. of Diff. Equations 63, 1-11 (2003). [3] D. Colton and J. Wimp, Asymptotic behaviour of the fundamental solution to the equation of heat conduction in two temperatures, J. Math. Anal. Appl. 2: 411-418 (1979). [4] I. Ciftci and H. Halilov, Fourier method for a quasilinear parabolic equation with periodic boundary condition, Hacettepe Journal of Mathematics and Statistics, Volume 37(2), (2008). [5] H. Egger, V. Engl and M. V. Klibanov, Global uniqueness and Hölder stability for recovering a nonlinear source term in a parabolic equation, Aus. Nat. Scien. Foun., 013/08 (2001). [6] H. Halilov, On mixed problem for a class of quasilinear pseudo-parabolic equations, Journal of Kocaeli Univ., Pure and Applied Math. Sec., No. 3, December 1996, pp. 1-7. [7] H. Halilov, On mixed problem for a class of quasilinear pseudoparabolic equations.
Appl. Anal. 75(1-2): 61-71 (2000). [8] K. K. Hasanov, On the solution of a mixed problem for quasilinear hyperbolic and parabolic equations, Ph.D. Thesis, Baku, 1961.
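The implicit scheme (18) can be sketched numerically for the model equation v_t = a²·v_xx with periodic boundary conditions v(0,t) = v(L,t). The dense Gaussian-elimination solve, the choice f = 0, and the test initial data below are illustrative simplifications, not the paper's implementation:

```python
# Hedged sketch of one time level of the implicit scheme (18) for
# v_t = a^2 v_xx with periodic boundary conditions (v_Nx identified with v_0).
# f = 0 and the dense solver are simplifications for illustration.
import math

def solve_dense(A, b):
    """Gaussian elimination with partial pivoting (small systems only)."""
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            fac = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= fac * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def implicit_step(v, a, hx, ht):
    """Solve (v_new_i - v_i)/ht = (a^2/hx^2)(v_new_{i-1} - 2 v_new_i + v_new_{i+1})
    for the unknowns i = 0..Nx-1 with periodic wrap-around neighbours."""
    n = len(v)
    lam = a * a * ht / (hx * hx)
    A = [[0.0] * n for _ in range(n)]
    for i in range(n):
        A[i][i] = 1 + 2 * lam
        A[i][(i - 1) % n] -= lam   # periodic left neighbour
        A[i][(i + 1) % n] -= lam   # periodic right neighbour
    return solve_dense(A, v)

# Smooth periodic initial data sin(2*pi*x) on [0,1); the implicit scheme damps
# this mode slightly each step, staying stable for any ht (unconditionally).
n, a, ht = 16, 1.0, 1e-4
hx = 1.0 / n
v = [math.sin(2 * math.pi * i * hx) for i in range(n)]
for _ in range(10):
    v = implicit_step(v, a, hx, ht)
```

The unconditional stability seen here is exactly what distinguishes the implicit monotone scheme from an explicit one, which would require a restriction of the form ht ≲ hx².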


[9] A. Kassam and L. N. Trefethen, Fourth-order time-stepping for stiff PDEs, SIAM J. Sci. Comput. 26(4), 1214-1233 (2005). [10] V. R. Rao and T. W. Ting, Initial-value problems for pseudoparabolic partial differential equations, Indiana Univ. Math. J. 23: 131-153 (1973). [11] W. Rundell, The solution of initial-boundary value problems for pseudoparabolic partial differential equations, Proc. Roy. Soc. Edin. Sect. A 74: 311-326 (1975). [12] A. A. Samarski, The Theory of Difference Schemes, Marcel Dekker, New York, 2001.


Proceeding Number: 700/65

On Bernstein-Schoenberg Operator
Gülter BUDAKÇI, Dokuz Eylül University, Department of Mathematics, Fen Fakültesi, Tınaztepe Kampüsü, Izmir, Turkey, [email protected]
Halil ORUÇ, Dokuz Eylül University, Department of Mathematics, Fen Fakültesi, Tınaztepe Kampüsü, Izmir, Turkey, [email protected]

Keywords: Bernstein-Schoenberg Operator, B-spline, Marsden's Identity, q-integer
ABSTRACT
We first give a survey of the Bernstein-Schoenberg operator in view of both its analytical and geometric properties. Then we consider a special knot sequence based on a geometric progression and define a one-parameter family of Bernstein-Schoenberg operators. It is proved that this operator converges to f uniformly for all f in C[0,1]. It also inherits the geometric properties of the classical Bernstein-Schoenberg operator. Moreover, it is shown that the error function Em,n has a particular symmetry property, namely Em,n(f; x; q) = Em,n(f; 1-x; 1/q), provided that f is symmetric on [0,1].
INTRODUCTION
A constructive proof of Weierstrass' approximation theorem, which states that every continuous function can be approximated uniformly, as closely as we wish, by polynomials, was given in 1912 via Bernstein polynomials. These polynomials are particularly useful both for approximation and for curve and surface design purposes. Bernstein-Bézier techniques have been fundamental in computer aided geometric design (CAGD). A striking generalization of the Bernstein polynomials is the Bernstein-Schoenberg operator based on B-splines. The major breakthrough in CAGD was the B-spline methods. Note that the advantage of B-splines over Bézier methods is their locality. A spline function consists of polynomial pieces on subintervals joined together with certain continuity conditions. Spline functions are currently used in various domains of mathematics such as interpolation, approximation, wavelets, computer aided geometric design, data smoothing, and the numerical solution of differential and integral equations. In general, analysis of B-splines is performed via equally spaced knot sequences, in particular integer knots. This work introduces a one-parameter family of Bernstein-Schoenberg operators whose B-splines are formed using q-integers.
The q-integer [i] is defined by [i] = (1 - q^i)/(1 - q) if q ≠ 1 and [i] = i if q = 1, for any choice of the parameter q > 0. We then investigate the analytical and, in turn, the geometric properties of the operator.
LITERATURE REVIEW
Many of the early results on spline functions are due to Schoenberg. In [12] B-splines are defined as a basis of smallest possible support for spline spaces. He then introduced a spline approximation operator which generalizes the Bernstein polynomial. The discovery of the recurrence relation for B-splines by de Boor [2] escalated research on splines. We note that in approximation theory it is often useful to have an approximation operator based on a continuous function f that is not only close to f but also has a graph whose shape is similar to that of the graph of f, namely a shape-preserving operator. Like the Bernstein polynomials, the Bernstein-Schoenberg operator is variation diminishing and therefore has certain shape-preserving properties. That is, if the underlying function is monotone, so is its Bernstein-Schoenberg spline function. Furthermore, it yields a convex function whenever f is convex. Therefore the Bernstein-Schoenberg operator mimics the shape of the function.
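The q-integer defined above is straightforward to compute; a minimal sketch of the definition as stated:

```python
# The q-integer [i]: (1 - q^i)/(1 - q) for q != 1, and i itself for q = 1,
# i.e. the geometric sum 1 + q + q^2 + ... + q^(i-1).

def q_integer(i, q):
    """Return [i]_q for parameter q > 0."""
    if q == 1:
        return i
    return (1 - q ** i) / (1 - q)

# [3]_2 = 1 + 2 + 4 = 7; as q -> 1 the q-integer recovers the ordinary integer i.
```

Knot sequences built from such q-integers form a geometric progression of spacings, which is the "special knot sequence" on which the one-parameter family of operators is defined.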


METHODS
The Bohman-Korovkin theorem is used to show the uniform convergence of the Bernstein-Schoenberg operator. For a sequence of positive operators which reproduce linear functions, it is enough to show that the operators converge uniformly for the function x². The analysis of the symmetry property of the error function makes use of divided differences of the truncated power function and also of Marsden's identity. The results concerning the convexity of the operator follow from Jensen's inequality for convex functions.
FINDINGS & CONCLUSION
It is shown that many properties of the classical Bernstein-Schoenberg operator carry over to the q-parametric Bernstein-Schoenberg operator, and the value q = 1 reduces the latter operator to the classical one. In the last two decades, the theory of uniform B-splines has led to the study of spline wavelets. Because it is tedious to work with B-splines on a general knot sequence except in the uniform case, further investigation of both B-splines with q-integers and the q-parametric Bernstein-Schoenberg operator might be worth attempting.
REFERENCES
[1] Andrews, G. E. (1998). The Theory of Partitions. Cambridge: Cambridge University Press. [2] de Boor, C. (1972). On calculating with B-splines. J. Approximation Theory, 6(1), 50-62. [3] Goodman, T. N. T. (1994). Total positivity and the shape of curves. Total Positivity and Its Applications, 157-186. [4] Goodman, T. N. T. & Sharma, A. (1985). A property of Bernstein-Schoenberg spline operators. Proceedings of the Edinburgh Mathematical Society, 28, 333-340. [5] Koçak, Z. & Phillips, G. M. (1994). B-splines with geometric knot spacing. BIT, 34, 388-399. [6] Marsden, M. & Schoenberg, I. J. (1966). On variation diminishing spline approximation methods. Mathematica, 8(31), 61-82. [7] Marsden, M. (1970). An identity for spline functions with applications to variation-diminishing spline approximation. J. Approximation Theory, 3, 7-49. [8] Oruç, H. & Phillips, G. M. (2003).
q-Bernstein polynomials and Bézier curves. Journal of Computational and Applied Mathematics, 151, 1-12. [9] Phillips, G. M. (1997). Bernstein polynomials based on the q-integers. Annals of Numerical Mathematics, 4, 511-518. [10] Phillips, G. M. (2003). Interpolation and Approximation by Polynomials. New York: Springer-Verlag. [11] Phillips, G. M. (2010). A survey of results on the q-Bernstein polynomials. IMA Journal of Numerical Analysis, 30(1), 277-288. [12] Schoenberg, I. J. (1967). On spline functions. In Proceedings of a Symposium (O. Shisha, Ed.), Academic Press, New York, 255-294. [13] Webster, R. (1994). Convexity. Oxford: Oxford University Press.
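The Korovkin test on x² mentioned in the METHODS section can be illustrated numerically with the classical Bernstein polynomials, the simplest instance of the operator (used here as a stand-in for the spline case, not as the paper's actual construction):

```python
# Korovkin test function x^2 for the classical Bernstein operator: B_n
# reproduces 1 and x exactly, and B_n(t^2; x) = x^2 + x(1-x)/n -> x^2,
# which by the Bohman-Korovkin theorem gives uniform convergence for all
# continuous f on [0,1].
from math import comb

def bernstein(f, n, x):
    """Evaluate the degree-n Bernstein polynomial of f at x in [0, 1]."""
    return sum(f(k / n) * comb(n, k) * x ** k * (1 - x) ** (n - k)
               for k in range(n + 1))

# The error at x is exactly x(1-x)/n, so it shrinks like 1/n.
x = 0.3
err_10 = bernstein(lambda t: t * t, 10, x) - x * x    # x(1-x)/10 = 0.021
err_100 = bernstein(lambda t: t * t, 100, x) - x * x  # x(1-x)/100 = 0.0021
```

The same three test functions (1, x, x²) drive the uniform-convergence proof for the q-parametric Bernstein-Schoenberg operator; only the evaluation of the operator on x² changes.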


Proceeding Number: 700/66

A Matrix Method for Approximate Solution of Pantograph Equations in Terms of Boubaker Polynomials
Tuğçe AKKAYA, Celal Bayar University, Mathematics, Manisa, Turkey, [email protected]
Salih YALÇINBAŞ, Celal Bayar University, Mathematics, Manisa, Turkey, [email protected]
Mehmet SEZER, Mugla University, Mathematics, Mugla, Turkey, [email protected]
Keywords: Pantograph equation, Boubaker series and polynomials, Boubaker matrix method.
INTRODUCTION
This study is concerned with a generalization of a functional differential equation known as the pantograph equation (with constant and variable coefficients), which contains a linear functional argument (with retarded and advanced cases or with proportional delays). Functional differential equations with proportional delays are usually referred to as pantograph equations or generalized pantograph equations. The name "pantograph" originated from the work of Ockendon and Tayler on the collection of current by the pantograph head of an electric locomotive [1, 2]. In this article, we introduce a matrix method based on the Boubaker polynomials [3-7] and collocation points for the approximate solution of pantograph equations. The mentioned Boubaker polynomials were first introduced by Boubaker et al. (2006) as a guide for solving a one-dimensional formulation of the heat transfer equation. The matrix method we use is illustrated on problems with initial conditions, and the obtained results together with an error analysis are presented.
LITERATURE REVIEW
In recent years, pantograph equations have been studied by many authors, who have investigated both their analytical and numerical aspects [8-10]. These equations are characterized by the presence of a linear functional argument and play an important role in explaining many different phenomena. In particular, they arise in industrial applications [11] and in studies based on biology, economy, control theory, astrophysics, nonlinear dynamical systems, cell growth and electrodynamics, among others [1, 2, 11, 12].
Properties of the analytical and numerical solutions of pantograph equations have been studied by several authors [13-17]. On the other hand, the Boubaker polynomials and the BPES have also been studied. This is of interest not only because of their applications in applied physics and mathematics, but also because the method used can be applied to solve problems in chemistry, biology, mechanics and medicine. The Boubaker polynomials expansion has been used in many applied physics problems [3-7]. The Boubaker polynomials expansion scheme (BPES) (Oyodum et al., 2009; Awojoyogbe and Boubaker, 2008; Ghanouchi et al., 2008; Slama et al., 2008, 2009a, 2009b; Fridjine et al., 2009; Fridjine and Amlouk, 2009; Tabatabaei et al., 2009; Zhao et al., 2008; Chaouachi et al., 2007; Belhadj et al., 2009; Ghrib et al., 2008; Guezmir et al., 2009; Labiadh and Boubaker, 2007) is an analytical resolution protocol published by Oyodum et al. (2009). It has been successfully used in many applied physics studies, e.g. the models presented by Awojoyogbe and Boubaker (2008) in the field of organic tissue modelling, the works of Ghanouchi et al. (2008) on heat transfer modelling systems, the recent works presented by Slama et al. (2008, 2009a, 2009b), who developed a numerical model for the spatial time-dependent evolution of the A3 melting point in C40 steel material during a particular sequence of resistance spot welding, the Boubaker polynomials expansion scheme-related analytical solutions to the Williams-Brinkmann stagnation point flow equation at a blunt body by Zhang and Li (2009), the studies of Fridjine et al. (2009) on semiconductor materials and the related works of Tabatabaei et al. (2009) [18].


The investigation of the roots of the Boubaker polynomials remains incomplete. In fact, in contrast to the number and location of these roots, their analytical expressions seem difficult to establish, and only limited information can be provided.
METHODS
Recently, the following numerical methods for the solution of multi-pantograph and generalized pantograph equations have been presented: the Runge-Kutta method by Li and Liu [13], Taylor methods by Sezer et al. [14-16] and the variational iteration method by Saadatmandi and Dehghan [17]. The basic motivation of this work, in view of the mentioned studies, is to develop a new numerical method based on matrix representations of the Boubaker polynomials and to apply this method to generalized pantograph equations with initial conditions, in order to obtain the solution as a truncated Boubaker series.
FINDINGS – CONCLUSION
A new matrix method based on the Boubaker polynomials is developed to numerically solve higher-order pantograph equations and multi-pantograph equations with initial conditions. Comparison of the results obtained by the present method with those of other methods reveals that the present method is very effective and convenient. Numerical results show that the accuracy improves when the truncation limit of the Boubaker series is increased, and the errors then decrease more rapidly. The method has the greatest advantage when the known functions in the equation can be expanded in a Boubaker series. Another considerable advantage of the method is that the Boubaker coefficients of the solution function are found very easily by using computer programs. Moreover, this method is applicable to the approximate solution of pantograph-type Volterra functional integro-differential equations with variable delays, and of higher-order differential-difference and integro-differential-difference equations.
REFERENCES
[1] J. R. Ockendon and A. B. Tayler, (1971).
The dynamics of a current collection system for an electric locomotive, Proc. Roy. Soc. London Ser. A 322, 447-468. [2] A. B. Tayler, (1986). Mathematical Models in Applied Mathematics, Clarendon Press, Oxford, pp. 40-53. [3] K. Boubaker, (2007). Trends Appl. Sci. Res. 2, 540-544. [4] J. Ghanouchi, H. Labiadh, K. Boubaker, (2008). Int. J. Heat Technol. 26(1), 49-53. [5] O. B. Awojoyogbe, K. Boubaker, (2009). Curr. Appl. Phys. 9, 278-283. [6] K. Boubaker, (March 2007). Les Polynômes de Boubaker, Deuxièmes Journées Méditerranéennes de Mathématiques Appliquées JMMA02, Monastir, TUNISIE. [7] S. Slama, J. Bessrour, K. Boubaker, M. Bouhafs, (2008). COTUME 2008, pp. 79-80. [8] M. Z. Liu and D. Li, (2004). Properties of analytic solution and numerical solution of multi-pantograph equation, Applied Math. and Computation 155, 853-871. [9] G. Derfel and A. Iserles, (1997). The pantograph equation in the complex plane, J. Math. Anal. Appl. 213, 117-132. [10] Y. Muroya, E. Ishiwata and H. Brunner, (2003). On the attainable order of collocation methods for pantograph integro-differential equations, J. Comput. Appl. Math. 152, 347-366. [11] W. G. Ajello, H. I. Freedman and J. Wu, (1992). A model of stage structured population growth with density dependent time delay, SIAM J. Appl. Math. 52, 855-869. [12] M. D. Buhmann and A. Iserles, (1993). Stability of the discretized pantograph differential equation, Math. Comput. 60, 575-589. [13] D. Li, M. Z. Liu, (2000). Runge-Kutta methods for multi-pantograph delay equation, App. Math. and Comput. 163(1), 383-395. [14] M. Sezer, S. Yalçınbaş, N. Şahin, (2008). Approximate solution of multi-pantograph equation with variable coefficients, J. Comput. and App. Math. 214(2), 406-416. [15] M. Sezer, A. A. Daşçıoğlu, (2007). A Taylor method for numerical solution of generalized pantograph equations with linear functional argument, J. Comput. and App. Math. 200(1), 217-225. [16] M. Sezer, S. Yalçınbaş, M. Gulsu, (2008).
A Taylor polynomial approach for solving generalized pantograph equations with nonhomogeneous term, Int. J. Comput. Math. 85(7), 1055-1063. [17] A. Saadatmandi, M. Dehghan, (2009). Variational iteration method for solving a generalized pantograph equation, 58(11-12), 2190-2196. [18] B. Dubey, T. G. Zhao, M. Jonsson, H. Rahmanov, (2010). A solution to the accelerated-predator-prey problem using Boubaker polynomial expansion scheme, 264(1), 154-160.
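The Boubaker polynomials underlying the matrix method can be generated by a three-term recurrence. The sketch below uses the commonly cited form B_0 = 1, B_1 = x, B_2 = x² + 2 and B_m = x·B_{m-1} − B_{m-2} for m > 2; this should be checked against the exact definition in [3-7] before relying on it:

```python
# Hedged sketch: evaluate Boubaker polynomials by the commonly cited
# three-term recurrence (verify against the definition used in [3-7]).

def boubaker(m, x):
    """B_0 = 1, B_1 = x, B_2 = x^2 + 2, B_m = x*B_{m-1} - B_{m-2} for m > 2."""
    if m == 0:
        return 1.0
    if m == 1:
        return x
    prev, cur = x, x * x + 2        # B_1, B_2
    for _ in range(3, m + 1):
        prev, cur = cur, x * cur - prev
    return cur

# Under this recurrence, B_3(x) = x^3 + x and B_4(x) = x^4 - 2.
```

In a matrix collocation method of the kind described, the truncated series u(x) ≈ Σ a_m B_m(x) is evaluated at collocation points, turning the pantograph equation into a linear algebraic system for the coefficients a_m.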


Proceeding Number: 700/67

Fiber Bundles in Digital Images
Ismet KARACA, Ege University, Department of Mathematics, Izmir, Turkey, [email protected]
Tane VERGILI, Ege University, Department of Mathematics, Izmir, Turkey, [email protected]
Keywords: Digital bundles, digital homotopy groups, fiber bundles
INTRODUCTION
Digital image processing is a growing discipline with applications in industry, medicine, meteorology and geology, among other fields. Digital topology deals with properties of two- and three-dimensional digital images that correspond to topological properties of objects. The main purpose of digital topology is to study topological properties of discrete objects, and it plays an important role in computer vision, image processing and computer graphics. Researchers in this area (Rosenfeld, Kong, Kopperman, Kovalevsky, Malgouyres, Boxer, Chen, Rong, Han, Karaca and others) have worked to determine the properties of digital images with tools from topology (especially algebraic topology). Homotopy theory deals with properties of spaces which are invariant under continuous deformations. Translating homotopy from general topology to discrete structures raises a number of questions which have not been solved in a satisfactory way. Topological invariants are very useful in digital images and geometric modeling. Fiber bundles play an important role in geometry quite apart from their nice homotopy properties. It is well known that knowledge of the digital homotopy group is a very important tool for image analysis. A general algorithm to decide whether two digital images have isomorphic homotopy groups would be a very powerful tool for image analysis. Therefore, fiber bundles in digital images will be a very useful tool for determining digital homotopy groups.
LITERATURE REVIEW
In 1935 Alexandroff and Hopf published a textbook in which an axiomatic basis was given for the theory of cell complexes (so-called combinatorial topology).
This theory was developed in order to overcome difficult problems of topology. In 1937 Alexandroff published a paper on the same subject in which the term "discrete topology" was used in the title [1]. In 1977 Khalimsky investigated connected topological spaces. In 1979, the term "digital topology" was first introduced by A. Rosenfeld, a researcher in computer vision analysis. V. Kovalevsky gave a sound foundation for digital topology in [17]. The digital fundamental group of discrete objects, defined by Kong, is extremely useful in applications of digital imaging. Boxer [4] used the classical methods of algebraic topology to construct digital fundamental groups. The digital fundamental group of a digital covering space was introduced by Han in [11]. The existence and properties of digital universal covering spaces were derived by Boxer in [2]. Boxer and Karaca [5] obtained digital versions of covering spaces from algebraic topology; they studied digital covering spaces classified by conjugacy classes in [6], and in [7] some properties of the group of automorphisms of a digital covering space, inspired by analogues in algebraic topology. In [9], Chen and Rong designed linear time algorithms to determine topological invariants such as the genus and homology groups in 3D.
METHODS
This presentation is organized as follows. In the preliminary part we give some basic definitions such as digital k-adjacencies, digital (k0,k1)-continuous functions, digital (k0,k1)-homeomorphisms, digitally (k0,k1)-homotopic functions, digital k-contractibility, and digital (k0,k1)-covering maps. We then introduce the notions of a "digital fiber bundle" and a "digital trivial fiber bundle", and investigate the relationship between digital covering spaces and digital fiber bundles. We also define a "digital bundle map" between digital fiber bundles and an equivalence relation on digital fiber bundles with the assistance of digital fiber bundle maps. Furthermore we give a


definition of a "digital pullback bundle" and study the homotopy invariance of digital pullback bundles. As a result we obtain some conclusions.
FINDINGS & CONCLUSION
We see that if two digital continuous maps are homotopic, then their digital pullback bundles are equivalent. We investigate a relationship between a digital contractible space and a digital trivial fiber bundle. We believe that this will be a helpful tool in determining the digital homotopy groups of digital images. This study also leads us toward a digital version of vector bundles, which are essential for gauge theory and represent "linearizations" of the nonlinear structure of manifolds, and so are in many ways much easier to work with than the base spaces.
REFERENCES
[1] Alexandroff, P.: Diskrete Räume, Matematiceskiy Sbornik (Recueil Mathématique) 2(44), N. 3: 502-519, 1937. [2] Boxer, L.: Digital products, wedges, and covering spaces, Journal of Mathematical Imaging and Vision 25 (2006), 159-171. [3] Boxer, L.: Digitally continuous functions, Pattern Recognit. Lett. 15, 833-839 (1994). [4] Boxer, L.: A classical construction for the digital fundamental group, J. Math. Imaging Vis. 10, 51-62 (1999). [5] Boxer, L., Karaca, I.: Some properties of digital covering spaces, Journal of Mathematical Imaging and Vision 37 (2010), 17-26. [6] Boxer, L., Karaca, I.: The classification of digital covering spaces, Journal of Mathematical Imaging and Vision 32 (2008), 23-29. [7] Boxer, L., Karaca, I.: Properly discontinuous actions in a digital covering space, Preprint 2011. [8] Chen, L.: Discrete Surfaces and Manifolds: A Theory of Digital-Discrete Geometry and Topology, Scientific and Practical Computing, Rockville (2004). [9] Chen, L., Rong, Y.: Linear time recognition algorithms for topological invariants in 3D, Proceedings of the International Conference on Pattern Recognition, 2008. [10] Cohen, R. L.: The Topology of Fiber Bundles, Lecture Notes, Department of Mathematics, Stanford University.
[11] Han, S. E.: Non-product property of the digital fundamental group, Information Sciences 171, 73-91 (2005). [12] Hatcher, A.: Vector Bundles and K-Theory, Version 2.1 (May 2009). [13] Husemoller, D.: Fiber Bundles, Springer-Verlag, Third Edition, 1966. [14] Kong, T. Y.: A digital fundamental group, Comput. Graph. 13, 159-166 (1989). [15] Kong, T. Y., Kopperman, R.: On storage of topological information, Discrete Applied Mathematics 147(2-3) (2005), 287-300. [16] Kopperman, R.: On storage of topological information, Discrete Applied Mathematics 147(2-3) (2005), 287-300. [17] Kovalevsky, V.: Finite topology as applied to image analysis, Computer Vision, Graphics and Image Processing 45: 141-161 (1989). [18] Malgouyres, R.: Homotopy in two-dimensional digital images, Theoretical Computer Science 230 (2000), 221-233. [19] Spanier, E. H.: Algebraic Topology, McGraw-Hill (1966). [20] Steenrod, N.: The Topology of Fiber Bundles, Princeton University Press, 1960.
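The digital k-adjacencies listed among the preliminaries can be made concrete in Z². A minimal sketch of the standard 4- and 8-adjacency tests, using the usual coordinate-wise formulation (stated here as an assumption about the definitions the paper adopts):

```python
# Sketch of the standard 4- and 8-adjacency relations on Z^2, the simplest
# of the digital k-adjacencies used in digital topology.

def adjacent_8(p, q):
    """Distinct lattice points whose coordinates each differ by at most 1."""
    return p != q and all(abs(a - b) <= 1 for a, b in zip(p, q))

def adjacent_4(p, q):
    """8-adjacent points that differ in exactly one coordinate."""
    return adjacent_8(p, q) and sum(a != b for a, b in zip(p, q)) == 1

# (0,0) and (1,1) are 8-adjacent but not 4-adjacent;
# (0,0) and (0,1) are both 4- and 8-adjacent.
```

A digital image is then a set of lattice points together with such an adjacency relation, and notions like (k0,k1)-continuity compare the adjacency k0 on the domain with the adjacency k1 on the codomain.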


Proceeding Number: 700/68

On Numerical Solution of Multipoint Nonlocal Hyperbolic-Parabolic Equations with Neumann Condition
Allaberen ASHYRALYEV, Fatih University, Department of Mathematics, Istanbul, Turkey, [email protected]
Yildirim OZDEMİR, Duzce University, Department of Mathematics, Duzce, Turkey, [email protected]
Keywords: Nonlocal boundary value problem, hyperbolic-parabolic equation, difference scheme, stability
INTRODUCTION
It is known that most problems in fluid mechanics (dynamics, elasticity), mathematical biology and other areas of physics lead to partial differential equations of hyperbolic-parabolic type. These equations, which can be derived as models of physical systems or of mathematical biology, are considered together with boundary value problems. It is well known that the mixed problem for hyperbolic-parabolic equations can be solved by the Fourier series method, by the Laplace transform method and by the Fourier transform method. However, all of these analytical methods can be used only in the case of constant coefficients. The most useful method for solving partial differential equations with coefficients depending on t and on the space variables is the difference method. Methods of solution of nonlocal boundary value problems for hyperbolic-parabolic differential equations have been studied extensively by many researchers (see, for example, [1]-[12] and the references given therein). The method of operators as a tool for the investigation of the solution of hyperbolic-parabolic differential equations in Hilbert and Banach spaces has been systematically developed by several authors (see, for example, [13]-[15] and the references given therein).
There is a large cycle of works on difference schemes for hyperbolic-parabolic partial differential equations (see, for example, [16]-[17] and the references given therein) in which stability was established under the assumption that the magnitudes of the grid steps τ and h with respect to the time and space variables are connected. In abstract terms this means, in particular, that the condition that the product of the time increment and the norm of the difference operator approaches zero as the time increment approaches zero is satisfied. Of great interest is the study of absolutely stable difference schemes of first and second order of accuracy for hyperbolic-parabolic partial differential equations, in which stability is established without any assumptions relating the time and space grid increments. In the paper [18], stability estimates for the solution of the nonlocal boundary value problem for differential equations in a Hilbert space H with a self-adjoint positive definite operator A were considered and established. In applications, stability estimates for the solutions of mixed-type boundary value problems for hyperbolic-parabolic equations were obtained. In the paper [19], a numerical method is proposed for solving multi-dimensional hyperbolic-parabolic differential equations with a nonlocal boundary condition in t and a Dirichlet condition in the space variables. The first and second order of accuracy difference schemes are presented, and stability estimates for the solutions of the difference schemes of the nonlocal boundary value problem are established in the case of a one-dimensional hyperbolic-parabolic differential equation with coefficients variable in x. In the present paper, the multipoint nonlocal boundary value problem for a multi-dimensional hyperbolic-parabolic equation with a Neumann condition is considered.
The first and second order of accuracy difference schemes are presented, and stability estimates are established for the solutions of these difference schemes of the nonlocal boundary value problem in the case of a one-dimensional hyperbolic-parabolic equation. The method is illustrated by numerical examples. For the approximate solution of the nonlocal boundary value problem, the first and second orders of accuracy

June 1 -4, Kusadasi, Aydin, Turkey http://iscse2011.gediz.edu.tr

602

2 nd International Symposium on Computing in Science & Engineering

difference schemes with various values of the time and space increments are used. This leads to second- and fourth-order difference equations with respect to n with matrix coefficients. In order to solve these difference equations, a modified Gauss elimination procedure is applied. The results of the numerical experiments show that the second order of accuracy difference schemes are more accurate than the first order of accuracy difference scheme.

REFERENCES
[1] G. Vallet (2003). Weak entropic solution to a scalar hyperbolic-parabolic conservation law. Rev. R. Acad. Cien. Ser. A. Math., 97, 147-152.
[2] S. N. Glazatov (1998). Nonlocal Boundary Value Problems for Linear and Nonlinear Equations of Variable Type. Sobolev Institute of Mathematics SB RAS, Preprint no. 46.
[3] G. D. Karatoprakliev (1989). On a nonlocal boundary value problem for hyperbolic-parabolic equations. Diff. Urav., 25, 1355-1359. (Russian)
[5] A. Gerish, M. Kotschote and R. Zacher (2004). Well-posedness of a Quasilinear Hyperbolic-Parabolic System Arising in Mathematical Biology. Report on Analysis and Numerical Mathematics, no. 04-24, Martin-Luther-Universitat Halle-Wittenberg, Germany.
[6] V. N. Vragov (1983). Boundary Value Problems for Nonclassical Equations of Mathematical Physics. Textbook for Universities, Novosibirsk: NGU. (Russian)
[7] A. M. Nakhushev (1995). Equations of Mathematical Biology. Textbook for Universities, Moscow: Vysshaya Shkola. (Russian)
[8] J. I. Ramos (2006). Linearly-implicit, approximate factorization, exponential methods for multidimensional reaction-diffusion equations. App. Math. and Comp., 174, 1609-1633.
[9] X. Z. Liu, X. Cui and J. G. Sun (2006). FDM for multi-dimensional nonlinear coupled system of parabolic and hyperbolic equations. J. Comp. and App. Math., 186, 432-449.
[10] A. S. Berdyshev and E. T. Karimov (2006).
Some non-local problems for the parabolic-hyperbolic type equation with non-characteristic line of changing type. Cent. Eur. J. Math., 4, 183-193.
[11] M. S. Salakhitdinov and A. K. Urinov (1997). Boundary Value Problems for Equations of Mixed-Composite Type with a Spectral Parameter. Tashkent: FAN. (Russian)
[12] T. D. Dzhuraev (1978). Boundary Value Problems for Equations of Mixed and Mixed-Composite Types. Tashkent: FAN. (Russian)
[13] D. Bazarov and H. Soltanov (1995). Some Local and Nonlocal Boundary Value Problems for Equations of Mixed and Mixed-Composite Types. Ashgabat: Ilim. (Russian)
[14] H. O. Fattorini (1985). Second Order Linear Differential Equations in Banach Spaces. North-Holland: Elsevier Science Publishing Company.
[15] J. A. Goldstein (1985). Semigroups of Linear Operators and Applications. Oxford Mathematical Monographs, New York: The Clarendon Press, Oxford University Press.
[16] S. G. Krein (1966). Linear Differential Equations in Banach Space. Moscow: Nauka. (Russian)
[17] A. Ashyralyev and A. Yurtsever (2005). A note on the second order of accuracy difference schemes for hyperbolic-parabolic equations. App. Math. and Comp., 165, 517-537.
[18] A. Ashyralyev and M. B. Orazov (1999). The theory of operators and the stability of difference schemes for partial differential equations of mixed types. Firat University, Fen ve Muh. Bil. Der., 11, 249-252.
[19] A. Ashyralyev and Y. Ozdemir (2007). On nonlocal boundary value problems for hyperbolic-parabolic equations. Taiw. J. Math., 11, 1077-1091.
[20] A. Ashyralyev and Y. Ozdemir (2010). A note on difference schemes of nonlocal boundary-value problems for hyperbolic-parabolic equations. AIP Conf. Pro., 1309, 725-738.
[21] P. E. Sobolevskii (1975). Difference Methods for the Approximate Solution of Differential Equations. Voronezh: Izdat. Voronezh. Gosud. Univ. (Russian)
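For readers implementing these schemes, the modified Gauss elimination invoked above reduces, in the scalar case, to the standard forward-elimination/back-substitution sweep for tridiagonal systems (the Thomas algorithm); the papers cited work with matrix coefficients, for which the same sweep is carried out blockwise. A minimal scalar sketch (function name ours):

```python
def thomas_solve(a, b, c, d):
    """Solve a tridiagonal system: a = sub-diagonal, b = diagonal,
    c = super-diagonal, d = right-hand side (all length n; a[0] and
    c[-1] are unused).  Forward elimination, then back substitution."""
    n = len(b)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]            # pivot after elimination
        cp[i] = c[i] / m if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):             # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x
```

In the block (matrix-coefficient) variant used for the second- and fourth-order difference equations, the divisions become inversions of small matrices and the products become matrix products.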


Proceeding Number: 700/69

On de Casteljau Type Algorithms

Çetin Dişibüyük, Dokuz Eylül University, Department of Mathematics, Fen Fakültesi, Tınaztepe Kampüsü, Izmir, Turkey, [email protected]
Halil Oruç, Dokuz Eylül University, Department of Mathematics, Fen Fakültesi, Tınaztepe Kampüsü, Izmir, Turkey, [email protected]

Keywords: q-Bernstein polynomials, de Casteljau algorithm, w,q-Bernstein polynomials

ABSTRACT
We give a new identity which enables us to represent q-Bernstein polynomials in terms of the intermediate points of their de Casteljau type algorithm. We also give an explicit form of the intermediate points in terms of w,q-Bernstein polynomials. Finally, the change of basis matrix between q-Bernstein polynomials and w,q-Bernstein polynomials is obtained.

INTRODUCTION
One of the most important mathematical representations of curves and surfaces used in computer graphics and computer-aided geometric design (CAGD) is the Bézier representation. These curves were first used to design automobile bodies. A parametric Bézier curve of degree n is defined by

P(t) = \sum_{i=0}^{n} b_i \binom{n}{i} t^i (1-t)^{n-i}, t \in [0,1], b_i \in E^2 or E^3,

where E^n denotes n-dimensional Euclidean space. The points b_i are called the control points, and the polygon obtained by joining the control point b_i to the control point b_{i+1} for i = 0, 1, ..., n-1 is called the control polygon. The reason for the popularity of Bézier curves in CAGD is that the points b_i give information about the shape of the polynomial curve P(t). These polynomials have many shape-preserving properties, and they are useful not only in CAGD but also in approximation, since any continuous function can be approximated uniformly, as closely as we wish, by Bernstein polynomials. Although Bézier curves were first publicized in 1962, Paul de Casteljau had already developed them in 1959 using an algorithm that gives a point on the curve. The most powerful advantage of this method is that it allows any Bézier curve to be subdivided, making the representation more flexible.
For the given points b_0, ..., b_n and t \in R, this algorithm is

b_i^r(t) = (1-t) b_i^{r-1}(t) + t b_{i+1}^{r-1}(t), for r = 1, ..., n and i = 0, ..., n-r.

This work is concerned with generalizations of Bézier polynomials introduced in [3] and [11] and their de Casteljau type algorithms.

LITERATURE REVIEW
A great deal of research has appeared on q-Bernstein Bézier polynomials since they were first introduced by G. M. Phillips as a generalization of Bernstein Bézier polynomials; based on the q-integers, he proposed the q-Bernstein polynomials (see [3]). It is shown that, under certain conditions, the q-Bernstein polynomials of a function f \in C[0,1] converge uniformly to the function itself. Convergence properties of the generalized Bernstein polynomials have also been investigated for the case q > 1 (see, for example, [10]). A more general version with two parameters, which leads to a connection with q-Jacobi polynomials, is introduced in [11].
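The classical recurrence above translates directly into code; the sketch below (function name ours) evaluates P(t) by repeated linear interpolation of the control points. The q- and w,q-generalizations of [2], [3], [11] replace the weights 1-t and t by q-dependent weights, which we do not reproduce here:

```python
def de_casteljau(points, t):
    """Evaluate the Bezier curve with control points b_0..b_n at t via the
    recurrence b_i^r = (1-t) * b_i^{r-1} + t * b_{i+1}^{r-1}."""
    b = [tuple(map(float, p)) for p in points]
    n = len(b) - 1
    for r in range(1, n + 1):
        # one de Casteljau stage: n-r+1 new points from pairwise interpolation
        b = [tuple((1.0 - t) * u + t * v for u, v in zip(b[i], b[i + 1]))
             for i in range(n - r + 1)]
    return b[0]  # b_0^n(t) = P(t)
```

For example, for the quadratic with control points (0,0), (1,2), (2,0), the midpoint value is P(1/2) = (1,1), and the intermediate points b_i^r generated along the way are exactly the control points of the two subdivided halves of the curve.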


METHODS
A new identity is proved by induction to obtain a new formula for q-Bernstein Bézier curves in terms of the intermediate points of their de Casteljau type algorithm. We show inductively the explicit form of the intermediate points of the de Casteljau type algorithm for w,q-Bernstein Bézier curves. Finally, we find a change of basis matrix between q-Bernstein Bézier curves and w,q-Bernstein Bézier curves by obtaining a transformation matrix between their basis functions.

CONCLUSION
Determining the intermediate points of q-Bernstein Bézier curves in terms of lower-degree bases may lead to a more general representation, the blossom of q-Bernstein Bézier curves. In practice, having a change of basis matrix makes it possible to interchange the representations of the same curve. These ideas can be extended to surface representations in terms of w,q-Bernstein polynomials.

REFERENCES
[1] Farin, G. (2002). Curves and Surfaces for CAGD: A Practical Guide (5th ed.). USA: Academic Press.
[2] Phillips, G.M. (1996). A de Casteljau algorithm for generalized Bernstein polynomials. BIT, 36(1), 232-236.
[3] Phillips, G.M. (1997). Bernstein polynomials based on the q-integers. Annals of Numerical Mathematics, 4, 511-518.
[4] Dişibüyük, Ç. & Oruç, H. (2007). A generalization of rational Bernstein-Bézier curves. BIT Numerical Mathematics, 47(2), 313-323.
[5] Dişibüyük, Ç. & Oruç, H. (2008). Tensor product q-Bernstein polynomials. BIT Numerical Mathematics, 48(4), 689-700.
[6] Goodman, T.N.T., Oruç, H. & Phillips, G.M. (1999). Convexity and generalized Bernstein polynomials. Proceedings of the Edinburgh Mathematical Society, 42, 179-190.
[7] Oruç, H. & Phillips, G.M. (1999). A generalization of the Bernstein polynomials. Proceedings of the Edinburgh Mathematical Society, 42, 403-413.
[8] Oruç, H. & Phillips, G.M. (2003). q-Bernstein polynomials and Bézier curves. Journal of Computational and Applied Mathematics, 151, 1-12.
[9] Oruç, H. & Akmaz, H.K. (2004).
Symmetric functions and the Vandermonde matrix. Journal of Computational and Applied Mathematics, 172, 49-64.
[10] Oruç, H. & Tuncer, N. (2002). On the convergence and iterates of q-Bernstein polynomials. Journal of Approximation Theory, 117(2), 301-313.
[11] Lewanowicz, S. & Wozny, P. (2004). Generalized Bernstein polynomials. BIT Numerical Mathematics, 44(1), 63-78.
[12] Phillips, G.M. (2008). A survey of results on the q-Bernstein polynomials. IMA Journal of Numerical Analysis, 30(1), 277-288.


Proceeding Number: 700/70

Numerical Solution of the Inverse Problem of Finding the Time-dependent Diffusion Coefficient of the Heat Equation from Integral Overdetermination Data

Fatma KANCA, Kocaeli University, Department of Mathematics, Kocaeli, [email protected]
Mansur I. ISMAILOV, Gebze Institute of Technology, Department of Mathematics, Gebze, Kocaeli, [email protected]

Keywords: Heat equation; inverse problem; nonlocal boundary conditions; integral overdetermination condition; time-dependent coefficient; finite difference method

INTRODUCTION
Suppose that one needs to determine the temperature distribution u(x,t) and the thermal coefficient a(t) simultaneously satisfying the equation

u_t = a(t)u_xx + F(x,t), x∈(0,1), t∈(0,T], (1)

with the initial condition

u(x,0) = ϕ(x), 0≤x≤1, (2)

the boundary conditions

u(0,t) = u(1,t), u_x(1,t) = 0, 0≤t≤T, (3)

and the overdetermination condition

∫₀¹ u(x,t) dx = E(t), 0≤t≤T. (4)

The problem of finding the pair {a(t), u(x,t)} in (1)-(4) will be called an inverse problem. Denote the domain Q_T = {(x,t) : x∈(0,1), t∈(0,T]}. A pair {a(t), u(x,t)} from the class C[0,T] × C^{2,1}(Q_T)∩C^{1,0}(Q_T), for which conditions (1)-(4) are satisfied and a(t) > 0 on the interval [0,T], is called a classical solution of the inverse problem (1)-(4). The existence and uniqueness of the classical solution of the inverse problem (1)-(4) are investigated in [1], where a finite-difference solution of the inverse problem is also demonstrated.

LITERATURE REVIEW
Parameter identification in a parabolic differential equation from integral overdetermination data plays an important role in engineering and physics ([2,3,4,5,6]). Various statements of inverse problems on the determination of the thermal coefficient in the one-dimensional heat equation were studied in [5,6,7]. It is important to note that in the papers [5,6] the time-dependent thermal coefficient is determined from nonlocal overdetermination data. Besides, in [2,5] the coefficients of the heat equations are determined in the case of nonlocal boundary conditions. The references [8,9] and [10, pp. 263-279] are devoted to the solution of the direct problem for the heat equation with a nonlocal boundary condition (in particular, the problem of finding u(x,t) from (1)-(3) when a(t) is known). In [9], the nature of boundary conditions of type (3) is also demonstrated; such nonlocal boundary conditions arise in mathematical biology.


METHODS
We use the finite difference method with a predictor-corrector type approach, as suggested in [2], and apply this method to the problem (1)-(4). We subdivide the intervals [0,1] and [0,T] into M and N subintervals of equal lengths h = 1/M and τ = T/N, respectively. We choose the Crank-Nicolson scheme, which is absolutely stable and has second order accuracy in both h and τ ([11]). The Crank-Nicolson scheme for (1)-(4) is as follows:

(1/τ)(u_i^{j+1} − u_i^j) = (1/2)(a^{j+1} + a^j)(1/(2h²))[(u_{i−1}^j − 2u_i^j + u_{i+1}^j) + (u_{i−1}^{j+1} − 2u_i^{j+1} + u_{i+1}^{j+1})] + (1/2)(F_i^{j+1} + F_i^j),
u_i^0 = φ_i, u_0^j = u_M^j, u_{M−1}^j = u_{M+1}^j,

where 1≤i≤M and 0≤j≤N are the indices for the spatial and time steps, respectively, u_i^j = u(x_i,t_j), φ_i = ϕ(x_i), F_i^j = F(x_i,t_j), x_i = ih, t_j = jτ. We then construct the predictor-corrector mechanism. Finally, we present two examples to illustrate the efficiency of the numerical method described above.

FINDINGS & CONCLUSION
The inverse problem of simultaneously identifying the time-dependent thermal diffusivity and the temperature distribution in the one-dimensional heat equation with nonlocal boundary and integral overdetermination conditions has been considered. This inverse problem has been investigated from the numerical point of view. The sensitivity of the Crank-Nicolson scheme, combined with the iteration method, to noisy overdetermination data has been illustrated.

REFERENCES
[1] M. I. Ismailov, F. Kanca, Inverse Coefficient Problem for Heat Equation: The Inverse Problem of Finding the Time-dependent Diffusion Coefficient of the Heat Equation from Integral Overdetermination Data. Submitted.
[2] J. R. Cannon, Y. Lin, S. Wang, Determination of a control parameter in a parabolic partial differential equation, J. Austral. Math. Soc. Ser. B 33 (1991) 149-163.
[3] J. R. Cannon, Y. Lin, S. Wang, Determination of source parameter in parabolic equations, Meccanica 27 (2) (1992) 85-94.
[4] A. G. Fatullayev, N.
Gasilov, I. Yusubov, Simultaneous determination of unknown coefficients in a parabolic equation, Applicable Analysis 87 (10-11) (2008) 1167-1177.
[5] M. I. Ivanchov, Inverse problems for the heat-conduction equation with nonlocal boundary condition, Ukrainian Mathematical Journal 45 (8) (1993) 1186-1192.
[6] M. I. Ivanchov, N. V. Pabyrivska, Simultaneous determination of two coefficients of a parabolic equation in the case of nonlocal and integral conditions, Ukrainian Mathematical Journal 53 (5) (2001) 674-684.
[7] W. Liao, M. Dehghan, A. Mohebbi, Direct numerical method for an inverse problem of a parabolic partial differential equation, Journal of Computational and Applied Mathematics 232 (2009) 351-360.
[8] A. I. Kozhanov, On a nonlocal boundary value problem with variable coefficients for the heat equation and the Aller equation, Differential Equations 40 (6) (2004) 815-826.
[9] L. A. Muravei, A. V. Filinovskii, On a problem with nonlocal boundary condition for a parabolic equation, Math. USSR Sbornik 74 (1) (1993) 219-249.
[10] A. M. Nakhushev, Equations of Mathematical Biology, Moscow, 1995. (Russian)
[11] A. A. Samarskii, The Theory of Difference Schemes, Marcel Dekker, New York, 2001.
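The interior part of the Crank-Nicolson update described in METHODS amounts to one linear solve per time level. The sketch below (function name ours) is the direct-problem building block only: it takes a(t) as known and, for simplicity, uses homogeneous Dirichlet ends instead of the nonlocal conditions u(0,t) = u(1,t), u_x(1,t) = 0, so it is not the paper's full predictor-corrector inverse solver:

```python
import numpy as np

def crank_nicolson_step(u, a_mid, F_mid, h, tau):
    """One Crank-Nicolson step for u_t = a(t) u_xx + F, with u = 0 at both
    ends (a simplification of the paper's nonlocal boundary conditions).
    u: grid values u_0..u_M at level j; a_mid = (a^{j+1}+a^j)/2;
    F_mid: averaged source (F^{j+1}+F^j)/2 at the M-1 interior nodes."""
    M = len(u) - 1
    r = a_mid * tau / (2.0 * h * h)
    n = M - 1                          # interior unknowns
    A = np.zeros((n, n))
    rhs = np.zeros(n)
    for k in range(n):
        i = k + 1
        A[k, k] = 1.0 + 2.0 * r        # implicit half of the scheme
        if k > 0:
            A[k, k - 1] = -r
        if k < n - 1:
            A[k, k + 1] = -r
        # explicit half plus averaged source
        rhs[k] = u[i] + r * (u[i - 1] - 2.0 * u[i] + u[i + 1]) + tau * F_mid[k]
    unew = np.zeros(M + 1)
    unew[1:M] = np.linalg.solve(A, rhs)
    return unew
```

In the paper's setting, the predictor-corrector loop wraps such a step: a(t_{j+1}) is predicted, the step is taken, and a(t_{j+1}) is corrected using the overdetermination condition (4).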


Proceeding Number: 700/71

Finite Difference and Iteration Methods for Fractional Hyperbolic Partial Differential Equations with the Neumann Condition

Allaberen ASHYRALYEV, Fatih University, Istanbul, Turkey, [email protected]
Fadime DAL, Ege University, Izmir, Turkey, [email protected]

Keywords: Fractional hyperbolic partial differential equations, iteration methods, initial value problems

ABSTRACT
It is known that various problems in fluid mechanics (dynamics, elasticity) and other areas of physics lead to fractional partial differential equations. Methods of solution of problems for fractional differential equations have been studied extensively by many researchers (see, e.g., [1-14]). The role played by stability inequalities (well-posedness) in the study of boundary value problems for hyperbolic partial differential equations is well known (see, e.g., [18-31]). In the present paper, finite difference and He's iteration methods are studied for the approximate solution of the mixed boundary value problem for the multidimensional fractional hyperbolic equation. A difference scheme of first order of accuracy in t and second order of accuracy in the space variables for the approximate solution of problem (1.1) is presented. The stability estimates for the solution of this difference scheme and for its first- and second-order difference derivatives are established. The finite difference method is applied to the multidimensional fractional hyperbolic equation, and He's variational iteration method is applied to equation (1.1).

THE FINITE DIFFERENCE METHOD
In this section, we consider the stable difference scheme of first order of accuracy in t and second order of accuracy in the space variables for the approximate solution of the problem. The stability estimates for the solution of this difference scheme and its first- and second-order difference derivatives are established. A modified Gauss elimination procedure is used for solving this difference scheme in the case of one-dimensional fractional hyperbolic partial differential equations.
Finally, one has not been able to obtain a sharp estimate for the constants figuring in the stability estimates; therefore, our interest in the present paper is in studying the difference scheme by numerical experiments. Applying this difference scheme, numerical methods are proposed in the following section for solving the one-dimensional fractional hyperbolic partial differential equation. The method is illustrated by numerical examples.

HE'S VARIATIONAL ITERATION METHOD
In this section we consider He's variational iteration method for the approximate solution of problem (1.1). A comparison of the finite difference and iteration methods is presented.
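The abstract does not reproduce the difference operators themselves, but a standard ingredient of first-order-accurate schemes for a fractional derivative of order α is the Grünwald-Letnikov weight sequence w_k = (−1)^k C(α, k), with which D^α u(t) ≈ h^{−α} Σ_k w_k u(t − kh). The sketch below generates the weights by the usual recurrence (an illustration of that ingredient, not the authors' scheme):

```python
def gl_weights(alpha, n):
    """Grunwald-Letnikov weights w_k = (-1)^k * C(alpha, k), k = 0..n,
    computed via the recurrence w_k = (1 - (alpha + 1)/k) * w_{k-1}."""
    w = [1.0]                                  # w_0 = 1
    for k in range(1, n + 1):
        w.append((1.0 - (alpha + 1.0) / k) * w[-1])
    return w
```

As a sanity check, α = 1 reproduces the first-difference stencil [1, −1, 0, ...] and α = 2 the second-difference stencil [1, −2, 1, 0, ...].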


REFERENCES
[1] I. Podlubny, Fractional Differential Equations, Academic Press, New York, 1999.
[2] S. G. Samko, A. A. Kilbas and O. I. Marichev, Fractional Integrals and Derivatives, Gordon and Breach Science Publishers, London, 1993.
[3] J. L. Lavoie, T. J. Osler, R. Tremblay, Fractional derivatives and special functions, SIAM Review 18(2), 240-268, 1976.
[4] V. E. Tarasov, Fractional derivative as fractional power of derivative, International Journal of Mathematics, 18, 281-299, 2007.
[5] E. M. El-Mesiry, A. M. A. El-Sayed, H. A. A. El-Saka, Numerical methods for multi-term fractional (arbitrary) orders differential equations, Appl. Math. Comput. 160(3), 683-699, 2005.
[6] A. M. A. El-Sayed, F. M. Gaafar, Fractional order differential equations with memory and fractional-order relaxation-oscillation model, Pure Math. Appl. 12, 2001.
[7] A. M. A. El-Sayed, E. M. El-Mesiry, H. A. A. El-Saka, Numerical solution for multi-term fractional (arbitrary) orders differential equations, Comput. Appl. Math. 23(1), 33-54, 2004.
[8] R. Gorenflo, F. Mainardi, Fractional calculus: integral and differential equations of fractional order, in: A. Carpinteri, F. Mainardi (Eds.), Fractals and Fractional Calculus in Continuum Mechanics, Springer, Wien, 1997, pp. 223-276.
[9] D. Matignon, Stability results for fractional differential equations with applications to control processing, in: Computational Engineering in System Application 2, Lille, France, 1996.
[10] A. Ashyralyev, A note on fractional derivatives and fractional powers of operators, Journal of Mathematical Analysis and Applications, 357(1), 232-236, 2009.
[11] A. Ashyralyev, F. Dal, Z. Pinar, A note on the fractional hyperbolic differential and difference equations, Appl. Math. Comput. 217(9), 4654-4664, 2011.
[12] A. Ashyralyev, F. Dal, Z. Pinar, On the numerical solution of fractional hyperbolic partial differential equations, Mathematical Problems in Engineering, vol. 2009, Article ID 730465, 2009.
[13] F.
Dal, Application of Variational Iteration Method to Fractional Hyperbolic Partial Differential Equations, Mathematical Problems in Engineering, vol. 2009, Article ID 824385, 2009.
[14] I. Podlubny, A. M. A. El-Sayed, On Two Definitions of Fractional Calculus, Slovak Academy of Sciences, Institute of Experimental Physics, 1996.
[15] H. O. Fattorini, Second Order Linear Differential Equations in Banach Space, Notas de Matematica, North-Holland, 1985.
[16] S. Piskarev, Y. Shaw, On certain operator families related to cosine operator functions, Taiwanese Journal of Mathematics 1(4), 3585-3592, 1997.
[17] P. E. Sobolevskii, Difference Methods for the Approximate Solution of Differential Equations, Izdat. Voronezh. Gosud. Univ., Voronezh, 1975. (Russian)
[18] S. G. Krein, Linear Differential Equations in a Banach Space, Nauka, Moscow, 1966. (Russian)
[19] P. E. Sobolevskii, L. M. Chebotaryeva, Approximate solution by method of lines of the Cauchy problem for abstract hyperbolic equations, Izv. Vyssh. Uchebn. Zav., Matematika, 5, 103-116, 1977. (Russian)
[20] A. Ashyralyev, M. Martinez, J. Paster, S. Piskarev, Weak maximal regularity for abstract hyperbolic problems in function spaces, Abstracts of the 6th International ISAAC Congress, Ankara, Turkey, 2007, p. 90.
[21] A. Ashyralyev, N. Aggez, A note on the difference schemes of the nonlocal boundary value problems for hyperbolic equations, Numerical Functional Analysis and Optimization 25(5-6), 1-24, 2004.
[22] A. Ashyralyev, I. Muradov, On difference schemes of a second order of accuracy for hyperbolic equations, in: Modelling Processes of Exploitation of Gas Places and Applied Problems of Theoretical Gasohydrodynamics, Ilim, Ashgabat, pp. 127-138, 1998. (Russian)
[23] A. Ashyralyev, P. E. Sobolevskii, New Difference Schemes for Partial Differential Equations, Operator Theory: Advances and Applications, vol. 148, Birkhauser, Basel-Boston-Berlin, 2004.
[24] A. Ashyralyev, Y.
Ozdemir, On nonlocal boundary value problems for hyperbolic-parabolic equations, Taiwanese Journal of Mathematics 11(3), 1077-1091, 2007.
[25] A. Ashyralyev, O. Yildirim, On multipoint nonlocal boundary value problems for hyperbolic differential and difference equations, Taiwanese Journal of Mathematics 13, 22 pp., 2009.
[26] A. A. Samarskii, I. P. Gavrilyuk, V. L. Makarov, Stability and regularization of three-level difference schemes with unbounded operator coefficients in Banach spaces, SIAM J. Numer. Anal. 39(2), 708-723, 2001.
[27] A. Ashyralyev, P. E. Sobolevskii, Two new approaches for construction of the high order of accuracy difference schemes for hyperbolic differential equations, Discrete Dynamics in Nature and Society 2005(2), 183-213, 2005.
[28] A. Ashyralyev, M. E. Koksal, On the second order of accuracy difference scheme for hyperbolic equations in a Hilbert space, Numerical Functional Analysis and Optimization 26(7-8), 739-772,
2005.
[29] A. Ashyralyev, M. E. Koksal, On the stability of the second order of accuracy difference scheme for hyperbolic equations in a Hilbert space, Discrete Dynamics in Nature and Society 2007, Article ID 57491, 1-26, 2007.
[30] M. Ashyraliyev, A note on the stability of the integral-differential equation of the hyperbolic type in a Hilbert space, Numerical Functional Analysis and Optimization 29(7-8), 750-769, 2008.
[31] A. Ashyralyev, P. E. Sobolevskii, A note on the difference schemes for hyperbolic equations, Abstract and Applied Analysis 6(2), 63-70, 2001.
[32] M. Inokuti, H. Sekine and T. Mura, General use of the Lagrange multiplier in nonlinear mathematical physics, in: Variational Method in the Mechanics of Solids, S. Nemat-Nasser (Ed.), Pergamon Press, Oxford, UK, 1978.
[33] J. H. He, Generalized Variational Principles in Fluids, Science Culture Publishing House of China, Hong Kong, 2003.


Proceeding Number: 700/73

Half Quadratic Biased Molecular Dynamics on DNA Rotations

Levent SARI, Azize Sevim

Keywords: HQBMD, DNA topology, parallel computing

INTRODUCTION
The rotational degree of freedom of a nicked DNA molecule is important in many cellular processes and in DNA-protein interactions [1,2]. The rotation of such a nicked DNA is especially crucial in the relaxation of supercoiled DNA by human topoisomerase I [3,4]. Therefore, the dynamic mechanism of a nicked DNA rotating about its intact strand has been simulated using the molecular dynamics (MD) method based on force-field potentials. In order to see large-scale movement of DNA atoms, we employ a technique known as Half Quadratic Biased Molecular Dynamics (HQBMD) [5]. Calculations are performed on our Linux-clustered supercomputer, employing 8 CPUs in parallel and using the CHARMM program package. Rotations in both directions (clockwise and anti-clockwise, corresponding to the relaxation of negative and positive supercoils, respectively) have been performed, and 50 different simulations have been carried out in the present study. As we rotate the downstream part of the DNA (as suggested by two Science papers, see references 6 and 7), we focused on the structural and energetic changes, especially in the nicked region of the DNA. The most striking result of this study is that the DNA rotations are topology dependent: the linking number (Lk) can change by any integer value during the relaxation of positively supercoiled DNAs, but only by even numbers for negatively supercoiled DNAs. Other novel findings of this study are given in the results section.

LITERATURE REVIEW
The topology of DNA is defined by a geometry-independent topological constant that can be decomposed into two different geometrical (structural) variables. It is defined by the linking number, Lk, the number of times one DNA strand goes around the other in the absence of any supercoiling [8,9].
Lk is a strict topological property because it does not vary when the double-stranded DNA is twisted or deformed. The Lk of a circular DNA can only be changed by breaking a phosphodiester bond in one of the two strands, allowing the intact strand to pass through the broken strand, and then rejoining the broken strand. If there is a break in either strand, it is possible to untwist the strands and separate them completely; in this case Lk is undefined, as the topology has collapsed. The linking number is given by the sum of two structural components called twist and writhe, Lk = Tw + Wr. Local inter-winding of the double strands produces crossings or nodes, measured by a parameter called twist (Tw): the number of times that the two strands are twisted about each other. Coiling of the helical axis also produces nodes, measured by writhe (Wr): the number of times that the DNA helix is coiled about itself in three-dimensional space. Twist and writhe may be changed by deformation of the DNA, because they depend on geometry; moreover, Tw and Wr are not necessarily integers, and indeed most often they are not. If the two strands are interwound in a right-handed helix, the linking number is defined as positive (+) [10]; by this sign convention, ordinary B-type DNA has a positive linking number. Conversely, if they are interwound in a left-handed helix, the linking number is negative (−) [10]. If there is no net bending of the DNA axis upon itself, or the DNA lies flat on a plane, it is said to be in a relaxed state, and Lk0 denotes the linking number of a DNA in its relaxed state. If there is strain on the DNA such that Lk is less or greater than Lk0, the DNA will undergo three-dimensional writhing in space, which is called supercoiling. Supercoiled DNA molecules are torsionally stressed relative to their relaxed counterparts. The linking difference ΔLk = Lk − Lk0 is used to define the degree of supercoiling [10].
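The bookkeeping above can be made concrete in a few lines (function name and sample values are ours; for real trajectories, Tw and Wr would be computed from the atomic coordinates):

```python
def supercoiling_state(Tw, Wr, Lk0):
    """Lk = Tw + Wr; the linking difference dLk = Lk - Lk0 defines the
    degree of supercoiling (negative, relaxed, or positive)."""
    Lk = Tw + Wr
    dLk = Lk - Lk0
    if dLk == 0:
        return Lk, dLk, "relaxed"
    return Lk, dLk, ("negatively supercoiled" if dLk < 0
                     else "positively supercoiled")
```

For example, a plasmid with Tw = 34 and Wr = −4 relative to a relaxed value Lk0 = 36 has Lk = 30 and ΔLk = −6, i.e. it is negatively supercoiled.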


Understanding the rotation of a nicked DNA is quite important with regard to the mechanism of Human Topoisomerase I, an enzyme that changes the topological state of supercoiled DNA during replication and transcription [1,2]. Two models have been proposed to explain the mechanism of DNA relaxation within Human Topoisomerase I after the DNA is cleaved: 'controlled rotation' and 'enzyme bridging' [6,7]. In the controlled-rotation model, the protein is attached covalently to one end of the broken strand and the downstream DNA duplex rotates about the unbroken strand in either direction; this model is most favorable for the type IB subfamily. In the enzyme-bridging model, the protein is not only attached covalently to one end but is also attached noncovalently to the other end, forming a bridge, and the unbroken strand is passed through this bridge. This strand-passage model is suited to the type IA subfamily, not to Human Topoisomerase I, which is a type IB topoisomerase [2]. Therefore, our results on the torsional degree of freedom of a nicked DNA are crucial for understanding the complete rotation mechanism of DNA relaxation within Human Topoisomerase I.
METHODS
Molecular dynamics (MD) is the science of simulating the motions of a system of particles. MD simulation methods play an important role in the theoretical study of biomolecules: they are powerful techniques for describing and understanding the relationship between the structure and the function of biomolecules, and they can be used to study fast events that occur on picosecond-to-nanosecond time scales. This computational method calculates the time-dependent behavior of a molecular system using Newton's equations of motion. Quantum mechanical effects can be neglected in most MD simulations because of the large size of biomolecules; instead, an empirical potential energy function is used to determine the interaction energy of the particles of the system as a function of the atomic coordinates, and MD uses this potential energy to calculate the future positions of the particles. In general, the energy of the system is separated into two parts, bonded and non-bonded energies [11]:

Utotal = Ubonded + Unonbonded

where the bonded term refers to atoms that are linked by covalent bonds, and the non-bonded terms describe the long-range electrostatic and van der Waals (noncovalent) interactions:

Ubonded = Ubonds + Uangles + Udihedrals + Uimpropers + UUrey-Bradley
Unonbonded = ULennard-Jones + UCoulomb

One of the most popular molecular mechanics program packages for biomolecules is CHARMM (Chemistry at HARvard Macromolecular Mechanics), written by Brooks et al. in 1983 [12]. CHARMM is a widely used molecular simulation program with broad application to many-particle systems, including macromolecular energy, minimization and dynamics calculations. It provides a large suite of computational tools that encompass numerous conformational and path sampling methods, free energy estimates, molecular minimization, dynamics and analysis techniques, and model-building capabilities. All simulations were performed using CHARMM on our HP ProLiant Linux-clustered supercomputer.

FINDINGS & CONCLUSION
Our study focuses on the different structural and energetic pathways that the nicked DNA follows during rotations, and aims to propose the best rotation scheme. In this regard, we have carried out 50 different simulations, in each of which the DNA is rotated around a different axis. Each simulation is 1 ns long and takes around 10 days on 8 CPUs. 25 different axes of rotation have been studied. One of them is chosen to be parallel to the helical axis.
Then a vector perpendicular to this axis, which we call line 1, is found. This vector is rotated by 90° using a rotation matrix to obtain line 2; rotating line 1 and line 2 by a further 45° determines line 3 and line 4. Viewed from the downstream side, the downstream DNA is rotated clockwise, producing positive supercoils, and anti-clockwise, producing negative supercoils; therefore, all of the analyses have been repeated for both kinds of rotation. At the end of our study, we propose the following novel conclusions: (1) The DNA rotations are topology dependent. The linking number (Lk) can change by any integer value during the relaxation of positively supercoiled DNAs, but only by even numbers for negatively supercoiled DNAs. (2) In the negative rotations the DNA rotates around a single dihedral angle, ζ, but in the positive rotations it rotates around at least two dihedral angles, ζ and ε. (3) The dihedral angle ζ has been found to be the most flexible dihedral angle in all kinds of rotation. (4) For negative rotations, a full DNA rotation is provided by a full rotation of the dihedral ζ, while for positive rotations the dihedral angles rotate only partially to bring about a full rotation of the downstream DNA.
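The axis construction described above (line 1 rotated by 90° and 45° about the helical axis) can be sketched with Rodrigues' rotation formula. The concrete vectors below are illustrative assumptions, not the actual DNA geometry:

```python
import math

def rotate_about_axis(v, axis, angle_deg):
    """Rotate vector v about a unit-length axis by angle_deg
    (Rodrigues' rotation formula)."""
    t = math.radians(angle_deg)
    ax, ay, az = axis
    vx, vy, vz = v
    cross = (ay * vz - az * vy, az * vx - ax * vz, ax * vy - ay * vx)
    dot = ax * vx + ay * vy + az * vz
    c, s = math.cos(t), math.sin(t)
    return tuple(v_i * c + cr_i * s + a_i * dot * (1.0 - c)
                 for v_i, cr_i, a_i in zip(v, cross, axis))

helical = (0.0, 0.0, 1.0)  # axis parallel to the DNA helical axis (assumed)
line1 = (1.0, 0.0, 0.0)    # a vector perpendicular to the helical axis
line2 = rotate_about_axis(line1, helical, 90.0)  # line 1 rotated by 90 degrees
line3 = rotate_about_axis(line1, helical, 45.0)  # line 1 rotated by 45 degrees
line4 = rotate_about_axis(line2, helical, 45.0)  # line 2 rotated by 45 degrees
```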


(5) For the best negative and positive rotations that we propose, the value of α+γ is observed to be around zero, in agreement with the literature. (6) For negative rotations, the DNA follows a structural path from the BII state to the BI state and then back to the BII state; for positive rotations, however, the DNA goes from the BII state to the BI state by the end of the downstream rotation. This again implies that positive rotations should occur in even numbers, in agreement with our first conclusion.

REFERENCES
[1] J. C. Wang, Nature Reviews, 3, 430, 2002.
[2] J. Champoux, Annu. Rev. Biochem., 70, 369-413, 2001.
[3] J. C. Wang, Annu. Rev. Biochem., 65, 635-692, 1996.
[4] L. Sari and I. Andricioaei, Nucleic Acids Research, 33, 6621-6634, 2005.
[5] E. Paci and M. Karplus, J. Mol. Biol., 288, 441-459, 1999.
[6] L. Stewart, M. R. Redinbo, X. Qiu, W. G. J. Hol, and J. J. Champoux, Science, 279, 1534, 1998.
[7] M. R. Redinbo, L. Stewart, P. Kuhn, J. J. Champoux, and W. G. J. Hol, Science, 279, 1504, 1998.
[8] B. Fuller, Proc. Natl. Acad. Sci. U.S.A., 68, 815, 1971.
[9] D. L. Nelson and M. M. Cox, Lehninger Principles of Biochemistry, 4th ed., pp. 283-284.
[10] L. M. Fisher et al., Phil. Trans. R. Soc. Lond. B, 336, 83, 1992.
[11] J. P. Mesirov, K. Schulten, and D. W. Sumners, Mathematical Applications to Biomolecular Structure and Dynamics, IMA Volumes in Mathematics and Its Applications, vol. 82, New York: Springer-Verlag, pp. 218-247.
[12] B. R. Brooks, R. E. Bruccoleri, B. D. Olafson, D. J. States, S. Swaminathan, and M. Karplus, CHARMM: a program for macromolecular energy, minimization, and dynamics calculations, J. Comput. Chem., 4, 187-217, 1983.


Proceeding Number: 700/74

Computational Studies on Identifying Pharmacophore for the Inhibition of Cellular Protein and DNA Synthesis by a Series of Thiosemicarbazone and Thiosemicarbazide Derivatives
Ahmet Altun, Department of Physics, Fatih University, 34900 B.Cekmece, Istanbul, Turkey, e-mail: [email protected]

Keywords: Medical physics

INTRODUCTION
Thiosemicarbazones of 2-acetylpyridine, 2-acetylquinoline, 1-acetylisoquinoline and related compounds are inhibitors of herpes simplex virus types 1 and 2 (HSV-1 and HSV-2) [1]. However, not all antiviral agents can be used in chemotherapy, because of side effects such as toxicity and an inhibitory effect on cellular protein or DNA synthesis. Since the experiments conducted to test such effects are not only expensive but also time-consuming, it is necessary to develop methods that estimate structure-bioactivity relations and reveal the pharmacophore, i.e., a group of atoms in a specific geometric arrangement that is considered responsible for a bioactivity. In this study, the pharmacophore responsible for the inhibition of cellular protein and DNA syntheses by thiosemicarbazone and thiosemicarbazide derivatives has been identified computationally within the framework of the electron-conformational method (ECM) [2-13].

LITERATURE REVIEW
Structure–inhibitory and –noninhibitory activity relationships of thiosemicarbazone and thiosemicarbazide derivatives against HSV-1 were previously examined by means of the ECM, with the aim of guiding the development of derivatives of existing drugs with enhanced potency and the synthesis of new antiviral agents [2]. For designing less toxic inhibitory agents, a complementary ECM study was performed on these compounds for their structure–dermal toxicity and –dermal non-toxicity relations [3]. An agent that inhibits protein or DNA synthesis has a negative selectivity in chemotherapy.
Therefore, here we computationally study the structural features of the title compounds that result in the inhibition of cellular protein and DNA syntheses, using the available experimental data as reference [1].

METHODS
The ECM [2-13] was developed for pharmacophore identification and pharmacophore-based bioactivity prediction. Its objects are two (active and inactive) or more bioactivity sets of a series of molecules. Its computational part proceeds through the following steps: (1) conformational analyses; (2) quantum-chemical calculations; (3) formation of the electron-conformational matrix of contiguity (ECMC); (4) processing of the ECMC to select the pharmacophore. The first two steps are traditional ones, and the others implement the core of the method. In the ECM, each compound in view is described by a set of parameters arranged as a matrix (the ECMC) that is symmetric with respect to the diagonal. While the diagonal elements of the ECMC are chosen among the
atomic parameters, the off-diagonal elements of the ECMC are one of the electronic bonding parameters for chemically bonded pairs of atoms and are interatomic distances for nonbonded pairs of atoms. After forming the ECMCs of all compounds with a unique set of atomic and bond parameters deemed most important for the activity demonstration, the submatrix that is present in all active compounds but absent in all inactive ones is searched for within some tolerances. The submatrix revealed, which is called the electron-conformational submatrix of contiguity (ECSC) or the pharmacophore, is considered responsible for the activity demonstration.

FINDINGS AND CONCLUSIONS
The structure-activity relationship of a series of thiosemicarbazone and thiosemicarbazide derivatives for the inhibition of cellular protein and DNA syntheses has been investigated computationally by using the electron-conformational method. In the framework of this method, the computed geometrical and electronic structural parameters of each atom and bond in each molecule considered (54 compounds) were arranged as a matrix. A submatrix that is common to the inhibitory compounds and absent in the noninhibitory ones, and which is considered responsible for the demonstration of inhibitory activity, has been revealed within certain tolerances. The revealed submatrix indicates that three charges situated at specific distances are responsible for the inhibition of cellular protein and DNA syntheses. Thus, one of the criteria for more effective screening of new antiviral drugs is the absence of the revealed feature in a compound. The result of this study allows one to design new anti-herpes-simplex-virus agents without inhibitory side effects on cellular protein and DNA syntheses.

REFERENCES
[1] Shipman, C. et al., Antiviral Res. 6, 197–222, 1986.
[2] Altun, A. et al., J. Mol. Struct. (Theochem) 535, 235–246, 2001.
[3] Altun, A. et al., J. Mol. Struct. (Theochem) 572, 121–134, 2001.
[4] Bersuker, I. B. et al., J.
Comput.-Aided Mol. Des. 13, 419–434, 1999.
[5] Bersuker, I. B. et al., J. Chem. Inf. Comput. Sci. 40, 1363–1376, 2000.
[6] Altun, A. et al., Bioorg. Med. Chem. 11, 3861–3868, 2003.
[8] Dimoglo, A. S. et al., Drug Res. 47, 415–419, 1997.
[9] Shvets, N. et al., J. Mol. Struct. (Theochem) 463, 105–110, 1999.
[10] Güzel, Y. et al., J. Mol. Struct. (Theochem) 418, 83–91, 1997.
[11] Kandemirli, F. et al., Mini-Rev. Med. Chem. 5, 479–487, 2005.
[12] Kandemirli, F. et al., Curr. HIV Res. 5, 449–458, 2007.
[13] Yanmaz, E. et al., Bioorg. Med. Chem. 19, 2199–2210, 2011.
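The tolerance-based search for a submatrix that is present in all active compounds and absent in all inactive ones can be sketched as follows. This is a deliberately simplified toy version of the ECMC processing: each compound is reduced to a list of (charge, charge, distance) triples, and the tolerance values are illustrative assumptions, not those of the study:

```python
def find_pharmacophore(actives, inactives, tol_q=0.05, tol_d=0.3):
    """Return (q1, q2, d) features present, within tolerance, in every
    active compound and in no inactive one.

    Each compound is a list of (charge_i, charge_j, distance_ij) tuples --
    a toy stand-in for the full electron-conformational matrix (ECMC).
    """
    def matches(f, g):
        # Two features agree if both charges and the distance fall
        # within the chosen tolerances.
        return (abs(f[0] - g[0]) <= tol_q and abs(f[1] - g[1]) <= tol_q
                and abs(f[2] - g[2]) <= tol_d)

    def present(feature, compound):
        return any(matches(feature, g) for g in compound)

    # Candidate features are drawn from the first active compound.
    return [f for f in actives[0]
            if all(present(f, c) for c in actives)
            and not any(present(f, c) for c in inactives)]
```

On the 54 compounds of the study, such a search is what isolates the reported charge arrangement; here it is shown only on toy data.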


Proceeding Number: 700/76

Monte Carlo Simulation of the Methanol Trimer
Kurtulus Golcuk, Fatih University, Department of Physics, Istanbul, Turkey, e-mail: [email protected]

Keywords: Simulation, Monte Carlo

INTRODUCTION
Hydrogen-bonded clusters, such as water and methanol clusters, have received a great amount of interest in the past decade [1]. Small clusters are formed by linear hydrogen bonds, which determine their structural properties. Methanol (CH3OH) generally forms three strong hydrogen bonds: two as a proton acceptor through oxygen and one as a proton donor. In addition to hydrogen bonding, methanol has hydrophobic interactions due to the presence of the methyl group. Much of the stabilization of methanol clusters is due to the very sensitive electronic interaction of the hydrogen bond [2,3]. This work aims to predict stable conformers of the methanol trimer (CH3OH)3 by using the Optimized Potentials for Liquid Simulations (OPLS) [4] and Self-Consistent Field (SCF) methods [5] within a Monte Carlo simulated annealing (MCSA) algorithm, which has proved effective for such predictions.

METHODS
All of the calculations for the intra- and inter-molecular potentials were carried out using the OPLS and SCF methods implemented in a custom-written MCSA program in Fortran. The overall intermolecular potential, called the systematic potential energy [6], governs the conformations of small molecular clusters. In an MCSA run, all the methanol atoms were initially kept fixed at the same coordinates. The total energy of the new conformation was evaluated at each step and compared with that of the prior one, after which the new conformation was either accepted or rejected based on the Metropolis criterion [7]. The simulation started from a high temperature, and the temperature was lowered at the end of each loop by multiplying it by a scale factor of 0.8 until a stable conformer was obtained. The overall intermolecular potential consists of a sum of electrostatic, penetration-repulsion, dispersion, and induction energy contributions [6].
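The accept/reject loop just described, a Metropolis criterion with the temperature multiplied by 0.8 at the end of each loop, can be sketched generically as below. The original program is written in Fortran; the energy and move functions here are illustrative placeholders, not the OPLS/SCF potentials:

```python
import math
import random

def anneal(energy, perturb, x0, t0=100.0, cool=0.8,
           steps_per_t=200, t_min=1e-3):
    """Metropolis-based simulated annealing.  The temperature is
    multiplied by `cool` (0.8, as in the study) after each loop."""
    x, e = x0, energy(x0)
    best_x, best_e = x, e
    t = t0
    while t > t_min:
        for _ in range(steps_per_t):
            x_new = perturb(x)
            e_new = energy(x_new)
            # Metropolis criterion: always accept downhill moves,
            # accept uphill moves with probability exp(-dE/T).
            if e_new <= e or random.random() < math.exp(-(e_new - e) / t):
                x, e = x_new, e_new
                if e < best_e:
                    best_x, best_e = x, e
        t *= cool  # cooling schedule
    return best_x, best_e
```

For example, annealing the one-dimensional double well (x² − 1)² drives the best-seen energy toward one of the two minima at x = ±1.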
The electrostatic interaction between the charge densities of the molecules is calculated from the classical first-order Coulomb energy. The penetration energy arises from the interpenetration of the diffuse electron clouds around the molecules and varies exponentially with the distance between O and H. The exchange-repulsion energy is calculated from the charge-density overlap integral and arises from the Pauli principle. The polarization, or induction, energy is calculated from the polarizabilities and permanent dipoles of the methanol molecules.

LITERATURE REVIEW
The structure of hydrogen-bonded small clusters remains an active research topic because hydrogen bonding permeates nearly every field of chemistry, physics, and biology [8]. Pauling [9] suggested that molecules like water and methanol form cyclic structures through hydrogen bonding. Several experimental studies [10-12] provided support for cyclic structures, and for combinations of ring and chain structures in large methanol clusters as well. The early Monte Carlo work of Jorgensen [4], the classical molecular dynamics (MD) study of Haughney [13], and the latest ab initio [2] and DFT [14] studies all predicted the existence of chains. Simulated annealing methods [3] were used with effective potentials to locate the lowest-energy structures
for the water clusters. Mandal et al. [15] used the “atoms in molecules” theory to analyze the hydrogen-bonding network in water-methanol clusters. Their results showed that the binding energies of methanol clusters are higher than those of water clusters due to the electron-donating nature of the methyl group. The most comprehensive studies to date are the works of Mo et al. [16] and Tschumper et al. [8] on the methanol trimer. Wright and El Shall [17] reported Monte Carlo simulations of (CH3OH)n, n = 5-255. Potentials that provide a better description of hydrogen bonding in cyclic conformations are required for better MD simulations of biomolecules. Computational results indicate that the cyclic methanol clusters are the global minima when compared with chain, branched-chain, and branched-cyclic arrangements [13-17]. Buck et al. [1,6] suggested using separate potentials for the intra- and inter-molecular interactions, and successfully predicted different conformers of methanol clusters and their theoretical vibrational spectra.

FINDINGS & CONCLUSION
Based on the simulations, three methanol trimer (CH3OH)3 conformations, called chair, bowl, and open chain, have been found to be stable structures. The cyclic conformers (bowl and chair) are joined in a ring structure formed by three hydrogen bonds, whereas the open-chain conformer has only two hydrogen bonds. The binding energies of the molecules were calculated using the systematic potential approach. The chair configuration is predicted to be the most stable methanol trimer: the open-chain conformer has an energy of -55.6 kcal/mol, the bowl -58.2 kcal/mol, and the chair -59.2 kcal/mol. The bowl configuration has C3 symmetry, whereas the chair and open-chain conformations have no symmetry. Their optimized geometries are in good agreement with the methanol trimers obtained from ab initio [2,8] and DFT calculations [14].
The MCSA method has been used effectively to predict the conformers of the methanol trimer. It can be concluded from this work that the conformational space of hydrogen-bonded clusters can be searched efficiently using the systematic potential combined with the MCSA algorithm.

REFERENCES
[1] U. Buck et al., Chem. Rev. 100, 3863, 2000.
[2] M. N. Pires and V. F. DeTuri, J. Chem. Theory Comput. 3, 1073, 2007.
[3] P. N. Day et al., J. Chem. Phys. 112, 2063, 2000.
[4] W. L. Jorgensen, J. Phys. Chem. 90, 1276, 1986.
[5] F. Bernardi et al., J. Chem. Phys. 67, 4181, 1977.
[6] U. Buck et al., J. Chem. Phys. 108, 20, 1998.
[7] H. Zhang et al., Carbohydrate Research 284, 25, 1996.
[8] G. S. Tschumper et al., J. Chem. Phys. 111, 3027, 1999.
[9] L. Pauling, The Nature of the Chemical Bond, Cornell University Press, Ithaca, NY, 1960.
[10] S. Sarkar et al., J. Chem. Phys. 99, 2032, 1993.
[11] S. Kashtanov et al., Phys. Rev. B 71, 104205, 2005.
[12] K. Wilson et al., J. Phys. Chem. 109, 10194, 2005.
[13] M. Haughney et al., J. Phys. Chem. 91, 4934, 1987.
[14] S. L. Boyd and R. J. Boyd, J. Chem. Theory Comput. 3, 54, 2007.
[15] A. Mandal et al., J. Phys. Chem. A 114, 2250, 2010.
[16] O. Mo et al., J. Chem. Phys. 107, 3592, 1997.
[17] D. Wright and M. El Shall, J. Chem. Phys. 105, 11199, 1996.


Proceeding Number: 800/03

Optimization of Izmir Alsancak Port Stock Yard
Dilay YILDIRIM, Celal Bayar University, Civil Engineering, Manisa, Turkey, [email protected]
Begüm Y. DAĞLI, Celal Bayar University, Civil Engineering, Manisa, Turkey, [email protected]
Ümit GÖKKUŞ, Celal Bayar University, Civil Engineering, Manisa, Turkey, [email protected]

Keywords: Genetic Algorithm, Optimization

INTRODUCTION
Nowadays, the importance of ports is growing with the increasing share of overseas commerce in world trade. Ports comprise facilities where ships berth safely and receive cargo services. These facilities fall into two main groups: infrastructure and superstructure. Properly determining the physical sizes and capacities of these facilities is a central concern of port management, which is an element of competition among the countries of the world. Countries can develop appropriate strategies for port management, but the main goal is always to increase effectiveness and turnover. In the optimal dock-sizing studies made for this purpose, ship sizes, arrival-waiting-departure time intervals, and the amount of cargo handled are the basic parameters. In recent years, owing to developments in computer technology, optimization studies have used artificial intelligence as a shorter and less costly method of solution. Artificial intelligence is a research area that aims to examine the functions related to intelligence in humans with the aid of computer models, to bring these functions into formulas, and to apply them to artificial systems. The basic principle is to make human life easier and of better quality, at a scientific level, in the education and engineering fields. Artificial intelligence consists of algorithms and methods that, instead of trying all solutions of difficult or intractable problems, experiment with only selected candidate solutions to achieve the expected optimum result. One of these methods is the Genetic Algorithm.
LITERATURE REVIEW
The Genetic Algorithm takes the first part of its name from biology and the second from computer science. Instead of improving the learning ability of only one mechanical system, a community of such systems is investigated by the Genetic Algorithm. The Genetic Algorithm was developed by Prof. John Holland and his students at the University of Michigan during the 1960s and 1970s. After the publication in 1975 of Holland's book explaining the results of his work, the method he had developed came to be called the Genetic Algorithm, or simply GA. David E. Goldberg, who completed his Ph.D. dissertation as Holland's student in 1985, was a civil engineer. Until he published his book, now considered a classic in its field, in 1989, the Genetic Algorithm had been regarded as a research subject without much practical benefit. Goldberg's studies on gas pipeline operation, however, brought him the National Science Foundation Young Researcher prize in 1985 and proved that the Genetic Algorithm could be used in practice [7]. Since its inception, GA has found applications in numerous areas. In the area of engineering design, Yao (1992) used GA to estimate parameters for nonlinear systems, Joines (1996) applied GA to manufacturing-cell design, and Gold (1998) introduced GA to the kinematic design of turbine-blade fixtures. In the area of scheduling and planning, Timothy (1993) optimized sequencing problems using GA, and Davern (1994) designed an architecture for job-shop scheduling with GA. In the area of computer science, Rho (1995) used GA in distributed database design. In the area of image processing, Tadikonda (1993) used GA for automated image segmentation and interpretation, and Huang designed detection strategies for face recognition with GA [13]. In the present day, Genetic Algorithms are mostly used for
optimization, and they have been observed to give better results than other classical methods in many engineering fields [3].

METHODS
The Genetic Algorithm takes the mechanism of evolution as its example and is based on simulating the natural-selection process in the computer environment with genetic operators. What distinguishes it from other methods is that it is a stochastic search method that searches within a population of solutions and is driven by an objective function. The objective function is defined over the genetic representation and measures the quality of the represented solution. For this reason, in complex optimization problems for which classical search methods are insufficient, the optimal solution can in general be reached with higher probability. Therefore, in the optimum-size analysis of the stock yard, which involves a high degree of ambiguity, Genetic Algorithms constitute an important alternative to other methods. In this study, data from the Port of Izmir Alsancak for 2010 were used. The stock yard receives containers through three channels, namely sea, road, and rail, and dispatches them through the same three channels. Not only the daily data on containers arriving and departing in 2010 but also the monthly counts of containers waiting in the stock yard over two years were considered. The containers' dwell time in the stock yard was taken to be at most four days. The objective function was determined according to the container flow. The monthly container data were entered into the Genetic Algorithm toolbox of the MATLAB software package, and the results were evaluated.

FINDINGS & CONCLUSION
In this study, the Port of Izmir Alsancak was considered for an optimum stock-yard size analysis. Izmir Alsancak is the largest port in Turkey in terms of turnover and exports. The port has a wide agricultural and industrial hinterland; all services are provided for all kinds of cargo, and the port is linked to the railway and highway networks.
Every year thousands of containers arrive at Izmir Alsancak Port; therefore, optimal use of the stock yard is very important for effectiveness and turnover. In this study, the number of containers per month was calculated by using the proposed objective function and then compared with the observed values. First, the target capacity was determined through cost analysis. With the aim of obtaining the most economical sizing with the least loss, various stacking cases and cost analyses for various target-capacity values were compared. The optimal stock-yard values obtained were compared with the annual stock-yard capacity values of the Port of Izmir Alsancak, and the results are presented.

REFERENCES
[1] BOLAT, B., EROL, K. O., IMRAK, C. E., "Genetic Algorithms in Engineering Applications and the Function of Operators", Journal of Engineering and Natural Sciences, 2004.
[2] CROCE, F. D., TADEI, R., VOLTA, G., "A Genetic Algorithm for Job Shop Scheduling", Computers and Industrial Engineering, Vol. 25, No. 1-4, Pergamon, 1995.
[3] DE JONG, K. A., "Analysis of the Behavior of a Class of Genetic Adaptive Systems", Ph.D. Dissertation, The University of Michigan, Ann Arbor, 1975.
[4] ELIIYI TURSEL, D., SEVIL, B., YUMURTACI, I. O., GULDOGAN URSAVAS, E., ADA, E., "Port Management and Berth Allocation Problem", Ege Academic Review 8 (1): 243-256, 2008.
[5] EREN, A., "Optimum Planning of Container Yards", Ph.D. Dissertation, Dokuz Eylül University, Izmir, 2003.
[6] GEN, M., CHENG, R., "Genetic Algorithms and Engineering Design", New York, USA, 1996.
[7] GOLDBERG, D. E., "Computer-Aided Gas Pipeline Operation Using Genetic Algorithms and Rule Learning", Ph.D. Dissertation, University of Michigan, Ann Arbor, 1983.
[8] GOLDBERG, D. E., "Genetic Algorithms in Search, Optimization and Machine Learning", Addison-Wesley, MA, 1989.
[9] GOKKUS, U., "Liman Muhendisligi Ders Notlari" [Port Engineering Lecture Notes], Celal Bayar University, Faculty of Engineering, Civil Engineering Department, Manisa, 2000.
[10] HOLLAND, J. H., "Adaptation in Natural and Artificial Systems", University of Michigan Press, Ann Arbor, 1975.
[11] SYSWERDA, G., "Schedule Optimization Using Genetic Algorithms", in L. Davis (ed.), Handbook of Genetic Algorithms, New York: Van Nostrand Reinhold, pp. 332-349, 1991.
[12] KOC, M. L., BALAS, C. E., "Tas Dolgu Dalgakiranlarin Genetik Algoritma ile Guvenilirlik Analizi" [Reliability Analysis of Rubble-Mound Breakwaters by Genetic Algorithm]
[13] WINTER, G., PERIAUX, J., GALAN, M., CUESTA, P., "Genetic Algorithms in Engineering and Computer Science", John Wiley & Sons Ltd., England, 1996.
[14] ZHOU, Y., "Study on Genetic Algorithm Improvement and Application", Worcester Polytechnic Institute, 2006.
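The GA mechanics described in METHODS, a population of candidate solutions refined by selection, crossover, and mutation against an objective function, can be sketched generically as below. This is an illustration only; the study used the Genetic Algorithm toolbox in MATLAB, and the objective function and parameter values here are assumptions, not the port model:

```python
import random

def genetic_algorithm(fitness, n_genes, pop_size=40, generations=120,
                      p_mut=0.1, lo=0.0, hi=1.0):
    """Minimal real-coded GA: tournament selection, one-point crossover,
    and uniform mutation.  Maximizes `fitness` over [lo, hi]^n_genes."""
    pop = [[random.uniform(lo, hi) for _ in range(n_genes)]
           for _ in range(pop_size)]
    for _ in range(generations):
        def tournament():
            # Binary tournament: the fitter of two random individuals wins.
            a, b = random.sample(pop, 2)
            return a if fitness(a) >= fitness(b) else b
        nxt = []
        while len(nxt) < pop_size:
            p1, p2 = tournament(), tournament()
            cut = random.randrange(1, n_genes) if n_genes > 1 else 0
            child = p1[:cut] + p2[cut:]          # one-point crossover
            child = [random.uniform(lo, hi) if random.random() < p_mut else g
                     for g in child]             # uniform mutation
            nxt.append(child)
        pop = nxt
    return max(pop, key=fitness)
```

Maximizing a simple one-dimensional objective is enough to exercise the whole loop.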


Proceeding Number: 800/04

Diffusion Bridge Method in Inference of Complex Biochemical Systems
Vilda PURUTÇUOĞLU, Middle East Technical University, Department of Statistics, Ankara, Turkey, [email protected]

Keywords: Diffusion Bridge Method, Bayesian Inference, Biochemical Systems

INTRODUCTION
A set of reactions building a system can be mathematically modelled in different ways. There are three main approaches, namely Boolean, differential-equation, and stochastic methods, for describing biological processes under distinct constraint-based models. Among them, stochastic models take into account the random nature of microscopic molecular collisions. The diffusion approximation is one of the major techniques in the stochastic modelling of large biochemical systems. This approximation basically generates a model from a deterministic differential equation for the dynamics of a probability distribution, called the Fokker-Planck equation [1,2]. But because the available measurements are discrete time-course data, we need to use its discretized version, known as the Euler-Maruyama approximation, which can be written as

ΔYt = μ(Yt, θ)Δt + β^(1/2)(Yt, θ)ΔWt

where μ(Yt, θ) and β(Yt, θ) refer to the drift and diffusion matrices of the states Y, and θ = (λ1, ..., λn) denotes the stochastic reaction rate constants at time t for a total of n reactions whose rates are λi (i = 1, ..., n). On the other hand, ΔWt indicates the independent and identically distributed Brownian random vector generated from the normal distribution with mean zero and variance-covariance matrix equal to the product of the identity matrix I and the discrete time interval Δt, i.e., ΔWt ~ N(0, IΔt) [3,4]. In the inference of θ via the Euler model, the large number of missing states and the high correlation between the states and θ are the two major challenges in the calculation. The diffusion bridge method is one of the recent and advanced approaches for estimating θ under such a model.
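A minimal sketch of the Euler-Maruyama scheme above, written for a scalar state (the paper's version is multivariate, with matrix-valued drift and diffusion); the drift and diffusion functions used in the example are illustrative assumptions:

```python
import math
import random

def euler_maruyama(mu, beta_sqrt, y0, dt, n_steps):
    """Simulate dY = mu(Y) dt + beta^(1/2)(Y) dW with the
    Euler-Maruyama scheme, for a scalar state Y."""
    path = [y0]
    y = y0
    for _ in range(n_steps):
        dw = random.gauss(0.0, math.sqrt(dt))  # Brownian increment ~ N(0, dt)
        y = y + mu(y) * dt + beta_sqrt(y) * dw
        path.append(y)
    return path
```

With the diffusion term set to zero, the scheme reduces to the deterministic Euler method, which gives a simple way to check it.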
In this study, as a novelty, we implement this method for the first time in the inference of a realistically large system, with MCMC and data-augmentation approaches, and compare our results with previous findings obtained under high dependency.

LITERATURE REVIEW
To deal with the underlying problems in the inference of θ, several alternative methods have been suggested [5,6]. Column-wise updates of the states and block updates of the reaction rates are the major approach in this estimation [4,8,9]. This approach can be used within a special case of the Metropolis-Hastings algorithm, called the Metropolis-within-Gibbs (M-W-G) technique, when the states Y are augmented by adding latent Y between every pair of observed Y [7]. Although the M-W-G method is successful in dealing with the bias in the Euler approximation, the high dependency in the updates of Y under the column-wise strategy, or of the augmented Y between each pair of observed time points, results in slow convergence rates of the MCMC algorithm and leads to high computational demand, in particular for large biochemical systems [8,9]. To solve the underlying correlation problem, Golightly and Wilkinson [10] propose block updates of the latent states by constructing a diffusion bridge between observed time points.


METHODS
In the diffusion bridge approach, similar to the column-wise idea, the inference is conducted within an M-W-G algorithm. But unlike that technique, here we update simultaneously multiple latent states between each pair of observed time points, while the states can be composed of either partially observed measurements or fully augmented values. Then, by inserting a conditional density within the Euler model, the states can be generated from a multivariate normal distribution. On the other hand, to estimate the reaction rates, we perform block updates within a random-walk algorithm whose proposed rates come from the normal distribution.

FINDINGS & CONCLUSION
From the results, it is seen that the diffusion bridge method is more successful in decreasing the high correlation between the augmented Y. However, since the dependency between θ and Y is retained, the convergence rate can vary with the number of augmented Y within every pair of observed time points and the number of observations in Y.

REFERENCES
[1] N. G. Van Kampen. Stochastic Processes in Physics and Chemistry. Amsterdam: North-Holland, 1981.
[2] J. M. Bower and H. Bolouri. Computational Modelling of Genetic and Biochemical Networks. Massachusetts Institute of Technology, Second Edition, 2001.
[3] D. J. Wilkinson. Stochastic Modelling for Systems Biology. Chapman and Hall/CRC, 2006.
[4] A. Golightly and D. J. Wilkinson. Bayesian inference for stochastic kinetic models using a diffusion approximation. Biometrics, 61 (3), 781-788, 2005.
[5] R. J. Boys, D. J. Wilkinson, and T. B. L. Kirkwood. Bayesian inference for a discretely observed stochastic kinetic model. Statistics and Computing, 18, 125-135, 2008.
[6] D. J. Wilkinson. Handbook of Parallel Computing and Statistics, chapter on Parallel Bayesian Computation, Chapman and Hall/CRC, 477-501, 2006.
[7] A. Gelman, J. B. Carlin, H. S. Stern, and D. B. Rubin. Bayesian Data Analysis.
Chapman and Hall/CRC, 2004.
[8] V. Purutçuoğlu and E. Wit. Bayesian inference for the MAPK/ERK pathway by considering the dependency of the kinetic parameters. Bayesian Analysis, 3 (4), 851-886, 2008.
[9] V. Purutçuoğlu and E. Wit. Bayesian inference of the complex MAPK pathway under structural dependency. Journal of Statistical Research, 6 (1), 1-17, 2009.
[10] A. Golightly and D. J. Wilkinson. Bayesian sequential inference for nonlinear multivariate diffusions. Statistics and Computing, 16, 323-338, 2006.


Proceeding Number: 800/07

Investigating Zipf’s Laws on Turkish
Bahar KARAOĞLAN, Ege University, International Computing Institute, İzmir, Turkey, [email protected]
Senem KUMOVA METİN, İzmir University of Economics, Faculty of Engineering and Computer Science, İzmir, Turkey, [email protected]
Bekir Taner DİNÇER, Muğla University, Department of Statistics, Muğla, Turkey, [email protected]

Keywords: Zipf laws, least effort principle, distribution of words, vocabulary balance, distribution of meaning, law of burstiness

INTRODUCTION
In this study, we investigate the applicability of Zipf’s laws to Turkish. These empirical laws are formulated using mathematical statistics and are based on the principle of least effort. The laws offer a means of formalization for natural languages by providing relations for four different properties of language. The fitness of a language to Zipf’s laws may enrich the language models defined for several Natural Language Processing applications.

LITERATURE REVIEW
The basic philosophy underlying Zipf’s laws is that there is a balance between the parameters governing a relation, such that as one increases the other decreases, so that the product of these parameters is a constant constituting the equilibrium in the least-effort battle. Many researchers in the field of statistical linguistics have investigated the fitness of Zipf’s laws for different languages. Zipf stated four laws, on the frequency distribution of words, vocabulary balance, the distribution of meanings, and burstiness. Zipf’s first law states that there is an inverse relation between the frequency of a word and its rank when the words in a corpus are ordered from most frequent to least frequent. This inverse relation may be used to predict the frequencies of the words in a large corpus. The first law has been studied and modified by several researchers (Mandelbrot, 1952; Baayen, 1996; etc.).
The law of vocabulary balance defines a relation that enables mathematical calculation of the vocabulary size of a corpus: the number of different words having the same frequency of occurrence is inversely proportional to that frequency. The law of meanings relates the frequency of a word to the number of different meanings the word is expected to take on. Zipf's law of burstiness focuses on the distribution of content words, and in this respect it differs from the other three laws: it considers not only the frequency of the words but also how they are distributed within the document. Zipf explains the distribution of content words with power laws consistent with the least effort principle. The principle simply says that if a speaker or writer uses a content word once during a conversation or text about a given topic, he is likely to use the same word again instead of a synonym, in order to spend less effort.

METHODS
The ultimate goal of this study is to investigate the applicability of Zipf's laws to Turkish and to derive the Zipfian parameters for further studies in automatic processing of the language. We used several Turkish corpora developed by different institutions, both for training and for testing. Moreover, we explored Zipf's first law on both stemmed and surface-form corpora to investigate the effect of stemming in Turkish. The corpora were subjected to preprocessing steps such as tokenization, stemming, and punctuation removal before the experimentation step.
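Zipf's first law can be checked with a few lines of code. The sketch below is an illustration of the rank-frequency relation only, not the study's actual corpus pipeline; the toy token list is invented so that frequencies follow f(r) = C/r exactly. It counts word frequencies, ranks them, and fits the Zipf exponent by least squares on the log-log scale:

```python
from collections import Counter
import math

def rank_frequency(tokens):
    """Return (rank, frequency) pairs, most frequent word first."""
    freqs = sorted(Counter(tokens).values(), reverse=True)
    return list(enumerate(freqs, start=1))

def zipf_exponent(pairs):
    """Least-squares slope of log(freq) vs log(rank); Zipf's first law
    predicts a slope near -1 for natural-language text."""
    xs = [math.log(r) for r, _ in pairs]
    ys = [math.log(f) for _, f in pairs]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

# Toy corpus whose frequencies follow f(r) = 60 / r exactly.
tokens = []
for rank, word in enumerate(["bir", "ve", "bu", "da", "ile", "en"], start=1):
    tokens += [word] * (60 // rank)

pairs = rank_frequency(tokens)
print(round(zipf_exponent(pairs), 2))  # → -1.0 by construction
```

On real corpora the fitted slope deviates from -1, which is one motivation for the Zipf-Mandelbrot modification of the rank-frequency curve.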


FINDINGS & CONCLUSION
In this article, after a brief overview of Zipf's laws based on the principle of least effort, their applicability to Turkish is experimentally investigated. Four different properties of language have been examined: the frequency distribution of words, vocabulary balance, the distribution of meanings, and burstiness. In the first part of the study, both Zipf's and Mandelbrot's equations for the frequency-rank relation are explored on stemmed and surface-based corpora. The results showed that the Zipfian equation as modified by Mandelbrot resembles the actual distribution of the Turkish corpora more closely. The experiments on the vocabulary balance principle showed that Turkish conforms to Zipf's principle. Another deliverable of this stage is evidence for a relation between the second and first laws, as in English; this supports the view that Zipf's laws model different aspects of Turkish without violating the integrity of the language. Experimentation on a Turkish corpus showed that there is a correlation between the number of meanings of words and their frequencies, as stated in Zipf's law of meanings. This encourages word sense disambiguation research in Turkish by supplying an average number of meanings for each word and an upper bound on the total number of meanings in a given corpus. The correlation exponent we obtained is close to Zipf's exponent for English, so the relation seems to be independent of language, encouraging studies on different languages and supporting a language-independent measure of the meaning content of a corpus. The results of the burstiness experiments on a Turkish corpus showed that if a content word is seen in a corpus several times, one occurrence of the word will very probably be close to the next occurrence. In other words, content words are distributed in scattered groups in a Turkish corpus, as in English. This verifies that Zipf's law of burstiness does not depend on the language.
The behavior revealed in this experiment is explained by topic clustering in the area of information retrieval, and we believe that it may guide related studies, such as indexing, in Turkish. In conclusion, the empirical results from Turkish corpora showed that Zipf's laws are applicable to Turkish with a high degree of similarity to English. The parameters obtained from the experiments can be used to model different aspects of the language in a variety of NLP studies.

REFERENCES
[1] Baayen, R. H. (1996). The effects of lexical specialization on the growth curve of the vocabulary. Computational Linguistics 22, 455-480.
[2] Baayen, R. H. (2001). Word patterns and story shapes: The statistical analysis of narrative style. Literary and Linguistics 2, 61-70.
[3] Dinçer, B. T. (2004). Türkçe için İstatistiksel Bir Bilgi Geri Getirim Sistemi. Ph.D. thesis, UBE, Ege University, Turkey.
[4] Herdan, G. (1960). Type-Token Mathematics: A Textbook of Mathematical Linguistics. The Hague, The Netherlands: Mouton and Co.
[5] İlgen, B. & Karaoğlan, B. (2007). Investigation of Zipf's 'law-of-meaning' on Turkish corpora. ISCIS 2007, Istanbul.
[6] Kocabaş, İ., Kışla, T., & Karaoğlan, B. (2007). Zipf's law of burstiness in Turkish. ISCIS 2007, Istanbul.
[7] Kornai, A. (2002). How many words are there? Glottometrics 4, 61-86.
[8] Mandelbrot, B. (1952). An information theory of the structure of the language. Symposium on Applications of Communication Theory, 486-500. London.
[9] Mandelbrot, B. (1959). A note on a class of skew distribution functions: analysis and critique of a paper by H. A. Simon. Information and Control 2, 90-99.
[10] Manning, C. D. & Schütze, H. (2000). Foundations of Statistical Natural Language Processing. The MIT Press.
[11] Poole, H. (1985). Theories of the Middle Range. Norwood, New Jersey: Ablex.
[12] Powers, D. M. W. (1998). Applications and explanations of Zipf's law. In: J. Burstein and C. Leacock (eds.): Proceedings of the Joint Conference on New Methods in Language Processing and Computational Language Learning, 151-160. Somerset, New Jersey: Association for Computational Linguistics.
[13] Samuelsson, C. (1996). Relating Turing's formula and Zipf's law. The 4th Workshop on Very Large Corpora, Copenhagen, Denmark.
[14] Simon, H. A. (1955). On a class of skew distribution functions. Biometrika 42, 425-440.
[15] Tür, G. (2000). A Statistical Information Extraction System for Turkish. Ph.D. thesis, Bilkent University, Department of Computer Engineering, Ankara, Turkey.
[16] Zipf, G. K. (1935). The Psycho-Biology of Language: An Introduction to Dynamic Philology. The MIT Press.
[17] Zipf, G. K. (1945). The meaning-frequency relationship of words. Journal of General Psychology 33, 251-266.
[18] Zipf, G. K. (1949). Human Behavior and the Principle of Least Effort. Cambridge, MA: Addison-Wesley.


Proceeding Number: 800/08

A Novel Objective Function Embedded Genetic Algorithm for Adaptive IIR Filtering and System Identification

Tayebeh MOSTAJABI, Iran University of Science and Technology, Electrical Department, Tehran, Iran, [email protected]
Javad POSHTAN, Iran University of Science and Technology, Electrical Department, Tehran, Iran, [email protected]

Keywords: Adaptive IIR filtering, genetic algorithm, objective function, system identification

INTRODUCTION
It is well known that adaptive IIR (infinite impulse response) filters are useful in many fields such as echo cancellation, noise reduction, biological systems, speech recognition, communications, and control applications. There are principally two important sets of applications in IIR filter design: adaptive signal processing and adaptive system identification. The design of an IIR filter for adaptive signal processing is mostly based on the desired frequency response (passband), whereas in adaptive system identification the IIR filter is employed for modeling and should therefore behave as similarly as possible to the real system in both the time and frequency domains. In order to use adaptive IIR filtering, a practical, efficient, and robust global optimization algorithm is necessary to minimize the multimodal error function. The genetic algorithm (GA) is a powerful optimization technique for minimizing multimodal functions, and several researchers have proposed methods specifically designed for adaptive IIR filtering applications [2-8, 10-11]. In this paper, GA-based IIR filter design methods from signal processing, which rely on the frequency response, are employed for adaptive system identification, and for this purpose a novel objective function is proposed to enhance the performance of the estimated model in both the time and frequency responses.

LITERATURE REVIEW
Adaptive digital signal processing is an important subject in many applications. Adaptive system identification, adaptive noise cancellation, adaptive channel equalization, and adaptive linear prediction are just some examples of important application areas that have been significantly advanced by adaptive signal processing techniques. These systems are recursive in nature and greatly benefit from implementing infinite impulse response (IIR) filters [1, 9]. Adaptive IIR filters model physical plants more accurately than equivalent adaptive FIR filters. In addition, they are typically capable of meeting performance specifications with fewer filter parameters. Despite this, IIR structures tend to produce multimodal error surfaces whose cost functions are significantly difficult to minimize; therefore many conventional methods, especially stochastic gradient optimization strategies, may become trapped in local minima. The genetic algorithm is a robust search and optimization technique which finds applications in numerous practical problems. The robustness of GA is due to its capacity to locate the global optimum in a multimodal landscape [4]. Thus several researchers have proposed various methods for using GA in adaptive IIR filtering applications. For instance, in [3-5] GA and input-output data in the time domain are utilized to estimate the parameters of an IIR model for system identification, with a fitness function based on the mean square error (MSE) between the unknown plant and the estimated model. GA has also been extensively employed in adaptive signal processing applications; the earliest application is reported in [2]. In recent years, several successful techniques have been introduced to improve GA capability for signal processing applications


[6, 8, 10, 11]. In [8], a stability criterion embedded in the GA is applied to design robust D-stable IIR filters, but the filters produced by this algorithm do not necessarily describe the dynamic plant; therefore the method is not useful for system identification. In most applications of optimization methods in adaptive IIR filtering, the objective function is designed based on the MSE criterion. In [7], cost functions based on the least mean squared error (LMS) and the mean absolute error (MAE) are considered alongside the MSE. Another example is [6], in which the phase response is considered besides the magnitude response, and a linear-phase filter is sought via a fitness function based on the variance of the phase difference sequence of the designed IIR filter.

METHODS
The recursive expression of an IIR filter and its equivalent transfer function are considered, in which [a1, ..., aN] and [b0, b1, ..., bM] are the filter coefficients that define its poles and zeros, respectively. These parameters are estimated by the genetic algorithm so that the error, as measured by the fitness function, between the frequency response of the designed IIR filter and the real frequency response of the plant is minimal. The typical cost function in adaptive filtering is the mean squared error (MSE) between the frequency response of the unknown system and that of the adaptive filter encoded by each chromosome. With this objective function, the magnitude frequency response of the estimated model is suitable, but it cannot be guaranteed that the model behaves the same as the real system in the time domain as well. In this paper, a novel objective function is proposed to enhance the performance of the estimated model in both the time and frequency responses.

FINDINGS & CONCLUSION
The magnitude frequency response can be guaranteed with the conventional fitness function, but the phase response may not be similar enough to that of the real system. The quality of the magnitude response and the desired passband and stopband behavior may ensure the desired performance in adaptive signal processing applications, whereas in adaptive system identification high quality and similarity over the whole time response and frequency response (magnitude and phase) are required. Therefore, an error term based on variance is suggested in order to control the phase response with respect to the real system. Finally, two objective functions based on MSE, LMS, and variance are proposed in order to attain acceptable performance in both the time and frequency responses. Four typical IIR filters (lowpass, highpass, bandpass, and bandstop) are considered as unknown plants in order to examine the suggested objective functions, which are employed by the GA for system identification. Experimental results, including the step response, impulse response, magnitude and phase responses, Bode diagram, pole-zero map, and root locus diagram, illustrate the capability of the proposed objective functions compared to the conventional one. Numerical results indicate that the suggested fitness functions are effective in building an acceptable model for linear identification.

REFERENCES
[1] Shynk, J. J. (1989). Adaptive IIR filtering. IEEE ASSP Magazine, April 1989.
[2] Etter, D., Hicks, M., & Cho, K. (1982). Recursive adaptive filter design using an adaptive genetic algorithm. Proc. IEEE Int. Conf. on ASSP, vol. 7, May 1982, 635-638.
[3] Ng, S. C., Leung, S. H., Chung, C. Y., Luk, A., & Lau, W. H. (1996). The genetic search approach: A new learning algorithm for adaptive IIR filtering. IEEE Signal Processing Magazine, Nov. 1996, 38-46.
[4] Hegde, V., Pai, S., & Jenkins, W. K. (2000). Genetic algorithms for adaptive phase equalization of minimum phase SAW filters. Proc. 34th Asilomar Conf. on Signals, Systems, and Computers, November 2000.
[5] Pai, S., Jenkins, W. K., & Krusienski, D. J. (2003). Adaptive IIR phase equalizers based on stochastic search algorithms. Proc. of the 37th Asilomar Conf. on Signals, Systems, and Computers, November 2003.
[6] Yang Yu & Yu Xinjie (2007). Cooperative coevolutionary genetic algorithm for digital IIR filter design. IEEE Trans. Industrial Electronics, 54(3), June 2007.
[7] Karaboga, N. & Cetinkaya, B. (2003). Performance comparison of genetic algorithm based design methods of digital filters with optimal magnitude response and minimum phase. Proc. IEEE Int. Symp. on Micro-Nano Mechatronics and Human Science, 2003.
[8] Pan, S. T. (2009). Design of robust D-stable IIR filters using genetic algorithms with embedded stability criterion. IEEE Trans. Signal Processing, 57(8), August 2009.
[9] Krusienski, D. J. & Jenkins, W. K. (2005). Design and performance of adaptive systems based on structured stochastic optimization strategies. IEEE Circuits and Systems Magazine, First Quarter 2005.
[10] Tsai, J. T., Chou, J. H., & Liu, T. K. (2006). Optimal design of digital IIR filters by using hybrid Taguchi genetic algorithm. IEEE Trans. Industrial Electronics, 53(3), June 2006.
[11] Haseyama, M. & Matsuura, D. (2006). A filter coefficient quantization method with genetic algorithm, including simulated annealing. IEEE Signal Processing Letters, 13(4), April 2006.
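The GA-based estimation described under METHODS can be illustrated with a toy experiment. The sketch below is a minimal illustration under our own assumptions (a first-order plant, truncation selection, blend crossover, Gaussian mutation, and a magnitude-only MSE fitness), not the authors' algorithm or parameter choices:

```python
import cmath
import random

def mag_response(b, a, n_pts=64):
    """|H(e^{jw})| of H(z) = B(z)/A(z) on a uniform grid over [0, pi]."""
    out = []
    for k in range(n_pts):
        w = cmath.pi * k / (n_pts - 1)
        z = cmath.exp(-1j * w)          # z^-1 evaluated on the unit circle
        num = sum(bi * z**i for i, bi in enumerate(b))
        den = sum(ai * z**i for i, ai in enumerate(a))
        out.append(abs(num / den))
    return out

def mse(h1, h2):
    return sum((x - y) ** 2 for x, y in zip(h1, h2)) / len(h1)

def ga_identify(target_mag, n_b=2, n_a=1, pop=40, gens=60, seed=1):
    """Toy GA: a chromosome is [b0..b_{n_b-1}, a1..a_{n_a}]; the fitness is
    the MSE between its magnitude response and the unknown plant's."""
    rng = random.Random(seed)
    def fitness(c):
        return mse(mag_response(c[:n_b], [1.0] + c[n_b:]), target_mag)
    popn = [[rng.uniform(-0.9, 0.9) for _ in range(n_b + n_a)]
            for _ in range(pop)]
    for _ in range(gens):
        popn.sort(key=fitness)
        elite = popn[: pop // 4]                      # truncation selection
        children = []
        while len(elite) + len(children) < pop:
            p, q = rng.sample(elite, 2)
            w = rng.random()                          # blend crossover
            children.append([w * x + (1 - w) * y + rng.gauss(0, 0.05)
                             for x, y in zip(p, q)])  # + Gaussian mutation
        popn = elite + children
    best = min(popn, key=fitness)
    return best, fitness(best)

# "Unknown plant": H(z) = (0.3 + 0.2 z^-1) / (1 - 0.5 z^-1)
plant = mag_response([0.3, 0.2], [1.0, -0.5])
coeffs, err = ga_identify(plant)
print(coeffs, err)
```

Because this fitness uses only the magnitude response, a converged chromosome may still differ from the plant in phase and time-domain behavior; that gap is exactly what the paper's proposed objective function targets.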


Proceeding Number: 800/09

Comparison of Various Distribution-Free Control Charts with Respect to FAR Values

Tuğba ÖZKAL YILDIZ, Dokuz Eylül University, Department of Statistics, İzmir, Turkey, [email protected]
Senem ŞAHAN VAHAPLAR, Dokuz Eylül University, Department of Statistics, İzmir, Turkey, [email protected]

Keywords: Statistical Process Control, Nonparametric Methods, Distribution-Free Control Charts, Order Statistics, False Alarm Rate

INTRODUCTION
Statistical process control (SPC) can be defined as the application of statistical methods for monitoring and controlling a process to ensure that it operates at its full potential for producing conforming products. In statistical process control, causes of variation in a process can be detected by using control charts. The control limits of traditional Shewhart control charts are based on the assumption that the quality characteristic is normally distributed. Frequently, however, this normality assumption is not satisfied by real-life data, and this affects the statistical properties of standard control charts.

LITERATURE REVIEW
Balakrishnan et al. (2010) introduced a new distribution-free Shewhart-type control chart that takes into account the location of a single order statistic of the test sample (such as the median) as well as the number of observations in that test sample that lie between the control limits. They derived exact formulas for the alarm rate, false alarm rate, and average run length, provide tables of ARL values and false alarm rates, and compare their chart to the one developed by Chakraborti et al. (2004). In their paper, Chakraborti et al. considered a class of Shewhart-type distribution-free control charts, deriving exact expressions for the run length distribution and the average run length. The chart they proposed is preferable from a robustness point of view.

METHODS
Suppose that there is a reference sample of size m, X1, X2, ..., Xm, with cumulative distribution function FX(x) = F(x), and a test sample of size n, Y1, Y2, ..., Yn, with cumulative distribution function FY(x) = G(x). The purpose is to detect a possible change in the underlying distribution from F(x) to G(x). In nonparametric applications, the sample median M is usually preferred to the sample mean, and the median is compared to suitably chosen control limits obtained from the reference sample. When the distribution of the reference sample is unknown, specific order statistics of the reference sample can be used as control limits.

FINDINGS & CONCLUSION
There are several measures for assessing the performance of a control chart, such as the False Alarm Rate (FAR). FAR is defined as the probability of getting an out-of-control signal while the process is actually in control. This study aims to compare various distribution-free control charts with respect to their FAR values.
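The FAR of such an order-statistic chart can be estimated by simulation. The sketch below is only an illustration with arbitrary choices of m, n, and the limit index j, not one of the charts compared in the study; it also demonstrates the distribution-free property, since the signalling event depends only on the ranks of the combined observations:

```python
import random
import statistics

def far_estimate(m=100, n=5, j=25, reps=4000, sampler=None, seed=7):
    """Monte Carlo estimate of the false alarm rate (FAR) when the control
    limits are the j-th smallest and j-th largest order statistics of the
    reference sample and the plotted statistic is the test-sample median.
    Both samples come from the same (in-control) distribution."""
    rng = random.Random(seed)
    draw = sampler or (lambda r: r.random())
    alarms = 0
    for _ in range(reps):
        ref = sorted(draw(rng) for _ in range(m))
        lcl, ucl = ref[j - 1], ref[m - j]      # X_(j) and X_(m-j+1)
        med = statistics.median([draw(rng) for _ in range(n)])
        if med < lcl or med > ucl:
            alarms += 1
    return alarms / reps

# Distribution-free: the FAR should agree (up to Monte Carlo error)
# across very different in-control distributions.
far_uniform = far_estimate()                                   # U(0, 1)
far_expo = far_estimate(sampler=lambda r: r.expovariate(1.0))  # Exp(1)
print(round(far_uniform, 3), round(far_expo, 3))
```

The two estimates are close because only the relative ranks of the m + n observations matter, which is precisely why such charts can be designed without knowing F(x).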


REFERENCES
[1] Balakrishnan, N., Triantafyllou, I. S., & Koutras, M. V. (2010). A distribution-free control chart based on order statistics. Communications in Statistics - Theory and Methods, 39, 3652-3677.
[2] Chakraborti, S., Van der Laan, P., & Van de Wiel, M. A. (2004). A class of distribution-free control charts. Applied Statistics, 53(3), 443-462.
[3] Bakir, S. T. (2004). A distribution-free Shewhart quality control chart based on signed-ranks. Quality Engineering, 16(4), 613-623.
[4] Montgomery, D. C. (1997). Statistical Quality Control, 3rd ed. John Wiley, New York.
[5] Janacek, G. J. & Meikle, S. E. (1997). Control charts based on medians. The Statistician, 46(1), 19-31.


Proceeding Number: 800/10

Multivariate Regression Splines and their Bayesian Approaches in Nonparametric Regression

Akhlitdin NIZAMITDINOV, Anadolu University, Faculty of Science, Department of Statistics, Eskişehir, Turkey, [email protected]
Memmedaga MEMMEDLI, Anadolu University, Faculty of Science, Department of Statistics, Eskişehir, Turkey, [email protected]

Keywords: Bayesian P-spline, P-GAM, Adaptive regression spline, Thin plate splines, Bayesian adaptive regression spline

INTRODUCTION
Multivariate regression approaches such as generalized additive models, thin plate splines, and penalized generalized additive models have wide application in nonparametric problems. At the same time, their Bayesian versions, Bayesian adaptive regression splines and Bayesian penalized splines, are used in many studies to improve prediction and the assessment of data sets. In this study we compare thin plate splines, penalized generalized additive models, and their Bayesian versions, Bayesian adaptive regression splines and Bayesian penalized splines, through a simulation study and the Boston Housing data set. A simulation study was conducted to evaluate the performances of the above nonparametric techniques. The functions for the simulation were taken from the paper of Smith and Kohn (1997). For each function we took 100 values with 100 replications. The results of the simulation are compared using log10(MSE) with box-plots. As the performance criterion for the Boston Housing data set we used the root mean square error. The results are compared with each other to identify the best estimator.

LITERATURE REVIEW
In this study we used thin plate splines, penalized generalized additive models (P-GAM), and their Bayesian versions: Bayesian adaptive regression splines and Bayesian penalized splines. Below we give a short theoretical background of these techniques. Penalized generalized additive models (Eilers and Marx, 1998) are considered in the form g(μ) = Ba, where B = (1 B1 ... Bp) is the regressor matrix and a = (α, a1, ..., ap)^T. P-splines fit generalized additive models directly through a slightly modified method-of-scoring algorithm and avoid the call to the backfitting algorithm; the technique essentially eliminates the local scoring algorithm. B-splines (de Boor, 1978; Dierckx, 1993) are deliberately overfitted for each GAM component, while the estimation of each coefficient vector aj, j = 1, ..., p, is penalized based on finite differences of adjacent B-spline coefficients. This results in maximizing the penalized version of the log-likelihood

l* = l(y; a) - (1/2) Σ_{j=1}^{p} λj aj^T Pj aj,   (1)

where λj ≥ 0 for all j > 0 are the smoothing parameters. We now take a closer look at the structure of the penalty. Define Pj = (Dj^d)^T (Dj^d), where d = 0, 1, 2, ... . The matrix Dj^d, of dimension (nj - d) × nj, is the building block of the penalty, with its (banded) rows consisting of d-th order polynomial contrasts. For a fixed component j, this banded matrix corresponds to the matrix representation of the difference operator of order d. For the j-th component, we express an (nj - d)-vector of differences as Dj^d aj, where

Dj^0 aj = aj,   Dj^1 aj = {aj,k - aj,k-1}, k = 2, ..., nj,   and   Dj^{d+1} aj = Dj^1 Dj^d aj.   (2)
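The difference penalty of Eqs. (1)-(2) is easy to construct explicitly. The following sketch is our own illustration (the helper names `diff_matrix` and `penalty` are hypothetical, not from the paper); it builds D^d by applying first differences d times and then forms P = (D^d)^T D^d:

```python
def diff_matrix(n, d):
    """D^d: the (n-d) x n matrix of d-th order differences, built by
    applying first differences d times (D^{d+1} = D^1 D^d, as in Eq. (2))."""
    D = [[float(i == j) for j in range(n)] for i in range(n)]  # D^0 = I
    for _ in range(d):
        D = [[D[i + 1][j] - D[i][j] for j in range(len(D[0]))]
             for i in range(len(D) - 1)]
    return D

def penalty(n, d):
    """P = (D^d)^T D^d, the difference penalty of order d from Eq. (1)."""
    D = diff_matrix(n, d)
    return [[sum(D[k][i] * D[k][j] for k in range(len(D))) for j in range(n)]
            for i in range(n)]

# d = 0 gives P = I, the ridge penalty on the B-spline coefficients.
print(penalty(4, 0))
# d = 2 gives banded rows built from the contrast [1, -2, 1].
print(diff_matrix(4, 2))
```

For d = 0 the penalty reduces to the identity (ridge regression with B-splines), while d = 2 yields the familiar banded [1, -2, 1] second-difference contrasts.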


When d = 0, we have Pj = I, which reduces to ridge regression with B-splines. For d > 0, Pj has a banded structure (Marx and Eilers, 1998). Bayesian methods play a significant role among nonparametric regression techniques with splines; using modern simulation approaches for sampling the posterior makes the estimation results more precise. Most studies and papers on the Bayesian approach use the Markov chain Monte Carlo method with the Metropolis-Hastings algorithm and Gibbs sampling. The Bayesian P-splines approach of Brezger and Lang (Lang and Brezger, 2001) extends P-splines to additive models by replacing difference penalties with their stochastic analogues, i.e. Gaussian (intrinsic) random walk priors, which serve as smoothness priors for the unknown regression coefficients. A closely related approach based on a Bayesian version of smoothing splines can be found in (Hastie and Tibshirani, 2000); see also (Carter and Kohn, 1994), who choose state space representations of smoothing splines for Bayesian estimation with Markov chain Monte Carlo (MCMC). Compared to smoothing splines, a more parsimonious parametrization is possible in a P-splines approach, which is of particular advantage in a Bayesian framework where inference is based on MCMC techniques. Adaptive Bayesian regression splines were introduced by Biller (2000), who proposed a fully Bayesian approach to regression splines with automatic knot selection in generalized semiparametric models. As the basis function representation of the regression spline he used the B-spline basis. The reversible jump Markov chain Monte Carlo method allows for estimation of both the number of knots and the knot placement, together with the unknown basis coefficients determining the shape of the spline (Biller, 2000).

METHODS
The Boston Housing data set was used for the analysis with the various techniques. This data set was taken from the StatLib library, which is maintained at Carnegie Mellon University. We also conducted a simulation study to evaluate the performances of the above nonparametric techniques. For the simulation study we used functions from Smith and Kohn (1997). The following three examples were considered for the model: 1. f(x1, x2) = (1/5)exp(-8x1^2) + (3/5)exp(-8x2), where x1 and x2 were distributed independently with mean 0.5 and variance 1. 2. f(x1, x2) = x1 sin(4πx2), where x1 and x2 are distributed independently and uniformly on [0, 1]. 3. f(x1, x2) = x1 x2, where x1 and x2 are distributed normally with mean 0.5, variance 0.05, and correlation 0.5. For each function we took 100 values with 100 replications. The results of the simulation are compared using log10(MSE) with box-plots. As the performance criterion for the Boston Housing data set we used the root mean square error; the results are compared with each other to identify the best estimator. We used the R software to analyze the data set and to run the simulation study, and created some helper subprograms and functions to simplify the calculations. The empirical analyses were made with the mgcv package (S. Wood, 2010) of R, the functions for calculating penalized generalized additive models, the BayesX software (Lang and Brezger, 2001; http://www.stat.uni-muenchen.de/~lang/bayesx), and, for the adaptive Bayesian regression spline, the functions from http://www.stat.uni-muenchen.de/sfb386, which were written in C++.
FINDINGS AND CONCLUSIONS
In this study we made an empirical analysis using the regression techniques penalized generalized additive models and thin plate splines, and their Bayesian analogues, Bayesian penalized splines and Bayesian adaptive regression splines. As these analyses show, all of these techniques can approximate the data sets well. The comparison of the simulation results shows that different data sets yield different characteristics: on some simulated data sets a non-Bayesian regression technique outperforms the Bayesian techniques, while for other data sets the Bayesian regression and penalized splines show better results.

REFERENCES
[1] Biller, C. (2000). Adaptive Bayesian regression splines in semiparametric generalized linear models. Journal of Computational and Graphical Statistics, 9(1), 122-140.
[2] Carter, C. & Kohn, R. (1994). On Gibbs sampling for state space models. Biometrika, 81, 541-553.
[3] De Boor, C. (1978). A Practical Guide to Splines. Springer, New York.
[4] Dierckx, P. (1993). Curve and Surface Fitting with Splines. Clarendon Press, Oxford.
[5] Eilers, P. H. C. & Marx, B. D. (1996). Flexible smoothing using B-splines and penalized likelihood (with comments and rejoinders). Statistical Science, 11(2), 89-121.
[6] Eilers, P. H. C. & Marx, B. D. (1998). Direct generalized additive modeling with penalized likelihood. Computational Statistics and Data Analysis, 28, 193-209.
[7] Fahrmeir, L. & Lang, S. (2001). Bayesian inference for generalized additive mixed models based on Markov random field priors. Journal of the Royal Statistical Society C (Applied Statistics), 50, 201-220.
[8] Green, P. J. & Silverman, B. W. (1994). Nonparametric Regression and Generalized Linear Models. Chapman and Hall, London.
[9] Hastie, T. & Tibshirani, R. (2000). Bayesian backfitting. Statistical Science, 15, 193-223.
[10] Lang, S. & Brezger, A. (2001). Bayesian P-splines. Journal of Computational and Graphical Statistics, 13, 183-212.
[11] Smith, M. & Kohn, R. (1997). A Bayesian approach to nonparametric bivariate regression. Journal of Econometrics, 75, 317-343.
[12] Wood, S. N. (2006). Generalized Additive Models: An Introduction with R. Chapman and Hall, London.


Proceeding Number: 800/11

Forecasting via MinMaxEnt Modeling: An Application on the Unemployment Rate

Cigdem Giriftinoglu, Aladdin Shamilov

Keywords: Entropy, forecasting, time series

INTRODUCTION
The maximum entropy principle gives an explicit distribution for an observed time series when the information about the series consists only of the expected values of the autocovariances up to a lag m. This distribution, which has maximum entropy and agrees with our state of knowledge, is a multivariate Gaussian distribution whose dimension equals the number of observations. In this study, when an observed real-valued time series [y1, y2, ..., yN] is given, the forecast value yN+1 is considered as an unknown parameter η, and the entropy optimization function U(η) determined by the time series becomes dependent on η. The distribution p(η0), where η0 is a value at which U(η) reaches its minimum, is called the MinMaxEnt distribution [5]. In this situation, by a theorem proved in [7], the value η0 minimizing the entropy optimization function can be taken as the forecast value. Consequently, we show that a new method based on the MinMaxEnt distribution is successfully applied to time series forecasting via an application on the U.S. unemployment rate.

LITERATURE REVIEW
Forecasting is essential for planning and operational control in a variety of areas such as production management, inventory systems, quality control, financial planning, and investment analysis. There are many forecasting methods in the literature [1-3]. In this study, the discrete time series considered can be viewed as a "single trial" from a stochastic process, since a continuous function can be transformed into a set of discrete observations. To apply the maximum entropy principle, some assumptions about the time series are made. First, the interval between successive observations is assumed to be equal, in order to simplify the mathematical treatment. Another assumption is that the mean of the observations is zero; this is not restrictive at all, because the series can easily be transformed into another series with zero mean.
Furthermore, the stochastic process generating the observed time series will be assumed to be stationary. While the raw time series is represented by [y1, y2,…,yN], where yj is the j th ordered variate, an important feature of the time series is given by the covariance between yj and a value yj+k which is k time lags away. When the information about the time series is given as autocovariances up to a lag m, MaxEnt distribution of the real time series can be determined as a multivariate normal distribution. entropy optimization function via entropy value of the multivariate normal distribution is constructed to define MinMaxEnt distribution [5-7]. METHODS MinMaxEnt Modeling for Forecasting If we consider future value of an observed time series y1, y2,…,yN as parameter γ (yN+1= η), then the maximum entropy probability distribution become a function of γ. This function is denoted by p(η). The MinMaxEnt distribution p(η0), determined by γ0 minimizing the entropy optimization function can be considered as MaxEnt distribution with moment functions and moment values dependent on parameter [8, 9]. Because future value γ involve in moment constraints, entropy optimization functional depends on future value η. Thus, entropy optimization functional denoted U(η) is minimized with respect to η and obtained η0 generate


MinMaxEnt distribution. We have proposed that η0 can be used for forecasting in time series [10]. Similarly, instead of one unknown parameter, it is possible to consider a vector parameter η = (η1, η2,…, ηn) and define the MinMaxEnt distribution accordingly; when there is more than one future value, η is an unknown vector, so several future values of a time series can be estimated by virtue of the MinMaxEnt distribution. In the application of MinMaxEnt modeling for forecasting, the data set consisting of the unemployment rate of the United States from 1909 to 1988, taken from [2], is used.

FINDINGS & CONCLUSION
This paper explored how to apply MinMaxEnt modeling to forecast time series data under a maximum entropy distribution. MinMaxEnt distributions (models) are obtained on the basis of MaxEnt distributions dependent on a parameter, and it is shown that the proposed method can be used for forecasting in time series. To evaluate the performance of the method, it is applied to a real time series consisting of the unemployment rate of the United States; the computations were carried out by a program written in MATLAB. The forecast error, i.e., the difference between the actual value and the forecast value for the corresponding period, is very low, showing that MinMaxEnt modeling provides reliable and efficient forecasting outputs.

REFERENCES
[1] W. S. Wei (2006), Time Series Analysis: Univariate and Multivariate Methods, U.S.A.: Pearson.
[2] B. Pfaff (2008), Analysis of Integrated and Cointegrated Time Series with R, U.S.A.: Springer.
[3] H. Madsen (2007), Time Series Analysis, U.S.A.: Chapman & Hall.
[4] J. N. Kapur and H. K. Kesavan (1992), Entropy Optimization Principles with Applications, New York: Academic Press.
[5] Shamilov, A., "A Development of Entropy Optimization Methods", WSEAS Transactions on Mathematics, Issue 5, Vol. 5, 568-575, 2006.
[6] Shamilov, A., "Generalized entropy optimization problems and the existence of their solutions", Physica A, 382, 465-472, 2007.
[7] Shamilov, A., Giriftinoglu, C., "Generalized Entropy Optimization Distributions Dependent on Parameter in Time Series", WSEAS Transactions on Information Science & Applications, Issue 1, Vol. 7, 102-111, 2010.
[8] Shamilov, A., "Generalized entropy optimization problems with finite moment functions sets", Journal of Statistics and Management Systems, Issue 3, Vol. 13, 595-603, 2010.
[9] Shamilov, A. and Giriftinoğlu, Ç., "Some relationships between entropy values of entropy optimization distributions for time series", The Scientific and Pedagogical News of Odlar Yurdu University, 28, pp. 1017, 2009.
[10] Shamilov, A. and Giriftinoğlu, Ç., "A new method for estimating the missing value of observed time series and time series forecasting", The Scientific and Pedagogical News of Odlar Yurdu University, 28, pp. 1723, 2009.
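The forecasting procedure described above can be sketched numerically. The Python fragment below is a minimal illustration, not the authors' MATLAB program: the forecast candidate η is appended to the zero-mean series, the entropy of the MaxEnt multivariate Gaussian built from the sample autocovariances up to lag m is evaluated, and η0 is found by one-dimensional minimization. The function names, the default lag m, and the bounded search interval are illustrative assumptions.

```python
import numpy as np
from scipy.linalg import toeplitz
from scipy.optimize import minimize_scalar

def gaussian_entropy(series, m):
    """Entropy of the MaxEnt (multivariate Gaussian) distribution whose
    constraints are the biased sample autocovariances up to lag m."""
    n = len(series)
    acov = [np.dot(series[: n - k], series[k:]) / n for k in range(m + 1)]
    sigma = toeplitz(acov)                    # (m+1) x (m+1) covariance matrix
    _, logdet = np.linalg.slogdet(sigma)
    return 0.5 * ((m + 1) * np.log(2 * np.pi * np.e) + logdet)

def minmaxent_forecast(y, m=3, width=5.0):
    """Forecast y[N] as the value eta_0 minimizing U(eta), the entropy of
    the MaxEnt distribution of the series extended by the candidate eta."""
    y = np.asarray(y, dtype=float)
    y = y - y.mean()                          # zero-mean assumption of the paper
    U = lambda eta: gaussian_entropy(np.append(y, eta), m)
    s = y.std()
    res = minimize_scalar(U, bounds=(-width * s, width * s), method="bounded")
    return res.x
```

Calling `minmaxent_forecast(series)` returns the candidate η0; the MinMaxEnt distribution is then the MaxEnt Gaussian evaluated at that value.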


Proceeding Number: 800/12

Comparison of Simplified Bishop and Simplified Janbu Methods in the Determination of the Factor of Safety of Three Different Slopes Subjected to Earthquake Forces
Tülin ÇETİN, Celal Bayar University, Vocational School of Turgutlu, Manisa, Turkey, [email protected]
Yusuf ERZİN, Celal Bayar University, Faculty of Engineering, Department of Civil Engineering, Manisa, Turkey, [email protected]

Keywords: Slope stability, earthquake forces, Simplified Bishop, Simplified Janbu

INTRODUCTION
Analysis of the stability of slopes in terms of the factor of safety (Fs) is important for defining the stability of a slope. Most stability analyses are carried out under static loading; however, in seismically active regions earthquakes are among the important forces that can cause slope failures. A slope becomes unstable when the shear stresses on a potential failure plane exceed the shear resistance of the soil. The additional stresses due to an earthquake further increase the stresses on these planes and decrease the Fs value further. Limit equilibrium methods have been widely adopted for slope stability analysis [1]: a potential sliding surface is assumed prior to the analysis, and a limit equilibrium analysis is then performed on the soil mass above the potential slip surface. Many methods based on this approach are available, for example the Ordinary method [2], Simplified Bishop [3], Simplified Janbu [4], Morgenstern and Price (1965), Spencer (1967), and Sarma (1979). In this study, attempts were made to estimate the Fs values of three slopes with different heights subjected to earthquake forces. To achieve this, a computer program based on the Simplified Bishop and Simplified Janbu methods was prepared in the Matlab programming environment [5]. The Fs values of the three slopes were then estimated by both methods and the results were compared.

LITERATURE REVIEW
Earthquakes are a major trigger of instability in natural and man-made slopes. Often the instability of slopes due to an earthquake causes more destruction and kills more people than the actual earthquake itself [6]. For example, the magnitude 7.5 Guatemala Earthquake of 1976 is reported to have generated at least 10,000 landslides and slope failures [7].
The slope instabilities and resulting disasters after the El Salvador earthquakes of 2001 hit the front pages of papers worldwide [6]. Natural and artificial slopes may become equally unstable during an earthquake, and calculating how a slope will behave during an earthquake, or designing a slope that remains stable under earthquake conditions, is not easy [6]. The factor of safety (Fs), based on an appropriate geotechnical model as an index of stability, is required in order to evaluate slope stability. However, the stability analysis of slopes is difficult because of many uncertainties: the problem is statically indeterminate, and some simplifying assumptions are necessary in order to obtain a critical factor of safety [8]. For slope stability analysis, the limit equilibrium method is widely used by engineers and researchers; it is a traditional and well-established approach [1]. Alkasawneh et al. [9] have shown that limit equilibrium methods are reliable and can be used with confidence to investigate the stability of slopes. Owing to differences in their assumptions, several limit equilibrium methods such as the Ordinary method [2], Simplified Bishop [3], Simplified Janbu [4], Morgenstern and Price (1965), Spencer (1967), and Sarma (1979) have been proposed.


METHODS
In this study, a computer program based on the Simplified Bishop and Simplified Janbu methods was prepared in the Matlab programming environment [5]. The Fs values of three slopes with different heights during earthquakes were then estimated using the prepared program. During the stability analysis, the slope inclination (2 horizontal : 1 vertical) and the properties of the soil in the slope, namely the internal angle of friction, cohesion, and unit weight, were kept constant, while the height of the slope (H), the magnitude of the earthquake (M), the distance between the center of the earthquake and the location of the slope (R), and the frequency of the cyclic loading (N) were varied as follows: H takes the values 5, 10, and 15 m; M the values 6, 7, and 8; R the values 5, 10, 20, 50, and 100 km; and N the values 5, 10, 30, and 60. Comparisons were then made between the results obtained from the Simplified Bishop and Simplified Janbu methods.

FINDINGS & CONCLUSION
In this study, model analyses during earthquakes were performed for three different slope cross sections and one soil type using the prepared program based on the Simplified Bishop and Simplified Janbu methods. The effects on slope stability of the slope height, the magnitude of the earthquake (M), the distance between the center of the earthquake and the slope (R), and the frequency of the cyclic loading (N) were investigated. The analyses yielded low safety factors in both methods for high-magnitude earthquakes when the slope was close to the fault line. In both methods, in addition to the earthquake effects, increasing slope height and the existence of ground water also decreased slope stability by a considerable amount.

REFERENCES
[1] Cheng, Y. M., Lansivaara, T., Wei, W. B. (2007), Two-dimensional slope stability analysis by limit equilibrium and strength reduction methods.
[2] W.
Fellenius, Calculation of the stability of earth dams, in Transactions of the 2nd International Congress on Large Dams, Washington, USA, 445-462, 1936.
[3] A. W. Bishop, The use of the slip circle in the stability analysis of slopes, Géotechnique 5(1), 7-17, 1955.
[4] N. Janbu, Application of composite slip surface for stability analysis, in Proceedings of the European Conference on Stability of Earth Slopes, Stockholm, Sweden, 43-49, 1954.
[5] Cetin, T. (2010), Developing a computer program for analysis of slope stability and comparing different analysis methods, MSc Thesis, Celal Bayar University, Manisa (in Turkish).
[6] Hack, R., Alkema, D., Kruse, G. A. M., Lenders, N., Luzi, L. (2007), Influence of earthquakes on the stability of slopes.
[7] Anon (1997), Report on Early Warning Capabilities for Geological Hazards. Convener of International Working Group, first author Dr. Robert Hamilton, Chairman, International Decade for Natural Disaster Reduction (IDNDR), Early Warning Programme, IDNDR Scientific and Technical Committee, Washington, D.C., USA. Document 12085, IDNDR Secretariat, Geneva, 35 pp.
[8] Ö. Tan, Investigation of soil parameters affecting the stability of homogeneous slopes using the Taguchi methods, Eurasian Soil Science, 39, 1248-1254, 2006.
[9] Alkasawneh, W., Malkawi, A. I. H., Nusairat, J. H., Albataineh, N. (2008), A comparative study of various commercially available programs in slope stability analysis.
[10] H. B. Wang, W. Y. Xu, and R. C. Xu, Slope stability evaluation using back propagation neural networks, Engineering Geology 80, 302-315, 2005.
[11] Matasovic, N. (1991), Selection of methods for seismic slope stability analysis.
[12] Seed, H. B., Idriss, I. M. (1971), Simplified procedure for evaluating soil liquefaction potential, Journal of the Soil Mechanics and Foundations Division, ASCE, Volume 97(9), pp. 1249-1273.
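The Simplified Bishop computation at the core of such a program can be sketched as follows. This is a hedged illustration rather than the thesis program [5]: it assumes a circular slip surface already divided into slices, no pore pressure, and a pseudo-static horizontal seismic force kh·W on each slice whose moment arm is taken approximately equal to the slip-circle radius; the slice data and parameter values below are hypothetical.

```python
import math

def bishop_fs(slices, c, phi, kh=0.0, tol=1e-6, max_iter=100):
    """Simplified Bishop factor of safety for a circular slip surface.

    slices: list of (W, alpha, b) tuples -- slice weight, base inclination
    in radians, and base width.  c, phi: cohesion and friction angle of the
    single soil type.  kh: pseudo-static horizontal earthquake coefficient.
    Bishop's equation is implicit in Fs, so it is solved by fixed-point
    iteration starting from Fs = 1.
    """
    tan_phi = math.tan(phi)
    fs = 1.0
    for _ in range(max_iter):
        resisting = 0.0
        driving = 0.0
        for w, alpha, b in slices:
            m_alpha = math.cos(alpha) + math.sin(alpha) * tan_phi / fs
            resisting += (c * b + w * tan_phi) / m_alpha
            driving += w * math.sin(alpha) + kh * w   # seismic arm ~ radius
        fs_new = resisting / driving
        if abs(fs_new - fs) < tol:
            return fs_new
        fs = fs_new
    return fs
```

The Simplified Janbu method differs mainly in satisfying horizontal force rather than moment equilibrium and in applying an empirical correction factor f0 to the force-equilibrium Fs, which is why the two methods give slightly different safety factors for the same slope.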


Proceeding Number: 800/13

Comparison of MaxMaxEnt and MinMaxEnt Distributions for Time Series in the Sense of Entropy
Aladdin Shamilov, Department of Statistics, Science Faculty, Anadolu University, Eskisehir, Turkey, [email protected]
Cigdem Giriftinoglu, Department of Statistics, Science Faculty, Anadolu University, Eskisehir, Turkey, [email protected]

Keywords: Entropy, information, autocovariance, maximum entropy distribution

INTRODUCTION
In the present study, MaxMaxEnt and MinMaxEnt distributions, obtained on the basis of a MaxEnt distribution dependent on a parameter, are compared in the sense of entropy. It is proved that as the number of autocovariances k increases, the entropy of the MaxMaxEnt distributions decreases; the MinMaxEnt distributions possess the same property, i.e., their entropies form a decreasing sequence as the number of autocovariances increases. Moreover, the entropy of the MaxMaxEnt distribution generated by a given autocovariance vector is greater than the entropy of the MinMaxEnt distribution generated by the same autocovariance vector. By comparing MaxMaxEnt and MinMaxEnt distributions generated by different autocovariance vectors, it is possible to obtain autocovariance vectors generating distributions that are neighbouring in the sense of entropy. It should be noted that the entropy value of a MaxMaxEnt distribution can be less than the entropy of a MinMaxEnt distribution if and only if the number of autocovariances generating the MaxMaxEnt model considerably exceeds the number of autocovariances generating the MinMaxEnt distribution.

LITERATURE REVIEW
Entropy Values of MaxMaxEnt and MinMaxEnt Distributions
Generalized entropy optimization distributions (GEOD) in the form of MaxMaxEnt and MinMaxEnt distributions are defined and investigated from different aspects in [5-7]. In this study, the mentioned distributions are considered in the sense of entropy; specifically, some relations are established between the information worth of autocovariances in MaxMaxEnt and MinMaxEnt modeling. The established results can be applied to many problems, in particular in estimation theory [8-10]. For this reason, MaxMaxEnt and MinMaxEnt distributions corresponding to different autocovariance vectors of a time series are investigated in the sense of entropy.
Let p(η) be the MaxEnt distribution generated by the autocovariance vector of the given stationary time series with parameter η at position s, and let U(η) be the entropy value of this distribution. Let η0 be the value realizing the MinMaxEnt distribution, denoted p(η0), i.e., the minimizer of U(η). (1) Moreover, let η1 be the value realizing the MaxMaxEnt distribution, denoted p(η1), i.e., the maximizer of U(η). (2)
Theorem. Let p(η0) and p(η1) be the MinMaxEnt and MaxMaxEnt distributions, respectively. Then, between the entropy values


of these distributions the inequalities (3) and (4) are satisfied.

FINDINGS & CONCLUSION
MaxEnt distributions corresponding to different numbers of autocovariances are considered, and it is proved that the entropy values of these distributions constitute a monotonically decreasing sequence as the number of autocovariances increases. In this study, it is shown that the MaxMaxEnt and MinMaxEnt distributions likewise form decreasing sequences in the sense of entropy as the number of autocovariances generating these distributions increases. Moreover, the entropy values of MaxMaxEnt and MinMaxEnt distributions generated by the same autocovariances satisfy the inequality that the entropy value of the MaxMaxEnt distribution is greater than that of the MinMaxEnt distribution. It should be noted that the entropy value of a MaxMaxEnt distribution can be less than the entropy of a MinMaxEnt distribution if and only if the number of autocovariances generating the MaxMaxEnt model considerably exceeds the number of autocovariances generating the MinMaxEnt distribution. Consequently, the MinMaxEnt distribution is more informative than the MaxMaxEnt distribution and can be used for solving many numerical problems for time series.

REFERENCES
[1] W. S. Wei (2006), Time Series Analysis: Univariate and Multivariate Methods, U.S.A.: Pearson.
[2] B. Pfaff (2008), Analysis of Integrated and Cointegrated Time Series with R, U.S.A.: Springer.
[3] H. Madsen (2007), Time Series Analysis, U.S.A.: Chapman & Hall.
[4] J. N. Kapur and H. K. Kesavan (1992), Entropy Optimization Principles with Applications, New York: Academic Press.
[5] Shamilov, A., "A Development of Entropy Optimization Methods", WSEAS Transactions on Mathematics, Issue 5, Vol. 5, 568-575, 2006.
[6] Shamilov, A., "Generalized entropy optimization problems and the existence of their solutions", Physica A, 382, 465-472, 2007.
[7] Shamilov, A., Giriftinoglu, C., "Generalized Entropy Optimization Distributions Dependent on Parameter in Time Series", WSEAS Transactions on Information Science & Applications, Issue 1, Vol. 7, 102-111, 2010.
[8] Shamilov, A., "Generalized entropy optimization problems with finite moment functions sets", Journal of Statistics and Management Systems, Issue 3, Vol. 13, 595-603, 2010.
[9] Shamilov, A. and Giriftinoğlu, Ç., "Some relationships between entropy values of entropy optimization distributions for time series", The Scientific and Pedagogical News of Odlar Yurdu University, 28, pp. 1017, 2009.
[10] Shamilov, A. and Giriftinoğlu, Ç., "A new method for estimating the missing value of observed time series and time series forecasting", The Scientific and Pedagogical News of Odlar Yurdu University, 28, pp. 1723, 2009.
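The monotonicity claim above, that entropy decreases as more autocovariances are imposed, can be illustrated numerically. The sketch below is an illustration under our own assumptions, not the authors' construction: it uses the fact that the MaxEnt process constrained by autocovariances up to lag m is a Gaussian AR(m) process, whose entropy rate is 0.5·log(2πe·σ²_m), where σ²_m is the order-m one-step prediction-error variance from the Levinson-Durbin recursion; σ²_m, and hence the entropy rate, is non-increasing in m.

```python
import numpy as np

def prediction_error_variances(acov, max_order):
    """Levinson-Durbin recursion: order-m one-step prediction error
    variances sigma^2_m for m = 0..max_order, given autocovariances."""
    e = acov[0]
    errs = [e]
    a = np.zeros(max_order + 1)               # predictor coefficients
    for m in range(1, max_order + 1):
        k = (acov[m] - np.dot(a[1:m], acov[m - 1:0:-1])) / e  # reflection coeff.
        a_prev = a.copy()
        a[m] = k
        for j in range(1, m):
            a[j] = a_prev[j] - k * a_prev[m - j]
        e *= (1.0 - k * k)                    # never increases since |k| <= 1
        errs.append(e)
    return np.array(errs)

def entropy_rates(series, max_order):
    """Entropy rate of the Gaussian AR(m) MaxEnt model for m = 0..max_order."""
    n = len(series)
    x = np.asarray(series, float) - np.mean(series)
    acov = np.array([np.dot(x[: n - k], x[k:]) / n for k in range(max_order + 1)])
    sig2 = prediction_error_variances(acov, max_order)
    return 0.5 * np.log(2 * np.pi * np.e * sig2)
```

The returned sequence of entropy rates is non-increasing, mirroring the theorem's statement that entropies form a decreasing sequence as the number of autocovariances grows.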


Proceeding Number: 800/14

Examining EEG Signals with Parametric and Non-Parametric Analysis Methods in Migraine Patients and Migraine Patients during Pregnancy
Mustafa ŞEKER, Cumhuriyet Üniversitesi, Divriği Nuri Demirağ MYO, Sivas, Türkiye, [email protected]

Keywords: EEG, Welch, Yule-Walker, Covariance, Burg, Modified Covariance, AR, spectral analysis

INTRODUCTION
Migraine headache is a disease that has been known for centuries. The most important factor in the diagnosis of migraine is the patient's medical history; the age at onset of pain, the location, characteristics, and course of the pain, accompanying symptoms, and the examination of neurological disorders are of great importance for diagnosis. In parallel with developing technology, neurological examination tools such as Magnetic Resonance (MR) imaging, brain tomography, and electroencephalography (EEG) are widely used in clinics to interrogate neurological functions and establish a clear diagnosis [4, 2]. EEG signals have a very complex structure and are difficult to interpret [3]. EEG potentials measured on the surface of the head have peak amplitudes in the range 1-100 µV and a frequency band of 0-100 Hz. Although EEG signals span this wide 0-100 Hz band, the 0-30 Hz range is the most important for their interpretation and analysis; it includes the four basic EEG frequency ranges: 0.5-4 Hz (Delta), 4-8 Hz (Theta), 8-13 Hz (Alpha), and 13-30 Hz (Beta). EEG signals are not periodic, and their frequency, phase, and amplitude vary continuously. Because EEG records are long-term and are interpreted by experts along the time axis, purely visual analysis of these signals is deficient. In parallel with today's technology, different signal processing techniques and statistical analysis methods have been developed to overcome this deficiency and obtain meaningful results from EEG signals. In the literature, there is no EEG finding specific to migraine.
Migraine is basically classified into two groups: migraine with aura and migraine without aura. In an attack of migraine without aura, severe pain lasts 4-72 hours; in migraine with aura, the aura duration is shorter than 60 minutes. It is also examined what sort of changes occur in migraine during pregnancy.

LITERATURE REVIEW
In this paper, the EEG signals of people diagnosed with migraine, of migraine patients during pregnancy, and of healthy persons are examined with spectral analysis methods, to assess whether migraine can be diagnosed by means of EEG. The EEG data used in this study were collected from female patients aged 18 to 35 at the Cumhuriyet University Research and Training Hospital, Department of Neurology, and three experimental groups (17 healthy people, 9 migraine patients, and 5 migraine patients during pregnancy) were studied. The migraine EEG data were obtained from persons diagnosed with migraine without aura, during migraine attacks and while applying to the clinic with complaints of headache. According to the international 10-20 electrode system, all channels were recorded monopolarly and sampled at a 200 Hz sampling frequency. EEG signals are non-stationary, so the signal must be analyzed by separating it into segments [7-8]. In this study, EEG data collected from healthy subjects, migraine patients, and pregnant migraine patients were analyzed using the Burg, Modified Covariance, Covariance, and Yule-Walker parametric spectral analysis methods and the non-parametric Welch method, in an attempt to re-evaluate the value of EEG in migraine diagnosis. We have also compared the performance of the different spectral analysis methods in distinguishing the different conditions. Further, we have examined changes in the EEG spectral characteristics that occur due to migraine during pregnancy [1].


METHODS
A wide range of the EEG frequency spectrum is used to obtain a diagnosis. The EEG frequency spectrum contains four basic frequency ranges: 0-4 Hz (Delta), 4-8 Hz (Theta), 8-13 Hz (Alpha), and 13-30 Hz (Beta). MATLAB 7.0 was used to obtain the analysis results, and all channels of the standard international 10-20 electrode system were analyzed. The non-parametric Welch method and the parametric Burg, Yule-Walker, Covariance, and Modified Covariance methods were used for the analysis, and the Power Spectral Density (PSD) was calculated for each channel. For the three experimental groups (healthy persons, migraine patients, and migraine patients during pregnancy), the average power spectral density was computed for each channel. The average power spectral density was separated into the meaningful EEG frequency ranges of 0-4 Hz (Delta), 4-8 Hz (Theta), 8-13 Hz (Alpha), and 13-30 Hz (Beta), and the three experimental groups were compared with the statistical t-test.

FINDINGS & CONCLUSION
In this study, the EEG signals of healthy persons, migraine patients, and migraine patients during pregnancy were examined with spectral analysis methods. The PSD of the EEG signals was calculated using the non-parametric Welch method and the parametric Burg, Yule-Walker, Covariance, and Modified Covariance methods. The power spectral density was separated into the delta, theta, alpha, and beta frequency bands, and the average power spectral density was obtained for each channel. The averages were compared with the statistical t-test and statistical comparison tables were obtained. The comparison of healthy persons and migraine patients during pregnancy did not reveal a distinctive feature, and no specific channel allowing the diagnosis of migraine was detected with the Welch method. The parametric method giving the best results is the Modified Covariance method.
With this method, the channels Fp1, T4, and Fz were identified as decisive for the diagnosis of migraine. In the future, increasing the number of subjects used in the analysis may yield more meaningful statistical values. Hormonal influences during pregnancy also reduce the effect of migraine; given this effect of pregnancy hormones, the course of migraine headaches might be improved by hormone therapy, and an automatic identification system for migraine could be created by developing fuzzy logic algorithms.

REFERENCES
[1] Şeker, M., Tokmakçı, M., Asyalı, M. H., Seğmen, H., Examining EEG Signals with Parametric and Non-Parametric Analysis Methods in Migraine Patients during Pregnancy, Biyomut 2010, Antalya, Turkey.
[2] Akın, M., Kıymık, M. K., Arserim, M. A., Türkoğlu, İ., Separation of Brain Signals Using FFT and Neural Networks, Biyomut 2000, İstanbul, Turkey.
[3] Dahlöf, C., Linde, M., One-year prevalence of migraine in Sweden: a population-based study in adults, Cephalalgia 2001; 664-671.
[4] Lüleci, A., Maltepe İlçesi Doğurganlık Çağındaki Kadınlarda Migren Prevalansının Araştırılması, TC Sağlık Bakanlığı Dr. Lütfi Kırdar Kartal Eğitim ve Araştırma Hastanesi Nöroloji Kliniği, Uzmanlık Tezi, İstanbul, 2004.
[5] Proakis, J. G., Manolakis, D. G., Digital Signal Processing: Principles, Algorithms, and Applications, Prentice-Hall, New Jersey, 1996.
[6] Übeyli, E. D., Güler, İ., Selection of optimal AR spectral estimation method for internal carotid arterial Doppler signals using Cramer-Rao bound, Comput. Electr. Eng., 30, 491-508, 2004.
[7] Übeyli, E.
D., Güler, İ., Atardamarlardaki Daralma ve Tıkanıklığın Maksimum Olabilirlik Kestiriminin AR Metodu ile İncelenmesi, Gazi Üniversitesi Fen Bilimleri Dergisi, 375-385, 2003.
[8] Zhenmei Li, Jin Shen, Bouxe Tan, Liju Yin, Power System Interharmonic Monitoring Based on Wavelet Transform and AR Model, IEEE Pacific-Asia Workshop on Computational Intelligence and Industrial Application, 2008.
[9] Alkan, A., Subaşı, A., Kıymık, M. K., Epilepsi Tanısında MUSIC ve AR Yöntemlerinin Karşılaştırılması, IEEE 13. Sinyal İşleme Uygulamaları Kurultayı (SİU'05), 2005, Kayseri, Türkiye.
[10] Bronzino, J. D. (1996), The Biomedical Engineering Handbook, IEEE Press, 3rd edition.
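The non-parametric part of the analysis pipeline described above, Welch PSD estimation followed by band-power extraction, can be sketched in a few lines. This is a hedged Python illustration using SciPy rather than the MATLAB 7.0 code of the study; the segment length is an assumed value.

```python
import numpy as np
from scipy.signal import welch

# The four basic EEG frequency bands from the text (Hz)
BANDS = {"delta": (0.5, 4.0), "theta": (4.0, 8.0),
         "alpha": (8.0, 13.0), "beta": (13.0, 30.0)}

def band_powers(x, fs=200.0, nperseg=256):
    """Welch PSD of one EEG channel (fs = 200 Hz as in the study),
    integrated over the four basic EEG bands."""
    f, psd = welch(x, fs=fs, nperseg=nperseg)
    df = f[1] - f[0]
    return {name: psd[(f >= lo) & (f < hi)].sum() * df
            for name, (lo, hi) in BANDS.items()}
```

Per-channel group averages of these band powers can then be compared across the three groups with a t-test (e.g. `scipy.stats.ttest_ind`); the parametric Burg, Yule-Walker, Covariance, and Modified Covariance estimates would replace `welch` with an AR-model-based PSD.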


Proceeding Number: 800/15

The Statistical Analysis of Highly Correlated Gaussian Noise in a Double Well Potential
Raşit Çakır, Department of Physics, İbrahim Çeçen University of Ağrı, Ağrı, Turkey, [email protected]
Osman Çağlar Akın, Department of Physics, Fatih University, İstanbul, Turkey, [email protected]

Keywords: Stochastic processes, long-range correlations, Gaussian noise, double well potential

INTRODUCTION
We examine highly correlated Gaussian fluctuations in a double well potential for the subdiffusion and superdiffusion cases and compare them with the corresponding cases for a free particle. The equation of motion is solved numerically by a fourth-order Runge-Kutta method. We notice that the renewal property of the system is lost due to the effect of the double well potential.

METHODS
We consider a particle in the potential V(x) = x^4 - 2a^2 x^2. The equation of motion is d^2x/dt^2 = -b v - dV/dx, where b is the friction constant and V(x) is the double well potential. We split this into two first-order equations and add the stochastic velocity to the first one:
dx/dt = v + c ξ,
dv/dt = -b v - 4x^3 + 4a^2 x,
where ξ is Gaussian noise with zero mean and unit standard deviation and c is an intensity constant. The Gaussian noise is correlated with a power-law tail. For subdiffusion the scaling is H = 1/3 and the correlation function is C(t) = -1/t^β with β = 2 - 2H = 4/3; for superdiffusion the scaling is H = 2/3 and the correlation function is C(t) = 1/t^β with β = 2 - 2H = 2/3. The equations of motion are solved numerically using a fourth-order Runge-Kutta method with a small time step, but the noise is added only at integer times: for each integer time T, the values x(T+1) and v(T+1) are obtained from x(T) and v(T), and the stochastic velocity is then added to x at T+1. The highly correlated Gaussian noise time series are generated with the traditional Voss algorithm and, alternatively, with the more effective Fourier algorithm.
Once the time series for the correlated noise are obtained, they are used in conjunction with the above procedure to find their effect on a particle moving in a double well potential.

CONCLUSIONS
The results indicate that the long-range correlations yield stretched exponentials, which are akin to power-law behavior in the asymptote; yet as soon as the correlations decay the system tends to look like a Poisson system with an exponential probability distribution function. In this respect the correlations change the behavior of the stochastic system. It is also noticed from the aging analysis that the system is a renewal system.

REFERENCES
[1] R. Cakir, P. Grigolini, A. A. Krokhin, Dynamical origin of memory and renewal, Phys. Rev. E 74, 021108, 2006.
[2] Aldo H. Romero, J. M. Sancho, Katja Lindenberg, First Passage Time Statistics for Systems Driven by Long Range Correlated Gaussian Noises, Fluctuation and Noise Letters, Vol. 2, No. 2, 2002.
[3] Mandelbrot B. B., Van Ness J., Fractional Brownian motions, fractional noises, and applications,


SIAM Rev., 10, 422, 1968.
[4] F. M. Izrailev, A. A. Krokhin, and S. E. Ulloa, Phys. Rev. B 63, 041102(R), 2001.
[5] Weiss U., Quantum Dissipative Systems, World Scientific, Singapore, 1999.
[6] Mandelbrot B. B., The Fractal Geometry of Nature, Freeman, New York, 1977.
[7] Feder J., Fractals, Plenum Press, New York, 1988.
[8] M. Kuno, D. P. Fromm, H. F. Hamann, A. Gallagher, and D. J. Nesbitt, J. Chem. Phys. 115, 1028, 2001.
[9] F. Cichos, J. Martin, and C. von Borczyskowski, Phys. Rev. B 70, 115314, 2004.
[10] X. Brokmann, J.-P. Hermier, G. Messin, P. Desbiolles, J.-P. Bouchaud, and M. Dahan, Phys. Rev. Lett. 90, 120601, 2003.
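The numerical scheme described in METHODS can be sketched as follows. This is a hedged Python illustration: the Fourier-filtering noise generator assumes a fractional-Gaussian-noise spectrum S(f) ∝ f^(1-2H), and the parameter values a, b, c and the sub-step size are illustrative, not the authors'.

```python
import numpy as np

def fourier_noise(n, H, rng):
    """Fourier-filtering algorithm: shape white Gaussian noise so that its
    power spectrum scales as f^(1-2H), i.e. C(t) ~ t^(-beta), beta = 2-2H."""
    f = np.fft.rfftfreq(n, d=1.0)
    amp = np.zeros_like(f)
    amp[1:] = f[1:] ** ((1.0 - 2.0 * H) / 2.0)
    spec = amp * (rng.standard_normal(f.size) + 1j * rng.standard_normal(f.size))
    xi = np.fft.irfft(spec, n)
    return xi / xi.std()          # zero mean, unit standard deviation

def simulate(n_steps, H=2.0 / 3.0, a=1.0, b=0.5, c=0.1, substeps=100, seed=0):
    """RK4 integration of dv/dt = -b v - 4x^3 + 4 a^2 x between integer
    times; the correlated stochastic velocity c*xi is added to x only at
    integer times, as described in the text."""
    rng = np.random.default_rng(seed)
    xi = fourier_noise(n_steps, H, rng)
    dt = 1.0 / substeps
    deriv = lambda x, v: (v, -b * v - 4.0 * x**3 + 4.0 * a**2 * x)
    x, v = a, 0.0                 # start at the bottom of the right-hand well
    traj = np.empty(n_steps)
    for t in range(n_steps):
        for _ in range(substeps):         # deterministic RK4 sub-steps
            k1x, k1v = deriv(x, v)
            k2x, k2v = deriv(x + 0.5 * dt * k1x, v + 0.5 * dt * k1v)
            k3x, k3v = deriv(x + 0.5 * dt * k2x, v + 0.5 * dt * k2v)
            k4x, k4v = deriv(x + dt * k3x, v + dt * k3v)
            x += dt * (k1x + 2 * k2x + 2 * k3x + k4x) / 6.0
            v += dt * (k1v + 2 * k2v + 2 * k3v + k4v) / 6.0
        x += c * xi[t]                    # noise added only at integer times
        traj[t] = x
    return traj
```

The waiting times between well crossings of the returned trajectory can then be histogrammed to compare against the stretched-exponential behavior discussed in the conclusions.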


Proceeding Number: 800/16

A Novel Sentiment Classification Approach Based on Support Vector Machines
Arda ALADAG, Turk Telekomunikasyon A.S., Computer Engineer, Ankara, Turkey, [email protected]
Cagdas Hakan ALADAG, Hacettepe University, Department of Statistics, Ankara, Turkey, [email protected]
Mehmet Selcuk ARSLAN, Arsel Group Advertising and Media Planning, Chemical Engineer, Ankara, Turkey, [email protected]

Keywords: Automated text classification, opinion extraction, polarity mining, sentiment classification, support vector machines

INTRODUCTION
Sentiment analysis aims to determine the attitude of a speaker or writer with respect to some topic; it belongs to a broad area of natural language processing, computational linguistics, and text mining [4]. The attitude may be a judgment or an evaluation that ultimately tends to a polarity. Automatic sentiment classification is useful in many areas. It can be used to classify product reviews into positive and negative categories, so that potential customers can get an overall idea of how a product is perceived by other users [1]. It has proven useful for companies, recommender systems, and editorial sites to create summaries of people's experiences and opinions consisting of subjective expressions extracted from reviews (as is commonly done in movie ads) or even just a review's polarity: positive ("thumbs up") or negative ("thumbs down") [3]. In this study, a new sentiment classification approach based on support vector machines is proposed that allows the user to predict the polarity of a review and easily observe the subjective parts within that review. A review of the literature on text and sentiment classification is given, along with a detailed explanation of some crucial preprocessing steps that are generally employed in this field. These preprocessing steps are used in a support vector machine based text categorization model proposed for the sentiment classification problem.
In a text categorization model, when support vector machines are used as the text classifier, scalability will be poor if raw data are used, since text data have a high-dimensional feature space. Hence, some well-known preprocessing techniques from the automated text classification domain are also employed in the proposed method in order to overcome the high dimensionality of the problem.

LITERATURE REVIEW
The automated classification of textual data into predetermined categories has been of great interest since the availability of documents in digital form increased rapidly over the last 15 years. Among the many classification methods that have been used for text categorization are Bayesian belief networks, decision trees, nearest neighbor algorithms, Bayesian classifiers, Boolean decision rules, and support vector machines. Support vector machines have been shown to be highly effective at traditional text categorization, generally outperforming Naive Bayes. They are large-margin, rather than probabilistic, classifiers, in contrast to Naive Bayes and maximum entropy classifiers. Support vector machines are based on the structural risk minimization principle from computational learning theory: the idea is to find a hypothesis h for which we can guarantee the lowest true error, where the true error of h is the probability that h will make an error on an unseen and randomly selected test example.

June 1 -4, Kusadasi, Aydin, Turkey http://iscse2011.gediz.edu.tr

2nd International Symposium on Computing in Science & Engineering

Turney [3] used an unsupervised machine-learning technique to estimate the semantic orientation of a word based on its association with the words "excellent" and "poor", i.e., the extent to which the word co-occurs with "excellent" and "poor" in a text collection. A phrase has a positive semantic orientation when it has good associations (e.g., "romantic ambience") and a negative semantic orientation when it has bad associations (e.g., "horrific events"). Na et al. [1] studied automatic sentiment classification, automatically classifying documents as expressing positive or negative sentiments. Their study investigates the effectiveness of using a machine-learning algorithm, the support vector machine, on various text features to classify on-line product reviews; in effect, they compare the effectiveness of using negation phrases against a traditional machine-learning technique. Pang and Lee [2] suggested a novel machine-learning method that applies text-categorization techniques to just the subjective portions of a document. Extracting these portions can be implemented using efficient techniques for finding minimum cuts in graphs, which greatly facilitates the incorporation of cross-sentence contextual constraints.

METHODS
The sentiment classifier is designed as a standalone desktop application that uses MySQL, a well-known database management system, and was developed in the Java programming language. Some preprocessing operations are applied to the reviews before they are fed into the support vector machine model. In the preprocessing phase, word extraction is an essential step: to identify words in movie reviews, tokenizing algorithms are applied, and an alphabetic tokenizer and an n-gram tokenizer are employed for extracting features in the proposed method. Words with specific parts of speech, such as nouns, verbs, adjectives, and adverbs, can be selected before training the classifier model.
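As an illustration only (the stop list, reviews, and function names below are hypothetical, and Lovins stemming and POS tagging are omitted), the tokenizing and weighting steps just described might be sketched as:

```python
import math
import re
from collections import Counter

STOP_WORDS = {"the", "a", "of", "and", "is"}   # placeholder stop list

def alphabetic_tokens(text):
    """Alphabetic tokenizer: maximal runs of letters, lower-cased."""
    return re.findall(r"[a-z]+", text.lower())

def ngram_tokens(text, n=2):
    """Character n-gram tokenizer (n=2 is the bi-gram option)."""
    chars = re.sub(r"[^a-z]", "", text.lower())
    return [chars[i:i + n] for i in range(len(chars) - n + 1)]

def weight_reviews(reviews, scheme="tfidf"):
    """Build one weighted term vector per review.
    scheme: 'presence' (1/0), 'count', or 'tfidf'."""
    docs = [[t for t in alphabetic_tokens(r) if t not in STOP_WORDS]
            for r in reviews]
    n_docs = len(docs)
    df = Counter(t for d in docs for t in set(d))    # document frequency
    vectors = []
    for d in docs:
        tf = Counter(d)
        if scheme == "presence":
            vectors.append({t: 1 for t in tf})
        elif scheme == "count":
            vectors.append(dict(tf))
        else:                                        # tf * idf
            vectors.append({t: c * math.log(n_docs / df[t])
                            for t, c in tf.items()})
    return vectors
```

Note that a term appearing in every review receives a TF-IDF weight of zero and is effectively discarded, which is one way such weighting combats the high dimensionality of text features.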
Part-of-speech tagging is optional, and in some of the experiments conducted it is useful for comparing results. For the support vector machine implementation, LIBSVM 2.86 is used. The preprocessing parameters employed are:
- POST: the part-of-speech tags to retain in the reviews (nouns, adverbs, adjectives, and verbs) can be selected;
- Heuristics: the negation phrase "not" is distributed among the other words within the same clause;
- Tokenizer: four tokenizer types are available (alphabetic, bi-gram, 3-gram, and 4-gram);
- Stemming: words within review content are stemmed using the Lovins stemmer algorithm;
- Stop List: stop words included in the stop list are filtered out and ignored while building the word co-occurrence matrix;
- Word Co-occurrence Matrix: three weighting options are available: presence-absence values (1 or 0), word counts, or TF-IDF values.

CONCLUSION
Using the parameters listed above, experiments were conducted with different combinations of parameters. For testing purposes, Polarity Dataset v2.0, available at [5], was used to evaluate the performance of the method. The Polarity Dataset consists of 2000 movie reviews extracted from the IMDB archive of the rec.arts.movies.reviews newsgroup, processed, down-cased into text files, and also used by Pang and Lee [2]. There are 1000 positive and 1000 negative instances; 1600 of them (800 positive, 800 negative) were selected for training, and 400 (200 positive, 200 negative) for testing. In this study, 13 experiments were conducted for different parameter combinations; the parameters used are summarized in Table 1. The experiments show that the proposed method produces accurate results: for both positive and negative instances, over 80% accuracy was achieved.


Table 1 – Parameters used in experiments

Parameter Name              Parameter Options
POST                        Nouns, Adjectives, Adverbs, Verbs
Heuristics                  Yes, No
Tokenizer                   Alphabetic, 2-gram, 3-gram, 4-gram
Stemming                    Yes, No
Stop List                   Yes, No
Word Co-occurrence Matrix   Absence-Presence (1 or 0), Word Counts, TF-IDF
SVM Type                    C-SVM, nu-SVM, one-class SVM
Kernel Type                 linear, polynomial, radial basis, sigmoid
Kernel Function Variables   degree, gamma, coef0, nu, epsilon

REFERENCES
[1] Na, J.C., Khoo, C., Wu, P.H.J. 2005. Use of Negation Phrases in Automatic Sentiment Classification of Product Reviews. Library Collections, Acquisitions and Technical Services, 29, 180-191.
[2] Pang, B. and Lee, L. 2004. A Sentimental Education: Sentiment Analysis Using Subjectivity Summarization Based on Minimum Cuts. Proceedings of the ACL 2004, Barcelona, Spain, Main Volume (pp. 271-278).
[3] Turney, P.D. 2002. Thumbs Up or Thumbs Down? Semantic Orientation Applied to Unsupervised Classification of Reviews. Proceedings of the 40th Annual Meeting of the ACL (Association for Computational Linguistics), Philadelphia, PA, July 7-12 (pp. 417-424).
[4] http://en.wikipedia.org/wiki/Sentiment_analysis
[5] http://www.cs.cornell.edu/people/pabo/movie-review-data/
[6] Halteren, H.V., Zavrel, J., Daelemans, W. 2001. Improving Accuracy in NLP Through Combination of Machine Learning Systems. Computational Linguistics, 27(2), 199-229.
[7] Pang, B., Lee, L., Vaithyanathan, S. 2002. Thumbs Up? Sentiment Classification Using Machine-Learning Techniques. Proceedings of the 2002 Conference on Empirical Methods in Natural Language Processing, Philadelphia, PA, July 6-7 (pp. 79-86).
[8] Sebastiani, F. 2002. Machine Learning in Automated Text Categorization. ACM Computing Surveys, 34(1), 1-47.
[9] Yang, Y., Liu, X. 1999. A Re-examination of Text Categorization Methods. 22nd Annual International SIGIR, Berkeley (pp. 42-49).
[10] Zhou, L., Chaovalit, P. 2008. Ontology Supported Polarity Mining. Journal of the American Society for Information Science and Technology, 59(1), 98-110.


Proceeding Number: 800/18

Use of a Combination of Statistical Computing Methods in Determining Traffic Safety Risk Factors on Low-Volume Rural Roads in Iowa, USA

Reginald R. Souleyrette, Iowa State University, Department of Civil, Construction, and Environmental Engineering, Ames, Iowa, USA, [email protected]
Mehmet Caputcu, Gediz University, Department of Civil Engineering, Izmir, Turkey, [email protected]
Thomas J. McDonald, Iowa State University, Institute for Transportation, Ames, Iowa, USA, [email protected]
Robert B. Sperry, Iowa State University, Institute for Transportation, Ames, Iowa, USA, [email protected]
Zachary N. Hans, Iowa State University, Institute for Transportation, Ames, Iowa, USA, [email protected]
Dan Cook, Iowa State University, Department of Civil, Construction, and Environmental Engineering, Ames, Iowa, USA, [email protected]

Keywords: Test of proportions, scatterplot matrix of correlations, ordered probit model, traffic safety

INTRODUCTION
Iowa features an extensive surface transportation system, with more than 110,000 miles of roadway, most of which is under the jurisdiction of local agencies. Given that Iowa is a lower-population state, most of this mileage is located in rural areas with low traffic volumes of less than 400 vehicles per day. However, these low-volume rural roads also account for about half of all recorded traffic crashes in Iowa, including a high percentage of fatal and major-injury crashes. This study was undertaken to examine these crashes, identify major contributing causes, and develop low-cost strategies for reducing their incidence. Iowa's extensive GIS crash and roadway system databases were used to obtain the needed data. Using descriptive statistics, a test of proportions, and crash modeling, various classes of rural secondary roads were compared with similar roads controlled by the State of Iowa in terms of crash frequency, severity, density, and rate for numerous selected factors that could contribute to crashes.
LITERATURE REVIEW
Low-volume roads are defined in the 2003 Manual on Uniform Traffic Control Devices (MUTCD), Part 5, as follows: a low-volume road shall be a facility lying outside of built-up areas of cities, towns, and communities; it shall have a traffic volume of less than 400 AADT (annual average daily traffic); it shall not be a freeway, expressway, interchange ramp, freeway service road, or a road on a designated state highway system; and it shall be classified as either paved or unpaved. The Rural Transportation Initiative of the U.S. Department of Transportation (USDOT 2006) has reported that factors such as rural terrain, faster vehicle speeds, relatively high alcohol involvement, and relatively long emergency response times make rural road crashes more likely than urban crashes to result in fatalities. Crash rates have been found to be higher on low-volume roads than on other roads. In a study of a sample of nearly 5,000 miles of paved two-lane rural roads in seven states, Zegeer et al. (1994) computed a crash rate of 3.5 per million vehicle miles (MVM) on low-volume roads (defined in that study as roads with ≤2,000 average daily traffic [ADT]), versus 2.4 per MVM on all high-volume roads. Zegeer et al. determined that fixed-object crashes, rollover crashes, and other run-off-road crashes were more frequent on the lower-volume roads. It should be noted that the Zegeer et al. study defined low-volume as less than 2,000 VPD; in Iowa, many primary roads fall into this category, and 2,000 VPD is not considered particularly low volume in some


states. The present study defines low-volume as less than 400 VPD, consistent with the 2003 MUTCD definition. Additionally, some studies have found that crash severity is relatively high on low-volume roads. Caldwell and Wilson (1997) compared injury crash rates on unpaved county road sections in Albany County, Wyoming, with injury crash rates on all roads in the state: injury risk on county roads was determined to be more than five times higher than on all roads. However, that report was based on a very small sample of road sections and crashes. The present study covers seven years of crash data from all ninety-nine counties in the state of Iowa.

METHODS
Descriptive statistics for seven years of available GIS crash data (2001-2007, later updated to 2008) were prepared for rural low-volume (≤400 average daily traffic) secondary road crashes, as well as for all other two-lane rural road crashes for comparison purposes. The descriptive statistics included a broad range of both crash-level and driver/vehicle-level attributes. Iowa DOT Geographic Information Management Systems (GIMS) roadway data were integrated with crash data for analysis. System-wide crash data were used to investigate trends in frequency, rate, and severity with respect to the various crash characteristics and general road types. This also facilitated a test-of-proportions analysis. A test of proportions was employed to determine which crash characteristics were overrepresented in different low-volume road categories. The comparison group was primary road crashes: the distribution of low-volume rural secondary road crashes was compared with that of rural primary roads to find the main factors that increase risk on low-volume roads relative to primary two-lane roads. The proportions of various crash characteristics on roads of different jurisdictions, traffic volume ranges, and surface types were computed.
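The test of proportions employed here is the standard pooled two-sample z-test, which can be sketched as follows; the function name and the crash counts in the usage are illustrative, not the study's actual data.

```python
from math import erf, sqrt

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def two_proportion_z_test(x1, n1, x2, n2, alpha=0.05):
    """Pooled two-sample z-test for equality of proportions.
    x1/n1: crashes with a given characteristic out of all crashes on one
    road class; x2/n2: the same on the comparison class. Returns the
    z-statistic, the two-sided p-value, and the decision at level alpha."""
    p1, p2 = x1 / n1, x2 / n2
    p = (x1 + x2) / (n1 + n2)                 # pooled proportion
    se = sqrt(p * (1 - p) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_value = 2.0 * (1.0 - norm_cdf(abs(z)))
    return z, p_value, p_value < alpha
```

For example, with hypothetical counts of 60 rollover crashes out of 100 on low-volume roads versus 40 out of 100 on primary roads, the difference is significant at the 0.05 level used in the study, and rollover crashes would be flagged as overrepresented.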
These proportions were then tested statistically to determine whether differences between pairs of proportions were significant. Given adequate sample sizes and the assumption of independence of the proportions, the z-statistic for a standard normal random variable was used, with a significance level of 0.05. The results point to the types of crashes that are particularly problematic on low-volume roads. These characteristics were also used to develop a crash-based statistical model. To find the most significant factors in all crashes on rural roads carrying 400 vehicles per day (vpd) or less between 2001 and 2007, an ordered probit model was estimated using the Nlogit 4.0 extension of the Limdep software. The dependent variable was crash severity, and a 90 percent confidence level was used for selecting relevant independent variables. Three severity levels were considered: fatal and major injury (serious); minor or possible injury; and property damage only. The model was designed to predict the marginal effects of the independent variables on each of the three crash severity levels.

FINDINGS & CONCLUSION
The test of proportions compared more than 200 aspects of crashes. More than 60 of these factors were statistically significantly more risky on low-volume roads. Since the test of proportions involved various pairs of comparison groups, it was hard to tell which of these 60+ factors were the most important. The crash-level statistical model (the ordered probit model) supplemented the test-of-proportions analysis, accounted for the effect of multiple causal factors and explanatory variables, and helped determine a more limited number of top risk factors. For low-volume rural roads (400 vpd or less, excluding intersections with roads carrying any higher traffic), the following factors were found to increase the severity of crashes:
1. Paved surfaces
2. Spring/summer months (April through September)
3. Weekends
4. Fixed objects struck
5. Overturn/rollover crashes (more severe)
6. Multi-vehicle broadside collisions (more severe)
7. Impaired driving, including alcohol and/or drug involvement
8. Daytime
9. Speeding
10. Younger (≤19) and older (≥65) driver involvement
11. Counties with less total rural population and less VMT per rural population


12. Counties with positive traffic control at intersections (information was available for 73 of the 99 counties)

The following factors were found NOT to increase the severity of crashes:
1. Animal collisions
2. Weather, environment, and surface-related factors

While a crash-level model is useful in identifying and verifying the effect of variables on crash severity, it is not able to identify specific locations of potential problems. For that, a road- or segment-based model is required. Development of a segment-based model is the subject of a follow-on project currently being conducted at InTrans, Ames, Iowa, USA.

REFERENCES
[1] Caldwell, R. C., Wilson, E. M. A Safety Improvement Program for Rural Unpaved Roads. MPC Report No. 97-70, January 1997.
[2] FHWA and NHI. Guide to Safety Features for Local Roads and Streets. 1992.
[3] Iowa Department of Transportation, Office of Traffic and Safety, Engineering Bureau, Highway Division. Historical Summary of Travel, Crashes, Fatalities, and Rates (1970-2007) - State of Iowa. Updated October 8, 2008.
[4] Khattak, A. J., Pawlovich, M. D., Souleyrette, R. R., and Hallmark, S. L. Factors Related to More Severe Older Driver Traffic Crash Injuries. Journal of Transportation Engineering, May-June 2002.
[5] Ksaibati, K. and Evans, B. Wyoming Rural Roads Safety Program. TRB 88th Annual Meeting, Washington, D.C., November 2008.
[6] Liu, L. and Dissanayake, S. Speed Limit-Related Issues on Gravel Roads. Kansas State University, August 2007.
[7] Liu, L. and Dissanayake, S. Examination of Factors Affecting Crash Severity on Gravel Roads. Kansas State University, November 17, 2008.
[8] Madsen, M. Farm Equipment Crashes on Upper Midwest Roads. Midwest Rural Agricultural Safety and Health Forum (MRASH), 2008.
[9] Manual on Uniform Traffic Control Devices for Streets and Highways (MUTCD), Part 5: Traffic Control Devices for Low-Volume Roads. FHWA, U.S. Department of Transportation, 2003 Edition.
[10] Neenan, D. Driver Education Survey Results on Rural Roadway Driving - The National Education Center for Agricultural Safety. Midwest Rural Agricultural Safety and Health Forum (MRASH), 2008.
[11] Roadway Safety Tools for Local Agencies - A Synthesis of Highway Practice. NCHRP Synthesis 321, 2003.
[12] Russell, E. R., Smith, B. L., and Brondell, W. Traffic Safety Assessment Guide. Kansas State University, Civil Engineering Department, April 1996.
[13] Souleyrette, R. R. Guidelines for Removal of Traffic Control Devices in Rural Areas. 2005.
[14] Tate III, J., Wilson, E. Adapting Road Safety Audits to Local Rural Roads. MPC Report No. 9896B, October 1998.
[15] The American Traffic Safety Services Association. Low Cost Local Roads Safety Solutions. 2006.
[16] US Department of Transportation Rural Transportation Initiative. May 23, 2006.
[17] http://www.communityinvestmentnetwork.org/nc/single-news-item-states/article/us-department-oftransportation-rural-transportationinitiative/terrain%2C%20faster%20speeds/?tx_ttnews[backPid]=782&cHash=e4906f03c4, last reviewed June 27, 2009.
[18] Zegeer, C. V., Stewart, R., Council, F., Neuman, T. R. Accident Relationship of Roadway Width on Low-Volume Roads. TRR 1445, 1994.
[19] Zegeer, C. V., Stewart, R., Council, F., Neuman, T. R. Roadway Widths for Low-Traffic-Volume Roads (NCHRP Report 362). 1994.


Proceeding Number: 800/19

Detecting Similarities of EEG Responses in Dichotic Listening

Alper VAHAPLAR, Dokuz Eylül University, Department of Computer Science, Izmir, Turkey, [email protected]
C. Cengiz ÇELİKOĞLU, Dokuz Eylül University, Department of Statistics, Izmir, Turkey, [email protected]
Murat ÖZGÖREN, Dokuz Eylül University, Department of Biophysics, Izmir, Turkey, [email protected]

Keywords: EEG, Dichotic Listening, Similarity, Clustering, Data Mining

INTRODUCTION
The dichotic listening (DL) paradigm has an important role in brain asymmetry studies at the behavioral level. In dichotic listening, the subject is presented with auditory stimuli in both ears and then reports the syllable heard. During this procedure, the EEG signals of the subjects were recorded with a 64-channel cap [1]. Measures of similarity are required in a wide range of radar, sonar, communications, remote sensing, artificial intelligence, and medical applications in which one signal or image is compared with another. A statistical treatment based on a few standard statistical relationships will be described and used to derive a new measure of signal similarity; this statistical test will then be modified to detect similarity in behavior rather than in amplitude. In this study, a predefined pattern of the response EEG signal will be passed over the single sweeps and compared with the responses to each stimulus. The signals will be compared by means of the Zm statistical test in order to detect similarity in a behavioral manner. The most similar signals and the response times of the stimuli will be determined. As a result of this study, the most similar responses to the given dichotic stimuli will be clustered, and the relations between brain asymmetry and electrodes will be determined. The signal similarity method will be used to detect behavioral patterns in EEG recordings.
LITERATURE REVIEW
The electroencephalogram (EEG) reflects the electrical activity of the brain as recorded by placing several electrodes on the scalp. The EEG is widely used for diagnostic evaluation of various brain disorders, such as determining the type and location of the activity observed during an epileptic seizure, or for studying sleep disorders [1]. Dichotic listening means presenting two auditory stimuli simultaneously, one in each ear. The subject reports which of the two syllables was perceived best. The test follows a typical sequence of events in which a dichotic or diotic stimulus is presented, followed by the subject reporting what they heard, usually out of a list of six syllables or two tones. In the dichotic listening test of auditory laterality, consonant-vowel syllables such as ba, da, ga, ka, pa, and ta are used [1]. Many basic processing operations, such as matched filtering, cross-correlation, and beam formation, may be interpreted as being based on measures of similarity. These related operations typically form the foundation of the detection, classification, localization, association, and registration algorithms employed in semi-autonomous sensor systems [2]. A statistical treatment of a delay-and-sum beamformer will now be described and used to derive the new measure of signal similarity. The derivation is based on a few standard statistical relationships. A hypothesis test is performed with the null hypothesis being that there is no signal present and that the waveforms entering the beamformer contain only zero-mean Gaussian-distributed noise. It is assumed that any direct current (DC) offset in the data (e.g., sensor bias) or frequencies that are of no interest (e.g., wind or self noise) have been


removed by a pre-whitening stage. If the null hypothesis is rejected, it is assumed that a localizable signal is present [2].

METHODS
In this study, the EEG recordings of 2 different subjects (one with right-ear advantage, the other with left-ear advantage) will be examined. The peak values and response times for each stimulus will be determined, and from this data a template pattern of responses will be constructed. Using this template signal, each response of the subject will be compared to detect similarity in amplitude and in behavioral manner. The most similar signals will be grouped and used for clustering types of stimulus. Brain asymmetry will also be considered within the scope of the study. To prepare the EEG recordings, each response to a stimulus will first be extracted from the whole recording and labeled as right-ear advantaged, left-ear advantaged, or homonym (both syllables the same). Using these signals, a template pattern will be constructed against which each signal is compared. The comparisons will be performed in two ways: first by amplitude similarity, second by behavioral similarity. The most similar and dissimilar responses and stimuli will be detected, and conclusions about syllable grouping will be drawn from the results. Another concern of the study is response times: these will be investigated by comparing ear advantage and stimulus type, and the delays and average response times will be calculated. The dichotic stimuli and the electrodes will be analyzed by clustering. The responses most similar to different stimuli will be arranged, and these syllable groups will be compared by means of time, amplitude, and behavior.

FINDINGS – CONCLUSION
In this study, EEG, dichotic listening, and the Zm statistical similarity measure will be introduced. The Zm test statistic, which is normally used for amplitude similarity, will be used for behavioral similarity.
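The Zm statistic itself is not defined in this abstract, so as a generic stand-in the template-versus-sweep comparison can be illustrated with the Pearson correlation, which ignores amplitude scaling and offset and therefore captures the shape ("behavior") of a response rather than its size. The function names and signals below are hypothetical.

```python
from math import sqrt

def shape_similarity(template, sweep):
    """Pearson correlation of two equal-length single sweeps: invariant
    to scale and offset, so it compares shape rather than amplitude.
    (A generic stand-in; NOT the Zm statistic of Kennedy [2].)"""
    n = len(template)
    mt = sum(template) / n
    ms = sum(sweep) / n
    num = sum((t - mt) * (s - ms) for t, s in zip(template, sweep))
    den = sqrt(sum((t - mt) ** 2 for t in template) *
               sum((s - ms) ** 2 for s in sweep))
    return num / den

def most_similar(template, sweeps):
    """Index of the single sweep whose shape best matches the template."""
    return max(range(len(sweeps)),
               key=lambda i: shape_similarity(template, sweeps[i]))
```

A sweep that is a scaled, shifted copy of the template scores 1.0 even though its amplitude differs, which is the distinction between amplitude similarity and behavioral similarity drawn above.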
Dichotic listening EEG recordings will be investigated in detail, and the results and effects of brain asymmetry will be examined. Comparing each response for each stimulus, new conclusions will be discussed. Syllable analysis will make it possible to display the possible groups and/or the most dominant or recessive syllables. Working in the time domain will provide the opportunity to analyze response times and delays in the EEG signal.

REFERENCES
[1] Bayazıt, O., Öniz, A., Güntürkün, O., Hahn, C. and Özgören, M. Dichotic listening revisited: Trial-by-trial ERP analyses reveal intra- and interhemispheric differences. Neuropsychologia, Article in Press, 2008.
[2] Kennedy, H. L. A New Statistical Measure of Signal Similarity. Information, Decision and Control, IEEE, 2007.
[3] Han, J. and Kamber, M. Data Mining - Concepts and Techniques. Morgan Kaufmann Academic Press, 2001.
[4] Erdoğan, U., Bayazıt, O., Taşlıca, S. EEG kayıt ve paradigma yöneticisi olarak AT89C52 Mikrokontrolör Tabanlı Gömülü Sistem Donanım/Assembly Yazılım Ve Matlab Kontrol Yazılım Tasarımı [AT89C52 microcontroller-based embedded system hardware/assembly software and Matlab control software design as an EEG recording and paradigm manager]. Genç Bilim İnsanları ile Beyin Biyofiziği 1. Çalıştayı, İzmir, Darboğazlar ve Çözüm Arayışları Bildiri Özet Kitabı, 47, 2007.
[5] Hugdahl, K. Symmetry and Asymmetry in the Human Brain. Academia Europaea, European Review, 13: 119-133, 2005.
[6] Moon, T. K. Similarity Methods in Signal Processing. IEEE Transactions on Signal Processing, 44(4), April 1996.


Proceeding Number: 800/22

Applying Decision Tree on Incident Management Data Base for Service Level Agreement

Filiz ERSÖZ, Turkish Military Academy, Defence Sciences Institute, Department of Operational Research, Ankara, Turkey, [email protected]
Ahmet HAYRAN, Gazi University, Informatics Institute, Ankara, Turkey, [email protected]
Cevriye GENCER, Gazi University, Industrial Engineering, Ankara, Turkey, [email protected]

Keywords: Data Mining, Classification, Service Management Data Base, Call Center

INTRODUCTION
In recent years, especially in Turkey, government agencies and large business organizations have been building unified contact centers and service desks to give good, effective service to their employees and customers. The service desk also goes by other titles: help desk, contact center, call center. Although the missions and goals of these organizations are broadly similar, their strategies, operations, organizations, and technologies may differ in their contact center and service desk architecture, because in this architecture each organization defines its own communication channels (for example, 24/7 call center operator service, e-mail, web, mobile), services and service levels, work processes, workflow models, and organizational roles according to employees' professions and experience. In this study, the incident database of the Ministry of National Education was investigated to provide valuable structured data for improving service quality. A data mining process based on IBM SPSS Modeler Professional (Clementine), a professional data mining tool, was carried out to discover insights in the data, classifying and clustering the information in order to interpret it and decide how to improve service quality. Data mining is an extension of statistical analysis that identifies valuable hidden patterns and subtle relationships by extracting information from a collection of data through techniques such as decision trees, clustering analysis, rule induction, and statistical approaches.
Data mining uses a combination of machine learning, statistical analysis, modelling techniques, and database technology.

LITERATURE REVIEW
In the study by Anand et al., a subset of incidents from facilities in Harris County, TX, extracted from the National Response Center Database, was investigated. By classifying the information into groups and using data mining (DM) techniques, interesting patterns of incidents according to characteristics such as type of equipment involved, type of chemical released, and causes involved were revealed, and these were further used to modify the annual failure probabilities of equipment (Anand et al. 2006). The results of data mining indicate customer relationships, customer segmentation, and customer valuation, such as credit rating and market basket analysis (Rud 2001, Berry et al. 1997, Rygielski 2002, Hwang et al. 2004). Another study, conducted by S. C. Hui and G. Jha, investigates how to apply data mining techniques to extract knowledge from a database to support two kinds of customer service activities: decision support and machine fault diagnosis. A data mining process based on the data mining tool DBMiner was investigated to provide structured management data for decision support. In addition, a data mining technique that integrates neural networks, case-based reasoning, and rule-based reasoning has been proposed to search unstructured customer service records for machine fault diagnosis. The proposed technique has been implemented to support intelligent fault diagnosis over the World Wide Web (Hui and Jha 2000).


The use of computer technology in decision support is now widespread and pervasive across a wide range of business and industry. This has resulted in the capture and availability of data in immense volume and proportion (Apte and Weiss 1997). Many examples can be cited; most are decision support systems based on KDD (knowledge discovery in databases). KDD can be defined as the non-trivial process of identifying valid, novel, potentially useful, and ultimately understandable patterns in data (Fayyad, Piatetsky-Shapiro and Smyth 1996). Data mining is a phase, and indeed the central step, of the KDD process, concerned with applying computational techniques (DM algorithms implemented as computer programs) to actually find patterns in the data (Dzeroski 2008). Data mining software analyzes data from many different dimensions to categorize it and summarize the relationships identified; it also makes it possible to find correlations and patterns among dozens of fields in large relational databases (Luo 2008).

METHODS
In this study, the incident database of the Ministry of National Education was investigated to provide valuable structured data for improving service quality. Data for this analysis are drawn from nearly five hundred thousand incidents in the incident database. These data consist of call center and help desk data coming through alternative communication channels, i.e., web, call center, e-mail, and web-based help desk, and cover help desk records submitted between May 2008 and April 2009. IBM SPSS Modeler was used to analyze the incident management database for the service level agreement, applying decision tree analysis. Classification is the task of classifying data into predefined groups; decision trees, link analysis, and memory-based reasoning are some of the classification methods. The decision tree is powerful in its functions and a very popular tool for classification and prediction.
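The core of decision-tree induction as used here is ranking candidate split attributes; CHAID, the algorithm applied in this study, ranks them by a chi-squared test of independence against the target. A minimal sketch follows; the incident rows and field names in the usage are hypothetical, and real CHAID additionally merges similar categories and applies Bonferroni-adjusted significance tests.

```python
from collections import Counter

def chi2_statistic(rows, attr, target):
    """Pearson chi-squared statistic of independence between a candidate
    split attribute and the target: the criterion CHAID uses to rank
    splits (sketch only; category merging and p-value adjustment omitted)."""
    n = len(rows)
    attr_counts = Counter(r[attr] for r in rows)
    tgt_counts = Counter(r[target] for r in rows)
    observed = Counter((r[attr], r[target]) for r in rows)
    chi2 = 0.0
    for a, na in attr_counts.items():
        for t, nt in tgt_counts.items():
            expected = na * nt / n            # under independence
            chi2 += (observed[(a, t)] - expected) ** 2 / expected
    return chi2

def best_split(rows, candidate_attrs, target):
    """Attribute with the strongest association with the target."""
    return max(candidate_attrs,
               key=lambda a: chi2_statistic(rows, a, target))
```

Applied recursively to the chosen attribute's subsets, this split selection grows exactly the kind of location-by-error-category tree reported in the findings.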
It is constructed top-down: specific instances are put in sets, and as the tree grows, smaller subsets which are mainly leaves and branches are gradually divided. Among all algorithms of decision trees, the most popular one is Chi-squared Automatic Interaction detection (CHAID) algorithm for classification, and regression tree for decision making. FINDINGS & CONCLUSION In this article, the problem is based on “improving service quality”. According to dataset, service quality depends on the solution time of incidents. Moreover, idea that forms the basis of incident management is the restore a normal service operation as quickly as possible. Thus, ensuring that the best possible levels of service quality and availability are maintained. Analyzes the effect of location and error categories on solution time is evaluated. The results of the analysis indicate that data mining function results at maximum three levels, tree depth in that location and the error categories were taken as independent for dependent solution time. The nodes 8 and 13’s solution time is highly more than other nodes and nodes 19, 14, 9 and 11. It is obvious that there are service level problems at every incident error category in locations of node 8, these are “Hakkari” and “Sirnak. So, that supports not to take on interest of incidents coming from this location. The Nodes 9,11,13,17 and 19 describe that “Furniture” and “Infrastructure” type incidents’ solution time is more than “Computer and Peripheral” type incidents. Also, incidents in every category coming from node 3’s locations have nearly the highest solution time. This study illustrates the Ministry of National Education for reevaluation by operation experts of this project and managers to help improving incident management process and setting next year’s objectives REFERENCES [1] ANAND, S., KEREN, N., TRETTER, M. J., WANG, Y., O’CONNOR, T. M., and MANNAN, S. M. 
(2006), "Harnessing Data Mining to Explore Incident Databases," Journal of Hazardous Materials, 130, 33-41.
[2] APTE, C., and WEISS, S. (1997), "Data Mining with Decision Trees and Decision Rules," Future Generation Computer Systems, 13, 197-210.
[3] BERRY, M., and LINOFF, G. (1997), Data Mining Techniques for Marketing, Sales and Customer Support, New Jersey: John Wiley & Sons.
[4] DZEROSKI, S. (2008), Introduction: Data Mining Tasks, Patterns, Data Mining Algorithms, Elsevier B.V.
[5] FAYYAD, U., PIATETSKY-SHAPIRO, G., and SMYTH, P. (1996), From Data Mining to Knowledge Discovery: An Overview, Advances in Knowledge Discovery and Data Mining, pp. 1-34. Cambridge, MA: MIT Press.

June 1-4, Kusadasi, Aydin, Turkey http://iscse2011.gediz.edu.tr

2nd International Symposium on Computing in Science & Engineering

[6] HUI, S.C., and JHA, G. (2000), "Data Mining for Customer Service Support," Information & Management, 38, 1-13.
[7] HWANG, H., JUNG, T., and SUH, E. (2004), "An LTV Model and Customer Segmentation Based on Customer Value: A Case Study on the Wireless Telecommunication Industry," Expert Systems with Applications, 26, 181-188.
[8] RUD, O. (2001), Data Mining Cookbook: Modeling Data for Marketing, Risk and Customer Relationship Management, Canada: John Wiley & Sons.
[10] LUO, Q. (2008), Advancing Knowledge Discovery and Data Mining, Workshop on Knowledge Discovery and Data Mining.
[11] RYGIELSKI, C., WANG, J., and YEN, D. (2002), "Data Mining Techniques for Customer Relationship Management," Technology in Society, 24, 483-502.
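The decision-tree classification described in METHODS can be sketched in a few lines. This is an illustrative stand-in, not the paper's IBM SPSS Modeler workflow: scikit-learn has no CHAID implementation, so a CART regression tree is used instead, and the incident records, location codes, and solution times below are hypothetical.

```python
import pandas as pd
from sklearn.preprocessing import OrdinalEncoder
from sklearn.tree import DecisionTreeRegressor

# Hypothetical incident records; the real study used ~500,000 incidents.
incidents = pd.DataFrame({
    "location":       ["A", "A", "B", "B", "C", "C", "A", "B"],
    "error_category": ["Computer", "Furniture", "Computer", "Infrastructure",
                       "Computer", "Furniture", "Infrastructure", "Furniture"],
    "solution_hours": [4, 30, 6, 48, 5, 28, 40, 35],
})

# Location and error category are the independent variables,
# solution time the dependent variable, as in the study.
X = OrdinalEncoder().fit_transform(incidents[["location", "error_category"]])
y = incidents["solution_hours"]

# max_depth=3 mirrors the "maximum three levels" of tree depth reported.
tree = DecisionTreeRegressor(max_depth=3, min_samples_leaf=2,
                             random_state=0).fit(X, y)
print(f"training R^2: {tree.score(X, y):.2f}")
```

Each leaf of the fitted tree then corresponds to a group of locations and error categories with a characteristic mean solution time, which is how the node-level findings above can be read.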


Proceeding Number: 900/02

An Alternative Approach to Promoting Traffic Safety Culture in Communities: Traffic Safety Data Service – The Iowa Case
Mehmet Caputcu, Gediz University, Department of Civil Engineering, Menemen, Izmir, Turkey, [email protected]
Eric J. Fitzsimmons, Iowa State University, Department of Civil, Construction, and Environmental Engineering, Ames, Iowa, USA, [email protected]

Keywords: Traffic safety data; GIS crash maps; pivot tables; on demand, quick-response, and customized analysis for data requests

INTRODUCTION

The AAA Foundation for Traffic Safety, which has initiated a global action for road safety (2011), reported that 1.3 million people are killed on the world's roads each year and 3.5 million people globally are injured, many of whom are disabled as a result. The newsletter also indicates that road deaths are the most frequent cause of death for young people worldwide (including in the U.S.). The AAA Foundation has also provided a projection for the future: annual road deaths globally are forecast to rise to 1.9 million by 2020, and by 2015 road deaths will be the leading health burden for children. "Zero death" is a target being adopted by a growing number of agencies in the USA. However, it is widely discussed that legislatures do not give traffic safety the importance and priority it deserves. The initiative of the AAA Foundation promotes a "decade of action for road safety" and aims to influence governments to stabilize and then reduce global road deaths by 2020. This extended abstract presents background information on a service administered by Iowa State University's Institute for Transportation. The unit is called the Iowa Traffic Safety Data Service (ITSDS) and supports traffic safety across the highway network by analyzing traffic crashes in the State of Iowa and quickly disseminating the resulting information. This ongoing study will attempt to investigate the historical development of comparable units and efforts worldwide, analyze how they serve the goal of raising awareness of the most common traffic safety issues, and make suggestions for modifications to the program.


LITERATURE REVIEW

The United States Federal Highway Administration (FHWA) states that a successful highway safety improvement program uses a multi-disciplinary approach known as the "four E's": engineering, enforcement, emergency response, and education strategies. Engineering refers to the geometric design of road facilities and features in accordance with traffic safety specifications, which are constantly updated through research and crash experience. Additionally, engineering can also relate to the vehicle, in which technology is constantly evolving to protect drivers and pedestrians when crashes occur. Education relates to road users' awareness of safety risks. Drivers must also be taught the safe and proper way to use roadway facilities, and this is a life-long learning process. Enforcement aims to help drivers change their driving habits and comply with the rules on roadway facilities. Finally, emergency response refers to how quickly emergency responders can get to a crash location and provide medical assistance. Traffic safety research throughout the world has provided results and toolboxes to develop and enhance this multidisciplinary approach to highway safety. A critical aspect of successfully developing these tools, and of implementing a multidisciplinary program to reduce crashes in a state or targeted area, is providing agencies with the most accurate historical vehicle crash and roadway data. Souleyrette et al. (1998) submitted a two-phase report for a project called "GIS-based Accident Location and Analysis System (GIS-ALAS)" to the Office of Transportation Safety at the Iowa Department of Transportation. A GIS-based crash database would provide more capability than its predecessors, such as the PC-based accident location and analysis system (PC-ALAS). The development of GIS-ALAS later led to the Iowa Traffic Safety Data Service initiative.
Background

The Iowa Traffic Safety Data Service (ITSDS) is administered by the Institute for Transportation as a joint service between Iowa State University, the Iowa Department of Transportation (Iowa DOT), and the Iowa Governor's Traffic Safety Bureau (GTSB). ITSDS serves state and local agencies, researchers, and members of the public interested in reducing the number of traffic crashes in their jurisdiction by providing historical vehicle crash information together with the associated historical roadway geometry. ITSDS relies heavily on access to the Iowa DOT's crash and roadway database and utilizes geographic information systems (GIS) to clean, sort, and plot data for a specified location within the state, or statewide. The database is actively updated, edited, and reviewed by the Office of Traffic Safety at the Iowa DOT, and is initially created by processing actual crash reports received from police stations throughout the state.


ITSDS Operations

Data requests are received via phone, email, or the form available on the ITSDS web site. Visitors to the web site can also access a list of recently completed projects to see whether similar studies have been completed for the desired location, or to explore how they could benefit from ITSDS. The ITSDS staff, including trained students, responds to data requests as quickly as possible, works directly with public and private agencies or individuals, assesses their needs, and communicates suggestions. Generally, ITSDS fulfills requests in three ways: first, the requested project can be an original, new study; second, the same project can be repeated for the same area/location with more current data for a different period of time; third, a project can be applied to another area in the same or a similar format. Since its establishment, ITSDS has completed numerous project requests from state, county, and local engineers, law enforcement agencies, and university researchers. Advocacy groups willing to contribute to the awareness of traffic safety in their communities, and citizens concerned about a traffic safety problem of any kind in their area, have also benefited from the service. ITSDS uses various applications of geographic information systems and other crash analysis programs developed by the Iowa DOT. The data presented to requesters by ITSDS include GIS-based maps illustrating crashes and related attributes, interpretable tabular summaries (e.g. frequency data), diagrams, and reports.
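The tabular summaries mentioned above (the "pivot tables" named in the keywords) can be sketched with pandas. The county names, severity codes, and counts below are hypothetical, not ITSDS data.

```python
import pandas as pd

# Hypothetical crash records; counties, severities, and years are illustrative.
crashes = pd.DataFrame({
    "county":   ["Story", "Story", "Polk", "Polk", "Story", "Polk"],
    "severity": ["Injury", "PDO", "Injury", "Fatal", "PDO", "PDO"],
    "year":     [2007, 2007, 2008, 2008, 2008, 2007],
})

# Crash frequency by county and severity, one column per year.
summary = pd.crosstab([crashes["county"], crashes["severity"]], crashes["year"])
print(summary)
```

In practice such a frequency table would be joined with roadway geometry in GIS to produce the location-specific maps the service delivers.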

Examples of Past Requests

ITSDS is equipped to handle a wide variety of crash analysis requests, including:

• Historical crashes for specific jurisdictions, road segments, and intersections
• Severity of crashes
• Alcohol-related crashes
• Seat-belt usage
• Cross-median crashes
• Urban pedestrian (and bicycle) crashes
• Weather conditions
• High risk rural roads analysis
• Driver-related factors, e.g., impaired, younger, older
• Environmental conditions, e.g. animal involvement
• Attributes related to injured persons
• Roadway conditions
• Vehicle-related factors

PROGRAM OUTPUTS

• It assists design engineers and policy makers by supporting engineering investments and improvements.
• It indirectly helps to promote a "traffic safety culture" within the community and improves road users' perception of risk factors on the road.
• It indirectly facilitates the deployment of new safety technologies by integrating readily available data services into the relevant activities of co-operating agencies or individuals.
• It constitutes a perfect example of collaboration between a university, a research center, and state agencies.


Examples of Benefits to Community

• The road safety audit data sets, and the audits themselves, have directly led to improvements on some of those roadways, including signing, paved shoulders, etc. Enforcement efforts (or visibility) have typically increased as well.
• The corridor enforcement data typically lead to increased enforcement in high crash locations.
• Data regarding child (ages 0-18) restraint use and/or injuries, and young driver crash data, have been used to support legislative agendas to change restraint laws and graduated driver's license requirements.
• Cities interested in red light running crashes have utilized data to assess whether this problem exists at certain intersections and, if so, to prioritize site selection.
• ITSDS is often represented in government meetings to provide technical input and comments as necessary.

Side Benefits

• It guides agencies such as law enforcement and city governments.
• It fills the gap between what safety data users can gather for themselves and what they can obtain from experts.
• It funds research assistants (students).
• It enhances students' sense of social responsibility.
• It can lead to new research ideas.
• It satisfies the curiosity of citizens.

Engineers, researchers, law enforcement officers, and emergency response units in the State of Iowa continuously request data from ITSDS so they can direct their efforts more effectively. Any country would benefit from such a unit if it were successfully implemented and sustained. Modifications to the methodology of the program may need to be considered under the varying conditions of different locations. The present study aims to find how it can be implemented with the available data in Turkey and other countries.

ACKNOWLEDGEMENT

Special thanks to Dr. Reginald Souleyrette and Zachary Hans…

REFERENCES/RESOURCES

1. Proposals of the Iowa Traffic Safety Data Service submitted to the Office of Transportation Safety at Iowa Department of Transportation. Ames, Iowa. 2000-2009.
2. Souleyrette, R., Strauss, T., Estochen, B., Pawlovich, M. "GIS-based Accident Location and Analysis System (GIS-ALAS)" – Project Report: Phase 1. Office of Transportation Safety, Iowa Department of Transportation. Ames, Iowa. April 6, 1998.
3. U.S. Department of Transportation Federal Highway Administration's web site (http://www.fhwa.dot.gov/)
4. Iowa Traffic Safety Data Service's web site (http://www.ctre.iastate.edu/itsds/index.htm)
5. The AAA Foundation for Traffic Safety's web site (http://www.aaafoundation.org/home/)


Proceeding Number: 900/03

Finite Element Analyses of Composite Steel Beams
Muharrem AKTAŞ, Sakarya University, Civil Engineering, Sakarya, Turkey, [email protected]
Elif AĞCAKOCA, Sakarya University, Civil Engineering, Sakarya, Turkey, [email protected]

Keywords: Finite element

INTRODUCTION

Laboratory experiments are essential for understanding the behavior of composite steel beams. However, such experiments are expensive and time consuming, and finite element analysis can be used to substitute for them, provided all the steps of the finite element model are verified against at least one real experiment to prove the correctness of the modeling. Developing a finite element model for a composite steel beam is challenging, because the beam consists of highly nonlinear reinforced concrete and a geometrically nonlinear steel beam. To overcome this problem, the reinforced concrete is replaced with a steel plate and the behavior is tested experimentally. For further investigation, this experiment is modeled in a finite element program, and the results are compared to show the correctness of the procedure.

LITERATURE REVIEW

For finite element analysis, the element type, mesh density, boundary conditions, and numerical solution algorithm must be addressed correctly. Shell elements are used for the steel I-beam and solid elements for the steel plate. Geometrical imperfections are also introduced to the steel beam to capture geometrical nonlinearities. Simple boundary conditions are used to simulate the experimental layout. The static Riks solution algorithm is employed for the numerical solutions, since it is capable of capturing the behavior around critical points such as buckling. Mesh sensitivity is addressed by comparing different mesh densities; thus coarse, medium, and fine mesh models are created and compared. A sample finite element model is depicted.

METHODS

The research method can be summarized as: 1) flexural experiments on composite steel beams; 2) creating steel beams by replacing the concrete and conducting a flexural test; 3) finite element modeling of the steel beam and the flexural test; 4) comparing the results.
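The verification idea in the method above can be illustrated, in a much reduced form, with a 1-D Euler-Bernoulli beam solved by the direct stiffness method and checked against the closed-form midspan deflection PL^3/(48EI). This is not the paper's shell/solid model, and the span, section, and load values below are hypothetical.

```python
import numpy as np

def midspan_deflection(n_elem, L=2.0, E=210e9, I=8.0e-6, P=10e3):
    """Simply supported beam, point load P at midspan; n_elem must be even."""
    le = L / n_elem
    # Hermite beam element stiffness matrix (DOFs: w1, theta1, w2, theta2).
    k = (E * I / le**3) * np.array([
        [ 12.0,   6*le,    -12.0,   6*le   ],
        [ 6*le,   4*le**2, -6*le,   2*le**2],
        [-12.0,  -6*le,     12.0,  -6*le   ],
        [ 6*le,   2*le**2, -6*le,   4*le**2]])
    ndof = 2 * (n_elem + 1)
    K = np.zeros((ndof, ndof))
    for e in range(n_elem):            # assemble element blocks into K
        s = slice(2 * e, 2 * e + 4)
        K[s, s] += k
    F = np.zeros(ndof)
    F[n_elem] = -P                     # w-DOF of the midspan node
    fixed = [0, 2 * n_elem]            # w = 0 at both supports, rotations free
    free = [d for d in range(ndof) if d not in fixed]
    u = np.zeros(ndof)
    u[free] = np.linalg.solve(K[np.ix_(free, free)], F[free])
    return -u[n_elem]                  # downward deflection, reported positive

fe = midspan_deflection(4)
exact = 10e3 * 2.0**3 / (48 * 210e9 * 8.0e-6)   # PL^3 / (48 EI)
print(f"FE: {fe:.6e} m, analytical: {exact:.6e} m")
```

The same principle applies at full scale: the numerical model is accepted only once it reproduces a known benchmark (here analytical, in the paper experimental).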
FINDINGS & CONCLUSION

Computer simulation through finite element analysis can be used in place of flexural tests on composite steel beams. Since reinforced concrete is anisotropic, non-homogenous, and highly nonlinear, creating a finite element model for it is genuinely challenging; for modeling purposes it can therefore be replaced with homogenous and isotropic steel. This is demonstrated with real experiments. To avoid an expensive testing program, computer simulation can be used for further investigation; however, the simulation must be verified with at least one real laboratory experiment.

REFERENCES

1. Rizkalla, S., Dawood, M., and Schnerch, D. Development of a carbon fiber reinforced polymer system for strengthening steel structures.
2. Schnerch, D., and Rizkalla, S. Strengthening of scaled steel-concrete composite girder and steel monopole towers with CFRP.


Proceeding Number: 900/04

Digital Analysis of Historical City Maps by Using Space Syntax Techniques: Izmir Case
Sabri ALPER, Izmir, Turkiye, [email protected]
Cemal ARKON, Izmir Institute of Technology, Dept. of City and Regional Planning, Izmir, Turkiye, [email protected]

Keywords: Space Syntax, Izmir, Historical Map, Axial Map, Urban Analysis

INTRODUCTION

It is possible to describe and analyze the morphological evolution of a settlement based on its historical, physical and social aspects, with the objective of identifying the spatial qualities and the main elements that have influenced its morphology. Izmir witnessed great and fundamental changes in the second half of the 19th century. These changes especially affected commerce, transportation, public works, industry, agriculture, mining and population, thus shaping the urban layout. The case study area is the Izmir historic city center, defined by the Kadifekale-Degirmentepe axis to the south, Punta to the north, the coastline to the west and the Meles River to the east. The area was the probable occupation area of the Roman period. Even though the city has extended beyond these boundaries since the 19th century, the area still keeps its importance as the core of the city.

LITERATURE REVIEW

"Space syntax" is a set of techniques for the representation and quantification of spatial patterns. In recent decades, spatial analysis has seen tremendous growth, especially with the work of Prof. Hillier and the development of space syntax theory and method, which has encouraged research ranging from the analysis of houses, courts, factories and hospitals to the analysis of whole urban systems. Space syntax is a method for spatial description and analysis; applied in an urban context, it accounts for some basic characteristics and properties of urban layouts. The procedure for axial map construction in this study follows the space syntax methodology as described by its founders, Hillier and Hanson. In brief, the fewest number of longest axial lines have been drawn along all public routes through which people can see and move, with the longest possible lines of sight used to define the axes.
Enough axial lines have been used to link all maximal convex public spaces in the city in a network that is as shallow (in graph terms) as is physically possible. This study uses quantitative data limited to the historical maps, which are two-dimensional and depend on the knowledge accessible today. We possess only limited detailed urban historical evidence, in terms of detailed maps and plans, for analyzing the city of Izmir in the late Ottoman period. However, a map called the "insurance plan", dated 1905, is used as a base map for the urban analysis carried out within the scope of this study. This map was chosen as the most suitable source for digital reproduction, since the traces of the street network and building blocks are clear and can be identified in detail.
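The syntactic measures computed from an axial map can be sketched on a toy example: each axial line becomes a graph node, two lines that intersect share an edge, and mean depth and relative asymmetry follow the definitions in Hillier and Hanson (1984). The five-line map below is hypothetical, not drawn from the 1905 plan.

```python
from collections import deque

# Hypothetical five-line axial map: each key is an axial line, each value
# lists the lines it intersects (the graph is undirected).
axial = {"A": ["B"], "B": ["A", "C", "E"], "C": ["B", "D"],
         "D": ["C"], "E": ["B"]}

def mean_depth(graph, start):
    """Average number of topological steps from one line to all others (BFS)."""
    depth = {start: 0}
    queue = deque([start])
    while queue:
        node = queue.popleft()
        for nbr in graph[node]:
            if nbr not in depth:
                depth[nbr] = depth[node] + 1
                queue.append(nbr)
    return sum(depth.values()) / (len(graph) - 1)

k = len(axial)
for line in sorted(axial):
    md = mean_depth(axial, line)
    ra = 2 * (md - 1) / (k - 2)   # relative asymmetry (Hillier & Hanson 1984)
    print(f"{line}: mean depth {md:.2f}, RA {ra:.2f}")
```

Lines with low relative asymmetry are the most integrated; tools such as UCL Depthmap and Mindwalk compute these and further measures over full axial maps.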


METHODS

The axial map construction from the 1905 map of Izmir was performed through digital procedures carried out in a computer-aided design and drafting environment. The original map was scanned, rectified, and then scaled in order to draw the digital map; the street network could then be digitized and the axial lines drawn. The digital axial map was then transferred to two space syntax software environments, "UCL Depthmap" and "Mindwalk". Both of these computer-based syntactic analysis programs were used to obtain the syntactic measures, maps, and graphs necessary for this research.

FINDINGS & CONCLUSION

The street network was analyzed using the configurational approach found in the axial analysis of space syntax techniques. Using the axial map representation of the street system, a number of measures of the network properties of this system were calculated. These measures were then tested for significance against the empirical data. The quantitative measures of the syntactic analysis were able to confirm the qualitative findings and data found in historical references.

REFERENCES

1. Aksoy, Yaşar. 2002. Smyrna-İzmir, Efsaneden Gerceğe. İzmir: İzmir Buyukşehir Belediyesi Kultur Yayini.
2. Atay, Cınar. 1978. Tarih İcinde İzmir. İzmir: Tifse Basın Yayın.
3. Batty, Michael. 1976. Urban Modeling. Cambridge: Cambridge University Press.
4. Batty, M. and Rana, S. 2004. The automatic definition and generation of axial lines and axial maps. Environment and Planning B 31: 615-640.
5. Beyru, Rauf. 2000. 19. Yuzyılda İzmir'de Yaşam. İstanbul: Literatur Yayıncılık.
6. Hillier, Bill. 1997. Space is the Machine: A Configurational Theory of Architecture. Cambridge: Cambridge University Press.
7. Hillier, Bill and Hanson, Julienne, eds. 1984. The Social Logic of Space. Cambridge: Cambridge University Press.
8. Hillier, B., Penn, A., Hanson, J., Grajewski, T. and Xu, J. 1993. Natural Movement: or, Configuration and Attraction in Urban Pedestrian Movement. Environment and Planning B 20: 29-66.
9. Hillier, B. 1998. The Common Language of Space. Bartlett School of Graduate Studies, University College London. www.spacesyntax.org (accessed February 26, 2008).
10. Hillier, B. 2003. The knowledge that shapes the city: the human city beneath the social city. Proceedings of the Fourth International Space Syntax Symposium. Vol. 1, 01.1-20, Space Syntax Limited, University College London.
11. Klarqvist, B. 1999. Generators of an Urban History. Proceedings of the Second International Symposium on Space Syntax, Brasilia.
12. Kostof, Spiro. 1991. The City Shaped. Boston: Little, Brown.
13. Kuban, Doğan. 2001. Turkiye'de Kentsel Koruma, Kent Tarihleri ve Koruma Yontemleri (İzmir'in Tarihsel Yapısının Ozellikleri ve Korunması ile İlgili Rapor). İstanbul: Tarih Vakfı Yurt Yayınları.


Proceeding Number: 900/05

Knowledge Representation by Geo-Information Retrieval in City Planning
Ali Kemal ÇINAR, Gediz University, Department of Architecture, İzmir, Turkey, [email protected]
Erkal SERİM, İYTE, Department of City Planning, İzmir, Turkey
Gürcan GERÇEK, İYTE, Department of Software Engineering, İzmir, Turkey

Keywords: Case based reasoning, GIS

INTRODUCTION

City planners take advantage of Geographic Information Systems (GIS) to deal with spatial data. With this increased use, however, GIS alone cannot serve all of the needs of city planning. Through integrated modeling approaches, spatial analysis and computational techniques can be combined to deal better with the tasks and problems of the city planning process. In this paper, "Case Based Reasoning" (CBR), an artificial intelligence (AI) technique, and GIS, a geographic analysis, data management and visualization technique, are used to build a "Case Based Model" (CBM) for information retrieval and knowledge representation in an operational study. A system that integrates CBR as an AI reasoner and GIS as a spatial analyst can be very helpful to planners in acquiring knowledge for a specific purpose in the planning process. In such an integrated system, CBR provides a retrieval method that uses previous experiences to propose a solution to a new problem or to present relevant experiences to the planners. The goals of this study are (1) to demonstrate the feasibility and usefulness of the CBR technique, (2) to combine the knowledge inference capability of CBR with the analytical, management and visualization capabilities of GIS within a hybrid model, and (3) to examine whether the CBR technique and the CBM could benefit the city planning process.

LITERATURE REVIEW

CBR is a family of AI techniques that simulates human behavior in solving a new problem. Thus in CBR, reasoning is based on remembering.
When confronted with a new problem, it is natural for a human problem-solver to look into his or her memory for previous similar instances. In the CBR approach, new problems are handled by remembering old similar ones and moving forward from there; referring to old cases is advantageous in dealing with situations that recur. CBR uses matching and ranking to derive similarity: matching is achieved through indexes and weights, while ranking is based on the total match score. CBR searches and matches across the entire case base, not just by comparing two values. Most CBR systems, including this study, use the nearest neighbor (k-NN) matching technique for retrieval. The practical usage of CBR includes (a) building the case library, (b) defining an index for the cases, and (c) building retrieval and adaptation methods. In the following step, a system that integrates a CBR tool and a GIS package forms the CBM. In this integration, the GIS part provides the functions for handling spatial data and the CBR part makes the inferences.

METHODS

ArcGIS software is used for preparing and handling the geospatial data, and jCOLIBRI software is used for indexing the cases and for the retrieval process over the GIS database serving as the case library. An effective customized graphical user interface (GUI) was developed in a Java development environment. With the customized GUI, the user can easily access the extra content with no manual effort or links needed; this is the main advantage of the customized GUI compared to the default execution and result (simplified textual output) of the generic jCOLIBRI software.
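The weighted nearest-neighbour retrieval described above can be sketched in a few lines. The case attributes, weights, and values below are hypothetical, and the local similarity is a simple exact match per attribute; jCOLIBRI provides much richer, domain-specific similarity functions in practice.

```python
# Hypothetical case library of planning precedents.
cases = [
    {"id": 1, "land_use": "residential", "density": "high", "coast": True},
    {"id": 2, "land_use": "tourism",     "density": "low",  "coast": True},
    {"id": 3, "land_use": "residential", "density": "low",  "coast": False},
]
# Illustrative attribute weights (the index of the case base).
weights = {"land_use": 0.5, "density": 0.3, "coast": 0.2}

def similarity(case, query):
    """Global similarity: weighted sum of per-attribute matches (k-NN style)."""
    return sum(w * (case[attr] == query[attr]) for attr, w in weights.items())

query = {"land_use": "residential", "density": "low", "coast": True}
ranked = sorted(cases, key=lambda c: similarity(c, query), reverse=True)
for c in ranked:
    print(c["id"], round(similarity(c, query), 2))
```

The best-ranked cases are then presented to the planner as relevant precedents, with the GIS side supplying their spatial context.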


This paper attempts to implement an integrated model providing geo-spatial planning support in Turkish city planning practice through a practical example. The proposed experimental system processes tabular data and links the inference to spatial data. Domain-dependent similarity matching matrices are utilized, and a customized GUI is developed for enhanced knowledge retrieval and representation. Details of the similarity computation and the customized GUI are clarified in the model implementation process. The case of Çeşme, Izmir, Turkey illustrates the aim of the research.

FINDINGS & CONCLUSION

This study investigated whether a hybrid spatial model employing CBR and GIS could support human decision making. GIS-based case library generation, domain-specific similarity matching and object-oriented software customization were provided for enhanced functionality and spatial reasoning. While the benefits of the CBM are discussed, both advantages and limitations were revealed by the findings when the model was applied to a complex domain such as city planning.

REFERENCES

1. Aamodt, A., & Plaza, E. (1994). Case-Based Reasoning: Foundational Issues, Methodological Variations, and System Approaches. AI Communications. IOS Press, 7 (1), 39-59.
2. Benwell, G.L., & Holt, A. (1999). Applying case-based reasoning techniques in GIS. International Journal of Geographical Information Science, 13 (1), 9-25.
3. Case Based Reasoning Homepage: AI-CBR. (2007). Retrieved June 2009, from http://www.ai-cbr.org/
4. Cognitive Computing Lab. (2009). Retrieved June 2009, from Georgia Tech http://home.cc.gatech.edu/ccl
5. Dikmen, I., Birgönül, M.T., & Gür, A.K. (2007). A Case-Based Decision Support Tool for Bid Mark-Up Estimation of International Construction Projects. Automation in Construction, 17 (1), 30-44.
6. Domeshek, E., & Kolodner, J. (1993). Using the Points of Large Cases. AI EDAM, 7 (2), 87-96.
7. Heylighen, A., & Neuckermans, H. (2001). A Case Base of Case Based Design Tools for Architecture. Computer-Aided Design, 33 (14), 1111-1122.
8. Holt, A., & Benwell, G. (1996). Case Based Reasoning and Spatial Analysis. Urisa Journal, 8 (1-2), 27-36.
9. IYTE Department of City and Regional Planning. (2002). 1/25000 Ölçekli Çeşme İlçesi Çevre Düzeni Planı Raporu, Izmir.
10. jCOLIBRI. (2008). Retrieved June 2009, from GAIA - Group for Artificial Intelligence Applications http://gaia.fdi.ucm.es/grupo/projects/jcolibri/
11. Kaster, D.S., Medeiros, C.B., & Rocha, H.V. (2005). Supporting modeling and problem solving from precedent experiences: the role of workflows and case-based reasoning. Environmental Modeling & Software, 20 (6), 689-704.
12. Kolodner, J. L. (1993). Case-based reasoning. San Mateo, CA: Morgan Kaufmann Publishers.
13. Leake, D. B. (1996). Case-based reasoning: experiences, lessons & future directions. Menlo Park, Calif.; Cambridge, Mass.: AAAI Press.
14. Malczewski, J. (2004). GIS Based Land Use Suitability Analysis: A Critical Overview. Progress in Planning, 62 (1), 3-65.
15. Recio-Garcia, Sanchez, Diaz-Agudo, & Gonzales-Calero (2005). jCOLIBRI 1.0 in a nutshell. A Software Tool for Designing CBR Systems. In Proceedings of the 10th UK Workshop on Case Based Reasoning, University of Greenwich; CMS Press.
16. Recio-Garcia, J. A. (2008). jCOLIBRI: A Multi-Level Platform for Building and Generating CBR Systems. PhD Thesis, Department of Software Engineering and Artificial Intelligence, Facultad de Informática, Universidad Complutense de Madrid, Spain.
17. Riesbeck, C. K., & Schank, R. C. (1989). Inside case-based reasoning. Hillsdale, N.J.: Lawrence Erlbaum Associates.
18. Schank, R. C. (1982). Dynamic memory: a theory of reminding and learning in computers and people. Cambridge; New York: Cambridge University Press.
19. Shi, X., & Yeh, A. (1999). The Integration of Case-Based Systems and GIS in Development Control. Environment and Planning B: Planning and Design, 26 (3), 345-364.
20. The Archie Project. (2009). Retrieved June 2009, from Georgia Tech http://www.cc.gatech.edu/aimosaic/faculty/kolodner/archie.html
21. Watson, I. (1996). CBR Tools: An Overview. In Watson, I. (Ed.), Progress in Case-Based Reasoning. Proceedings of the 2nd UK Workshop on Case Based Reasoning (pp. 71-88). University of Salford, April 10th 1996. AICBR/SGES Publications.


Proceeding Number: 300/28

Parameter Design of Iterative Feedback Tuning Method Using Analysis of Variance for First Order Plus Dead Time Models
Arman Sharifi, Iran University of Science and Technology, Electrical Engineering Faculty, Tehran, Iran, [email protected]
Houman Sadjadian, Iran University of Science and Technology, Electrical Engineering Faculty, Tehran, Iran, [email protected]

Keywords: IFT, analysis of variance, ANOVA, nonlinear regression

INTRODUCTION

Iterative feedback tuning (IFT) is a model-free method for tuning controllers using closed-loop experimental data and estimates of gradients. The objective of IFT is to minimize a quadratic performance criterion. In this criterion there is a user-defined scalar parameter (lambda) that expresses the relative importance of the penalty on the control effort and is usually adjusted by trial and error. This parameter plays an important role in the output response, and a systematic method for tuning it can be very useful. This paper presents a method based on ANOVA and nonlinear regression to find an equation for tuning lambda in terms of the model parameters. The strategy is derived for First Order plus Dead Time (FOPDT) models, which can represent many industrial processes. At the end of the paper, the performance of the method is illustrated by using it to control two systems with FOPDT models and by comparing it with the relay feedback and Ziegler-Nichols methods.

LITERATURE REVIEW

Any control objective can be expressed in terms of a criterion function. Generally, explicit solutions to such optimization problems require full knowledge of the plant and disturbances, and complete freedom in the complexity of the controller. In practice, the plant and the disturbances are seldom known, and it is often desirable to achieve the best possible performance with a controller of prescribed complexity.
Recently, so-called iterative identification and control design schemes have been proposed to address the problem of model-based design of controller parameters for restricted-complexity controllers. A different approach was proposed by Hjalmarsson [1], where it was observed that the model bias problem could be avoided by replacing the information carried by the model with information obtained directly from the system itself. This led to an iterative method in which the controller parameters are successively updated; the approach has since become known as iterative feedback tuning (IFT). In particular, the IFT method is appealing to process control engineers because, under this scheme, the controller parameters can be successively improved without ever opening the loop. The method tunes the parameters of a controller in a feedback control system without needing an explicit model of the plant. It has proved very effective in practice, is now widely used in process control [2-6], and has been applied to many experiments such as a DC servo with backlash [2], an inverted pendulum [7], and a two-mass spring with friction [8]. The key feature of IFT is that closed-loop experimental data are used to compute an estimate of the gradient of a performance criterion, usually a quadratic one. Several experiments are performed iteratively, and the updated controller parameters are obtained from the closed-loop input-output data collected in each iteration. In this performance criterion there is a user-defined scalar parameter (lambda) that expresses the relative importance of the penalty on the control effort. This parameter has traditionally been adjusted by trial and error, which is usually time-consuming and not accurate enough. This paper presents a method based on analysis of variance (ANOVA) and nonlinear regression to find an equation for tuning lambda in terms of the model parameters.
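The ANOVA-plus-nonlinear-regression idea can be sketched numerically. The following is a hedged, illustrative example only: the functional form lambda = a·K^b·(theta/tau)^c, the parameter ranges, and all numbers are invented here for illustration and are not taken from the paper.

```python
# Illustrative sketch (hypothetical): given optimal lambda values collected
# for many FOPDT models (gain K, time constant tau, dead time theta), fit a
# nonlinear expression lambda = a * K**b * (theta/tau)**c by least squares.
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(0)

# Synthetic "bank of data": FOPDT parameters and the lambda assumed to
# minimize the IFT criterion for each model (values invented here).
K = rng.uniform(0.5, 2.0, 200)
tau = rng.uniform(1.0, 10.0, 200)
theta = rng.uniform(0.1, 2.0, 200)
lam = 0.4 * K**1.5 * (theta / tau)**0.8
lam = lam * (1 + 0.02 * rng.standard_normal(200))  # small measurement noise

def model(X, a, b, c):
    K, tau, theta = X
    return a * K**b * (theta / tau)**c

popt, _ = curve_fit(model, (K, tau, theta), lam, p0=(1.0, 1.0, 1.0))
print(popt)  # should be close to (0.4, 1.5, 0.8)
```

In the paper's actual workflow, ANOVA would first screen which model parameters (or combinations) significantly influence the optimal lambda; the regression above stands in for the final equation-fitting step.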
Recently, analysis of variance has been used for parameter tuning in control [9, 10]. This strategy is derived for First Order plus Dead Time (FOPDT) models of an industrial plant. In this method, a criterion is suggested, and a bank of data is built by finding, for many FOPDT models, the optimal lambda that minimizes this criterion. Using this information and the ANOVA approach, a nonlinear regression is used to extract an equation for the optimal lambda.

METHODS
In statistics, analysis of variance (ANOVA) is a collection of statistical models, and their associated procedures, in which the observed variance in a particular variable is partitioned into components attributable to different sources of variation [9]. In this paper, multi-way (n-way) ANOVA is used. To utilize ANOVA, experiments are performed that involve the FOPDT model parameters and the tuning of lambda [10, 13]. After constructing the bank of data, an analysis of variance is performed with the optimal tuning parameter as the response (dependent) variable and the FOPDT model parameters as independent variables. The model parameters, or combinations of them, that have a significant influence on the optimal tuning parameter are thereby found. Using this information and a nonlinear regression, an equation for the optimal lambda is extracted.

FINDINGS & CONCLUSION
In this paper a new systematic method has been proposed for tuning lambda, an important parameter in the IFT algorithm. This parameter was previously tuned by trial and error, which is time-consuming. It can now be tuned by a systematic method that depends on the model parameters. It has been observed that ANOVA results in a small estimation error, and the estimated values of lambda are close to the optimal values. After applying the proposed method to two systems and comparing it with the relay feedback and Ziegler-Nichols methods, it can be seen that the method performs well and has acceptable disturbance rejection.
The proposed method is not time-consuming and is more accurate than trial and error [13].

REFERENCES
[1] H. Hjalmarsson, S. Gunnarsson, M. Gevers, "A convergent iterative restricted complexity control design scheme", Proc. 33rd IEEE CDC, FL, USA, vol. 2, pp. 1735-1740, Dec. 1994.
[2] H. Hjalmarsson, M. Gevers, S. Gunnarsson, O. Lequin, "Iterative feedback tuning: theory and applications", IEEE Control Systems Magazine, vol. 18, no. 4, pp. 26-41, Aug. 1998.
[3] H. Hjalmarsson, "Iterative feedback tuning - an overview", Int. J. Adapt. Control Signal Process., vol. 16, no. 5, pp. 373-395, 2002.
[4] O. Lequin, M. Gevers, M. Mossberg, E. Bosmans, L. Triest, "Iterative feedback tuning of PID parameters: comparison with classical tuning rules", Control Engineering Practice, vol. 11, no. 9, pp. 1023-1033, 2003.
[5] H. Hjalmarsson, "Efficient tuning of linear multivariable controllers using iterative feedback tuning", Int. J. Adapt. Control Signal Process., no. 7, pp. 553-572, Oct. 1999.
[6] H. Hjalmarsson, "Control of nonlinear systems using iterative feedback tuning", Proc. of the American Control Conference, Philadelphia, Pennsylvania, pp. 2083-2087, June 1998.
[7] B. Codrons, F. De Bruyne, M. De Wan, M. Gevers, "Iterative feedback tuning of a nonlinear controller for an inverted pendulum with a flexible transmission", Proceedings of the 1998 IEEE International Conference on Control Applications, Trieste, Italy, 1998.
[8] T. Fukuda, K. Hamamoto, T. Sugie, "Iterative feedback tuning of controllers for a two-mass spring system with friction", Proceedings of the 39th IEEE Conference on Decision and Control, Sydney, Australia, 2000, pp. 2438-2443.
[9] E. J. Iglesias, M. E. Sanjuan, C. A. Smith, "Tuning equation for dynamic matrix control in SISO loops", Ingenieria y Desarrollo, issue 19, pp. 88-100, 2006.
[10] A. R. Neshasteriz, A. Khaki Sedigh, H. Sadjadian, "An analysis of variance approach to tuning of generalized predictive controllers for second order plus dead time models", 8th IEEE International Conference on Control and Automation (ICCA), Xiamen, pp. 1059-1064, June 2010.
[11] R. V. Hogg, J. Ledolter, "Engineering Statistics", Macmillan, 1987.
[12] D. Garcia, A. Karimi, R. Longchamp, "PID controller design with specification on the infinity-norm of sensitivity", IFAC World Congress, Prague, Czech Republic, July 4-8, 2005.
[13] J. E. Normey-Rico, E. F. Camacho, "Control of Dead-Time Processes", London: Springer-Verlag, 2007.


Proceeding Number: 400/58

Computational Solution of the Velocity and Wall Shear Stress Distribution Inside a Coronary By-Pass Graft to Artery Connection Under Steady Flow Conditions

Nurullah ARSLAN, Fatih University, Department of Genetics and Bioengineering, Istanbul, Turkey, [email protected]
Hakan Turmuş, Fatih University, Department of Genetics and Bioengineering, Istanbul, Turkey, [email protected]
Mustafa Güden, Sema Hastanesi, Cardiovascular Surgery, Istanbul, Turkey, [email protected]
Ali Aşkın Korkmaz, Sema Hastanesi, Cardiovascular Surgery, Istanbul, Turkey, [email protected]

Keywords: CFD, hemodialysis, atherosclerosis, bypass graft

INTRODUCTION
In this study, the flow phenomena inside a bypass graft to artery connection are analysed for different graft angles under steady flow conditions. The velocity profiles and shear stresses at different flow locations are found numerically. Owing to the ever-increasing speed of computers, numerical solution has become an important technique for determining the flow patterns and wall shear stress (WSS) distributions of arterial flows. Critical flow regions are identified in this connection: low shear stress regions are found at separation points and also at stagnation points. The results are expected to inform new graft designs and model development. New graft models should produce less atherosclerosis formation and less blockage in the connection regions. New graft designs will be manufactured and used first in animal studies and then in humans.

LITERATURE REVIEW
In the cardiovascular system, bypass graft implantation is a widely used surgical procedure for patients with a severe stenosis of the coronary arteries. Although the surgical technique is well established, a better comprehension of the fluid dynamic and structural behavior of the reconstructed portion of the coronary circulation still seems important.
Indeed, a significant number of graft connections fail postoperatively due to the development of intimal hyperplasia, which occurs preferentially at the site of the distal anastomotic junction. Intimal thickening (hyperplasia) has been cited as a major cause of vascular graft failure [Echave et al. 1979], and hemodynamics have been implicated in the localization of intimal hyperplasia [Sottiurai et al. 1989, White et al. 1993, Bassiouny et al. 1992]. The two main techniques currently employed to investigate the flow patterns and WSS distributions inside vascular grafts are experimental measurements inside in-vitro models and numerical simulations of the flow field. Figure 1 shows a typical graft configuration with blood entering the distal anastomosis from the graft and exiting out both the distal outlet segment (DOS) and the proximal outlet segment (POS). Figure 2 indicates the nomenclature employed herein. The surgically constructed geometry can vary in many ways, including the ratio of host artery-to-graft vessel diameter, the graft angle with the host artery, and the hood length.

Figure 1. Sketch showing blood flow bypassing the occlusion and exiting the distal anastomosis through both the proximal and distal outlet segments (POS and DOS)


Figure 2. Sketch of the distal anastomosis identifying the nomenclature

METHODS
The flow field of this steady-state problem is three-dimensional in nature. A computational fluid dynamics program, ANSYS FLUENT from ANSYS, Inc. (Version 13.0), was employed for the numerical simulation. The finite difference technique was used to solve the Navier-Stokes equations for three-dimensional flow. Blood was approximated as a Newtonian fluid for simplicity, to allow a direct comparison with the experimental measurements planned for the future. The three-dimensional connection geometry is complex, with one entrance (graft) and two exits (POS and DOS; the POS is assumed fully closed in this study). The graft has a graft-to-artery diameter ratio of 1:1, based on a graft lumen diameter of 5 mm and a host artery lumen diameter of 5 mm. The graft intersects the host artery at angles of fifteen, thirty, and forty-five degrees in three straight connections, and at five degrees in a curved connection. The mesh was constructed as two tubes, consisting of the graft and the POS, which connect to form the anastomosis and the DOS, as shown in Figure 3. Mesh sizes range from 236,104 to 265,008 tetrahedral elements. The cross-sectional shape was represented as circles at the entrance and both exits. No-slip and no-penetration conditions are imposed on the rigid walls. Steady flow enters the graft with a uniform blunt velocity profile. All flow exits through the DOS, since the flow rate through the POS is assumed to be zero. A stress-free boundary condition is imposed at the exit of the DOS.

Figure 3. Meshes for (a) straight and (b) curved coronary bypass graft connections

FINDINGS & CONCLUSION
Numerical calculations were made at a Reynolds number of 150. The Reynolds number is based upon the total graft flow and the diameter of the artery (i.e. Re = ρVD/µ, with D_artery = 0.5 cm, µ = 0.035 g/(cm·s), ρ = 1.05 g/ml, and V_average = 10 cm/s for Re = 150). The velocity vectors at the midplane, scaled to in-vivo values, are shown in Figure 4 for the numerical velocity results at Reynolds number 150.
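The quoted Reynolds number follows directly from the stated fluid properties; a quick arithmetic check in the CGS units given in the abstract:

```python
# Check the Reynolds number Re = rho * V * D / mu from the stated values.
rho = 1.05   # g/ml (= g/cm^3), blood density
V = 10.0     # cm/s, average graft velocity
D = 0.5      # cm, artery lumen diameter
mu = 0.035   # g/(cm*s), blood dynamic viscosity

Re = rho * V * D / mu
print(f"Re = {Re:.1f}")  # prints "Re = 150.0"
```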
The inlet velocity profile is uniform. Inside the graft, the flow expands into the larger area, similar to flow in a sudden expansion. A stagnation point forms on the floor of the graft where the flow divides to exit proximally through the POS and distally through the DOS. No separation occurs along the hood of the graft. Velocity gradients are low along the hood and then increase as the flow accelerates just before entering and inside the DOS, due to the reduction in cross-sectional area. Velocity profiles in the DOS are blunt and skewed towards the floor. The floor stagnation point moves towards the DOS for different graft angles. Velocity profiles are more blunt at the higher Reynolds number and more skewed toward the floor inside the DOS and POS.

Figure 4. Velocity profiles: a) 15˚ b) 30˚ c) 45˚ d) curved 5˚

Figure 5 shows the distribution of WSS on the hood and floor. These results demonstrate a sharp increase in WSS on the hood side where the graft meets the artery. The peaks in WSS on both the floor and hood sides are more pronounced in regions of higher velocity gradient. The large increase in WSS along the hood at the DOS entrance is caused by the angle formed by the hood and artery intersection. This effect may be exaggerated in the numerical simulation, because the discontinuity of the curve at the intersection point is less in the physical model. In general, the WSS has a minimum on both the hood and floor of the graft near a position three artery diameters proximal to the DOS entrance (toe).

Figure 5. Wall shear stresses: a) 30˚ b) curved 5˚

REFERENCES
1. Bassiouny H.S., White S.S., Glagov S., Choi E., Giddens D.P., Zarins C.K., 1992, "Anastomotic intimal hyperplasia: mechanical injury or flow induced," Journal of Vascular Surgery, Vol. 15, No. 4, pp. 708-717.
2. Echave V., Koornick A.R., Haimov M., 1979, "Intimal hyperplasia as a complication of the use of the polytetrafluoroethylene graft for femoral-popliteal bypass," Surgery, Vol. 86, No. 6, pp. 791.
3. Fei D.Y., Thomas J.D., 1994, "The effect of angle and flow rate upon hemodynamics in distal vascular graft anastomoses: A numerical model study," Journal of Biomechanical Engineering, Vol. 116, pp. 331-323.
4. Loth F., 1993, "Velocity and Wall Shear Measurements Inside a Vascular Graft Model Under Steady and Pulsatile Flow Conditions," Ph.D. Dissertation, Georgia Institute of Technology.
5. Loth F., Jones S.A., Bassiouny H.S., Giddens D.P., Zarins C.K., Glagov S., 1993, "Laser Doppler velocity measurements inside a vascular graft model under steady flow conditions," Bioengineering Conference Proceedings, edited by N.A. Langrana, M.H. Friedman, E.S. Grood, BED Vol. 24, pp. 48-51, presented at the ASME Summer Annual Bioengineering Conference, Breckenridge, CO, June 24-29, 1993.
6. Ojha M., Ethier C.R., Johnston K.W., Cobbold S.C., 1990, "Steady and pulsatile flow fields in an end-to-side arterial anastomosis model," Journal of Vascular Surgery, Vol. 12, pp. 747-753.
7. Perktold K., Helfried T., Rappitsch G., 1994, "Flow dynamic effect of the anastomotic angle: a numerical study of pulsatile flow in vascular graft anastomoses models," Technology and Health Care, Vol. 1, pp. 197-207.
8. Sottiurai V.S., Yao J.S.T., Batson R.C., Sue S.L., Jones R., Nakamura Y.A., 1989, "Distal Anastomotic Intimal Hyperplasia: Histopathologic Character and Biogenesis," Annals of Vascular Surgery, Vol. 3, No. 1, pp. 26-33.
9. White S.S., Zarins C.K., Giddens D.P., Bassiouny H.S., Loth F., Jones S.A., Glagov S., 1993, "Hemodynamic patterns in two flow models of end-to-side vascular graft anastomoses: effects of pulsatility, flow division, Reynolds number and hood length," Journal of Biomechanical Engineering, Vol. 115, pp. 104-111.


Proceeding Number: 700/11

Computational Study of Isomerization in 4-Substituted Stilbenes

Ahmad Reza BEKHRADNIA, Pharmaceutical Sciences Research Center, Department of Medicinal Chemistry, Mazandaran University of Medical Sciences, Sari, Iran, [email protected]
Hamid TAGHVA, Pharmaceutical Sciences Research Center, Department of Medicinal Chemistry, Mazandaran University of Medical Sciences, Sari, Iran, [email protected]
Sina KAZEMI, Pharmaceutical Sciences Research Center, Department of Medicinal Chemistry, Mazandaran University of Medical Sciences, Sari, Iran, [email protected]

Keywords: computational study, molecular modeling, isomerization

INTRODUCTION
In recent years, theoretical study has developed into a rapid technique comparable to experimental approaches in efficiency (1-4). Molecular modeling methods can be used to evaluate isomerization in diphenylethylenes (stilbenes). The cis-trans isomerization of diphenylethylene and its derivatives was investigated with ab initio and DFT methods. The calculations are carried out at the HF/6-31G* and B3LYP/6-31G* levels of theory for the cis-trans isomerization of 4-X-diphenylethylene (C15H13SO2X, where X = H, F, Cl, CH3 and OCH3).

METHODS
Using computational calculations in the gas phase, we obtain the isomerization energy barrier for the cis-trans conversion through the related transition state. The changes in activation electronic energies (ΔE#), enthalpies (ΔH#), and Gibbs free energies (ΔG#), in kcal/mol, for 4-X-diphenylethylene (where X = H, F, Cl, CH3 and OCH3) are calculated with density functional theory (DFT) and HF in combination with the 6-311+G* basis set. Thermodynamic functions obtained through frequency calculations are multiplied by a scaling factor of 0.99 for the DFT method and 0.89 for HF, using the FREQUENCY option of the GAUSSIAN 03 program (5). This accounts for the difference between the harmonic vibrational calculations and the anharmonic oscillations of the actual bonds. Only real frequency values (with positive signs) are accepted for the minimum-state structures, and only a single imaginary frequency value (with a negative sign) for the transition states.

FINDINGS & CONCLUSION
Experimentally, the photochemical and thermal isomerization of trans-stilbenes to the related cis-isomers has been performed and satisfactorily confirms the computational calculations. Relative energies (Er), enthalpies (Hr), and Gibbs free energies (Gr) (kcal/mol), including ZPE corrections, are computed along with the dipole moments (debye). Consequently, according to the computational data, the trans-4-substituted diphenylethylenes are converted to the related cis-isomers only with difficulty.
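The frequency-scaling step in METHODS can be illustrated with a small sketch. This is a hedged example: the wavenumbers below are invented, and scaling is shown applied to the harmonic wavenumbers before summing the zero-point energy, which is one common convention (the abstract applies the factors to the thermodynamic functions).

```python
# Hypothetical sketch: scale harmonic wavenumbers (cm^-1) by the method's
# empirical factor, then sum the zero-point energy ZPE = (1/2) * sum(hc*nu).
CM1_TO_KCAL_PER_MOL = 2.8591e-3  # 1 cm^-1 per molecule, expressed per mole

def zpe_kcal_per_mol(wavenumbers, scale):
    """Zero-point energy from scaled harmonic wavenumbers, in kcal/mol."""
    return 0.5 * sum(scale * w for w in wavenumbers) * CM1_TO_KCAL_PER_MOL

freqs = [1650.0, 3100.0, 3050.0]      # invented vibrational modes, cm^-1
print(zpe_kcal_per_mol(freqs, 0.99))  # with the DFT scaling factor
print(zpe_kcal_per_mol(freqs, 0.89))  # with the HF scaling factor
```

The smaller HF factor yields a smaller ZPE, reflecting HF's systematic overestimation of harmonic frequencies.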
This is attributed to the charge distribution and dipole moment of each isomer.

REFERENCES
[1] A. R. Bekhradnia, S. Arshadi, Monatsh. Chem. 2007, 138, 725.
[2] M. Z. Kassaee, A. R. Bekhradnia, J. Biosci. Bioeng. 2003, 95, 526.
[3] S. Arshadi, Y. Shad, J. Emami, A. R. Bekhradnia, O. Yazdani, et al., Asian J. Chem. 2010, 22, 1970.
[4] A. R. Bekhradnia, S. Arshadi, Chinese J. Struct. Chem. (accepted for publication).
[5] Frisch, M. J.; Trucks, G. W.; Schlegel, H. B.; Scuseria, et al., Gaussian 03. Gaussian: Pittsburgh, PA, 2003.


Proceeding Number: 800/23

Analysis of Highway Crash Data by Negative Binomial and Poisson Regression Models

Darçın AKIN, Gebze Institute of Technology, Department of City and Regional Planning, Kocaeli, Turkey, e-mail: [email protected]

Keywords: crash data, negative binomial regression, Poisson regression, crash properties, road and weather factors

INTRODUCTION
Accident prediction models are important tools for estimating road safety with regard to roadway, weather, and accident conditions. Various empirical equations have been developed for accident prediction models; however, new regression techniques have recently found application in this area. The model development, and subsequently the model results, are strongly affected by the choice of regression technique. This study evaluates the influence of roadway, weather, and accident conditions, and the type of traffic control, on accident severity (number of persons killed) using negative binomial and Poisson regression models. Information on accident severity and roadway and weather conditions was obtained from the Michigan Department of Transportation accident database. Negative binomial and Poisson regression models were deployed to measure the association between accident severity and roadway, weather, and accident conditions.

LITERATURE REVIEW
Statistical models are used to examine the relationships between accidents and the features of accidents and accident sites. Many past studies illuminating the numerous problems with linear regression models (Joshua and Garber, 1990; Miaou and Lum, 1993) have led to the adoption of more appropriate regression models, such as Poisson regression, which is used to model data that are Poisson distributed, and the negative binomial (NB) model, which is used to model data that have gamma-distributed Poisson means across crash sites, allowing for additional dispersion (variance) of the crash data. Although the Poisson and NB regression models possess desirable distributional properties for describing motor vehicle accidents, these models are not without limitations. One problem that often arises with crash data is 'excess' zeroes, which often leads to dispersion above that described by even the negative binomial model.
'Excess' does not mean 'too many' in the absolute sense; it is a relative comparison that merely suggests the Poisson and/or negative binomial distributions predict fewer zeroes than are present in the data. As discussed in Lord et al. (2004), the observation of a preponderance of zero crashes results from low exposure (i.e. train frequency and/or traffic volumes), high heterogeneity in crashes, observation periods that are relatively short, and/or under-reporting of crashes, and not necessarily from a 'dual state' process underlying the 'zero-inflated' model. Thus, the motivation to fit zero-inflated probability models accounting for excess zeroes often arises from the need to find better-fitting models, which from a statistical standpoint is justified; unfortunately, however, the zero-inflated model also comes with "excess theoretical baggage" that lacks theoretical appeal (see Lord et al., 2004). Another problem, not often observed with crash data, is underdispersion, where the variance of the data is less than the expected variance under an assumed probability model (e.g. the Poisson). One manifestation might be "too few zeroes", but this is not a formal description. Underdispersion has been less convenient to model directly than overdispersion, mainly because it is less commonly observed. Winkelmann's gamma probability count model offers an approach for modeling underdispersed (or overdispersed) count data (Winkelmann and Zimmermann, 1995), and may therefore offer an alternative to the zero-inflated family of models for overdispersed data, as well as provide a tool for modeling underdispersion.
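The dispersion distinction drawn above can be shown numerically. This is a hedged sketch with synthetic counts, not the Michigan crash data: a gamma-mixed Poisson (i.e. negative-binomial-distributed) sample has variance exceeding its mean, while a plain Poisson sample does not.

```python
# Synthetic demonstration of equidispersion vs overdispersion in count data.
import numpy as np

rng = np.random.default_rng(42)
n = 100_000
mean = 2.0

# Plain Poisson counts: variance equals the mean (equidispersion).
poisson = rng.poisson(mean, n)

# Gamma mixing of the Poisson mean with shape k gives a negative binomial
# distribution with Var = mean + mean**2 / k (overdispersion).
k = 1.0
nb = rng.poisson(mean * rng.gamma(shape=k, scale=1.0 / k, size=n))

print(poisson.var() / poisson.mean())  # close to 1
print(nb.var() / nb.mean())            # well above 1
```

A variance-to-mean ratio well above 1, as in the second sample, is the diagnostic that motivates replacing Poisson regression with the NB model.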


METHODS
Accident data were obtained from the Michigan Department of Transportation and represent all crash data reported by police departments in all counties and towns across the State of Michigan, USA, in 2004. Because of the random, discrete, non-negative nature of accident data, multiple linear regression models were not considered appropriate. Poisson regression is usually a good modeling starting point, since crash data are often approximately Poisson distributed. When data exhibit overdispersion (crash mean less than crash variance), several modifications to standard Poisson regression are available. The most commonly applied (and described in the literature) variations include the negative binomial model and the zero-inflated models, including both the zero-inflated Poisson (ZIP) and the zero-inflated negative binomial (ZINB). The Bayesian Poisson-gamma hierarchy, leading to the negative binomial distribution, has been standard practice in developing accident prediction models. A less common model that can deal with both overdispersion and underdispersion (crash mean greater than crash variance) is the gamma probability count model.

FINDINGS & CONCLUSION
The NB regression model results showed that monthly, daily, and weekday variations are not statistically significant for accident severity (number of persons killed); hourly variations, however, are statistically significant at the 0.10 level. Type of traffic control was also found not to be statistically significant. Number of vehicles involved, crash type (overturn, rear-end, side-swipe, head-on, hit object, and so on), injury types (A, B, C), number of uninjured, number of occupants, and weather conditions are statistically significant at the 0.05 level. Light and surface conditions were also statistically significant at the 0.10 level. The findings of the Poisson regression are very similar to those of the NB regression, but the parameter estimates differ slightly from those determined by NB regression.
The results are in agreement with professional judgment with respect to the factors affecting accident severity in highway crashes.

REFERENCES
1. El-Basyouny, K. and Sayed, T. (2006). Comparison of two negative binomial regression techniques in developing accident prediction models. Transportation Research Record: Journal of the Transportation Research Board, Vol. 1950, pp. 9-16.
2. Joshua, S.C. and Garber, N.J. (1990). Estimating truck accident rate and involvements using linear and Poisson regression models. Transportation Planning and Technology, 15(1), pp. 41-58.
3. Lord, D. (2000). The prediction of accidents on digital networks: characteristics and issues related to the application of accident prediction models. Ph.D. Dissertation, Department of Civil Engineering, University of Toronto, Toronto.
4. Lord, D., Washington, S. and Ivan, J. (2004). Poisson, Poisson-gamma, and zero-inflated regression models of motor vehicle crashes: balancing statistical fit and theory. Accident Analysis and Prevention, Pergamon Press/Elsevier Science, 2004.
5. Lord, D. (2006). Modeling motor vehicle crashes using Poisson-gamma models: examining the effects of low sample mean values and small sample size on the estimation of the fixed dispersion parameter. Accident Analysis and Prevention, 38(4), pp. 751-766.
6. Miaou, S.P. and Lum, H. (1993). Modeling vehicle accidents and highway geometric design relationships. Accident Analysis and Prevention, 25, pp. 689-709.
7. Oh, J., Washington, S.P. and Nam, D. (2006). Accident prediction model for railway-highway interfaces. Accident Analysis and Prevention, 38(2), pp. 346-356.
8. Washington, S., Karlaftis, M. and Mannering, F. (2003). Statistical and Econometric Methods for Transportation Data Analysis. Chapman Hall/CRC, Boca Raton, FL.
9. Winkelmann, R. and Zimmermann, K. (1995). Recent developments in count data modeling: theory and applications. Journal of Economic Surveys, 9, pp. 1-24.
