INTED2018
12th International Technology, Education and Development Conference
5-7 March 2018, Valencia (Spain)

CONFERENCE PROCEEDINGS

Rethinking Learning in a Connected Age

Published by IATED Academy iated.org

INTED2018 Proceedings 12th International Technology, Education and Development Conference March 5th-7th, 2018 — Valencia, Spain Edited by L. Gómez Chova, A. López Martínez, I. Candel Torres IATED Academy

ISBN: 978-84-697-9480-7 ISSN: 2340-1079 Depósito Legal: V-262-2018

Book cover designed by J.L. Bernat All rights reserved. Copyright © 2018, IATED The papers published in these proceedings reflect the views only of the authors. The publisher cannot be held responsible for the validity or use of the information therein contained.

INTED2018

12th International Technology, Education and Development Conference

INTED2018 COMMITTEE AND ADVISORY BOARD

Agnė Brilingaitė (LITHUANIA); Agustín López (SPAIN); Alastair Robertson (UNITED KINGDOM); Alison Egan (IRELAND); Alvaro Figueira (PORTUGAL); Amparo Girós (SPAIN); Ana Ćorić Samardžija (CROATIA); Ana Paula Lopes (PORTUGAL); Ana Tomás (SPAIN); Antonio García (SPAIN); Aysen Gilroy (UNITED ARAB EMIRATES); Azzurra Rinaldi (ITALY); Bradley Wiggins (AUSTRIA); Catherine Higgins (IRELAND); Chelo González (SPAIN); Christian Stöhr (SWEDEN); Christopher Ault (UNITED STATES); Christos Alexakos (GREECE); Cira Nickel (CANADA); Colin Layfield (MALTA); Cristina Lozano (SPAIN); Daniel Otto (GERMANY); David Cobham (UNITED KINGDOM); David Martí (SPAIN); Davydd Greenwood (UNITED STATES); Eladio Duque (SPAIN); Emilio Balzano (ITALY); Eneken Titov (ESTONIA); Fabiana Sciarelli (ITALY); Fern Aefsky (UNITED STATES); Frank de Langen (NETHERLANDS); Giovanna Bicego (ITALY); Ignacio Ballester (SPAIN); Ignacio Candel (SPAIN); Ingolf Waßmann (GERMANY); Isao Miyaji (JAPAN); Iván Martínez (SPAIN); Javier Domenech (SPAIN); Javier Martí (SPAIN);
Joanna Lees (FRANCE); Joao Filipe Matos (PORTUGAL); Jorgen Sparf (SWEDEN); Jose F. Cabeza (SPAIN); Jose Luis Bernat (SPAIN); Juanan Herrero (SPAIN); Karen Eini (ISRAEL); Kim Sanabria (UNITED STATES); Konstantinos Leftheriotis (GREECE); Laurentiu-Gabriel Talaghir (ROMANIA); Linda Daniela (LATVIA); Lorena López (SPAIN); Lorenz Cuno Klopfenstein (ITALY); Luis Gómez Chova (SPAIN); Mª Jesús Suesta (SPAIN); Mairi Macintyre (UNITED KINGDOM); Maria Porcel (SPAIN); Martina Coombes (IRELAND); Maureen Gibney (UNITED STATES); Mónica Fernández (SPAIN); Noela Haughton (UNITED STATES); Nor Ibrahim (BRUNEI DARUSSALAM); Norma Barrachina (SPAIN); Oana Balan (ROMANIA); Olga Teruel (SPAIN); Paul Cowley (UNITED KINGDOM); Peter Haber (AUSTRIA); Ruth Vancelee (IRELAND); Sergio Pérez (SPAIN); Siti Zainab Ibrahim (MALAYSIA); Sylvie Roy (CANADA); Taskeen Adam (UNITED KINGDOM); Vicky O'Rourke (IRELAND); Victor Fester (NEW ZEALAND); Wendy Gorton (UNITED STATES); Xavier Lefranc (FRANCE); Yael Furman Shaharabani (ISRAEL); Yvonne Mery (UNITED STATES); Zarina Charlesworth (SWITZERLAND)


CONFERENCE SESSIONS ORAL SESSIONS, 5th March 2018 Learning Analytics Augmented Reality & 3D Videos Experiential Learning Ethical Issues & Digital Divide Educational Leadership Social Media in Education Special Education Teaching STEM Education Experiences Teachers' Competencies Development and Assessment Virtual Reality & Immersive Videos Project-Based Learning in Higher Education Models and strategies for facilitating work-integrated learning ICT skills and competencies among Teachers (1) Mobile Learning Apps Employability Skills for the Special Needs Learner Language Learning Education e-Assessment Learning Management Systems Students Motivation & Engagement Employability Issues and Challenges ICT skills and competencies among Teachers (2) Flipped Classroom Multicultural Education Experiences Computer-Assisted Language Learning Assessment & e-Portfolios Next Generation Classroom Pedagogical and Didactical Innovations Soft and Employability Skills Teachers' Education Flipping the STEM Classroom Inclusive Education Experiences Technology Enhanced Learning in Engineering Education

POSTER SESSIONS, 5th March 2018 Emerging Technologies in Education Global Issues in Education and Research


ORAL SESSIONS, 6th March 2018 Research on Technology in Education Technology Enhanced Learning Service Learning & Community Engagement Experiences in Business Education Pre-service Teacher Education International Cooperation Experiences Maths Teaching and Learning (1) Gender and Diversity Issues Technological Issues in Education Massive Open Online Courses & Open Educational Resources Gamification & Game-based Learning Lifelong Learning & Digital Skills Teacher Training in the Digital Age University-Industry Collaboration Maths Teaching and Learning (2) Language Learning: from EFL to EMI Virtual Learning Environments Sharing Digital Learning Content Internationally. Reality or wishful thinking? Personalized Learning Competence Assessment Professional Development of Teachers Entrepreneurship Education Experiences in Early and Primary Education Curriculum Design in Architecture & Civil Engineering Virtual and Real Mobility Experiences Media and Digital Literacy Peer and Cooperative Learning Links between Education and Research Developing a Teacher Education Model for Preparing Teachers for the Future Quality Assurance in Education STEAM Education Experiences New Experiences in Engineering Education Active Learning Experiences e-Learning & Distance Learning Research on Education Experiences in Health Sciences Education Educational Management New Challenges of Higher Education Institutions Science Popularization 21st Century Skills for Engineers

POSTER SESSIONS, 6th March 2018 Experiences in Education Pedagogical Innovations and New Educational Trends


VIRTUAL SESSIONS Apps for education Barriers to Learning Blended Learning Collaborative and Problem-based Learning Competence Evaluation Computer Supported Collaborative Work Curriculum Design and Innovation Digital divide and access to the internet Diversity issues and women and minorities in science and technology E-content Management and Development e-Learning Education and Globalization Education in a multicultural society Educational Research Experiences Educational Software and Serious Games Enhancing learning and the undergraduate experience Ethical issues in Education Evaluation and Assessment of Student Learning Experiences in STEM Education Flipped Learning ICT skills and competencies among teachers Impact of Education on Development Inclusive Learning International Projects Language Learning Innovations Learning and Teaching Methodologies Learning Experiences in Primary and Secondary School Lifelong Learning Links between Education and Research Massive Open Online Courses (MOOC) Mobile learning New projects and innovations New Trends in the Higher Education Area Online/Virtual Laboratories Organizational, legal and financial issues Pedagogical & Didactical Innovations Pre-service teacher experiences Quality assurance in Education Research Methodologies Research on Technology in Education Science popularization and public outreach activities Student Support in Education Technological Issues in Education Technology-Enhanced Learning Transferring disciplines University-Industry Collaboration Virtual Universities Vocational Training


ABOUT THE INTED2018 PROCEEDINGS

HTML Interface: Navigating with the Web browser
This USB flash drive includes all the papers presented at the INTED2018 conference. It has been formatted similarly to the conference web site in order to keep a familiar environment and to provide access to the papers through your default web browser (open the file named "INTED2018_Proceedings.html"). An Author Index, a Session Index, and the Technical Program are included in HTML format to help you find conference papers. Using these HTML files as a starting point, you can access other useful information related to the conference. The links in the Session List jump to the corresponding location in the Technical Program. The links in the Technical Program and the Author Index open the selected paper in a new window; these links are located on the titles of the papers, and the Technical Program or Author Index window remains open.

Full Text Search: Searching the INTED2018 index file of cataloged PDFs
If you have Adobe Acrobat Reader version 6 or later (www.adobe.com), you can perform a full-text search for terms found in the INTED2018 proceedings papers. Important: to search the PDF index, you must open Acrobat as a stand-alone application, not within your web browser; that is, you should open the file "INTED2018_FrontMatter.pdf" directly with your Adobe Acrobat or Acrobat Reader application. This PDF file is attached to an Adobe PDF index that allows text search in all PDF papers by using the Acrobat search tool (not the same as the find tool). The full-text index is an alphabetized list of all the words used in the collection of conference papers. Searching an index is much faster than searching all the text in the documents.

To search the INTED2018 Proceedings index:
1. Open the Search PDF pane through the menu "Edit > Advanced Search" or click on the PDF bookmark titled "SEARCH PAPERS CONTENT".
2. "INTED2018_index.pdx" should be the currently selected index in the Search window (if the index is not listed, click Add, locate the index file .pdx, and then click Open).
3. Type the search text, click the Search button, and then proceed with your query.

For Acrobat 9 and later:
1. In the "Edit" menu, choose "Search". You may receive a message from Acrobat asking if it is safe to load the Catalog Index. Click "Load".
2. A new window will appear with search options. Enter your search terms and proceed with your search as usual.

For Acrobat 8:
1. Open the Search window, type the words you want to find, and then click Use Advanced Search Options (near the bottom of the window).
2. For Look In, choose Select Index.
3. In the Index Selection dialog box, select an index if the one you want to search is available, or click Add, locate and select the index to be searched, and click Open. Repeat as needed until all the indexes you want to search are selected.
4. Click OK to close the Index Selection dialog box, and then choose Currently Selected Indexes on the Look In pop-up menu.
5. Proceed with your search as usual, selecting other options you want to apply, and click Search.

For Acrobat 7 and earlier:
1. In the "Edit" menu, choose "Full Text Search".
2. A new window will appear with search options. Enter your search terms and proceed with your search as usual.

DEVELOPMENT OF A PLATFORM TO SIMULATE VIRTUAL ENVIRONMENTS FOR ROBOT LOCALIZATION D. Valiente, Y. Berenguer, L. Payá, A. Peidró, O. Reinoso Miguel Hernández University (SPAIN)

Abstract
Nowadays, the use of mobile robots has increased substantially and we can find them in many environments, solving a wide range of tasks. When a mobile robot has to carry out a task autonomously in an unknown environment, it has to perform two fundamental steps. On the one hand, it has to generate a model of the environment (namely, a map), and on the other hand it must be able to use this map to estimate its current pose (position and orientation). The robot can extract the necessary information from the unknown environment using the different sensors it may be equipped with. This information is compared with the map data to estimate the pose of the robot. Several kinds of sensors can be used with this aim, such as laser, touch or vision sensors. Recently, the use of vision sensors has become a very common way to solve both the mapping and the localization tasks, thanks to the great quantity of information they offer with respect to their relatively low cost. Also, the use of images makes it possible to carry out other high-level tasks, such as people detection and recognition. However, since images are very high-dimensional data, they have to be processed to extract the relevant information. Several algorithms exist to carry out these tasks, but they tend to be mathematically complex. Also, it is necessary to have a variety of environments to test and tune the algorithms. In the first stages of design, these environments should be simple and static, in order to test the algorithms under ideal conditions. The use of real environments at this initial stage is not advisable, since their appearance tends to change (e.g. noise, occlusions, changes in lighting conditions, changes in the position of doors, objects, etc.), which would introduce an uncontrolled level of uncertainty into the algorithms. Taking these facts into account, we have developed a software tool that offers students the possibility of easily generating virtual environments where they can simulate the movement of a virtual robot. Thanks to it, students can generate sets of images captured under ideal conditions and test their algorithms using these images. This software tool has been designed to be used by the students of a Master in Robotics, in which students learn how to design autonomous robots that use computer vision to guide them. This way, the tool is useful in the first stages of design, to easily generate sets of images, extract the main information from the images and test the algorithms under ideal conditions. According to our experience with this kind of topic, the use of real images unnecessarily complicates the first stages of the design of the algorithms, and students usually get lost at this point. We expect this tool to help them understand the test process better and to focus on the design and tuning of the algorithms.
Keywords: Mapping, robot localization, virtual environment, simulation, mobile robotics

1 INTRODUCTION

Nowadays, mobile robots are used for a wide variety of purposes, from domestic to industrial applications. In general terms, these robots are required to operate autonomously in order to accomplish their tasks. To that end, they need to interact with the environment. Usually, this is performed by acquiring and processing sensory data from the environment. Besides this, the mobile robot has to construct an internal representation of the environment, which is commonly denoted as the map of the environment. Such a representation allows the robot to compare the sensory data with the information stored in the map. Then the robot has to be able to estimate its position inside the map and, at the same time, to estimate its trajectory [1]. In this sense, visual information has emerged as a powerful source of data. Visual sensors [2] are represented by digital cameras, which are normally mounted on the robot. They provide several kinds of benefits in terms of low cost, lightness and a good amount of information about the environment, in contrast to traditionally acknowledged sensors such as laser [3] or sonar [4].

Different projection systems have been devised for cameras. In particular, the omnidirectional projection is remarkable for its capability to produce images with a wide field of view [5]. In addition, the omnidirectional projection can be easily post-processed into a panoramic view, in order to obtain a human-like view representation [6]. Current research in mobile robotics is mainly focused on two aspects: i) the development of new mapping and localization algorithms, and ii) the improvement of sensory data processing. The general goal is to enhance the robustness of the final map and trajectory estimation, regardless of the final target application. In this context, computer vision and robotics algorithms become a fundamental basis to be taught in academic degrees in engineering. In particular, in our Master degree at the Miguel Hernández University (Spain), there are several subjects where these topics are studied in depth. The students learn how to extract and process visual information from the environment by means of a set of images captured by the robot. They are also taught how to produce robust estimates for robot localization and map building. The students are provided with a consistent theoretical background, despite the fact that these subjects are eminently practical. The practical sessions are conducted in the robotics laboratory. During the initial stages of the design, the students need simple and static environments in order to assess the performance of different algorithms. Next, they start working with more extended environments, under more realistic conditions. From our experience, the acquisition of real environments is time-consuming, and it is also very likely to cause a number of issues. Moreover, under certain real conditions, control and motion data, but more specifically images, are considerably affected by noise, occlusions, changes in the lighting conditions, the presence of dynamic elements, etc. These facts tend to corrupt the acquired data, and likewise the final estimation, which is eventually destabilized by the uncertainty in the system. As a result, the learning process is highly compromised at the first stages. According to this, our objective is to alleviate these difficulties that students usually find. For that purpose, we have implemented a software tool in order to ease the first data acquisition and processing steps. This tool is intended for the students to generate virtual environments in which a robot equipped with a specific camera sensor may be simulated along a trajectory. This trajectory, and the data extracted from the different poses traversed, are also available under user configuration. Therefore, this tool simplifies the initial procedure by easily generating sets of images along a predefined trajectory, from which specific information can be extracted in order to test the robotics algorithms under idealistic circumstances. In consequence, we initially help the students focus on higher objectives regarding the learning and comprehension of advanced robotic concepts, in terms of map building and localization. Moreover, the students have at their disposal an advantageous tool for their own purposes, whenever they may need to test extended developments in a synthetic environment or to test and improve their own designs.
The remainder of the paper is structured as follows: section 2 provides a brief overview of the main robotics topics the students are taught during the course; section 3 concentrates on the fundamentals of the vision system to be synthetically implemented in the software tool; section 4 describes the design of the software tool; section 5 presents some examples of virtual environments generated with the tool; section 6 discusses the conclusions extracted from this work.

2 VISUAL MOBILE ROBOTICS: MAPPING AND LOCALIZATION APPROACHES

In this section, we present the fundamentals of visual mapping and localization, according to the main theoretical aspects covered by the subjects of our Master degree. Nowadays, approaches to visual mobile robotics may be classified according to different aspects, such as the kind of map representation, the robotic algorithm for estimating a solution, and the specific visual sensor used to extract information from the environment. Traditionally, most models for the map representation were obtained from range data captured with laser and sonar sensors, and these map representations were principally computed according to occupancy models [7]. However, more recently, the nature of the map has evolved towards a discrete representation of specific variables from the environment. This has been possible due to the emergence of visual sensors, the improvement of the processing techniques, and the new visual mapping and localization models.

More precise distinctions can also be made depending on the particular visual technique. The general classification usually distinguishes between appearance-based methods and feature-based methods. Both pursue a discrete visual representation of a scene, namely a visual descriptor. On the one hand, appearance-based methods concentrate on processing the pixel intensities, treating the entire set of pixels of an image as a single representation, by means of specific computations and metrics [8]. Some feasible examples to express the information of an image are the Fourier Signature [9], HOG [10] and Gist [11]. On the other hand, feature-based methods aim to detect distinctive and robust physical points of the environment in the pixel reference system, independently, point by point. Some well-acknowledged descriptors are Harris [12], SIFT [13], SURF [14] and ORB [15]. All these methods permit redefining the map model. In contrast to previous approaches, based on occupancy areas, the visual maps can now be constituted by appearance-based or feature-based descriptors of a certain set of images. In consequence, a reasonable improvement in efficiency is achieved, since the number of variables is reduced and their management and processing are substantially enhanced.
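To make the appearance-based idea more concrete, the following minimal Python sketch (illustrative only, not the code used in the subjects; function names and the parameter k are ours) computes a Fourier-Signature-style descriptor [9] from a grayscale panoramic image and compares two descriptors with a Euclidean distance. Keeping only the magnitudes of the first k row-wise DFT coefficients makes the descriptor invariant to pure rotations of the robot, which appear as column shifts of the panorama.

```python
import numpy as np

def fourier_signature(panoramic: np.ndarray, k: int = 8) -> np.ndarray:
    """Global-appearance descriptor: magnitudes of the first k DFT
    coefficients of every row of a grayscale panoramic image.
    A rotation of the robot shifts the panorama columns, which only changes
    the phase of the row-wise DFT, so the magnitudes stay unchanged."""
    rows = np.fft.fft(panoramic.astype(float), axis=1)   # row-wise 1D DFT
    return np.abs(rows[:, :k]).ravel()                   # descriptor of size H*k

def descriptor_distance(d1: np.ndarray, d2: np.ndarray) -> float:
    """Euclidean distance between two descriptors (image comparison)."""
    return float(np.linalg.norm(d1 - d2))

if __name__ == "__main__":
    # usage: compare a current view against a small set of map images
    rng = np.random.default_rng(0)
    current = fourier_signature(rng.random((64, 256)))
    map_images = [rng.random((64, 256)) for _ in range(3)]
    dists = [descriptor_distance(current, fourier_signature(m)) for m in map_images]
    print("closest map image:", int(np.argmin(dists)))
```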

2.1 Approach to Mapping and Localization

Once these classifications have been introduced, the overall mapping and localization procedure may be outlined as follows:

I. Data acquisition. Under realistic conditions, the robot, equipped with a camera, traverses a particular environment while it captures a set of images. During its navigation, it processes the visual information from the images, which ultimately represents the input data for the estimation algorithm.

II. Computing image descriptors. Once each image is captured, different visual techniques may be applied to its visual content. As a result, one or even several appearance descriptors can be computed per scene. Feature-based descriptors can also be computed over the image. Consequently, the visual information observed by the robot is finally encoded in a discrete representation, associated with the visual descriptors.

III. Observation measurement. Once the visual descriptors are available, an observation measurement has to be performed, as one of the most relevant parts of the algorithm. This stage allows the robot to compute its localization by comparing the visual descriptors of the image captured at the current pose with the set of visual descriptors associated with the images stored in the map. Such a comparison is achieved by evaluating tabulated differences between descriptors (appearance-based methods), or by inferring geometric relationships between physical points detected as matches between images in the map (feature-based methods).

IV. Re-estimation. Once the robot localizes itself, it is necessary to update the last estimation of the map, but also to back-propagate the estimation update to the trajectory followed by the robot.

V. Map update. There are several approaches to incrementally update the map. The robot may initiate new parts of the map whenever its decision module determines that the uncertainty is considerably high and therefore a new image has to be included in the map; this image corresponds to the one acquired at the current robot pose. Fig. 1a depicts an example of this approach. Alternatively, as observed in Fig. 1b, there may exist a given grid of images, pre-acquired in an equally distributed manner along the map dimensions (either in the 2D or the 3D pose reference frame). With this approach, the robot decides whether an image of the grid has to be included in the current representation of the map, again according to the current uncertainty of the system. Notice that the poses where images were stored in the map are represented with red circles.

At the end of the procedure, the set of images, their descriptors, and the topological or geometrical relationships between them eventually conform the final estimate of the map of the environment. Summarizing, the problem may be formulated through an augmented state vector $x(t)$, which encodes the current pose of the robot, $x_r$, at each time step, and the poses of the set of images stored in the map, $x_1, \dots, x_n$, which are either captured along the trajectory of the robot or selected from the pre-defined grid. This set ultimately determines the final map estimate. Notice that the specific visual descriptors are also stored and tied to the poses where the images were captured.

In accordance with this nomenclature, the state vector that contains the different variables of the map is expressed as:

$x(t) = \begin{bmatrix} x_r & x_1 & x_2 & \cdots & x_n \end{bmatrix}^T$

Both the current pose of the robot and the poses of the images are expressed in the Cartesian reference system, with an additional variable for the orientation. For instance, in a 2D planar reference system, the elements of the state vector $x(t)$ also encode the orientation, as follows:

$x_r = \begin{bmatrix} x_r & y_r & \theta_r \end{bmatrix}^T, \qquad x_i = \begin{bmatrix} x_i & y_i & \theta_i \end{bmatrix}^T, \quad i = 1, \dots, n$

In addition, the visual descriptor associated with each image in the map may be computed in a multidimensional subspace, according to the particular visual description technique and its dimensionality:

$d_i \in \mathbb{R}^{M \times 1}$

Moreover, an additional record has to be defined in order to keep the uncertainty measure of each estimated variable in the map, with the corresponding units of the reference system that expresses the poses of the robot. That is, a record in the form of the following square covariance matrix:

$P(t) \in \mathbb{R}^{N \times N}, \quad N = 3n + 3, \quad p_{ij} \in \mathbb{R}$
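As an illustration of how $x(t)$ and $P(t)$ grow as images are added to the map, the following Python sketch (our own minimal example, not the authors' MATLAB implementation) stacks the robot pose and the image poses into one state vector and augments the covariance with a 3x3 block per new image, so that N = 3n + 3 as stated above. The initial variance value is an arbitrary placeholder.

```python
import numpy as np

def build_state(robot_pose, image_poses):
    """Stack the robot pose (x, y, theta) and the n image poses into x(t)."""
    return np.concatenate([np.asarray(robot_pose, dtype=float)] +
                          [np.asarray(p, dtype=float) for p in image_poses])

def add_image_to_map(x, P, new_pose, init_var=1.0):
    """Augment the state and covariance when a new image is stored in the map."""
    x_new = np.concatenate([x, np.asarray(new_pose, dtype=float)])
    n = x.size
    P_new = np.zeros((n + 3, n + 3))
    P_new[:n, :n] = P                       # keep the previous uncertainty
    P_new[n:, n:] = init_var * np.eye(3)    # placeholder uncertainty of the new pose
    return x_new, P_new

# usage: robot pose plus two map images, then one more image is added
x = build_state((0.0, 0.0, 0.0), [(4.0, 6.5, 0.0), (5.5, 5.5, 0.0)])
P = 0.1 * np.eye(x.size)                    # N = 3n + 3 = 9
x, P = add_image_to_map(x, P, (8.0, 4.0, 0.0))
print(x.shape, P.shape)                     # (12,) (12, 12)
```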

Figure 1. (a) example of a trajectory where the robot stores images in the map, indicated with red circles. (b) example of a trajectory where the robot selects images from a given grid (indicated with red circles), in order to store them as part of the map.

3 OMNIDIRECTIONAL VISION SYSTEM

Before presenting the software tool implemented in this work, it is worth introducing the omnidirectional vision system, which has been modeled in order to generate synthetic sets of images within simulated virtual environments. Our omnidirectional vision sensor consists of a catadioptric system constituted by a hyperbolic mirror, which is jointly assembled with a FireWire CCD camera, as represented in Fig. 2a, and finally mounted on the robot, as observed in Fig. 2b. In order to understand the image generation, Fig. 3a depicts the projection of a 3D point onto the pixel image frame. Notice that the centre of projection coincides with the focus of the hyperboloid. This is crucial in order to focus the 3D scene properly on the image frame. The scene point $P \in \mathbb{R}^3$ generates a ray that intersects the mirror in the same direction as its normalization onto the unitary central sphere, which is used for normalization purposes since there is no knowledge about the depth. Finally, this intersection point permits directing the projection towards the focus, and consequently it is projected onto the image frame, where the corresponding pixel point is obtained. Likewise, Fig. 3b presents the projection procedure with the parameters of the hyperbolic mirror indicated. The dimensions of the hyperboloid are defined by its shape parameters, whereas an additional parameter expresses the focal distance, which is essential in order to assemble the CCD camera adequately and focus the image scene. In consequence, the resulting image, named omnidirectional image, covers 360º of the scene due to the nature of the projection procedure, as may be observed in Fig. 4.

Figure 2. (a) Omnidirectional vision system constituted by a hyperbolic mirror, assembled with a CCD camera. (b) Mobile robot equipped with the omnidirectional vision system.

Figure 3. (a) projection procedure of a 3D point onto the image plane, for an omnidirectional vision system. (b) same projection procedure, with the mirror parameters indicated.

Figure 4. Example of an omnidirectional image.
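As a concrete illustration of the projection described in this section, the sketch below uses the unified sphere model as a stand-in for the hyperbolic-mirror geometry of Fig. 3 (the tool itself is implemented in MATLAB; this Python fragment and its parameters xi, f, u0 and v0 are purely illustrative): the scene point is first normalized onto the unit sphere centred at the focus and then reprojected onto the image plane.

```python
import numpy as np

def project_unified(P, xi=0.9, f=300.0, u0=320.0, v0=240.0):
    """Sketch of a central catadioptric projection (unified sphere model):
    1) normalize the scene point onto the unit sphere centred at the focus,
    2) reproject it from a point displaced by xi along the mirror axis,
    3) apply the camera intrinsics (f, u0, v0) to obtain the pixel."""
    P = np.asarray(P, dtype=float)
    x, y, z = P / np.linalg.norm(P)              # point on the unit sphere
    m = np.array([x / (z + xi), y / (z + xi)])   # normalized image point
    return np.array([f * m[0] + u0, f * m[1] + v0])

# usage: pixel coordinates of a 3D point expressed in the mirror frame
print(project_unified([1.0, 0.5, 2.0]))
```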

4 DESCRIPTION OF THE SOFTWARE TOOL

Once the omnidirectional vision system has been introduced, the software tool can now be detailed. It mainly consists of a simulation tool for the models presented so far: the map representation stored by the robot, and the image generation procedure carried out by the omnidirectional vision system.

4.1 Simulation Algorithm

4.1.1 Virtual Environments: Object Definition

As stated previously, a simulation algorithm has been devised in this work. Firstly, it is necessary to define a virtual environment in a 3D reference system, where a set of available objects is pre-defined. These objects are implemented as the intersection of different 3D planes, each one with specific dimensions, as represented in Fig. 5 for a sample cube. This strategy allows us to provide a wide set of objects. The user is allowed to define the dimensions of each object and to design their arrangement within the virtual environment.

Figure 5. Example of a simulated object: a 3D cube obtained by means of the intersection of six 3D planes.
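The plane-based object representation can be sketched as follows (a minimal Python illustration under our own naming, not the tool's MATLAB data structures): a box such as the cube of Fig. 5 is stored as six bounded planes, each defined by a point, an outward normal, the half-extents of the box and a colour.

```python
import numpy as np

def make_box(center, size, colour):
    """Return the six bounded planes (point on the face, outward normal,
    half-extents of the box, colour) that delimit an axis-aligned box."""
    center = np.asarray(center, dtype=float)
    half = np.asarray(size, dtype=float) / 2.0
    planes = []
    for axis in range(3):
        for sign in (-1.0, 1.0):
            normal = np.zeros(3)
            normal[axis] = sign
            planes.append({
                "point": center + half[axis] * normal,  # centre of that face
                "normal": normal,
                "half_extents": half,
                "colour": colour,
            })
    return planes

# usage: a cupboard-like box placed in the environment
cupboard = make_box(center=(2.0, 3.0, 1.0), size=(1.0, 0.5, 2.0),
                    colour=(150, 100, 60))
print(len(cupboard), "planes")   # -> 6 planes
```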

4.1.2 Trajectory Definition

Having created a simulated environment, the user is able to establish a simulated trajectory for the robot, that is, a set of points in the virtual environment which the robot traverses. As previously commented, each of these points will have an associated image, which captures the scene of the environment at that position. Finally, these data will jointly represent the required input for any robotic algorithm the students may need to test or work with.

4.1.3 Image Generation

Finally, it is necessary to generate images of the virtual environment at the set of poses defined along the trajectory. The user has to define the desired resolution of the images. Then, the back-projection procedure is simulated:
1. Each pixel receives a vector which describes the ray from the focus to the mirror.
2. The set of vectors is directed from the mirror towards the 3D world.
3. The rays emerging from the mirror surface in the form of vectors can intersect with the planes that conform the arranged objects in the 3D world. Whenever an intersection with a plane occurs, the intersection point, which is part of an object, must appear in the image. Hence the colour associated with that point is recorded in the image pixel associated with the corresponding ray.
This way, once all the pixels receive their colour, the image is completely generated.
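The back-projection loop can be summarised with the following Python sketch (an illustrative simplification under our own naming, not the tool's MATLAB code): each pixel ray is intersected with every plane of the environment and the colour of the closest hit is written to that pixel. The mirror-specific computation of the per-pixel ray directions and the check against the finite extent of each plane are omitted for brevity.

```python
import numpy as np

def ray_plane_intersection(origin, direction, plane_point, plane_normal):
    """Intersection of a ray with an (infinite) plane, or None if the ray is
    parallel to the plane or the hit lies behind the origin. A complete
    implementation would also check the finite extents of the plane."""
    denom = float(np.dot(direction, plane_normal))
    if abs(denom) < 1e-9:
        return None
    t = float(np.dot(plane_point - origin, plane_normal)) / denom
    return origin + t * direction if t > 0.0 else None

def render(camera_pos, pixel_rays, planes, background=(0, 0, 0)):
    """Back-projection loop of steps 1-3: for every pixel ray, keep the colour
    of the closest intersected plane (each plane is a dict with 'point',
    'normal' and 'colour', e.g. as produced by the object sketch above)."""
    camera_pos = np.asarray(camera_pos, dtype=float)
    image = []
    for ray in pixel_rays:
        ray = np.asarray(ray, dtype=float)
        best_dist, colour = np.inf, background
        for plane in planes:
            hit = ray_plane_intersection(camera_pos, ray,
                                         plane["point"], plane["normal"])
            if hit is not None:
                dist = float(np.linalg.norm(hit - camera_pos))
                if dist < best_dist:
                    best_dist, colour = dist, plane["colour"]
        image.append(colour)
    return image
```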

5 USER INTERFACE

5.1 Menu Window

In this section, we present the graphical user interface that has been implemented for the software tool, which has been developed with MATLAB 2016 [16]. Fig. 6 presents the appearance of the main menu window of the software tool. All the variables of the virtual environment can be configured through several submenus. Firstly, the dimensions of the 3D virtual environment should be established, in the upper-left part of the menu. Secondly, in the upper-right part, the configuration of the vision system can be tuned, such as the geometric parameters of the mirror; it is worth mentioning that this mirror is assumed to be a hyperbolic one. There is also an option to set the desired resolution of the images that this vision system will generate. Thirdly, in the middle-left part, the user can define a trajectory, either by inputting the desired coordinates of each traversed pose or by selecting a total number of equally spaced poses. There is a specific button to insert poses, one by one. Notice that a specific orientation (rotation) for the 3D coordinates of each pose can also be configured. Moreover, the user may introduce trajectory points by simply clicking on the environment layout. The poses of the trajectory are subsequently numbered. Next, in the middle-right part, different kinds of image projection can be chosen. This projection will be used for generating the set of images at each pose of the trajectory, which is very useful when the students have to test algorithms in which the kind of image projection has a significant relevance in the final estimation. Finally, at the bottom of the menu, a set of available objects is listed. Each object can be selected and dragged onto a position inside the environment; objects can also be inserted by introducing their 3D coordinates and then clicking on the button for inserting objects into the virtual environment. Note that the colour and the dimensions of the objects can also be modified. After configuring all the variables of the environment, the final environment layout may be observed in plan view, on the right side of the main window.
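To summarise the configurable variables described above, a single simulation run could be organised as in the following sketch (illustrative field names and values chosen by us; the tool itself stores these options through its MATLAB menus).

```python
# Illustrative configuration mirroring the menu options of Fig. 6
# (field names are ours, not the tool's MATLAB variable names).
simulation_config = {
    "environment": {"x": (0.0, 10.0), "y": (0.0, 7.0), "z": (0.0, 3.0)},   # metres
    "mirror": {"a": 0.04, "b": 0.02, "c": 0.045, "h": 0.10},               # hyperbolic mirror (example values)
    "resolution": (640, 480),                                              # (u, v) pixels
    "projection": "omnidirectional",   # or "panoramic", "planar", "stereo omnidirectional"
    "trajectory": [                    # (x, y, z) in metres and (rotX, rotY, rotZ) in degrees
        {"position": (4.0, 6.5, 0.5), "rotation": (0.0, 0.0, 0.0)},
        {"position": (5.5, 5.5, 0.5), "rotation": (0.0, 0.0, 0.0)},
    ],
    "objects": [
        {"type": "table", "position": (2.0, 3.0, 0.0),
         "size": (1.2, 0.8, 0.75), "colour": (120, 80, 40)},
    ],
}
```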

[Fig. 6 screenshot: the "Virtual Environment Simulation Tool" main window contains the environment dimensions ([Xmin,Xmax], [Ymin,Ymax], [Zmin,Zmax]); the mirror parameters (a, b, c, h) and the image resolution (u x v) px; the trajectory definition ([X,Y,Z], [rotX,rotY,rotZ], Insert Pose); the projection type (planar, panoramic, omnidirectional, stereo omnidirectional); the object list (door, doormat, table, painting, cupboard, plant, window) with position, width, height, depth and colour (R,G,B) fields and an Insert Object button; the environment layout in plan view; and a Simulate button.]

Fig. 6. Appearance of the main menu window of the simulation tool. The left side submenu permits configuring the virtual environment. The right side submenu presents the virtual environment layout.

5.2 Results Window

Once the environment has been completely defined, each pose of the trajectory has to be associated with a synthetic image. With this aim, there is a button at the bottom of the window that launches the image generation process. Once the button is clicked, a secondary viewer window opens to present the synthetic images generated along the poses of the predefined trajectory. The images are also tagged with their corresponding positions and orientations along the trajectory. Fig. 7 shows a sample of this new window, with the resulting images for the example introduced in Fig. 6. The user can save the simulated results of the virtual environment by clicking the specific button. The saved data consist of the 3D coordinates of the trajectory with their orientations, the set of images associated with each pose of the trajectory, and the layout of the environment with the arranged objects. Finally, Fig. 8 presents results for the same example, but this time the set of images has been generated with a panoramic projection. Amongst other outcomes, students can easily confirm the correspondence between the omnidirectional and panoramic coordinates, since a rotation in the omnidirectional view appears simply as a longitudinal shift in the panoramic view.

5.3 Height Simulation

According to novel research works, the omnidirectional projection allows us to infer height changes in the scene [17]. This software tool allows the student to test this functionality. Since any 3D trajectory can be defined, it is easy to introduce a set of poses along the z axis, in order to simulate a trajectory along the height coordinate, as seen in Fig. 9. This example shows the resulting omnidirectional images, which can be used for testing the performance of several height estimation algorithms with omnidirectional images.


Fig. 7. Appearance of the viewer window for the synthetic image generation. The left side shows the resulting images, which are tagged with their position and orientation. The right side presents the virtual environment layout, where the images were generated along the trajectory pre-defined in Fig. 6.


Fig. 8. Synthetic images generated with panoramic projection along the same pre-defined trajectory in the example presented in Fig. 6.


Fig. 9. Synthetic images generated with omnidirectional projection along the height coordinate, with a trajectory consisting of three poses.

6 CONCLUSIONS

This work has presented a software tool designed for teaching purposes in a Master degree course. In this course, there are several subjects in which the students are requested to test the operation and behaviour of different robotics algorithms, principally aimed at mapping and localization tasks within the field of mobile robotics. Initially, the students have to go through general, preliminary tasks such as data acquisition and image processing, amongst others. Our experience confirms that these tasks generally cause issues and are considerably time-consuming for the students, and they may be further aggravated when dealing with realistic environments under noisy conditions. Nonetheless, thanks to this application, the students can avoid the common difficulties that arise at these first stages, and they can easily obtain complete synthetic virtual environments with a wide set of configuration possibilities. Consequently, the time dedicated to the practical sessions is fully exploited, since the students can concentrate directly on extracting visual information from the images and on the behaviour of advanced robotic algorithms. They can also implement advanced versions, which can be tested accordingly. Overall, with this contribution we ease the initial stages of the learning process, since we allow students to focus on higher levels of comprehension and to establish further learning objectives. Furthermore, we provide students with a powerful tool that allows them to freely test and enhance the algorithms for their own purposes.

ACKNOWLEDGEMENTS
This work has been partially supported by the Spanish government through the project DPI2016-78361-R (AEI/FEDER, UE), "Creación de Mapas Mediante Métodos de Apariencia Visual para la Navegación de Robots". In addition, part of the work has been carried out with the support of a postdoctoral grant, APOSTD/2017/028, held by D. Valiente, which is funded by the Valencian Education Council and the European Social Fund.

REFERENCES
[1] S. Thrun, W. Burgard, D. Fox, "Probabilistic Robotics", The MIT Press, 2005.
[2] L. Payá, A. Gil, O. Reinoso, "A State-of-the-Art Review on Mapping and Localization of Mobile Robots Using Omnidirectional Vision Sensors", Journal of Sensors, Ed. Hindawi, ISSN: 1687-7268, vol. 2017, pp. 1-21, 2017.
[3] D. Cole, P. Newman, "Using laser range data for 3D SLAM in outdoor environments", Proceedings of the IEEE International Conference on Robotics and Automation, pp. 1556-1563, 2006.
[4] S. Guadarrama, A. Ruiz-Mayor, "Approximate robotic mapping from sonar data by modeling perceptions with antonyms", Information Sciences, Ed. Elsevier, ISSN: 0020-0255, vol. 180, no. 21, pp. 4164-4188, 2010.
[5] D. Valiente, A. Gil, L. Payá, J.M. Sebastian, O. Reinoso, "Robust Visual Localization with Dynamic Uncertainty Management in Omnidirectional SLAM", Applied Sciences, Ed. MDPI, ISSN: 2076-3417, vol. 7, no. 12, pp. 1-26, 2017.
[6] M. Liu, C. Pradalier, R. Siegwart, "Visual homing from scale with an uncalibrated omnidirectional camera", IEEE Transactions on Robotics, ISSN: 1552-3098, vol. 29, no. 6, pp. 1353-1365, 2013.
[7] J. Engel, T. Schöps, D. Cremers, "LSD-SLAM: Large-scale direct monocular SLAM", European Conference on Computer Vision, pp. 834-849, 2014.
[8] Y. Berenguer, L. Payá, A. Peidró, O. Reinoso, "SLAM algorithm by using global appearance of omnidirectional images", Proceedings of the 14th International Conference on Informatics in Control, Automation and Robotics (ICINCO 2017), pp. 382-388, 2017.
[9] E. Menegatti, T. Maeda, H. Ishiguro, "Image-based memory for robot navigation using properties of omnidirectional images", Robotics and Autonomous Systems, Ed. Elsevier, ISSN: 0921-8890, vol. 47, no. 4, pp. 251-276, 2004.
[10] N. Dalal, B. Triggs, "Histograms of oriented gradients for human detection", IEEE Conference on Computer Vision and Pattern Recognition, pp. 886-893, 2005.
[11] A. Friedman, "Framing pictures: The role of knowledge in automatized encoding and memory for gist", Journal of Experimental Psychology: General, ISSN: 0022-1032, vol. 108, no. 3, pp. 316-355, 1979.
[12] C. G. Harris, M. Stephens, "A combined corner and edge detector", Alvey Vision Conference, pp. 147-151, 1988.
[13] D. Lowe, "Distinctive Image Features from Scale-Invariant Keypoints", International Journal of Computer Vision, Ed. Springer, ISSN: 0920-5691, vol. 60, no. 2, pp. 91-110, 2004.
[14] H. Bay, A. Ess, T. Tuytelaars, L. Van Gool, "Speeded-Up Robust Features (SURF)", Computer Vision and Image Understanding, Ed. Elsevier, ISSN: 1077-3142, vol. 110, no. 3, pp. 346-359, 2008.
[15] E. Rublee, V. Rabaud, K. Konolige, G. Bradski, "ORB: An efficient alternative to SIFT or SURF", International Conference on Computer Vision, pp. 2564-2571, 2011.
[16] http://www.mathworks.com/products/matlab/
[17] F. Amorós, L. Payá, O. Reinoso, D. Valiente, "Towards relative altitude estimation in topological navigation tasks using the global appearance of visual information", International Conference on Computer Vision Theory and Applications, pp. 194-201, 2014.