Reactive Collision Avoidance and Control System for an Autonomous Sail Boat

A system for the detection and avoidance of obstacles in an ocean environment

Prepared by: Nicholas de Klerk

Prepared for: R.A. Verrinder
Department of Electrical Engineering
University of Cape Town

Submitted to the Department of Electrical Engineering at the University of Cape Town in partial fulfilment of the academic requirements for a Bachelor of Science degree in Mechatronics.

October 11, 2013

Declaration

1. I know that plagiarism is wrong. Plagiarism is to use another's work and pretend that it is one's own.

2. I have used the IEEE convention for citation and referencing. Each contribution to, and quotation in, this report from the work(s) of other people has been attributed, and has been cited and referenced.

3. This report is my own work.

4. I have not allowed, and will not allow, anyone to copy my work with the intention of passing it off as their own work or part thereof.

Signature:. . . . . . . . . . . . . . . . . . . . . . . . . . . Nicholas de Klerk

Date:. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

Acknowledgments

The following are thanked for their valuable assistance with the completion of this project and report:

• Robyn Verrinder for allowing me to research this topic and for her guidance and assistance while supervising me throughout the project.
• Fedrico Lorenzi and Yashren Reddi for their assistance with Linux and OpenCV.
• Fred Nicolls for his guidance with stereo vision.
• My mother Dalene for all the tea, coffee, food and support while working late nights.
• Kirsty Cox and Justin Coetser for proofreading my project.


Abstract

Autonomous sail boats are currently required for oceanic research. To sail the world's oceans safely, these boats require a system for collision avoidance. This work explores the design of a reactive collision avoidance system for an autonomous sail boat to meet this requirement. The various obstacle sensing options were investigated and an image sensor in a stereo-vision camera configuration was selected. Algorithms to detect obstacles and calculate their range from the boat were developed and tested in a high-level language. These algorithms were then combined and reprogrammed in a low-level language for an embedded implementation. The system was then installed on a mobile platform for real-world testing. It was thoroughly tested in a reservoir with various obstacle configurations and verified to operate correctly by reliably detecting the obstacles placed in its path. This project proposes a novel method of detecting obstacles in an ocean environment and provides an alternative to current methods. Its success provides developers of autonomous sail boats with a new low-cost, reliable method for collision avoidance, ensuring that their creations can sail the seas safely.


Contents

1 Introduction
  1.1 Background to the study
  1.2 Objectives of this study
    1.2.1 Problems to be investigated
    1.2.2 Purpose of the study
  1.3 Scope and Limitations
  1.4 Plan of development

2 Literature Review
  2.1 Background to sailing
  2.2 Development and motivation for an autonomous yacht
    2.2.1 Structure development
    2.2.2 Communication
    2.2.3 Actuation
    2.2.4 Autonomous yacht control
    2.2.5 System architecture and sensors
    2.2.6 Power requirements and power sustainability
  2.3 Collision avoidance
  2.4 Obstacle detection
    2.4.1 Monocular vision using image sensor
    2.4.2 Stereo-vision using image sensor
    2.4.3 3D sensor using XBox Kinect
  2.5 Obstacle avoidance
    2.5.1 A reactive approach
  2.6 Development Software
  2.7 Embedded System Technology
  2.8 Current autonomous yachts
  2.9 Summary of literature review

3 Selection and configuration of obstacle detection sensor
  3.1 Obstacle detection sensor selection
    3.1.1 Selection of image sensor
  3.2 Image sensor configuration options

4 Camera mount development
  4.1 Camera mount design
  4.2 Camera mount construction

5 Development of obstacle detection algorithm
  5.1 Available feature detection algorithms
  5.2 Selection between feature detection algorithms
    5.2.1 Speed Comparison
    5.2.2 Number of detected features
  5.3 Selection of algorithm
  5.4 Understanding SURF and tuning parameters
    5.4.1 Integral images
    5.4.2 Hessian matrix based interest points
    5.4.3 Scale space representation
    5.4.4 Interest point localisation
    5.4.5 SURF summary
    5.4.6 SURF input parameters
  5.5 Tuning the algorithm
    5.5.1 Tuning the Hessian Threshold
    5.5.2 Tuning the number of octaves
    5.5.3 Tuning the number of octave layers
  5.6 Obstacle detection algorithm testing results
  5.7 Obstacle detection algorithm testing conclusions

6 Development of range estimation algorithm
  6.1 Camera Calibration
    6.1.1 Outputs from calibration procedure
    6.1.2 Calibration procedure
    6.1.3 Individual calibration
    6.1.4 Stereo calibration
  6.2 Algorithm development
    6.2.1 Video Input
    6.2.2 Obstacle location
    6.2.3 Block matching
    6.2.4 Image rectification and triangulation
  6.3 Testing and optimisation
    6.3.1 Range algorithm testing procedure
  6.4 Range algorithm testing results
  6.5 Range algorithm testing conclusions

7 Integration of obstacle detection and obstacle range calculation algorithms
  7.1 Complete algorithm testing results
  7.2 Complete algorithm testing conclusions

8 Practical implementation
  8.1 Selection of embedded computer
    8.1.1 Assessment of STM32
    8.1.2 Assessment of Beaglebone
    8.1.3 Assessment of Raspberry Pi
  8.2 Configuration for coding development
    8.2.1 Installation of Virtual Box and Linux guest
    8.2.2 Development on computer alone
    8.2.3 Cross Compilation
    8.2.4 Mounting the BBB and Secure Shell
  8.3 Interfacing the BBB with peripherals
  8.4 Converting Matlab to C/C++
    8.4.1 Matlab coder
  8.5 Coding in C/C++
    8.5.1 Retrieving camera frames simultaneously
    8.5.2 C/C++ implementation of SURF
    8.5.3 Completing the C/C++ algorithm
    8.5.4 Ensuring up to date samples
    8.5.5 Required frame process speed
  8.6 Wireless communication with Beaglebone Black
  8.7 Test platform selection
    8.7.1 Test platform actuation
  8.8 Boat control hardware
    8.8.1 Control module development
  8.9 Boat control software and sensor requirements
    8.9.1 Arduino Mega Firmware
    8.9.2 Collision avoidance algorithm
    8.9.3 Software control requirements
    8.9.4 Selection of heading sensor
    8.9.5 Heading sensor integration
  8.10 Controller design
    8.10.1 Functional requirements
    8.10.2 Control system structure
    8.10.3 Component blocks
    8.10.4 Design of controllers
    8.10.5 Implementation of the PID controller
    8.10.6 Pseudo code of PID control algorithm
    8.10.7 Controller tuning methods
  8.11 Power integration and supply
    8.11.1 Selection of voltage regulator
    8.11.2 Voltage regulator integration
    8.11.3 Selection of batteries
    8.11.4 Battery care
  8.12 System Integration

9 System Testing and Results
  9.1 Initial bench testing
  9.2 Initial water and obstacle avoidance algorithm testing
    9.2.1 No obstacle
    9.2.2 Single obstacle
  9.3 Final obstacle avoidance algorithm testing
    9.3.1 No obstacle
    9.3.2 Single obstacle
    9.3.3 Multiple obstacles
    9.3.4 Crowded water space
  9.4 PID controller testing

10 Conclusions
  10.1 Summary of research findings
    10.1.1 Conclusions from testing the mobile platform
    10.1.2 Conclusions from testing the code algorithms
    10.1.3 Conclusions from testing the complete system on a mobile platform
  10.2 Limitations of the current study
    10.2.1 Test limitations
    10.2.2 Algorithm limitations
  10.3 Significance of the research contribution and success of the project

11 Recommendations
  11.1 Further testing
  11.2 Comparison of depth map method and feature detection method
  11.3 Install a magnetometer for PID controller
  11.4 Increase processing power
  11.5 Auto-tune SURF and filter
  11.6 Implement a cache
  11.7 Implement a hybrid sensing system

12 Addenda
  12.1 Object detection algorithm time comparison results
  12.2 Feature point recognition comparison results
  12.3 Object detection results using SURF
  12.4 Range test procedure and results
  12.5 Simplified Beaglebone Black main code function
  12.6 PID Control Code
  12.7 PCB Designs
  12.8 Video Summary

List of Figures

2.1 Figure showing an average yacht's polar diagram. The red region shows the area where the yacht cannot sail. Often a running dead-band is included at 180° to improve stability of the vessel, as sailing downwind will result in uncontrolled heeling. Image adapted from [1].

2.2 Figure showing the two components making up the collision avoidance.

2.3 Figure showing the six degrees of freedom the sail boat has to contend with. Image adapted from [2].

2.4 Figure demonstrating the basic trigonometry the monocular system utilizes in the distance calculation of an obstacle. The algorithm uses geometry from the image to calculate the obstacle's distance. With the camera's known height above the surface, the calculated distance to the horizon and the calculated height of the obstacle, trigonometry is applied to determine the obstacle's distance from the boat. Image adapted from [3].

2.5 Figure showing the variables used in calculating a target distance from the camera. Basic trigonometry is used to calculate the target distance, Z. Image adapted from [4].

2.6 Figure showing how the distance between the cameras (baseline) affects the accuracy of the range calculation.

2.7 The different stereo-vision camera configuration options, using one camera and mirrors as opposed to using two cameras.

2.8 A comparison between a deliberative and reactive approach, showing the properties of each system.

2.9 Figure showing the modified polar diagram used for obstacle avoidance. The red region shows the navigable regions the yacht can sail. Obstacle four (O4) is outside rmax so it is ignored. O1 and O2 are within rmax, so their headings are penalised relative to their distance from the yacht. O3 is within rmin so its heading is completely blocked. Image adapted from [5].

2.10 A sketch of the Robotboat design. Note the yacht is a catamaran design rather than a monohull for stability [6].

4.1 Figures showing the target's disparity at the two critical points. The disparity decreases as the target distance increases. At 5m there is enough disparity to accurately determine distances. At 7m the system is not able to measure distances as there is no disparity.

4.2 A graph showing the disparity of the cameras versus the range. The further the target, the less disparity there is between the camera images.

4.3 Camera platform design and final product.

5.1 Figure showing the average detection time per algorithm when tested on grey-scale 320X240 images. The test was run five times on each image then averaged to produce the shown results. It is clear that SURF and SIFT are slow algorithms, whereas FAST has a very fast detection time of about 5ms.

5.2 Figure showing the average number of detected features per algorithm when tested on a grey-scale 512X512 image. The test was run five times on each image then averaged to produce the shown results. The FAST algorithm produces the most key points, due to its edge detection base: there are more edges in an image than blobs. The FAST features are, however, of lesser quality than those of, say, SURF. Image sourced from [7].

5.3 Using integral images it only takes three addition operations and four memory access operations to calculate the sum of intensities of a rectangular area of any size. Image sourced from [8].

5.4 Figure showing a summary of the testing results of the obstacle detection algorithm once the parameters were tuned. The circles indicate the obstacle and the size of the circle indicates how strong the feature is.

5.5 Images showing the algorithm falsely detecting on open water with no obstacle. After applying a smoothing filter the algorithm no longer false triggered.

6.1 To calibrate the camera's parameters a checker grid is used. Four corner points of the grid are manually located; the code then counts the blocks and, using the known block dimensions, the camera's parameters are calculated.

6.2 Figure showing the calculated extrinsic parameters of the cameras using multiple checker board orientations.

6.3 Figure showing the calculated camera extrinsic parameters using multiple checker board orientations and positions. Note the checker board is positioned over the entire focal range for an accurate calibration.

6.4 A graph showing the percentage error in each reading at a particular range. The error is calculated as the percentage error between the set target distance and the average of the measured distances.

6.5 A graph showing the standard deviation of the readings at a particular range. A low standard deviation represents consistent readings.

7.1 The inputs and outputs of each algorithm module. It is clear that the obstacle detection algorithm and obstacle range calculation algorithm can easily be combined to form the complete algorithm.

7.2 The flow diagram of the complete obstacle detection and range calculation algorithm.

7.3 A screen shot of a complete test using Matlab in a constant light source room. The two algorithms interfaced correctly by autonomously detecting obstacles and their accurate ranges. Note the negative range due to an incorrect block match. This test is to ensure that both algorithms interface correctly, not how each algorithm performs individually; hence any test scene can be used.

8.1 The BeagleBone Black PCB.

8.2 The empty hull of the boat used for testing.

8.3 The control architecture can be described by this block diagram linking all the transfer functions in a control loop.

8.4 A PID block diagram showing its three components: a proportional term, integral term and derivative term.

8.5 A basic power diagram showing the connections between each main component and the power sources.

8.6 A UML diagram showing the code flow of the firmware on the Arduino Mega micro-controller.

8.7 The system diagram of the boat's components showing their connections, communication protocols and data flow directions.

8.8 A top view of the boat showing the components as fitted.

8.9 A view of the boat from the front left.

8.10 A top view of the boat.

9.1 The boat during its initial controlled water test. The waterline is below the seals and the boat handles safely.

9.2 The boat performance during a control test in the pool. The test was run five times to ensure the boat did not false trigger. This image shows the boat's path during a single test.

9.3 The boat performance during a single obstacle test in the pool. The boat avoided the obstacle consistently in each test. This image shows the boat's path during a single test.

9.4 The GPS data from the control test overlaid onto Google maps using a GPS visualiser to monitor the speed and distance of the control run.

9.5 The boat performance during a control test in the reservoir. The boat did not false trigger in any of the three runs. This image shows the boat's path during a single test.

9.6 The GPS data from a single obstacle run overlaid onto Google maps using a GPS visualiser to monitor the speed and distance of the single obstacle run.

9.7 The boat performance during a single obstacle test in the reservoir. The boat successfully avoids the obstacle in all three runs. This image shows the boat's path during a single test.

9.8 The GPS data from a multiple obstacle run overlaid onto Google maps using a GPS visualiser to monitor the speed and distance of the multiple obstacle run.

9.9 The boat performance during a multiple obstacle test in the reservoir. The boat successfully avoids the obstacles in all three runs. This image shows the boat's path during a single test.

9.10 The GPS data from a crowded water space test overlaid onto Google maps using a GPS visualiser to monitor the speed and distance of the run.

9.11 The boat performance during a crowded water space test in the reservoir. The boat successfully avoids the obstacles in all three runs. This image shows the boat's path during a single test.

9.12 A frame by frame view from the on board camera of the boat performing the S-bend. The camera is seen to pan hard left once the first obstacle is detected. It then slowly returns to centre, tracking the second obstacle, and then repeats, avoiding the second obstacle in time.

12.1 SURF algorithm.

12.2 The configuration of the range test. The cameras are mounted in a fixed position in front of the laptop, running the testing algorithm. The target is then moved from the centre position to the left peripheral, then to the right peripheral for each distance, and three samples are taken for each position. The test was run over a range of 1m to 7m with 0.5m intervals.

12.3 The PCB for the Arduino Mega used to interface the servos, motor and BeagleBone Black.

12.4 The PCB layout of the GPS breakout board.

List of Tables

2.1 Table showing a comparison between the relevant aspects of the short-listed embedded systems that can be used for the real time implementation.

3.1 A table summarizing the advantages and disadvantages of each sensor technology proposed for use in the system.

3.2 A table showing the different categories each sensor is compared on. Each category is weighted based on its importance. The score for each sensor is the sum of the category weighting multiplied by the sensor's score in that category, over all the categories.

3.3 Table showing the selected Logitech C210 camera specifications.

5.1 A table showing the filter sizes used in each octave, which spans a number of scales.

6.1 A comprehensive range test was performed where three samples were taken at random positions around the target. These three samples were then averaged to produce the value for each distance.

8.1 Table showing the voltage required and current consumed by the major components in the boat during operation.

9.1 A summary of the weather conditions during the initial water test in the swimming pool.

9.2 A summary of the weather conditions during the final water testing in Silvermine Dam.

12.1 Table showing the full results from the speed test of each algorithm. The test was run on a 1GHz Beaglebone Black embedded system using a 320X240 RGB24 image with a single obstacle. The test was run five times and the average used to compare the speeds.

12.2 A comprehensive range test was performed, where three samples were taken at random positions around the target. The results are tabulated.

Chapter 1

Introduction

1.1 Background to the study

With global warming and climate change being prevalent issues in modern society, causes and solutions are being investigated to combat their effects, prevent irreversible damage to the planet and provide sustainable solutions. Researchers investigating climate change require new, reliable methods of gathering data from around the world for analysis.

Owing to 72 percent of the Earth being covered by water, researchers are often very restricted in the locations from which they can take data samples. Sea research vessels require a large crew and cost millions of dollars to operate; this is often not an option for academic researchers or researchers on a budget. What is required is a low cost, autonomous ocean vessel that will reliably sail the oceans of the world, supplying researchers with data about the environment. Such a vessel would need to reliably navigate the oceans, take samples and avoid obstacles in its path. With many other vessels and objects in the ocean, it is essential that the ocean vessel has a solid foundation for collision avoidance.


1.2 Objectives of this study

The objectives of the research project are therefore to investigate the various methods currently used for collision avoidance in autonomous platforms, select the most appropriate technology for marine use, and then design and test a reliable, low cost reactive collision avoidance system for an autonomous sail boat using a mobile test platform. The system will be used to verify whether obstacle detection techniques used on land can in fact work in a continuously varying environment such as the ocean.

1.2.1 Problems to be investigated

The main engineering questions that will be investigated in the research project are:

1. What sensor technologies are available and currently being used in autonomous platforms for obstacle sensing?

2. Which is the most appropriate sensor to use for a reactive collision avoidance system on an autonomous sail boat?

3. How accurate and reliable can a system designed for reactive collision avoidance using a low cost sensor be?

4. Is it feasible to reliably detect obstacles in an ocean environment, with its unpredictable base surface, and avoid them?

5. Is the algorithm suitable for remote deployment?

1.2.2 Purpose of the study

Whenever unmanned vehicles are in operation, there is the possibility of a collision with obstacles. A collision could cause serious damage to the autonomous vessel, or indeed to the obstacle which has been struck. It is therefore essential that any autonomous vehicle has a solid baseline for collision detection to prevent damage and ensure the safety of all those around it. The purpose of this study is therefore to propose a reliable obstacle detection system that can be used accurately in an ocean environment by an autonomous vehicle.


1.3 Scope and Limitations

A complete collision avoidance system will require a hybrid of sensor technologies. The reactive collision avoidance system proposed in this research project is only one component of the hybrid system that a complete solution will require. The system proposed is designed to operate in daytime, monitoring obstacles above the ocean surface. It does not monitor sub-surface obstacles or obstacles lying flat on the ocean surface, such as a floating piece of wood. Furthermore, it is assumed that the ocean surface is relatively flat with few, if any, white horses¹. Sub-surface and night-time sensors would therefore be required for a complete system. The system is, however, designed to tolerate ocean swells, multiple obstacles, wind and rotation of the boat.

The system is limited in its cost, available resources and development time. All components used are extremely low cost and thus limited in their performance. During construction only the tools and resources available at the University of Cape Town were accessible, thus the system may not be optimised in all construction areas. The reactive collision avoidance system also had to be completed and tested within the specified deadline, limiting the scope of the project. The system shown is therefore a proof of concept and can be improved with more time and resources.

¹ White horses is a term describing the ocean surface during rough conditions, when the water surface breaks, forming small waves of white water.

1.4 Plan of development

Firstly, a comprehensive literature review is completed, covering all general aspects of an autonomous boat and specific aspects of collision avoidance in a marine environment. Both obstacle detection and obstacle avoidance techniques are investigated to ensure the obstacle detection technique used can be integrated into an obstacle avoidance algorithm. The various available obstacle sensing technologies and techniques are then investigated and compared using a weighted decision table and by comparing the advantages and disadvantages of each sensor. The most appropriate sensor is then selected using these decision tables. A housing for the selected sensor is then designed to ensure it can be fitted to a sail boat and to protect the electronics from water.


The code algorithms used to detect the obstacles and calculate their range are then developed, tested and tuned through an iterative process. Each algorithm is thoroughly tested in isolation to ensure it functions correctly before the two are integrated into the complete algorithm, which is then further tested. Once the system has been verified to work correctly in a high-level environment, it is recoded into a low-level language and implemented on a mobile platform. A complete electronic control system is then developed to allow the obstacle detection algorithm to have full control of an actual boat in water for testing. The mobile platform is then placed in a real-world scenario and tested. Based on the test results, conclusions about the system's performance are drawn. Recommendations on how to further improve the system are then made based on these conclusions.


Chapter 2

Literature Review

Before the reactive collision avoidance system can be designed and implemented on an autonomous sail boat, the current status of technologies and methods used in these areas must be understood. This chapter therefore aims to investigate the past and current technologies of sail boats, autonomous ocean vessels and collision avoidance. A brief overview of the autonomous sail boat, and of the factors that need to be taken into account when designing an autonomous sail boat system, will be given. Current obstacle detection systems and procedures in autonomous surface vehicles (ASVs) will then be investigated in further detail and will be the focus of this review.

2.1 Background to sailing

A sail boat is an ocean going vehicle propelled solely by wind power. Sailing can be traced back as far as 5000 BC and has been utilized by Arab, Chinese, Indian and European explorers since the 15th century. The Chinese were the first to successfully circumnavigate the world by sail boat, in the 14th century, in a journey that took over two years to complete [9]. The advantages of sailing over motorized boating are clear: the driving force is the wind, a renewable and endless source of energy. Yachts have the ability to sail continuously, with the only limiting factor being the human element requiring supplies. This makes them incredibly elegant and efficient machines.


Over the years, yachts have evolved from slow and cumbersome vessels to fast and elegant forms of transport. Sail rigs have evolved from the brig rigs used by pirates to the modern wing sail, allowing the yacht greater flexibility in the directions it can sail. Wood and steel have been replaced with nida-core and carbon fibre. Modern electronics and Global Positioning System (GPS) technology have made the trusted sextant redundant [10]. With these advancements in technology, the limiting factor in the performance these yachts can achieve is the human control element. One way of overcoming this limitation is to automate the sail boat's control systems. This will allow yachts to be used as continuous and sustainable oceanic research platforms and as a more reliable method of transportation.

2.2 Development and motivation for an autonomous yacht

As the ocean covers 72% of our planet, research at sea is vital in evaluating the health of our waters and monitoring climate change. However, it is an incredibly expensive task requiring a large amount of manpower. As mentioned in Section 2.1, yachts have the ability to theoretically sail continuously without any external energy or cost input. This makes them a very attractive platform on which to base marine research. By further automation of the yacht, it can be launched and left to carry out oceanic research sustainably without any human interference or input. There has been a lot of work on the autonomous yacht; however, more development is needed before it will operate effectively [5], [11], [12], [13].

Prior to 2005 there was very little research done in the field of autonomous sail boats [5]. Most of the research was based on autonomous underwater vehicles (AUVs) and motorised surface vessels. Since then, competitions such as the Microtransat challenge¹ have been initiated, kicking off a new era of collaborative research and development of the autonomous sail boat [5]. Competitions like these will hopefully increase the competition between development teams and so increase the rate of development.

¹ www.microtransat.org


2.2.1 Structure development

The structure of an autonomous yacht is very similar to that of a manned yacht. However, some modifications can be implemented to make the yacht easier to control and require less energy to operate.

The Hull

The hull structure of the autonomous yacht is identical to that of a manned yacht. However, if a monohull² design is used, more ballast is often added to the keel to reduce the heel angle of the yacht and prevent capsizing [5]. This is required as there are no humans to hike out³ when sailing. With autonomous yachts, the design is often geared more towards stability than performance.

The type of ocean vessel, its size and its mass all play a part in how the yacht will react and handle in the water. With a focus on autonomous yachts, there is a limitation on the manoeuvrability of the yacht. The dynamics of the specific yacht will need to be modelled to provide predictions of how the yacht will behave. The behaviour of the yacht in certain wind conditions is formalised into a polar diagram, as shown in Figure 2.1. It can be seen that there is a dead zone in the direction of the wind where the yacht cannot sail. Furthermore, the yacht performs better at certain relative wind angles. With these limitations, a sail boat may not always be the best method for oceanic research. A powered ocean vessel may be preferable in certain weather conditions.

Figure 2.1: Figure showing an average yacht's polar diagram. The red region shows the area where the yacht cannot sail. Often a running dead-band is included at 180° to improve stability of the vessel, as sailing downwind will result in uncontrolled heeling. Image adapted from [1].

² A monohull is a boat with only one hull, as opposed to a catamaran or multihull with two hulls.
³ In sailing, hiking is the action of moving the crew's body weight as far to windward (upwind) as possible, in order to decrease the extent the boat heels (leans away from the wind).

2.2.2 Communication

Although the yacht is designed to operate without human intervention, it is advantageous to have a communication link between the yacht and the base station on shore. This allows the user to take control of the yacht in an emergency situation or to bring the yacht manually back to shore. It also allows for 'on the fly' tuning of control variables and the monitoring of sensor information. There are three main options available for communication, namely:

• Wireless LAN,
• 3G network,
• Satellite transceiver,

each having its own benefits and drawbacks. They are not mutually exclusive, and a hybrid of communication technologies can be used on the boat. The first option, wireless LAN, allows for the best communication speed with a large bandwidth. A good quality video feed can be transmitted using this technology, as a transfer rate of 300Mbps can be achieved. However, it has a limited range and requires bulky antennas [5]; therefore this technology would be used for close range testing. The 3G option also offers high data transfer speeds, with the benefit of a small, compact package. However, as the infrastructure is provided by the mobile network, there is a subscription fee, and coverage is limited to that of the service provider. Finally, the satellite transceiver has coverage over the entire earth surface wherever there is line of sight to the sky, and also provides GPS data. However, it too has a base fee, as well as a low data rate and a large latency; thus it cannot be used for real time control. It would, however, be useful for transmitting essential data back to shore when the boat is far offshore.


For the optimisation of the collision avoidance system on the boat and for real time control, a live video feed will be required. Therefore, the only viable communication options are wireless LAN and 3G. A satellite transceiver can also be included if the boat will venture far offshore.

2.2.3 Actuation

On board a manned sail boat there are many control lines to efficiently sail the boat. However, in the boat that will be used in this research project there are only two main actuation requirements, namely:

• Sails (Jib and Main) or Propeller.
• Rudder Position.

To reduce the size and power requirements of the actuators, the force required to trim and control the boat must be minimized. A balanced rudder set-up is a novel way to reduce the power required to control the rudder. By adjusting the axis of the rudder the surface area fore and aft of the centre axis can be matched to cancel out the turning moment each surface creates [5]. Thus a smaller actuator can be used. The rudder is usually controlled by means of a linear drive such as those used by Raymarine in the Autohelm but a servo drive is another common actuator [5].

2.2.4 Autonomous yacht control

Currently most manned sail boats utilize a rudder control system in the form of an autopilot. The sails are still operated and trimmed by hand. However, for an autonomous sail boat to function fully autonomously, a control algorithm is required for the control of both the rudder and sails. These two control systems function completely independently from one another and run in parallel.


Sail Control

Automatic sail control is a recent idea and is currently under research [5], [11], [12], [13]. The most basic control utilized in autonomous sail boats is that of the sail angle relative to the wind. Therefore, only the main sheet⁴ requires control; all other aspects of sail trim for altering the sail shape are ignored. The only inputs into this basic system are the apparent wind angle and strength. This algorithm can be applied to both conventional and self-trimming wing sails. The sail control algorithm must be smooth to prevent oscillations damaging the line, to allow the yacht to maintain a course, and to allow the yacht to execute a tack and a jibe.
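As a concrete illustration of this basic scheme, the sketch below maps the apparent wind angle directly to a mainsheet setting. This is a minimal, hypothetical sketch: the linear mapping and the 45°–180° end points are illustrative assumptions, not values taken from this work.

```cpp
#include <algorithm>
#include <cmath>

// Minimal sketch of the basic sail controller described above: the mainsheet
// is set purely from the apparent wind angle. The 45-180 degree end points and
// the linear mapping are illustrative assumptions, not values from this thesis.
double mainsheetFraction(double apparentWindDeg) {
    double angle = std::fabs(apparentWindDeg);   // symmetric on both tacks
    const double closeHauled = 45.0;             // sail sheeted fully in
    const double running     = 180.0;            // sail sheeted fully out
    double f = (angle - closeHauled) / (running - closeHauled);
    return std::clamp(f, 0.0, 1.0);              // 0 = fully in, 1 = fully out
}
```

In practice the output would also be rate-limited before being sent to the winch actuator, to provide the smoothness the text calls for.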

Rudder control

A fuzzy control algorithm for rudder control has been proposed [5]. However, a standard proportional-integral-derivative (PID) controller will perform the function adequately. The system has a simple function. If the yacht heading differs from the desired heading, it should control the rudder to steer the yacht back onto the desired course. The system inputs are:

• Current heading
• Desired heading
• Turn rate (prevents overshoot of the desired heading)

The output is then the rudder position. The PID controller should have minimal overshoot to prevent the yacht stalling when beating⁵ upwind.

⁴ The main sheet is the rope that controls the angle at which a mainsail is trimmed and set.
⁵ Beating is the procedure by which a yacht tacks in a zig-zag in order to advance directly upwind.
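A minimal sketch of such a heading controller is given below, using the measured turn rate as the damping term, as described above. The gains and limits are placeholders to be tuned for a specific boat (the project's own PID code is listed in Addendum 12.6).

```cpp
#include <algorithm>

// Minimal PID heading controller sketch. Gains and saturation limits are
// placeholder assumptions, not values taken from this thesis.
struct PidController {
    double kp, ki, kd;      // proportional, integral and derivative gains
    double integral = 0.0;  // accumulated heading error

    // headingError: desired minus current heading, wrapped to [-180, 180] degrees.
    // turnRate: measured rate of turn, used in place of the error derivative
    //           to damp overshoot. dt: time step in seconds.
    // Returns a rudder command normalised to [-1, 1].
    double update(double headingError, double turnRate, double dt) {
        integral += headingError * dt;
        integral = std::clamp(integral, -50.0, 50.0);  // simple anti-windup
        double out = kp * headingError + ki * integral - kd * turnRate;
        return std::clamp(out, -1.0, 1.0);             // saturate to rudder travel
    }
};
```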

Control architecture

The autonomous yacht can feature a variety of control systems, the most attractive being the subsumption architecture proposed by Brooks and Flynn in 1989 [14]. In essence, the subsumption architecture is a parallel and distributed computational formalism for connecting sensors to actuators in robots. The layers operate completely in parallel with no central processing unit. If a higher ranked layer needs to override a lower layer's command, it suppresses the lower layer and overrides it. This allows the yacht to react instantly to changes in the environment.
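To make the layering concrete, the sketch below shows one way such arbitration could look in code. The behaviour decomposition and the abstain-or-command interface are illustrative assumptions, not the architecture used in this project.

```cpp
#include <functional>
#include <optional>
#include <vector>

// Each behaviour either produces a rudder command or abstains (nullopt).
using Behaviour = std::function<std::optional<double>()>;

// Layers are ordered highest priority first; the first layer that produces
// a command suppresses every layer below it, as in a subsumption hierarchy.
double arbitrate(const std::vector<Behaviour>& layers, double fallback) {
    for (const auto& layer : layers) {
        if (auto cmd = layer()) return *cmd;
    }
    return fallback;  // e.g. hold the last course-keeping command
}

// Hypothetical usage: an avoid-obstacle layer outranks waypoint following.
// std::vector<Behaviour> layers = {avoidObstacle, followWaypoint};
// double rudder = arbitrate(layers, 0.0);
```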

2.2.5 System architecture and sensors

For the yacht to operate effectively, the following sensor inputs are required on a continuous basis:

• Wind direction
• Wind speed
• Current heading
• Desired heading
• Geometric position
• Turn rate
• Heel angle

However, the sail boat can also be fitted with a range of sensors for research purposes. These include, but are not limited to:

• Depth of water
• Water temperature
• Air temperature
• CO2 and other greenhouse gas measurements
• Humidity
• Atmospheric pressure


2.2.6 Power requirements and power sustainability

The purpose of the autonomous yacht will be to continuously monitor the ocean surface and record geographical data. The reason a sail boat is used rather than a self-powered vehicle is to allow the vehicle to operate continuously without human intervention, as explained in Section 2.2. To perform this main objective, the yacht should be sustainable from an energy point of view. Therefore, it should have a positive energy balance: the power being generated and supplied to the storage system should exceed that being used by the computers and actuators. This is currently very difficult to achieve, as autonomous yachts have limited surface area for solar panels, yet require large amounts of power for computation and actuation. Even the best of the autonomous yachts, such as the Roboat by the Austrian Society for Innovative Computer Sciences, suffer from negative energy balances [5]. This negative energy balance defeats the purpose of using the autonomous sail boat, as it cannot sail sustainably without being refuelled. Thus, the technology used for the sail boat needs to be developed further to provide a positive energy balance. To achieve this, either the supply needs to be increased (more solar panels and/or wind turbines), or the power requirement needs to be decreased by increasing efficiency.

The majority of the power is consumed by the communication, actuation and processing systems. Communication power requirements are only high when the yacht is close to shore supplying high bandwidth data, such as a live video feed, as mentioned in Section 2.2.2. When on a solo offshore mission, communication will be kept to a minimum, transmitting only GPS coordinates of position. To reduce the power used by actuators, smaller, less powerful actuators will need to be used. To allow this, the yacht should feature a balanced rig and rudder set-up. As shown in Section 2.2.3, less force is required for actuation when using the balanced rudder system; therefore smaller actuators can be used, reducing the power requirement. The processing power can be optimised by programming more efficient algorithms and using modern, low current circuitry.
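Stated compactly, sustainability requires the average generated power to exceed the average load. The worked numbers below are purely hypothetical and only illustrate the bookkeeping:

```latex
% A positive energy balance over a mission of duration T requires:
\[
E_{\text{net}} = \int_0^T \bigl(P_{\text{gen}}(t) - P_{\text{load}}(t)\bigr)\,dt > 0
\]
% Illustrative (hypothetical) figures: a 20 W solar panel averaging 25% of
% its rated output supplies 5 W, while processing (3 W), actuation (2.5 W)
% and communication (1 W) average 6.5 W. The 1.5 W deficit means the
% batteries drain and the yacht cannot sail indefinitely.
```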

2.3 Collision avoidance

For an autonomous yacht to succeed in accomplishing its objectives, it must have a solid baseline for collision avoidance. The yacht should be able to detect obstacles in the surrounding environment and execute the course adjustments required to safely navigate past them. The subject of collision avoidance is an area of great research and is indeed the focus of this research project [5], [11], [12], [13]. This area is of great importance to the sail boat as it is used in emergency scenarios to prevent damage to the sail boat, or indeed to other ocean vessels. Collision avoidance comprises two main components, as shown in Figure 2.2: obstacle detection and obstacle avoidance, with obstacle detection being the focus of this review.

Figure 2.2: Figure showing the two components making up the collision avoidance.

2.4 Obstacle detection

In order for the sail boat to execute the necessary manoeuvres to avoid an obstacle, it should have a reliable, constant and efficient obstacle detection system. It should be able to identify objects in the area and determine whether or not they are obstacles that need to be avoided. There are a number of factors which make this a difficult task, the largest of which is the fact that the sensor will be moving with six degrees of freedom while the sail boat is on the ocean surface. These degrees of freedom are detailed in Figure 2.3. The system will have to contend with all these movements of the boat, the ocean swell and potential breaking waves causing spray.


Figure 2.3: Figure showing the six degrees of freedom the sail boat has to contend with. Image adapted from [2].

Taking all these factors into account makes this a challenging task. The most viable sensor technologies for obstacle detection on the autonomous sail boat are:

• Image sensor
• Ultrasonic sensor
• SONAR
• RADAR
• LIDAR
• Automatic Identification System

Image Sensor The image sensor simply converts a visual image into an electronic signal representing the visual data in the image. The data can then be processed using many different image processing techniques to provide the required information. This makes the image sensor an incredibly versatile sensor as it can be used to sense various physical quantities by using multiple algorithms [15].

Ultrasonic sensor Ultrasonic sensors (often called transceivers because they transmit and receive signals) generate high frequency sound waves and evaluate the returning echo received by the sensor. The time of flight of the signal, from transmission to reception, is known, as is the signal's propagation speed. Therefore the distance to an object can be identified [16].
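As a minimal illustration of the time-of-flight calculation (assuming sound travelling in air at roughly 20°C; the function itself is hypothetical):

```cpp
// Range from an ultrasonic echo time. The echo travels to the object and
// back, hence the division by two. 343 m/s assumes air at about 20 C.
double ultrasonicRangeMetres(double echoTimeSeconds) {
    const double speedOfSound = 343.0;            // m/s, assumed medium: air
    return speedOfSound * echoTimeSeconds / 2.0;  // one-way distance
}
// Example: a 35 ms echo corresponds to roughly 6 m.
```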

Sound Navigation And Ranging (SONAR) SONAR is a technique similar to ultrasonic sensing, except it propagates its signal on the surface of or under the water and at a lower (infrasonic) frequency. The system uses the same principles for detecting an object's range as ultrasonic sensing [17].

Radio Detection And Ranging (RADAR) RADAR transmits radio or microwaves using a radar dish or antenna. The waves bounce and reflect off objects in their path. The object returns a small amount of the wave's energy to the dish, which is used to locate the object [18].

Light Detection And Ranging (LIDAR) LIDAR uses the same technique as RADAR, except a laser is used in place of radio waves. LIDAR illuminates an object and by measuring the returning light, it can detect obstacles and their ranges [19].

Automatic Identification System (AIS) AIS is a system used on ships and ocean vessels for identifying and locating vessels by electronically exchanging data such as position and speed with other nearby ships, base stations and satellites [20].

Many current systems utilise only one of these technologies in the obstacle detection system [5]. However, as expected, each of these technologies has strong and weak points. Therefore, the best obstacle detection system will most likely be a hybrid of these technologies. Using a hybrid system will also increase the system's reliability, as extra redundancy is added. Although a hybrid system will produce the best results, the system under design is intended for a low cost, low speed autonomous yacht platform. Therefore, an incredibly detailed, accurate and high speed system such as that provided by LIDAR is not required [21]. The system should be simple, low cost and robust. The following methods can be used with an image sensor to allow for obstacle detection, and indeed mapping, of the sail boat's environment.


2.4.1 Monocular vision using image sensor

Monocular vision is a vision based obstacle detection method which can be used for real time navigation. It has the benefit of being a low cost method with simple computation algorithms and construction, as it uses only one image sensor. Monocular vision uses geometric data from the image to determine an object's distance from the point of reference. In the case of an autonomous sail boat, the camera will use the horizon distance as a baseline for the calculations. From the known geometry of the earth and a known camera height, a rough distance to the horizon can be calculated. This distance, coupled with the measurement of the relative angle between the horizon and the waterline of the obstacle, allows for the rough calculation of the distance to the obstacle [3].
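Although the text does not spell out the formula, the standard small-height approximation for the distance to the horizon, stated here for completeness, is:

```latex
% Horizon distance for a camera at height h above the water, with Earth
% radius R (standard approximation for h much smaller than R):
\[
d_{\text{horizon}} \approx \sqrt{2 R h}
\]
% Example: with R = 6371 km and a camera 1 m above the surface,
% d is approximately 3.6 km.
```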

Figure 2.4: Figure demonstrating the basic trigonometry the monocular system utilizes in the distance calculation of an obstacle. The algorithm uses geometry from the image to calculate the obstacle's distance. With the camera's known height above the surface, the calculated distance to the horizon and the calculated height of the obstacle, trigonometry is applied to determine the obstacle's distance from the boat. Image adapted from [3].

2.4. OBSTACLE DETECTION

for the range detection [22]. Difficulties arise in estimations of the distance to the horizon from the sail boat, which can often not be determined due to ocean swell. Thus, filters are applied to lay down a rough artificial horizon over the swell. However this only adds to the error in the readings. SSC has used Hough-transform based line detectors on a frame by frame basis with reasonably good horizon estimation results. However, this adds extra computation time to the process and still introduces an error. Furthermore, the camera height above the base of the obstacle will contain an error due to ocean swell. Therefore, the two parameters required for the distance estimation both contain varying degrees of errors. This is a cause for concern and this technique may not be the most suitable. Indeed monocular vision can be used, and has been, however it is not the optimal algorithm due to the uncertainties associated with the ocean surface. Using two cameras in a stereo-vision set up has been shown to yield far more promising results [3].

2.4.2 Stereo-vision using image sensor

Stereo-vision is a real time obstacle detection and range estimation system which makes use of two cameras displaced horizontally from each other. By using two cameras, two different views of the scene are obtained. By comparing these two images, the depth of objects can be determined by use of parallax (the displacement or difference in apparent position of an object when viewed along two different lines of sight [23]). Therefore both the distance and angle of the object from the camera can be determined. Several image processing techniques, including averaging filters, edge detection and colour segmentation, are utilized to determine the target and its distance away [4], [24].

Mathematics behind the depth calculation

Stereopsis is the impression of depth in a scene obtained when viewed by a person with binocular vision. A stereo-vision system uses the same principles a human does to measure depth with their eyes. By obtaining two reference points of a scene, the differences in the two images can be processed to determine depth. The mathematics behind the principle is shown in Figure 2.5. The two cameras are aligned parallel and fixed on the same horizontal plane. The distance B is the distance between the two cameras, known as the baseline. This distance can be optimised, as will be discussed later


in this section. In general, the greater the distance between the cameras, the further the detectable range. The distance from the target to the centre line of the left camera is $X_L$, and $x_L$ is the projective length of $X_L$ on the left image plane. The same applies for $X_R$ and $x_R$. The maximum visual angle of the camera is $2\theta$, $f$ is the camera focal length and $W$ is the width of the image plane [4]. Using trigonometric manipulation, the distance of the object from the cameras, $Z$, can be calculated using:

$$Z = \frac{WB}{2(x_L + x_R)\tan\theta} \tag{2.1}$$
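As a quick check of Equation 2.1, the short sketch below evaluates it for an illustrative target. The pixel offsets $x_L$ and $x_R$ are assumed values, not measurements, and $2\theta$ is taken as the 53◦ FOV of the cameras used later in this project.

```cpp
#include <cmath>
#include <cstdio>

// Worked instance of Equation 2.1: Z = WB / (2 (xL + xR) tan(theta)).
// All numbers below are illustrative assumptions.
int main() {
    const double PI = 3.14159265358979;
    double W  = 640.0;                        // image plane width [pixels]
    double B  = 0.09;                         // camera baseline [m]
    double th = (53.0 / 2.0) * PI / 180.0;    // theta (2*theta = FOV)

    // Projective offsets of the target in the left and right images.
    double xL = 20.0;
    double xR = 15.0;

    double Z = (W * B) / (2.0 * (xL + xR) * std::tan(th));
    std::printf("estimated range Z = %.2f m\n", Z);  // ~1.65 m here
    return 0;
}
```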

Figure 2.5: Figure showing the variables used in calculating a target distance from the camera. Basic trigonometry is used to calculate the target distance, Z. Image adapted from [4]

Assumptions

Many existing algorithms make the assumption that the ground, in this case the ocean, is flat in the view field. This allows simple methods based on thresholds or on image differences to be used to detect the obstacle. However, there are methods which use histograms on different coordinates, or model the ground as a quasi-plane, to make detection more robust in cases where the surface is not flat [25], [26].


Accuracy limits

The accuracy that can be obtained using a stereo-vision arrangement is dependent on two important parameters:

• Image quality (width of pixels).
• Distance between the cameras.

If one considers the field of view (FOV) of a pixel on each camera, as shown in Figure 2.6, it can be seen that where the FOV lines of the two pixels intersect, a diamond is formed. The distance between the front and back of the diamond indicates the error in depth perception. To reduce this error, the two parameters can be altered. Firstly, if a camera with a higher resolution is used, the FOV of each pixel will be narrower and the diamond will decrease in size. Similarly, if the cameras are moved further apart, the diamond will compress. This will increase the accuracy of depth measurement at long range, but decrease the accuracy at short range.
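This geometric error can also be expressed analytically. Under the standard pinhole stereo model (a relation widely used in the stereo-vision literature, though not derived in [4]), the depth uncertainty caused by a disparity uncertainty $\Delta d$ (at minimum, one pixel width) grows quadratically with range:

$$\Delta Z \approx \frac{Z^2}{fB}\,\Delta d$$

where $f$ is the focal length in pixels and $B$ is the baseline. Doubling the baseline or the resolution therefore halves the range error at a given distance, matching the behaviour of the diamond in Figure 2.6.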

Figure 2.6: Figure showing how the distance between the cameras (baseline) affects the accuracy of the range calculation.

Further inaccuracies lie in assumptions made when creating a mathematical model of the environment. Methods such as sub-pixel estimation can be applied to further improve the accuracy.


Obstacle detection technique

The most common technique used is that of the disparity map. Here the left and right images are compared to create a disparity map or “depth map” of the environment. Using this map, any obstacle that is too close to the robot will be detected, as the distance for every part of the environment is calculated and displayed in the disparity map. The disparity map can be thought of as a matrix overlaid on the image, with each entry representing the distance to the point it covers in the image. The most common procedures used to create disparity maps are:

• Basic Block Matching
• Sub-pixel Estimation
• Dynamic Programming
• Image Pyramiding
• Combined Pyramiding and Dynamic Programming
• Backprojection
• Graph Cut Algorithm

This can be a time consuming process. However, techniques such as epipolar geometry and regions of interest (ROI) are employed in the block matching process to increase the speed. The result is a matrix representing the distance to each pixel in the image, which is used to avoid obstacles that are too close [27], [28].
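For reference, the block matching procedure from the list above can be exercised with OpenCV's built-in matcher. This is a minimal sketch: the file names are placeholders, and the disparity and block-size parameters are assumptions that would need tuning for a real ocean scene.

```cpp
#include <opencv2/opencv.hpp>

// Minimal disparity-map sketch using OpenCV's block matcher.
// File names and parameters are placeholder assumptions.
int main() {
    cv::Mat left  = cv::imread("left.png",  cv::IMREAD_GRAYSCALE);
    cv::Mat right = cv::imread("right.png", cv::IMREAD_GRAYSCALE);
    if (left.empty() || right.empty()) return 1;

    // 64 disparity levels, 21x21 matching block.
    cv::Ptr<cv::StereoBM> bm = cv::StereoBM::create(64, 21);
    cv::Mat disparity;
    bm->compute(left, right, disparity);   // 16-bit fixed point (x16)

    // Scale to an 8-bit image for inspection: brighter = closer.
    cv::Mat disp8;
    disparity.convertTo(disp8, CV_8U, 255.0 / (64 * 16.0));
    cv::imwrite("disparity.png", disp8);
    return 0;
}
```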

Camera configuration

A stereo-vision system does not, in fact, need two cameras; rather, two FOVs of a scene are required. Therefore, a stereo-vision system can be implemented using one camera, as shown in Figure 2.7(a). This system uses two mirror and prism pairs to create two virtual cameras [29]. The two images are combined onto one imaging surface and processed accordingly. Although this configuration does provide the required results, it is very sensitive to the alignment of the mirrors and prisms [29]. This would not be a viable option for the sail boat, as it is not robust. Furthermore, the mirrors could reflect direct sunlight and damage the image sensor array.


The alternative is to use two standard identical cameras mounted on the same horizontal plane as shown in Figure 2.7(b). This is the standard tried and tested mounting arrangement and is very robust to camera movement. The optimal distance between the cameras is unknown for its use on the sail boat and will be determined from experimental data. The use of a panning platform for the cameras to be mounted on would allow for a larger field of view. The use of a panning platform and the calculation of an optimal baseline will be investigated in Sections 3 and 4 respectively.

(a) Figure showing how the use of mirrors and prisms can be used to create a stereovision system with one camera. Image adapted from [29].

(b) Figure showing the two camera configuration for stereo-vision. Image adapted from [30].

Figure 2.7: The different stereo-vision camera configuration options: using one camera and mirrors as opposed to using two cameras.

One further option, a hybrid of optical and laser sensors, is a 3D sensor which combines cameras with laser based projectors.

2.4.3 3D sensor using XBox Kinect

The Xbox Kinect is brilliantly summarized by B. Peasley and S. Birchfield as follows: “The Xbox Kinect is a 3D depth camera consisting of two cameras and a laser-based infra-red (IR) projector. One of the cameras is a standard RGB camera, while the other camera is an IR camera which looks for a specific pattern projected onto a scene by the laser-based IR projector. This sensor calculates the disparity of each pixel by comparing the appearance of the projected pattern with the expected pattern at various depths.” [31].


2.5 Obstacle avoidance

Once an obstacle has been identified and located, its position is applied to an obstacle avoidance algorithm to navigate the yacht around the obstacle. There are two approaches to dealing with obstacles [5]. These are:

• Deliberative.
• Reactive (subsumption).

The deliberative approach uses a world model to plan a route and then navigate that route [5]. This is suited to long distance path planning. If the world model changes, for example when an obstacle is detected, the route needs to be re-calculated. Therefore, it is not suited to obstacle avoidance, where the environment is constantly changing and fast reaction times are required. A reactive approach takes in sensory data and acts on it immediately; all actions are spontaneous, with no planning done. An example of this method is the Brooks subsumption architecture, as explained in Section 2.2.4 [14]. This approach is extremely fast but offers a low level of intelligence. The robot therefore cannot see the “bigger picture” and is focused on the specific task at hand. A comparison between the two approaches is shown in Figure 2.8. Each approach has its uses; therefore, most modern robots use a hybrid of these two approaches [32].

Figure 2.8: A comparison between a deliberative and reactive approach showing the properties of each system.

There has been research and development into efficient obstacle avoidance methods for ASVs [5], [6], [3], [11], [12]. However, these methods cannot be directly applied to


an autonomous yacht owing to a yacht's limited movements, whereas the systems these methods have been developed on are self-powered and have 360◦ of movement. This makes the obstacle avoidance algorithm for an autonomous yacht a unique problem that requires a highly sophisticated algorithm. Furthermore, if the yacht is navigating controlled waters, it should conform to the International Rules for the Prevention of Collisions at Sea (COLREGs). COLREGs specify manoeuvres that must be followed in specific scenarios to prevent collisions. Following is a current method used for autonomous yacht navigation that has shown promising results [5].

2.5.1 A reactive approach

A reactive approach builds on and modifies the polar diagram shown in Figure 2.1, which is used by the path planning algorithm to determine which headings are navigable [5]. The system sets a safe distance rmax from the yacht, also known as the safe horizon, which is based on the handling characteristics of the yacht and its speed. Any objects beyond this distance are ignored, as they are not currently a threat to the yacht. Any obstacle within this horizon is detected and marked. A penalty is added to the obstacle in relation to its distance from the yacht: the further the object is from the sail boat, the lower the penalty, and vice versa. Should the object fall within a set rmin horizon, the obstacle is extremely close to the yacht and that heading is marked as completely unnavigable. Often there are many obstacles within rmax, which can confuse the system. Therefore, some obstacles are removed to simplify the computation. This is done using algorithms that employ a caching mechanism and take the allowable moving direction into account [5]. There are two main cases for a heading in a collision avoidance scenario:

• Unnavigable: These are headings for which the yacht cannot sail. This may be due to the wind direction or obstacles blocking that heading. Courses can also be marked as unnavigable for safety. For example, although yachts can sail downwind, the downwind heading is often blocked; this is to prevent the yacht capsizing in large ocean swell.

• Navigable: These are headings which can be navigated by the yacht. Some headings may be preferred to others, and this is shown by the penalty associated with each heading. The larger the penalty, the less likely the yacht is to sail in that direction.


Figure 2.9: Figure showing the modified polar diagram used for obstacle avoidance. The red region shows the navigable regions the yacht can sail. Obstacle four (O4) is outside rmax so it is ignored. O1 and O2 are within rmax, so their headings are penalised relative to their distance from the yacht. O3 is within rmin so its heading is completely blocked. Image adapted from [5].

A modified polar diagram produces reliable results; however, using only a modified polar diagram is not sufficient. Without additional computation the boat would navigate routes too close to obstacles, for example sailing parallel to, and extremely close to, a large ship. Looking ahead, the obstacle may be at an acceptable distance; however, the side distance may become insufficient. A flower-algorithm is often implemented to prevent this [5]. Once the modified polar diagram has been determined, the routing algorithm is applied. The aim of the routing algorithm is simply to decrease the distance to the target as fast as possible. A sail boat has different efficiencies at different relative wind angles, and this algorithm determines the best compromise between speed and distance sailed to minimise the time taken to reach the target. A sketch of the penalty scheme described above is shown below.
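The sketch below illustrates the rmin/rmax penalty scheme just described. The radii, 10-degree sector width and linear penalty law are assumptions for demonstration; the exact caching and obstacle-removal algorithms of [5] are not reproduced.

```cpp
#include <cstdio>
#include <vector>

// Sketch of the heading-penalty scheme: headings whose nearest
// obstacle lies inside rMin are blocked, those inside rMax are
// penalised in proportion to closeness. Values are illustrative.
int main() {
    const double rMax = 50.0;   // safe horizon [m]
    const double rMin = 10.0;   // hard blocking radius [m]

    // Nearest obstacle range per 10-degree heading sector;
    // a very large value means the sector is clear.
    std::vector<double> range(36, 1e12);
    range[0]  = 8.0;    // dead ahead, inside rMin
    range[2]  = 30.0;   // 20 degrees to starboard, inside rMax
    range[35] = 70.0;   // beyond rMax, ignored

    for (int s = 0; s < 36; ++s) {
        if (range[s] <= rMin) {
            std::printf("heading %3d deg: unnavigable\n", s * 10);
        } else if (range[s] <= rMax) {
            // Penalty grows linearly from 0 at rMax to 1 at rMin.
            double penalty = (rMax - range[s]) / (rMax - rMin);
            std::printf("heading %3d deg: penalty %.2f\n", s * 10, penalty);
        }
        // Obstacles beyond rMax are not yet a threat: no penalty.
    }
    return 0;
}
```

In a full implementation these penalties would be combined with the wind-derived unnavigable sectors before the routing algorithm picks the best heading.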


2.6 Development Software

For the algorithm development and testing, a high level language capable of testing the equipment and of producing graphical results for investigation is required. The Matlab environment is the ideal platform to use as a development language for the system, as it offers a powerful graphical interface and is well supported and documented. Furthermore, it features Matlab Coder for converting Matlab code into C/C++ code for low level implementation [33]. Once the system is implemented on low level hardware, C/C++ will be used for its speed and ease of programming. Using C/C++ allows the use of OpenCV, a well documented library containing many computer vision functions, which is perfect for use in this system [34].

2.7 Embedded System Technology

For implementation into a real world, real time application, the coding algorithms will be transferred from the Matlab development environment into a portable embedded system running in C/C++ code. There are various options and flavours of embedded system to select from. The short-listed options are:

• STM32 F4
• Raspberry PI
• BeagleBone
• BeagleBone Black

A table summarizing the performance of each embedded system is shown in Table 2.1, where each board is compared.


Table 2.1: Table showing a comparison between the relevant aspects of the short-listed embedded systems that can be used for the real time implementation.

Board              CPU                     Memory                  Interfaces                              Standard OS
STM32 F4           ARM Cortex-M4F 180MHz   2MB Flash, 256KB SRAM   2x USB2.0, UART, SPI, I2C, GPIO         ROSH
Raspberry PI       ARM1176JZF-S 700MHz     External SD             2x USB2.0, 8x GPIO, UART, SPI, I2C      Linux (Raspbian)
BeagleBone         ARM Cortex-A8 720MHz    256MB DDR2              4x UART, 8x PWM, 2x SPI, 2x I2C, GPIO   Linux (Ångström)
BeagleBone Black   ARM Cortex-A8 1GHz      512MB DDR3, 2GB eMMC    4x UART, 8x PWM, 2x SPI, 2x I2C, GPIO   Linux (Ångström)

2.8 Current autonomous yachts

The autonomous yacht has undergone rapid development in the last twenty years. Early robotic yacht attempts such as the Station Keeping Autonomous Mobile Platform (SKAMP) and the Atlantis project of Stanford University were able to sail autonomously and navigate [5]. However, they were certainly not reliable, energy sustainable or efficient. Competitions such as the Microtransat challenge raised the standard considerably, and yachts such as the Roboat emerged, showing true development in terms of reliability, efficiency and energy balance. However, even the Roboat suffered from a negative energy balance [5]. Recently a new development has emerged, the Robotboat MarkVI. This project recently received funding through a public funding scheme on Kickstarter and is a fully autonomous yacht, the first commercial autonomous boat. It is energy sustainable, achieving a positive energy balance while sensing a variety of marine parameters. It is robust, being designed to capsize and roll in the waves during rough ocean conditions [6]. The Robotboat MarkVI appears to be the first successful, true autonomous yacht. A picture of the Robotboat MarkVI is shown in Figure 2.10. The boat is currently under construction, with some previous versions already sailing.


Figure 2.10: A sketch of the Robotboat design. Note the yacht is a catamaran design rather than a monohull for stability [6].

2.9 Summary of literature review

The literature review has provided all the necessary information required to begin designing the reactive collision avoidance system for an autonomous sail boat. All general aspects of an autonomous sail boat have been covered with specific detail in the required areas. Based on the information gathered the objectives of this research project are achievable. The process of designing and testing the system can now be initiated, beginning with selecting the most suitable sensor for detecting obstacles.


Chapter 3

Selection and configuration of obstacle detection sensor

There are many different sensor technologies that can be used to sense the sail boat environment and detect obstacles in its path. In this chapter some of the available obstacle detection methods will be investigated and compared. Finally, the most suitable sensor is selected based on the comparison results.

3.1 Obstacle detection sensor selection

As shown in Section 2.4, the short-listed technologies that will be compared for this application are:

• Image Sensor
• Ultrasonic sensor
• SONAR
• RADAR
• LIDAR
• Automatic Identification System


Table 3.1: A table summarizing the advantages and disadvantages of each sensor technology proposed for use in the system.

Digital Camera
  Advantages: Versatile; cheap implementation; accurate object detection.
  Disadvantages: Fairly complex image processing algorithms; requires a powerful processor; only works in daylight.

Ultrasonic
  Advantages: Simple implementation; will work in all weather and lighting conditions.
  Disadvantages: Small detection range; not reliable, as irregular surfaces disrupt the signal.

SONAR
  Advantages: Simple implementation; doubles as a depth sensor.
  Disadvantages: Interferes with marine communication; only senses below the water surface.

RADAR
  Advantages: Superior weather penetration; very reliable.
  Disadvantages: Expensive and complex implementation; large targets close to the radar can saturate the receiver.

LIDAR
  Advantages: Accurate obstacle detection and ranging in all weather conditions; high accuracy.
  Disadvantages: Expensive and complex implementation; requires a powerful processor.

AIS
  Advantages: Inexpensive implementation; simple.
  Disadvantages: Only fitted to ocean vessels above 300 tons; cannot accurately locate obstacles.

Table 3.2: A table showing the different categories each sensor is compared on. Each category is weighted based on its importance. The score for each sensor is the sum, over all categories, of the category weighting multiplied by the sensor's score in that category.

                         Weighting   Digital Camera   Ultrasonic sensor   SONAR   RADAR   LIDAR   AIS
Cost                     25          8                8                   7       3       3       9
Detection range          15          7                5                   6       9       9       10
Ease of implementation   10          7                5                   5       3       3       8
Robustness               15          6                4                   7       9       9       9
Versatility              25          8                2                   7       4       4       1
Score                    100         665              435                 595     475     475     615

The selection of the appropriate sensor is no easy task, as each available technology has something to offer. Therefore, two comparison tables were made to enable an informed decision. One table compares the advantages and disadvantages of each technology, and the other shows a weighted comparison of important aspects that need to be considered. By examining Table 3.1, it is clear that certain technologies will not be suitable for a small, low cost autonomous sail boat. Firstly, although RADAR and LIDAR offer outstanding weather immunity and accuracy, they are both relatively expensive and complex. They are very well suited to a large, high speed autonomous robot, which requires a fast sampling time and extreme accuracy; however, these are unnecessary for a low cost autonomous sail boat operating at slow speeds. Therefore, RADAR and LIDAR are suitable but excessively powerful, and a simpler sensor will be investigated.


Secondly, AIS will not be an acceptable technology, as it is only fitted to ocean vessels weighing more than 300 tons. It is therefore not fitted to small ocean vessels, rocks, buoys and so forth. Furthermore, it only indicates a ship as a point position, without the dimensions or heading required for accurate and efficient avoidance. AIS is a very necessary technology for any autonomous vessel requiring COLREG compliance, which is not a requirement for this system. Finally, the only remaining sensors, after examination of the advantages and disadvantages table, are the image sensor, SONAR and ultrasonic sensors. The weighted decision table is now used to make an informed decision between the remaining sensors, as shown in Table 3.2. Based on the above information, it is clear that the image sensor is the best solution. It features the highest score when compared to the ultrasonic and SONAR sensors. This is mainly due to the versatility of the sensor and all the different quantities it can monitor. Because a digital camera simply processes visual scenes into a digital signal, various algorithms can be applied to process the image and extract the necessary data. This makes the digital camera an extremely versatile sensor. It does, however, have some disadvantages, such as only sensing in daylight. For the sail boat implementation, this is an acceptable limitation, as the boat is assumed to only sail in daylight during fair weather conditions. An image sensor is also cheap to implement, with accurate, consistent readings and a large range. Therefore, the focus of this research project is obstacle detection for a reactive collision avoidance system using image sensing methods.

3.1.1 Selection of image sensor

There are various off-the-shelf cameras that can be used for this task. A very popular device that is widely used in computer vision applications is the Xbox Kinect.

XBox Kinect

As explained in Section 2.4.3, the XBox Kinect is a 3D sensor consisting of two cameras and a laser based projector. This configuration has shown extremely promising results [31]; however, it was avoided for two reasons.


Firstly, any object covered with a reflective material such as shiny metal, or with IR absorbing paint, will prevent the reflected light from the IR projector from reaching the IR camera or will attenuate the IR signal. The IR camera will then not receive the desired return signal, and accurate depth measurements will not be made. This is of concern as many ocean obstacles are unpainted galvanised steel, which is reflective. Secondly, this is a high level sensor with complex drivers and a high level interface. There is a risk that drivers for the system will not be available on a simple embedded system, nor direct hardware access if required. As the system will be programmed in C/C++ on the embedded system, this is a concern. Therefore, a low level image sensor is investigated.

Digital Camera

Using an isolated image sensor would increase the complexity of the implementation drastically, because all control for the sensor would need to be programmed manually. Therefore, a digital camera package will be used, as it features on board circuitry to automatically deal with focusing, exposure, buffer management and data transfer. There are many variations of digital cameras to select from. First and foremost, the two cameras used in the stereo-vision implementation need to be identical. This is because the code algorithm compares targets in the left and right image to calculate distance, requiring two identical image formats and image qualities. Although a high resolution image at a high frame rate is preferred, only a low resolution image at a low frame rate is required. For obstacle detection a high quality image is unnecessary, as most of the information is discarded immediately in the code to reduce computational load. Therefore, a low cost camera that can supply an image no larger than 640X480 pixels is sufficient. Furthermore, with the low frame rate and quality, a relatively low bandwidth communication link is required. As the sensor is being used on a fairly slow, small sail boat with a sharp turning circle, it is acceptable to use a low cost image sensor. There are a number of camera interface systems such as Firewire, USB and Camera Serial Interface (CSI). A USB 2.0 compatible camera was selected, as USB2.0 is compatible with both the PC for code development and most embedded systems for practical implementation. Furthermore, USB provides a sufficient data transfer rate of 48MB/sec for the resolution being transferred, making Firewire unnecessary.

3.2. IMAGE SENSOR CONFIGURATION OPTIONS

A pair of Logitech C210 USB cameras was selected for their low cost, USB2.0 connection interface, wide range of supported colour formats and a maximum resolution of 640 X 480 pixels. The Logitech C210 is also favourable because it has a very low distortion lens fitted on the sensor. The Logitech C210 specifications are shown in Table 3.3.

Table 3.3: Table showing the selected Logitech C210 camera specifications.

Connection             Corded USB 2.0
Lens and sensor type   Plastic, CMOS
Focus type             Fixed
Field of view          53◦
Optical resolution     640 X 480 VGA
Max frame rate         15fps @ 640 X 480
Cost                   R200

3.2 Image sensor configuration options

Although the sensor has been selected, there are various ways in which it can be configured for implementation, namely monocular vision or stereo-vision. As explained in Section 2.4.1, monocular vision is computer vision using only one image sensor or camera. It requires a steady baseline in the image for accurate distance measurements. As the sail boat will not have a fixed reference point, owing to the uncertain nature of the ocean, it is of concern that an accurate baseline will not be acquired. Furthermore, heeling over of the boat and ocean ripples could add to the inaccuracies. There are methods to work around this problem; however, they are extremely complex to implement. This method could result in inaccurate and unreliable results. Stereo-vision, using two cameras, is more complex to implement but offers a more reliable and consistent measurement. The computational efficiency is slightly reduced compared to monocular vision; however, this is not of concern due to the low frame rate required. Stereo-vision does not require any prior knowledge about the environment, enabling the algorithm to deal with rapidly appearing obstacles. The obstacle detection code remains the same for both configurations, as it only uses one of the cameras for obstacle detection; only the range estimation of the obstacle requires the second camera. This system has the added benefit that, regardless of the camera orientation, it produces consistent, accurate results. This is due to the triangulation process relying only on the cameras'


relative positions (which are fixed) and the target's position in each image. Furthermore, it offers redundancy: should one camera fail, there will still be some means of detecting obstacles. Potentially, a backup monocular vision algorithm could then be used.

Therefore, image sensors will be used as the obstacle detection sensor in the reactive collision avoidance system. The image sensor will be used in a computer web-camera package with a USB2.0 interface and configured in a stereo-vision configuration.


Chapter 4

Camera mount development

Before algorithm development and testing, the stereo-camera rig that will be used in the implementation is required for calibration of the cameras. As the code will be based on the cameras, it is advantageous to have a robust camera platform that will provide consistent results throughout the algorithm development, testing and implementation stages. Therefore, suitable cameras were first sourced, as shown in Section 3.1.1, followed by the rig design and construction detailed below.

4.1 Camera mount design

The camera mount will be secured to the sail boat mast to provide a good view of the area ahead and to keep the cameras out of the ocean spray. As the sail boat rolls in the waves, the cameras will rotate along the roll axis of the boat. This should not introduce too much error in the obstacle readings, as explained previously in Section 3.2. However, it would be better if the cameras remained horizontal, to provide consistent results and a better field of view. A simple and elegant solution was designed, where the camera mount hangs below a pivot point like a pendulum, relying on gravity to keep the cameras horizontal. This was chosen over a more complex accelerometer and stepper motor configuration for robustness and ease of implementation. The pivot is damped to prevent the cameras swinging rapidly.


The Logitech C210 offers a 53◦ field of view (FOV). This is sufficient for obstacle detection. However, as the sail boat is limited in its movements, a wider FOV allows the yacht to make more efficient movements and prevents it navigating into areas with restricted spatial availability. Therefore, the cameras are mounted onto a rotating platform to pan the cameras and provide a full 180◦ FOV. The two cameras need to be mounted to a firm, fixed frame with a set distance between them (the baseline). The further the distance between the cameras, the more accurately the system can track targets at long range. This comes at the cost of increasing the minimum operating distance. Furthermore, the wider the baseline, the more difficult it is to match template pairs in the left and right images [35]. It was determined that a detection range of less than five meters was adequate for the speed and turning circle of the boat used. Therefore, the cameras have been mounted at a horizontal distance of 90mm for accurate measurements within this range. This distance was determined by experimentation with various camera baselines. The overlap between the left and right images (disparity) was compared, as shown in Figure 4.1, in order to find the baseline where the images showed a slight disparity at 5m. A slight safety margin was then included to allow the boat to detect slightly further than five meters. Another constraint was the size of the camera casing: since the cameras were to be fitted onto a small model boat, the housing could not be too large. For a larger boat with slower turning dynamics, the cameras can simply be mounted further apart. After a recalibration, the system with a wider camera baseline will function as designed in this project. In order to quantify the results and ensure the baseline distance was adequate, the disparity versus range from the camera was investigated, and the graph shown in Figure 4.2 was created. Here it is clear that the system will be able to predict the target range to approximately 5.5m before the disparity is too low for an accurate measurement. The Logitech C210 has bulky, unstable mounts that would be difficult to mount accurately and firmly on the sail boat. It is essential that the cameras maintain a fixed extrinsic orientation to each other; otherwise they will not produce consistent results, and the calibration procedure will be unable to determine the characteristics of the cameras. The cameras were removed from this package, modified, and a custom mounting rig was designed using the Solidworks program [36]. The mounting was designed to be small, splash proof, simple and robust. The rig design is shown in Figure 4.3(a).
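As a rough sanity check on the chosen 90mm baseline, the expected disparity can be predicted from an ideal pinhole model of the C210, with the focal length in pixels derived from its 53◦ FOV. This is an approximation for illustration, not the measured data of Figure 4.2, which falls off faster in practice.

```cpp
#include <cmath>
#include <cstdio>

// Ideal-pinhole prediction of pixel disparity versus range for the
// 90 mm baseline. An approximation, not the measured curve.
int main() {
    const double PI = 3.14159265358979;
    double width  = 640.0;                 // image width [pixels]
    double fovDeg = 53.0;                  // C210 field of view
    double B      = 0.09;                  // baseline [m]

    // Focal length in pixels from the FOV of a pinhole camera.
    double fPx = (width / 2.0) / std::tan((fovDeg / 2.0) * PI / 180.0);

    for (double Z = 1.0; Z <= 7.0; Z += 1.0)
        std::printf("Z = %.0f m  ->  disparity ~ %.1f px\n",
                    Z, fPx * B / Z);       // d = f * B / Z
    return 0;
}
```

The model predicts roughly a dozen pixels of disparity near 5m, consistent with the observed ~5.5m usable limit once template-matching resolution is taken into account.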


(a) Image showing the disparity at 5m. It is clear that the target has a slight disparity, which can be used to calculate the depth accurately and consistently.

(b) At 7m the target has no disparity. Therefore, the system will be unable to detect targets at this distance.

Figure 4.1: Figures showing the target's disparity at the two critical points. The disparity decreases as the target distance increases. At 5m there is enough disparity to accurately determine distances. At 7m the system is not able to measure distances, as there is no disparity.

The casing is designed to be robust and splash proof. The back plate is designed for the case to be fully waterproof by fitting a gasket. However, it was not fitted with a gasket, as non-distorting clear lenses could not be sourced within budget. Therefore the case is only splash proof and not waterproof.

4.2 Camera mount construction

The final camera mount was laser cut from 3mm and 5mm frosted Perspex and constructed using clear epoxy as a binding agent. The housing is compact, neat and robust, providing consistent readings from the cameras by preventing any internal camera movement. The pivot performs as designed, maintaining a horizontal orientation as the mast keels over. A nylock nut is used to vary the damping of the pivot; this prevents the camera from swinging too rapidly from the mast. A JR591 6kg nylon gear servo was selected to pan the camera housing, for its light weight, moderate torque and speed. The servo allows the camera housing to pan through about 150◦ in 0.6s, providing a full 180◦ view for the camera.


Figure 4.2: A graph showing the disparity of the cameras versus the range. The further the target, the less disparity there is between the camera images.

(a) The camera mount was designed and tested on Solidworks to ensure it mounted the cameras correctly and panned as desired. The casing has a back plate held with four spaced bolts with space for a gasket to waterproof the housing.

(b) The final camera platform constructed from frosted perspex to allow the camera status LEDs to be seen while providing a neat casing.

Figure 4.3: Camera platform design and final product The final camera platform, once constructed, is shown in Figure 4.3(b), where it is mounted to a temporary platform for testing.


Chapter 5

Development of obstacle detection algorithm

The method proposed for analysing the environment to detect obstacles in this system does not follow the “depth map” approach outlined in Section 2.4.2. Producing an accurate depth map is a lengthy process requiring large amounts of processor power to achieve a suitable frame rate, as the range for each pixel is calculated. Using a low power system for a real time application places limitations on the intensity of the algorithm that can be used. A depth map is essential in a highly cluttered area, such as a house or workshop, where there are many small obstacles that the robot needs to be aware of, such as a chair under a table or a gate in a doorway. On the water, however, there are fewer small obstacles in random positions. The large open water surrounding an obstacle provides a large, flat surface upon which an obstacle can be easily detected in isolation. Therefore, a new approach is proposed. The new approach uses a feature detection algorithm to quickly point out the most prominent features in the cameras' FOV. Based on the position of these features in the FOV and their strength, they are selected and rated as high ranking points. The range of these limited points is then calculated in isolation, and the boat can take evasive action based on whether the obstacle is too close to the boat or sufficiently far away. Using this method only three to five points are rectified and used to calculate a range, significantly increasing the speed. The first step is therefore to detect obstacles reliably and quickly.


5.1 Available feature detection algorithms

A feature is a part of an image that stands out from its immediate surroundings. It can take the form of an edge, blob, corner or line. Feature detection allows these areas of interest to be identified and located. Fortunately, only feature detection is required and not feature extraction, saving processing time. Feature extraction is the process whereby a set of feature vectors, known as descriptors, is created to describe each detected feature. This step will not be used, as only the location of the feature in the image is required and the feature does not need to be identified. There are many different algorithms that can be used for this task of feature detection. Although the algorithms proposed are designed for feature recognition, i.e. identifying similar objects in two images, they can be used to detect prominent features in an image, such as an obstacle on a water surface. The algorithms investigated for use are the most commonly applied algorithms in the Matlab and OpenCV languages. These are:

• Speeded Up Robust Features (SURF)
• Scale-Invariant Feature Transform (SIFT)
• FAST
• Maximally Stable Extremal Regions (MSER)
• STAR
• Oriented-BRIEF (ORB)
• Good Features To Track (GFTT)

Each algorithm is different in its process; therefore the results from each can vary quite substantially. Thus some are better suited to the required task than others. A comparison between each algorithm is investigated to ensure that the algorithm selected is best suited to the required task.


5.2 Selection between feature detection algorithms

Each algorithm is compared in various aspects to select the most suitable algorithm for the task of detecting obstacles in an ocean environment.

5.2.1 Speed Comparison

Firstly, the speed of each algorithm is investigated. Each algorithm uses a different feature detection technique, and therefore the speeds can vary greatly. To compare the speed of each algorithm, a control test using a 320X240 grey-scale image with a single buoy obstacle was used. The test was run five times on a 1GHz BeagleBone Black embedded system, with the average time across all five readings used. A graphical representation of the results can be seen in Figure 5.1 and a table showing the full results can be seen in Section 12.1 of the Addenda.

Figure 5.1: Figure showing the average detection time per algorithm when tested on grey-scale 320X240 images. The test was run five times on each image and then averaged to produce the shown results.

It is clear that SURF and SIFT are slow algorithms, whereas FAST has a very fast detection time of about 5ms. In this system the time for detection is not as important as it would be in a high performance racing boat; the autonomous sail boat on which the system will be implemented will travel relatively slowly. However, because the system is low cost, a slow processor will be used when implemented on a mobile platform. Therefore, a faster algorithm will be preferred.
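The timing test can be reproduced with a simple harness like the one below, shown here for the FAST detector. The input file name is a placeholder, and absolute timings will of course differ between a PC and the BeagleBone Black.

```cpp
#include <opencv2/opencv.hpp>
#include <vector>
#include <cstdio>

// Sketch of the timing test: run a detector five times on a
// 320x240 grey-scale frame and average the wall-clock time.
int main() {
    cv::Mat img = cv::imread("buoy_320x240.png", cv::IMREAD_GRAYSCALE);
    if (img.empty()) return 1;

    cv::Ptr<cv::FastFeatureDetector> det = cv::FastFeatureDetector::create();
    std::vector<cv::KeyPoint> kps;

    const int runs = 5;
    double t0 = (double)cv::getTickCount();
    for (int i = 0; i < runs; ++i)
        det->detect(img, kps);
    double ms = ((double)cv::getTickCount() - t0)
                / cv::getTickFrequency() * 1000.0 / runs;

    std::printf("FAST: %zu keypoints, %.2f ms average\n", kps.size(), ms);
    return 0;
}
```

Swapping the detector for SURF, SIFT, MSER and so on (and recording the averages) yields the comparison plotted in Figure 5.1.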


5.2.2 Number of detected features

Each algorithm will detect a different number of features. The number of features detected can be tuned; however, certain algorithms, such as FAST, detect too many features by nature. Many of the features FAST detects are in fact feature noise and not true features. Ideally the selected algorithm will not detect too many features, as only the main ones, i.e. the obstacles, are required, and not noise such as a rocky mountain. Each algorithm uses a different technique; for example, the FAST method is based on corner detection, while the SURF algorithm detects blobs and regions [37]. Therefore, the SURF algorithm is expected to detect fewer features, and features of a larger size, as it detects blobs and not edges as FAST does. This should make SURF the better algorithm for this application.

Figure 5.2: Figure showing the average number of detected features per algorithm when tested on a grey-scale 512X512 image. The test was run five times on each image and then averaged to produce the shown results. The FAST algorithm produces the most key points. This is due to its edge detection basis, as there are more edges in an image than blobs. The FAST features are of lesser quality than those of, say, SURF. Image sourced from [7].

5.3 Selection of algorithm

With the gathered information it is clear that the SURF and SIFT algorithms will not be preferred, as they are very slow. Although the FAST algorithm is exceptionally fast, it will not be ideal as it detects too many features and much feature noise. The FAST algorithm may also lead to many false triggers, because the edge detection is expected to pick up


ripples in water. To select a clear winner, the algorithms were tested on a dataset of 12 images to test their ability to detect obstacles in the FOV consistently and reliably. The results of one image from the database for each method can be seen in Section 12.2 of the Addenda. From the testing results, it is clear that SURF and MSER produce the best results. SURF lives up to its name and is the most robust, although MSER is very close in terms of robustness and slightly faster in detection time. SURF was selected because the robustness of the detection algorithm is of the utmost importance; furthermore, the structure of SURF is better documented than that of MSER. Both MSER and SURF are comparatively slow when compared to the other algorithms in the test. Therefore, a compromise was made: a slower detection time in exchange for improved robustness in object tracking.

5.4 Understanding SURF and tuning parameters

The SURF algorithm is a complex algorithm with various tuning parameters. To ensure the algorithm is tuned correctly and its implementation is well understood, the algorithm's process is investigated. The SURF feature detection, or interest point detection, algorithm uses a basic Hessian-matrix approximation. This is because Hessian-based detectors are more stable and repeatable than their Harris-based counterparts. The use of the Hessian-matrix approximation encourages the use of integral images, which reduces the computation time drastically [8].

5.4.1 Integral images

Integral images are based on the framework of Boxlets, allowing for fast computation of box type convolution filters [38]. A pixel value of an integral image $I^{P}(X)$ at location $X = (x,y)^T$ represents the sum of all the pixels of the original image within the rectangular region spanned by the origin and $X$, as shown in Equation 5.1.

$$I^{P}(X) = \sum_{i=0}^{i \le x} \sum_{j=0}^{j \le y} I(i,j) \tag{5.1}$$

5.4. UNDERSTANDING SURF AND TUNING PARAMETERS

Once the integral image has been computed, it takes only three additions to calculate the sum of intensities over any rectangular area. Thus the computation time becomes independent of the size of the area [8]. This process is best represented graphically, as can be seen in Figure 5.3.

Figure 5.3: Using integral images, it only takes three addition operations and four memory access operations to calculate the sum of intensities of a rectangular area of any size. Image sourced from [8].
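The rectangle-sum property of Equation 5.1 is easy to demonstrate with OpenCV's cv::integral, as in the sketch below; the image content here is a dummy all-ones matrix, purely for illustration.

```cpp
#include <opencv2/opencv.hpp>
#include <cstdio>

// Demonstration of the integral-image property: the sum over any
// rectangle needs only four array reads and three add/subtract
// operations, independent of the rectangle size.
int main() {
    cv::Mat img = cv::Mat::ones(240, 320, CV_8U);  // dummy image
    cv::Mat ii;
    cv::integral(img, ii, CV_32S);  // (H+1)x(W+1); ii(y,x) = sum above-left

    // Rectangle covering rows 50..99 and columns 80..159 (50 x 80 px).
    int r0 = 50, r1 = 100, c0 = 80, c1 = 160;
    int sum = ii.at<int>(r1, c1) - ii.at<int>(r0, c1)
            - ii.at<int>(r1, c0) + ii.at<int>(r0, c0);

    std::printf("box sum = %d (expected %d)\n",
                sum, (r1 - r0) * (c1 - c0));  // 4000 for the ones image
    return 0;
}
```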

5.4.2 Hessian matrix based interest points

The Hessian matrix from the image is then used to calculate the position of blob-like structures. Areas where the determinant of the Hessian matrix is at a maximum are identified as blobs or features. The Hessian matrix is constructed as follows:

$$H(X, \sigma) = \begin{bmatrix} L_{xx}(X, \sigma) & L_{xy}(X, \sigma) \\ L_{xy}(X, \sigma) & L_{yy}(X, \sigma) \end{bmatrix}$$

where $L_{xx}(X, \sigma)$ is the convolution of the Gaussian second order derivative $\frac{\partial^2}{\partial x^2} G(\sigma)$ with the input image $I$ at the point $X$, and similarly for $L_{xy}(X, \sigma)$ and $L_{yy}(X, \sigma)$. Gaussians are optimal for scale-space analysis in theory; however, in practice they have to be discretised and cropped, reducing their effectiveness. For efficiency, the second order Gaussian derivatives can be approximated by box filters. These filters can be applied at very low computational cost using the integral image, and have been shown to produce better results than a discretised and cropped Gaussian [8].


The determinant is then calculated as follows:

$$\det(H_{\mathrm{Approx}}) = D_{xx} D_{yy} - (\omega D_{xy})^2 \tag{5.2}$$

where $D_{xx}$ is a 9X9 box filter approximation of a Gaussian with $\sigma = 1.2$ and $\omega \approx 0.9$, and similarly for $D_{xy}$ and $D_{yy}$. The approximated determinant of the Hessian represents the blob response in the image at location $X$.

5.4.3 Scale space representation

Interest points or features need to be detected at different scales. For example, if the obstacle is very close or very far it will have a different blob response, so filters of multiple sizes are required. Therefore box filters of multiple sizes are applied to the image in the scale space, with 9X9 being the base filter. The scale space is divided up into octaves, each representing a series of filter response maps obtained by convolving the input image with filters of increasing size. The filter sizes used increase as the octave increases [8].

5.4.4 Interest point localisation

In order to localise the interest points in the image over the scales, non-maximum suppression in a 3X3X3 neighbourhood is applied [8]. This keeps track of the detected key points in each scale. The feature points are the maxima of the determinant of the Hessian matrix.


5.4.5 SURF summary

The SURF algorithm therefore follows this procedure in locating the features [39]:

• Build the integral image.
• Use box filters to approximate the Laplacian of Gaussian.
• Filters span several octaves, with a fixed number of scales in each.
• The integral image helps to keep the running speed constant, as it is insensitive to increasing filter sizes.
• Use the discrete box filters to calculate the Hessian determinant.
• All filters run on the original image, allowing parallel execution.
• Feature points are the maxima of the determinants.

With this knowledge of the basis of the SURF algorithm, the parameters can be tuned with an understanding of how they affect the algorithm.

5.4.6 SURF input parameters

The input parameters for the SURF algorithm are:

• HessianThreshold: Threshold for the Hessian keypoint detector used in SURF.
• nOctaves: Number of pyramid octaves the keypoint detector will use.
• nOctaveLayers: Number of octave layers within each octave.
• extended: Extended descriptor flag (true: use extended 128-element descriptors; false: use 64-element descriptors). This parameter is not applicable, as the descriptors are not used.
• upright: Up-right or rotated features flag (true: do not compute the orientation of features; false: compute orientation). As the descriptors are not used, this parameter is set to true, which speeds up detection time significantly.
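In OpenCV these parameters map directly onto the SURF constructor in the xfeatures2d contrib module, as sketched below. The Hessian threshold of 400 is a placeholder; the values actually used would come from the tuning in Section 5.5.

```cpp
#include <opencv2/opencv.hpp>
#include <opencv2/xfeatures2d.hpp>  // SURF lives in opencv_contrib
#include <vector>
#include <cstdio>

// Sketch of creating the SURF detector with the parameters listed
// above. The threshold of 400 is a placeholder assumption.
int main() {
    using cv::xfeatures2d::SURF;
    cv::Ptr<SURF> surf = SURF::create(
        400,     // hessianThreshold: higher = fewer, stronger blobs
        3,       // nOctaves: three, per Section 5.5.2
        3,       // nOctaveLayers: three scales per octave
        false,   // extended: 64-element descriptors (unused here)
        true);   // upright: skip orientation for faster detection

    cv::Mat img = cv::imread("test.png", cv::IMREAD_GRAYSCALE);
    if (img.empty()) return 1;
    std::vector<cv::KeyPoint> kps;
    surf->detect(img, kps);          // detection only, no descriptors
    std::printf("%zu features detected\n", kps.size());
    return 0;
}
```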


5.5 Tuning the algorithm

The SURF algorithm's parameters were tuned using a graphical Monte Carlo method. The algorithm was run on all 12 test images.

5.5.1 Tuning the Hessian Threshold

The Hessian threshold was tuned through an iterative process. If SURF did not detect the obstacle, the Hessian threshold was lowered; if it detected the obstacle as well as many other non-relevant objects, it was raised. This process was followed over the 12 image data set until a good medium was found.

5.5.2 Tuning the number of octaves

Higher octaves use larger filters and sub-sample the image data. Therefore a larger number of octaves will result in finding larger blobs. A table showing the filter sizes used for each octave is shown in Table 5.1. At least three filters are required to analyse the data in a single octave. The number of octaves used is also based on the image size. For an image size of 640X320, the number of octaves should be three or four. Therefore the number of octaves is set to three, to allow for a decrease in image resolution during the practical implementation.

Table 5.1: A table showing the filter sizes used in each octave, which spans a number of scales.

Octave   Filter sizes
1        9-by-9, 15-by-15, 21-by-21, 27-by-27, ...
2        15-by-15, 27-by-27, 39-by-39, 51-by-51, ...
3        27-by-27, 51-by-51, 75-by-75, 99-by-99, ...
4        ...

5.5.3 Tuning the number of octave layers

This parameter sets the number of scale levels to compute per octave. The higher this parameter, the more blobs will be detected, at finer scale increments. This parameter is set at a general multi-purpose value of three.


5.6 Obstacle detection algorithm testing results

The obstacle detection algorithm was tested on the set of 12 images taken in expected weather and lighting conditions, using an external digital camera. A sample is shown in Figure 5.4 with the full results in Section 12.3 of the Addenda.

Figure 5.4: Figure showing a summary of the testing results of the obstacle detection algorithm once the parameters were tuned. The circles indicate the obstacle and the size of the circle indicates how strong the feature is. It was noted that a false trigger occurred in Figure 12.2(g), showing flat water with no obstacle. This was corrected by implementing a light smoothing filter on the water surface to rid the water texture of any sharp edges. The original image with the false trigger is reproduced in Figure 5.5(a) for convenience and the result of applying a smoothing filter can be seen in Figure 5.5(b).
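The fix is a one-line pre-processing step; a sketch of it with OpenCV's averaging filter is shown below. The input file name is a placeholder, and cv::GaussianBlur could equally be substituted for the box filter.

```cpp
#include <opencv2/opencv.hpp>

// Sketch of the false-trigger fix: a light smoothing (box) filter
// applied before feature detection to suppress sharp ripple edges
// on open water. The 12x12 kernel follows the text.
int main() {
    cv::Mat frame = cv::imread("water.png", cv::IMREAD_GRAYSCALE);
    if (frame.empty()) return 1;

    cv::Mat smoothed;
    cv::blur(frame, smoothed, cv::Size(12, 12));  // 12x12 averaging kernel

    cv::imwrite("water_smoothed.png", smoothed);
    return 0;
}
```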


(a) Figure showing a false trigger occurring on flat, unobstructed water.

(b) Figure showing the result of applying a 12X12 smoothing filter to rid the water surface of sharp ripple edges and prevent false triggers. The water is smoothed and no false triggers have occurred.

Figure 5.5: Images showing the algorithm falsely detecting on open water with no obstacle. After applying a smoothing filter, the algorithm no longer false triggered.

5.7 Obstacle detection algorithm testing conclusions

It is clear that the algorithm has been tuned correctly and is operating as desired, reliably detecting obstacles in its path. Therefore, it can be concluded that the obstacle detection algorithm is operating correctly, and the range calculation algorithm can be developed to calculate the range of the detected obstacles.


Chapter 6

Development of range estimation algorithm

This chapter aims to outline the process followed in developing the code algorithm and hardware used to calculate the range of a detected obstacle or target from the boat. As explained in Section 3, a stereo-vision camera configuration was selected for the task of range calculation. The stereo-vision configuration allows the use of triangulation to determine an object's range from the cameras. However, before the cameras can be used accurately for range calculation, they must be calibrated to determine the intrinsic and extrinsic properties used to rectify the images for accurate calculations.

6.1 Camera Calibration

Although the two cameras selected are identical models from the same manufacturer, they still have slight discrepancies that need to be taken into account. Furthermore, when used together in a stereo configuration they need to be calibrated as a pair to ensure accurate distance measurements. Fortunately, a camera calibration toolbox for Matlab has been developed and was used to aid in the calculation of the camera parameters [40].


6.1.1 Outputs from calibration procedure

The aim of the camera calibration procedure is to determine the camera's intrinsic and extrinsic parameters. These parameters define the quantities that affect the imaging process. The camera parameters are then used in the code algorithm to rectify the images received from the cameras. The purpose of this is to reverse effects created by, amongst others, lens distortion and sensor array discrepancies. This allows for accurate image processing.

Intrinsic parameters

The intrinsic parameters are the parameters of the internal camera model. They affect how the image is seen on the image plane once it has entered the camera. The following are all intrinsic parameters of the camera [41]:

• Focal length: The distance from the camera's centre to the image plane. A measure of how strongly the system converges the input light.

• Principal point: The coordinates of the principal point represent the point at which the optical axis intersects the imaging plane.

• Skew coefficient: The pixels of the sensor may not be perfectly square, thus the image may be slightly distorted in the X and Y directions. The skew coefficient is the number of pixels per unit length in both the X and Y directions.

• Distortion: The lens of the camera induces slight radial and tangential distortion. This distortion is quantified by the distortion coefficients.

These four parameters form the camera calibration matrix K and need to be determined by experiment for each individual camera.


Extrinsic parameters

The camera's extrinsic parameters are not unique to a specific camera. They indicate the relationship between the camera's orientation and position and a world coordinate system. The two extrinsic parameters are:

• Rotation: The camera's pitch, roll and yaw with relation to the outside world coordinate system.

• Translation: The position of the camera with relation to the world.

The intrinsic and extrinsic parameters of the camera define the camera's projection matrix P [41].

6.1.2 Calibration procedure

To calibrate the stereo-cameras, the cameras first need to be calibrated individually to determine their intrinsic parameters. Secondly, they need to be calibrated as a pair for stereo-vision to determine their extrinsic parameters. They are calibrated by taking multiple snapshots of a black and white checker board in various orientations. These images are fed into the program, where the edges of each block are identified and analysed. As the checker board block dimensions are known, the expected versus actual block side lengths are compared and the parameters determined [40]. The procedure followed for moving the checker board through its orientations was that shown in a tutorial for camera calibration [40].
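This project performed the calibration in the Matlab toolbox [40]; for readers following the later C/C++ implementation, the same checker-board procedure can be expressed with OpenCV's calibration functions, as in the hedged sketch below. The board dimensions, square size, file names and number of views are all assumptions.

```cpp
#include <opencv2/opencv.hpp>
#include <vector>
#include <cstdio>

// OpenCV analogue of the checker-board calibration step described
// above. Board size, square size and file names are placeholders.
int main() {
    cv::Size board(9, 6);      // inner corners per row and column
    float square = 0.025f;     // square side length [m]

    std::vector<std::vector<cv::Point2f>> imagePts;
    std::vector<std::vector<cv::Point3f>> objectPts;

    // One planar model of the board, reused for every view.
    std::vector<cv::Point3f> model;
    for (int r = 0; r < board.height; ++r)
        for (int c = 0; c < board.width; ++c)
            model.push_back(cv::Point3f(c * square, r * square, 0));

    cv::Size imgSize;
    for (int i = 0; i < 15; ++i) {  // ~15 views in varied orientations
        cv::Mat img = cv::imread(cv::format("calib_%02d.png", i),
                                 cv::IMREAD_GRAYSCALE);
        if (img.empty()) continue;
        imgSize = img.size();
        std::vector<cv::Point2f> corners;
        if (cv::findChessboardCorners(img, board, corners)) {
            cv::cornerSubPix(img, corners, cv::Size(11, 11),
                cv::Size(-1, -1),
                cv::TermCriteria(cv::TermCriteria::EPS
                                 + cv::TermCriteria::COUNT, 30, 0.001));
            imagePts.push_back(corners);
            objectPts.push_back(model);
        }
    }
    if (imagePts.empty()) return 1;

    cv::Mat K, dist;                    // intrinsics and distortion
    std::vector<cv::Mat> rvecs, tvecs;  // per-view extrinsics
    double rms = cv::calibrateCamera(objectPts, imagePts, imgSize,
                                     K, dist, rvecs, tvecs);
    std::printf("reprojection RMS: %.3f px\n", rms);
    return 0;
}
```

Repeating this for each camera, then feeding the results into a stereo optimisation, mirrors the individual and stereo calibration stages described in the following sections.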

6.1.3 Individual calibration

Firstly, each camera was calibrated individually. Multiple photos of a checker board were taken in multiple positions. The four corners of the grid are manually located on each image, as shown in Figure 6.1. The code then automatically locates all the blocks. Using the same pattern of known dimensions in various orientations allows the toolbox to calculate the camera's parameters.


Figure 6.1: To calibrate the camera's parameters a checker grid is used. Four corner points of the grid are manually located; the code then counts the blocks and, using the known block dimensions, the camera's parameters are calculated.

After the camera calibration has been completed for both the left and the right camera, the toolbox produces the camera projection matrix P for each individual camera.

6.1.4 Stereo calibration

Once again the Matlab camera calibration toolbox is utilized, this time for the stereo calibration. The input to the stereo calibration procedure is the camera projection matrix P from each individual camera. The calibration procedure recomputes both the intrinsic and extrinsic parameters and automatically determines the cameras' extrinsic parameters with relation to each other. By performing a re-computation of the parameters, the errors in the parameters are mitigated, as the global stereo-optimization is performed over a minimal set of unknowns [40]. A view of the calculated cameras' extrinsic parameters is shown in Figure 6.2. Once the cameras had been calibrated, producing the camera projection matrix P, the code algorithm was developed to determine a target's distance from the camera.


Figure 6.2: Figure showing the calculated extrinsic parameters of the cameras using multiple checker board orientations.

6.2 Algorithm development

The algorithm for the range estimation of a target was developed in five stages. A complete regression test was performed on each stage before proceeding to the next, to ensure that each component operates correctly individually and in conjunction with the other code modules during development. Naturally, the first stage is to take in a video feed from each camera.

6.2.1 Video Input

Both cameras were connected to the computer and verified to operate correctly individually. Matlab code was then developed to input a picture from each camera in turn and store the image matrix in a variable for each camera. Image frames are received from the cameras with a resolution of 640 X 480 pixels in an uncompressed RGB24 format. For range detection, colour information is not required. Therefore, the image is converted into an 8-bit grey-scale intensity image to simplify the calculations and improve the computational efficiency. Once the image has been received from the camera and converted, the location of the target is required.
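In the eventual C/C++ implementation, this capture-and-convert step corresponds to a few OpenCV calls, sketched below; the device indices 0 and 1 are assumptions that depend on how the USB cameras enumerate.

```cpp
#include <opencv2/opencv.hpp>

// Sketch of the capture step: grab a frame from each USB camera
// and convert to 8-bit grey-scale. Device indices are assumptions.
int main() {
    cv::VideoCapture camL(0), camR(1);
    if (!camL.isOpened() || !camR.isOpened()) return 1;

    camL.set(cv::CAP_PROP_FRAME_WIDTH, 640);
    camL.set(cv::CAP_PROP_FRAME_HEIGHT, 480);
    camR.set(cv::CAP_PROP_FRAME_WIDTH, 640);
    camR.set(cv::CAP_PROP_FRAME_HEIGHT, 480);

    cv::Mat rgbL, rgbR, greyL, greyR;
    camL >> rgbL;                        // 640x480 colour frames
    camR >> rgbR;
    cv::cvtColor(rgbL, greyL, cv::COLOR_BGR2GRAY);
    cv::cvtColor(rgbR, greyR, cv::COLOR_BGR2GRAY);
    return 0;
}
```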


6.2.2 Obstacle location

The obstacle detection code, as shown in Section 5, is used to locate the target's position in the left image during runtime. This code locates a user-set number of prominent features and stores their locations in the left image. In order to implement triangulation to determine the target's range, the target's location in the right image is required. Therefore, code to locate the target in the right image was developed. The following code is run for each of the detected obstacles.

6.2.3 Block matching

A block matching scheme is used to locate the target in the right image. Once the target has been located in the left image, the code creates a 30 × 30 pixel square template around the centre of the target in the left image. The code then searches for the best match of this template in the right image by computing the sum of absolute differences (SAD) between the template and the right image. To increase both computational efficiency and accuracy, the whole right image is not searched; only a specified region of interest (ROI) of 150 × 150 pixels around the target centre is searched. Epipolar line matching could not be used, as the cameras' image sensors are not accurately aligned vertically on the PCBs. This results in both a vertical and a horizontal disparity, justifying the use of a two-dimensional template search rather than a line match. Once the code has found the best match, the centre of this match is saved as the location of the target in the right image. The size of the template and ROI is varied and optimised for different ranges and target positions in the image. The triangulation code is then applied to calculate the distance of the target.
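The matching step can be sketched with OpenCV's template matcher, whose squared-difference score is the close analogue of SAD; this is a minimal illustration assuming the ROI has already been cut from the right image, not the report's actual code:

    #include <opencv2/opencv.hpp>

    // templ: 30x30 patch around the target in the left image.
    // roi:   150x150 region of interest cut from the right image.
    cv::Point matchInRight(const cv::Mat& templ, const cv::Mat& roi) {
        cv::Mat score;
        // TM_SQDIFF: the best match is the location of the minimum score.
        cv::matchTemplate(roi, templ, score, CV_TM_SQDIFF);
        double minVal; cv::Point minLoc;
        cv::minMaxLoc(score, &minVal, 0, &minLoc, 0);
        // Return the centre of the best match within the ROI.
        return cv::Point(minLoc.x + templ.cols / 2, minLoc.y + templ.rows / 2);
    }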

6.2.4 Image rectification and triangulation

Before triangulation can be applied, the images need to be rectified using the cameras' intrinsic parameters, as determined in the camera calibration stage of Section 6.1. Rather than rectifying the entire image, only the two coordinate locations of the target in the left and right images are rectified before being passed to the triangulation algorithm. This is far more efficient than rectifying the entire image, where image information that is not required would also be rectified. Once the coordinates are rectified, trigonometry is applied using the coordinates of the target in the left and right images, as shown in Section 2.4.2. The output of this algorithm is the raw distance to the target.
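For rectified coordinates, the trigonometry reduces to the standard pinhole relation between disparity and depth; the sketch below assumes ideal rectification (zero vertical disparity) and is illustrative rather than the report's exact formulation:

    // Depth from disparity for a rectified stereo pair:
    //   Z = f * B / d
    // where f is the focal length in pixels, B the baseline between the
    // cameras in metres, and d the horizontal disparity in pixels.
    double rangeFromDisparity(double xLeft, double xRight,
                              double focalPx, double baselineM) {
        double disparity = xLeft - xRight;       // pixels
        if (disparity <= 0) return -1.0;         // invalid match (cf. Figure 7.3)
        return focalPx * baselineM / disparity;  // metres
    }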

6.3 Testing and optimisation

To verify the range calculation algorithm before practical implementation, it was tested in isolation from the obstacle detection algorithm. Targets at known distances were manually selected rather than detected automatically. This reduces possible sources of error and allows the accuracy of the algorithm under development to be measured against obstacles of known distance. A Matlab function was created to allow the user to manually point out targets.

6.3.1 Range algorithm testing procedure

To test the range estimation algorithm, the camera rig was fixed in place on a solid mount. A large target was moved from 1 m to 7 m in 0.5 m intervals, straight along the camera's focal axis and along the left and right peripheries. To ensure reproducibility of the results, three measurements were taken, by randomly clicking on the target, for each distance and angle. The measurements were taken in a room with a constant light source. The test configuration can be seen in Figure 12.2 of Section 12.4 in the Addenda.

6.4 Range algorithm testing results

The initial tests of the range estimation algorithm were not promising. Accurate results beyond 1.5 m could not be obtained and the readings were not consistent. Each component had been checked and verified to operate correctly during the development stage, and the fact that the system could accurately detect obstacles closer than 1.5 m ruled out a logic error in the code. The inconsistencies and errors at larger ranges suggested an error in the camera projection matrix P. In the manual covering camera calibration, discussed in Section 6.1.2, the checker board is only varied in orientation and angle during the calibration process; its position in the frame and its distance from the camera are rarely varied. Through multiple calibration and testing cycles the following observations were made:

1. For accurate distance measurements, the cameras must be calibrated over the complete region of measurement. In other words, photos of the checker board must be taken at varying distances from the camera over the complete measurement range. For this, a special checker board with very large blocks was made to allow accurate calibration at longer distances.

2. The board must be moved from the centre to the left and right peripheries. This drastically improves the accuracy of the distortion model and the extrinsic parameters, by improving the line of best fit for the camera's distortion along the periphery.

3. The more sample photos taken, the more accurate the calibration parameters, and the better the algorithm's tolerance to outliers. The calibration runs an iterative procedure attempting to model the parameters as closely as possible to the data samples provided; therefore, the more data samples, the better the rejection of outliers in the data.

After a recalibration using this technique, as shown in Figure 6.3, the range test was repeated. The results are summarised in Table 6.1; the complete table can be seen in Section 12.4 of the Addenda. A graph of the error in the readings along the test distances is shown in Figure 6.4. To confirm that the readings were consistent, the standard deviation is plotted in Figure 6.5.


Figure 6.3: The calculated camera extrinsic parameters using multiple checker board orientations and positions. Note the checker board is positioned over the entire focal range for an accurate calibration.

Table 6.1: A comprehensive range test was performed where three samples were taken at random positions around the target. These three samples were then averaged to produce the value for each distance below.

Set distance (m)   Measured, left peripheral (m)   Measured, straight ahead (m)   Measured, right peripheral (m)
1                  1.0667                          1.0080                         0.9307
1.5                1.6240                          1.5347                         1.4161
2                  2.0900                          1.9661                         1.7963
2.5                2.5535                          2.4720                         2.3082
3                  2.9986                          3.0322                         2.7465
3.5                3.4301                          3.4062                         3.2432
4                  3.9315                          3.9674                         3.8085
4.5                4.6767                          4.6356                         4.2364
5                  4.8776                          4.9187                         4.6306
5.5                5.3186                          5.6007                         5.4066
6                  -                               7.7363                         -
6.5                -                               8.3579                         -
7                  -                               8.1375                         -


Figure 6.4: A graph showing the percentage error in each reading at a particular range. The error is calculated as the percentage error between the set target distance and the average of the measured distances.

Figure 6.5: A graph showing the standard deviation of the reading at a particular range. A low standard deviation represents consistent readings.


6.5 Range algorithm testing conclusions

As can be seen in Figure 6.4, a large error spike occurs after 5.5 m. This is to be expected: as shown in Section 4, the disparity in the images was designed for accurate measurements up to 5 m. At 5.5 m there is therefore minimal disparity, and beyond that the images are seen as identical and a measurement cannot be made.

The larger error in the camera's periphery is due to the camera lens. The lens is plastic, resulting in a non-uniform distortion caused by small bubbles in the plastic and its low cost manufacture. The line of best fit for the distortion may therefore not accurately model the distortion at certain angles. This leads to a larger error in the camera's periphery compared to the straight-ahead error, as shown in Section 6.4.

Based on the above results, the range estimation algorithm has proved to be highly accurate, considering its low cost implementation and simple interface. The system is more than capable of consistently and reliably measuring obstacle distances in a marine environment, and is ready for integration with the obstacle detection code and practical implementation.


Chapter 7

Integration of obstacle detection and obstacle range calculation algorithms

Once both the obstacle detection algorithm and the obstacle range calculation algorithm had been thoroughly tested individually, they were combined into the complete algorithm. Each algorithm's inputs and outputs are summarised in Figure 7.1, where it can be seen that the two modules are directly compatible: the outputs of the detection module are exactly the inputs the range module requires.

Figure 7.1: The inputs and outputs of each algorithm module. It is clear that the obstacle detection algorithm and obstacle range calculation algorithm can easily be combined to form the complete algorithm.


The two algorithms were combined by interfacing their global variables. A simplified flow diagram of the complete program loop is shown in Figure 7.2.

Figure 7.2: The flow diagram of the complete obstacle detection and range calculation algorithm.


7.1 Complete algorithm testing results

The complete system was tested by identifying obstacles in a room and ensuring that the calculated distances to the obstacles were accurate. As the system was tested in Matlab, the test had to be performed on a static scene: Matlab cannot capture from both cameras simultaneously, and the two camera views of the scene must be identical. This prevented a field test with a moving boat. The algorithm was run and the target distances were overlaid on top of the image. The target distances were then manually measured and verified to be correct. The result of a test can be seen in Figure 7.3.

Figure 7.3: A screen shot of a complete test using Matlab in a room with a constant light source. The two algorithms interfaced correctly, autonomously detecting obstacles and their ranges. Note the negative range due to an incorrect block match. This test is intended to ensure that the two algorithms interface correctly, not to assess how each algorithm performs individually; hence any test scene can be used.


7.2 Complete algorithm testing conclusions

The complete system performed as expected. A negative range was calculated due to an incorrect block match; however, this is not of concern, as all values outside the 0.5 m → 5.5 m range are ignored. In conclusion, each algorithm functions correctly individually and the two algorithms interface correctly. The complete algorithm is considered ready for implementation on a mobile test platform for real time, real environment testing.


Chapter 8

Practical implementation

Once the code algorithm had been designed and thoroughly tested in the Matlab environment, it was implemented on a portable system to be run and tested in real time. The limitations imposed by Matlab, namely the frame process speed and the large time gap between the left and right cameras sampling a scene, restrict the algorithm regardless of the power of the computer. It is thus necessary to implement it in C/C++ for rapid execution. To ensure the C/C++ code works as designed, it must be tested on a platform in real time, requiring an embedded system. This validates the simulations by exposing the algorithm to a real world scenario.

8.1 Selection of embedded computer

As shown in Section 2.7, there are four main embedded systems to select from. As the algorithms used in this system are quite processor intensive and a reasonable frame rate is needed, a powerful processor is required.

8.1.1 Assessment of STM32

The STM32 does not offer the high end functionality found in the other embedded systems. With a processor of only 168 MHz, it is not fast enough to process the images at the required frame rate. It is a low level micro-controller, well suited to the low level programming that may be applicable in the control of the boat, but not to the processing of the image frames.

8.1.2 Assessment of BeagleBone

The BeagleBone, BeagleBone Black (BBB) and Raspberry Pi are all Linux based systems running a lightweight Linux distribution such as Debian, Raspbian or Ångström; they are essentially personal computers in a small package. The BBB can run a full Ubuntu distribution if required. The BeagleBone Black is a newer model and upgrade of the standard BeagleBone. It features faster DDR3 RAM, a faster processor and a very useful device tree for pin muxing. The device tree alone is enough to warrant the selection of the BBB over the standard BeagleBone, as it allows pin muxing without recompiling the kernel each time. All this added functionality comes at a lower cost than the standard BeagleBone. The BeagleBone Black was therefore selected over the standard BeagleBone.

8.1.3 Assessment of Raspberry Pi

The Raspberry Pi versus the BBB is an ongoing debate. The Raspberry Pi offers a better GPU than the BBB's 3D accelerator, better HDMI driver compatibility and better documentation. However, these are its only advantages over the BBB. Although the Raspberry Pi has a 700 MHz processor and the BBB a 1 GHz processor, the BBB is roughly twice as fast due to its newer processor architecture. The BBB also has 2 GB of on-board eMMC flash storage with a microSD slot, while the Raspberry Pi requires a full size SD card. The BBB further features a floating point accelerator and 92 header pins that can be configured as required [42]. The BeagleBone Black was therefore selected for the real time implementation, for its 1 GHz processor, DDR3 RAM, floating point accelerator, device tree and on-board 2 GB flash memory. An image of the BBB used can be seen in Figure 8.1.


Figure 8.1: The BeagleBone Black PCB

8.2 Configuration for coding development

Although the BeagleBone Black is a relatively powerful computer, it is not desirable to develop the C/C++ algorithms on the BBB itself. The installed Ångström system is very basic in its functionality, and although the BBB is powerful compared to other embedded computers, it is still not a fully-fledged computer: it will struggle to run several high end programs, such as Eclipse, simultaneously, due to its limited RAM and processor power. It is therefore preferable to do all the development on the laptop, which has high end functionality, speed and many supporting applications, and then simply compile the code on the BBB. This method allows for the rapid development of code in a safe, stable environment while allowing compilation onto any embedded computer.

8.2.1 Installation of VirtualBox and Linux guest

As most embedded computers run a Linux based operating system (Debian, Arch Linux, Ångström), it is favourable to develop the code in a Linux based environment; the code can then simply be copied across to the embedded system and compiled. As the laptop used runs a Windows 7 64-bit operating system, a Linux guest was required to develop the code. A virtual machine running Xubuntu was selected in favour of a dual boot system, to protect the computer from any mishaps or corrupt commands in Linux. VirtualBox V4.2.16 was installed and configured to run Xubuntu V4.1 as a guest.


Xubuntu was selected over regular Ubuntu owing to Ubuntu's slow speed in VirtualBox. Xubuntu was much faster with its basic user interface and, in the eyes of the author, is more visually appealing. Once Xubuntu was installed, a few configurations for prototyping with the BBB were tried.

8.2.2 Development on the computer alone

Using the Eclipse integrated development environment, C/C++ code was to be developed on Xubuntu in the virtual machine, compiled using the Intel compiler and tested on the laptop. Once the code was working correctly, it would then simply be copied across to the BBB, compiled using the ARM g++ compiler and run on the BBB. However, this configuration did not work in practice. It was found that although VirtualBox allows the connection of USB devices, it does not function with external web-cameras: Xubuntu detects the web-cameras, verifies the drivers and communicates with them, but the video feed is simply black. Research showed that this is an ongoing, unsolved problem with VirtualBox [43]. Therefore, either Xubuntu would need to be installed as a dual boot or another method would need to be found.

8.2.3 Cross compilation

Once again avoiding a dual boot, a cross-compile configuration was attempted. A remote link to the BBB was configured in Eclipse, allowing remote browsing of the BBB through Eclipse. Code would still be developed on the computer, but compiled for the BBB using an ARM compiler; the executable could then be copied across and run on the BBB. This method worked for simple programs but failed once OpenCV libraries were included. It was found that the OpenCV libraries installed on Xubuntu contain object files compiled for Intel processors; they were therefore not compatible with the ARM compiler at the linking stage. The process of compiling OpenCV on Xubuntu for ARM processors is extremely complex and was avoided. One final method was attempted.


8.2.4 Mounting the BBB and Secure Shell

The BBB was mounted on the computer through the Secure Shell File System (SSHFS) to allow access to its file system through the file explorer. This essentially makes the BBB accessible like a flash drive. A Secure Shell (SSH) link was also created to allow executable commands and a live video feed through the terminal. An Eclipse project was created on the BBB and run from the laptop. The code was then backed up onto Xubuntu and linked to Google Drive for a cloud back-up. A custom makefile was created to compile the code, include all the necessary libraries and link all the objects together. This allows the code to be compiled and tested on its intended platform while using the high end functionality of the laptop for development, offering the best of both worlds.

8.3 Interfacing the BBB with peripherals

Although the BBB features many communication interfaces and GPIOs, all that is required are two USB 2.0 type A connections and one Tx line. The USB connections are for the cameras, and the UART Tx line is used to transfer commands to the boat's control system. As the BBB has only one USB type A connection, a USB hub is required to interface both cameras. A Lexma 4-port USB 2.0 hub was sourced to connect the cameras and any other USB devices that may be required. The drivers for the hub were installed and the cameras operated correctly through the hub.

The hub offered an optional external power connection. This was connected to reduce the load on the BBB by supplying current directly to the hub for the peripherals. However, it was found that once this was connected, the BBB failed to identify the hub or its peripherals. After inspection of the BBB circuit diagram and the USB hub PCB, the cause of the problem could not be determined. It was, however, noticed that the BBB supplies 4.8 V to the USB while the hub was supplied with 5 V from its voltage regulator. There may therefore have been a large current flow due to the voltage imbalance once the two supplies were connected, causing the BBB to shut down its USB port for protection. After monitoring the current drawn by the cameras, it was determined that external power was in fact not required. The power connection to the hub was removed, solving the problem. All USB peripherals are therefore connected through the hub and operate correctly without an external supply.


The BBB need only transmit data to the boat's controller, informing it of a target and its location; therefore only a Tx UART is required for the data transmission. The pin muxing was configured to establish a UART connection through UART5 on the BBB, and the link was tested via a serial monitor and found to function correctly. The green UART cable connecting the BBB to the boat controller can be seen in Figure 8.8.
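A minimal sketch of such a transmit-only link using the standard POSIX serial API is shown below; the device path, baud rate and message format are assumptions, not taken from the report:

    #include <fcntl.h>
    #include <termios.h>
    #include <unistd.h>

    // Open the UART (UART5 typically appears as /dev/ttyO5 on the BBB)
    // and send a short obstacle report to the boat controller.
    int main() {
        int fd = open("/dev/ttyO5", O_WRONLY | O_NOCTTY);
        if (fd < 0) return 1;

        struct termios tio;
        tcgetattr(fd, &tio);
        cfsetospeed(&tio, B9600);        // baud rate is an assumption
        tio.c_cflag |= CS8 | CLOCAL;     // 8 data bits, ignore modem lines
        tcsetattr(fd, TCSANOW, &tio);

        const char msg[] = "OBST,R,2.4\n";   // hypothetical message format
        write(fd, msg, sizeof(msg) - 1);
        close(fd);
        return 0;
    }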

8.4 Converting Matlab to C/C++

Once the BBB was configured, set up for coding and all the peripherals were installed, the Matlab algorithm could be implemented in C/C++ on the BBB. Fortunately, Matlab features a coder that can convert Matlab code directly to C/C++ for implementation.

8.4.1 Matlab Coder

The Matlab Coder is a tool in Matlab where the entry file of the program is identified and the coder converts the code to C/C++, producing the header and source files [44], [45], [46], [47]. This is a very attractive method for simple programs or functions, but it was generally avoided for three main reasons:

• The coder produces C/C++ code that is very difficult to follow or read. This makes it very difficult to troubleshoot the algorithm if there are errors and may cause problems when interfacing with hardware.

• The coder only supports certain built-in Matlab functions. Many functions used in the obstacle detection code were not supported by the coder, and the generated code contains many inefficient vector operations.

• The Matlab code is not optimised, and the generated C/C++ code may not be either. This would result in an inefficient algorithm.

For these reasons the entire algorithm was rebuilt from scratch in C/C++, except for one function where the coder was used. A range calculation function, which takes in the coordinates of the target in the left and right images, applies the trigonometry and returns the target range, was converted using the Matlab Coder. This is because the function is a standalone function, not requiring any interfacing with hardware, and simply applies a mathematical procedure. Once the range calculation function was converted, the produced C code was compiled on a Windows computer and compared against the Matlab function over the range of inputs across the image. The results were identical, verifying the C code to function correctly. This function was then compiled with the ARM g++ compiler and included in the C/C++ code on the BBB.

8.5 Coding in C/C++

Interfacing hardware with C/C++ code can be a tricky task. Fortunately, there is a C/C++ computer vision library, called OpenCV, which provides many built-in functions to help. Using OpenCV, the code was developed from scratch following the same stages as the Matlab code shown in Section 7.2. One of the largest areas of concern was whether or not the low cost web-cameras would allow simultaneous capture through USB.

8.5.1 Retrieving camera frames simultaneously

One important criterion for the C/C++ code is that the two cameras must sample the environment at exactly the same time, or within a tolerable interval. The Matlab code had a two second delay, which is not acceptable. Using the standard cvQueryFrame function in OpenCV, the gap between the two cameras sampling was 1.5 seconds on average, also an unacceptable delay. The cameras do not support hardware synchronisation, as they are low quality web-cameras; the problem must therefore be corrected in software.

The most common solution is to build and compile custom drivers for the cameras to allow simultaneous capture. The underlying problem is that each camera unnecessarily claims all of the USB bandwidth when capturing an image, even though it is not all required; this can be changed to allow both cameras to share the bandwidth [48]. This is a complex procedure and was avoided by using a simpler approach that achieves similar results. Both cameras are first instructed to sample the environment, and the frames are stored in each camera's on-board buffer. This is a very quick operation, as it is simply a command sent to the cameras. The frames are then retrieved from the cameras after both have sampled the environment. This way, the overhead of decoding and JPEG decompression is removed from the capture instant, and the frames stored in the code are far closer in time. Both methods were tested by sampling a stopwatch running on a computer screen. The results of a five sample test using both methods can be seen in Figure 8.5.1. Note the computer screen has a refresh rate of 60 Hz; the true result could therefore lie anywhere in the region shown by the error bars.

(a) The sample delay between the left camera taking a picture and the right camera taking a picture using the standard method.

(b) The sample delay between the left camera taking a picture and the right camera taking a picture using the grab and retrieve method.

It is clear that first requesting frames from both cameras and only then retrieving them significantly decreases the difference in frame capture times. The average time difference between captures decreased from 1.592 s to 0.022 s. A delay of 0.022 s between the left and right cameras capturing the scene is acceptable. The camera control code is therefore complete, and the obstacle detection code can be developed.
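In the OpenCV C++ API this corresponds to the split grab/retrieve calls; the sketch below illustrates the pattern under the assumption of camera indices 0 and 1, and is not the report's actual code:

    #include <opencv2/opencv.hpp>

    int main() {
        cv::VideoCapture left(0), right(1);   // camera indices assumed
        cv::Mat frameL, frameR;

        // Issue the cheap grab commands back to back so both sensors
        // latch a frame at nearly the same instant...
        left.grab();
        right.grab();

        // ...then pay the decode/decompression cost afterwards.
        left.retrieve(frameL);
        right.retrieve(frameR);
        return 0;
    }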

8.5.2 C/C++ implementation of SURF

Fortunately, the SURF algorithm implementation is similar in Matlab and C/C++, so the same algorithm and parameters were used in the C/C++ implementation as in the Matlab implementation. The C/C++ algorithm was tested following the same procedure as in Section 5.5 and verified to work correctly. A few minor adjustments were made to the Hessian threshold, but the final results were identical.
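For reference, detecting SURF keypoints with OpenCV 2.4 looks roughly as follows; the Hessian threshold value here is illustrative, not the tuned value from the report:

    #include <vector>
    #include <opencv2/opencv.hpp>
    #include <opencv2/nonfree/features2d.hpp>  // SURF lives in the nonfree module

    // Detect the most prominent SURF features in a grey-scale frame.
    std::vector<cv::KeyPoint> detectObstacles(const cv::Mat& grey) {
        cv::SURF surf(400.0);                  // Hessian threshold (assumed value)
        std::vector<cv::KeyPoint> keypoints;
        surf(grey, cv::Mat(), keypoints);      // no mask; keypoints only
        return keypoints;
    }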


8.5.3 Completing the C/C++ algorithm

The remaining sections of the Matlab algorithm, such as the block matching and various other components, were ported across and regression tested, following the same development procedure. The image rectification and range calculation Matlab code was converted to a C function by means of the Matlab Coder, as explained in Section 8.4.1. The ported algorithm varies slightly from the Matlab algorithm; however, the final product produces the same results. A simplified version of the main code function as implemented on the BBB is shown in Section 12.5 of the Addenda. A few problems were encountered during the coding and testing process; the following subsections detail these problems and how they were solved.

8.5.4 Ensuring up to date samples

It was noted that the frame processed by the embedded system was not the latest sampled: there was a significant delay between an obstacle appearing in front of the cameras and the system detecting it. Testing the cameras on Windows showed that the problem lay not in the hardware or the high level software but in the Linux operating system itself. After experimentation it was found that the Linux system on the BBB has a five frame buffer built into its camera handling in software: each frame processed was one sampled five frames previously. There were two options for solving this problem:

1. Recode the camera drivers on the Linux system to remove the five frame buffer, rebuild the new drivers, recompile the kernel for the new drivers, and modify and recompile OpenCV to interface with the new drivers and hardware.

2. Manually flush the frame buffer each time a sample frame is required.

The second option was chosen. The first option would be the most efficient at runtime and would ensure that the picture taken is always the latest; however, it is extremely complex to implement and the results may not warrant the time required. The second option slows down the frame rate slightly, as a for loop manually flushes the buffer, but it is still fast enough for this implementation. The second option was implemented and verified to operate correctly: the embedded system now processes the most recent frame sampled, ensuring rapid obstacle detection. The execution time of the buffer flush varies greatly but was found to be about 1.5 s on average for a 320 × 240 pixel image. After further development, shown in later sections, this decreased to 0.08 s for a 160 × 120 pixel image. This is an acceptable cost, as it reduced the response time from 5.5 s without the buffer flush to 0.1 s with it, assuming a 1 s frame process time and a 0.1 s frame capture time.
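A minimal sketch of the chosen workaround, reusing the capture objects from the earlier grab/retrieve sketch and assuming the driver queues five frames (the depth observed in the report):

    // Drain the stale frames held by the driver's queue, then capture.
    const int kBufferDepth = 5;          // observed driver queue depth
    for (int i = 0; i < kBufferDepth; ++i) {
        left.grab();                     // discard queued (stale) frames
        right.grab();
    }
    left.grab();  right.grab();          // latch the current scene
    left.retrieve(frameL);
    right.retrieve(frameR);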

8.5.5 Required frame process speed

In order to determine the maximum allowable frame process time, the speed of the boat, the detection range and the handling characteristics all need to be considered. The boat is capable of speeds up to 4.5 m/s but will be tested in the range 0.3 m/s to 1.5 m/s. To ensure a safety margin, the boat requires about 1 m to turn without collision; this includes a 0.5 m safety margin. The cameras have a reliable detection range of up to 5.5 m. To ensure accurate detection, a threshold of 3 m is set: any obstacle within 3 m will be avoided.

Consider the worst case, where the boat is travelling at 1.5 m/s and a sample is taken just outside the 3 m threshold. The boat must be able to take another sample and process it before it crosses the no-go 1 m threshold. It therefore has 2 m in which to process two frames; at 1.5 m/s this is 1.33 s. The frame process time must therefore be below 0.667 s per frame, or equivalently the frame rate must be above 1.5 fps. This is shown graphically in Figures 8.2(c), 8.2(d) and 8.2(e), which show respectively the sample rate resulting in a collision, the rate avoiding a collision but entering the safety margin, and the maximum allowable rate that avoids the safety margin.

When first implemented, a frame process time of 2.5 s → 3 s on average was observed. Adding the frame buffer flush increased this to 3 s → 4.5 s. This process time needed to be decreased substantially, and the following methods to increase the frame rate were attempted.

Increasing executable priority and processor speed

Although the BBB has a 1 GHz processor, it is not always running at this speed: it often enters a power save mode or may have a lower maximum clock frequency set. A script was therefore written to initialise the clock to full speed at start up. Unfortunately, this did not decrease the frame process time. The main executable was also given the highest scheduling priority (minimum niceness) to rank it as a top priority process. Unfortunately, this did not make a significant difference either.

Effects of resolution

Each frame was originally pulled from the camera in RGB24 format at 640 × 480 pixels in the Matlab code, as shown in Section 6.2.1. However, the drivers for the cameras on the BBB did not support this resolution, so the resolution was lowered to 320 × 240 pixels on the BBB. Images are converted to grey-scale for processing, which increases the speed by a factor of three, as each pixel is represented by one 8-bit number rather than three 8-bit numbers in RGB. To decrease the frame process time further, the resolution was decreased again: the cameras were instructed to grab pictures at 160 × 120 pixels rather than 320 × 240 pixels. This should theoretically decrease the process time by a factor of four, and indeed it did: the new process time over a number of samples was 0.7 s → 0.85 s. Although this is still above the required 0.667 s, it was assumed to be the lowest achievable.

Reducing processor load

The BBB had many programs installed during code development that were no longer necessary, and many background scripts that were no longer used. These ran in the background and were assumed not to interfere with the main program. Eventually the BBB became corrupt after an erroneous script was implemented and required reformatting. The BBB was reformatted using the latest Ångström image, reconfigured and the code installed. When the code was run on the new clean system, a frame process time of 0.3 s was observed consistently, owing to the reduced load from background applications. The boat therefore has twice the required frame rate, increasing robustness and achievable test speeds. The achieved result is shown graphically in Figure 8.2(f).


Mapping the new resolution

The cameras were calibrated using 640 × 480 pixel images in Matlab. Reducing the resolution requires either a new calibration or a mapping from the 160 × 120 pixel image back to the 640 × 480 pixel calibration space. The code was configured to take this into account and map the pixel positions across. The reduced resolution decreased the frame process time by a factor of four at the cost of range accuracy: the mapping reduces the pixel accuracy by a factor of four; however, the range estimation is still well within the tolerable bounds for its environment.
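The mapping itself is a straightforward scaling, sketched below under the assumption of a uniform 4× relationship between the two resolutions; the function name is illustrative:

    #include <opencv2/core/core.hpp>

    // Map a pixel coordinate found in the 160x120 capture back into the
    // 640x480 space in which the cameras were calibrated.
    cv::Point2f toCalibrationSpace(const cv::Point2f& lowRes) {
        const float kScale = 640.0f / 160.0f;   // = 4 in both axes
        return cv::Point2f(lowRes.x * kScale, lowRes.y * kScale);
    }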


(c) The frame process time at which a collision will occur at maximum speed.

(d) The frame process speed slightly above the desired value. The boat clears the obstacle but enters the 0.5 m safety margin.

(e) The threshold frame process speed at which the boat will safely avoid the obstacle with a 0.5 m safety margin.

(f) The achieved frame process speed. This allows for robust sensing or an increase in the speed of the boat.


8.6 Wireless communication with the BeagleBone Black

The BBB is accessed via Secure Shell (SSH) with executable commands to allow the video feed, GUIs and other visual data to be viewed on the host PC during runtime. Programs such as "Cheese" are used to configure the cameras and check functionality, all from the remote laptop. This is very useful for inspecting false triggers and viewing frames of the boat's view. However, it requires the boat to be tethered, as communication is via a USB cable and an on-board USB-to-Ethernet driver. For accurate testing and the safety of the electronics, a wireless link needs to be established.

As shown in Section 2.2.2, there are various methods for establishing a wireless link with the boat, of which the most suitable technologies were 3G and wireless LAN. As the boat is still in the testing stage, it will not operate outside of Wifi range, which is quite large for line of sight communication; Wifi was therefore selected for wireless communication. The boat is required to establish a remote access point (AP) to which the laptop can connect, so a Wifi adaptor is needed on the boat. Initially a D-Link DWA-125 USB wireless adapter was used. However, once the drivers were configured, it was found to use too much USB bandwidth and power; furthermore, it was not able to establish an AP, only join one.

The solution was to fit a Wifi nano router to the boat and connect it to the BBB through Ethernet. This solves the bandwidth issue by communicating via Ethernet rather than USB, enables the boat to establish a remote AP, and, as the router is a standalone unit, it could be fitted with its own regulator to solve the power issue. A small Edimax BR-6258n 150 Mbps Wireless Broadband Nano Router was installed, as shown in Figure 8.8, and connected to the BBB Ethernet port via a custom yellow Ethernet cable with RJ45 connectors. Once configured, the Edimax nano router enabled a remote connection over a tested range of 50 m, with the maximum range possibly further. Full wireless access to the boat's file system, remote code editing and video feedback was now established.


8.7 Test platform selection

Although the system is designed to be implemented on a sail boat, the obstacle detection system functions independently of whether the boat is powered by wind or motor. Only the obstacle avoidance algorithm is affected, as a yacht cannot navigate into the wind whereas a power boat can. The chosen test platform is therefore a powered mono-hull model boat. Testing on a power boat eliminates the inconsistencies due to the wind and allows repeat tests to be conducted regardless of wind direction. The boat used was selected for its wide stern and size. The battery was mounted as low in the hull as possible to increase stability. A picture of the hull can be seen in Figure 8.2.

Figure 8.2: The empty hull of the boat used for testing

8.7.1 Test platform actuation

The boat has two controllable parts: the rudder and the propeller. The rudder is actuated by an analogue JR591 hobby servo mounted in the stern of the boat. A balanced rudder system, as detailed in Section 2.2.3, would be used in a final implementation to save input energy; for this test platform, however, a standard rudder suffices. Propulsion comes from a BCL51G brushed DC motor, selected for its high torque and ease of control. The motor is mounted in the bow for weight distribution and to ensure the prop-shaft seal is above the waterline.


8.8 Boat control hardware

Once the BBB was configured and all the peripherals verified to work correctly, the system was installed in the boat. Although the BBB and camera module are the focus of this project, a low level module is required to control the boat's actuators and run basic commands for the system. This simulates the larger yacht system that the module is intended to integrate with. Any micro-controller with the required interfaces would suffice for this job.

8.8.1 Control module development

The micro-controller will need to have the following:

• 2 × UART
• 3 × PWM
• 3 × GPIO

An Arduino Mega micro-controller was selected as the control module, as it satisfies all these requirements, is easy to program, is well documented and uses 5 V logic. A shield was designed and hand-made for the Mega to interface the high current components such as the servos and motor. As the Mega uses 5 V logic, the servos and controller interface directly, and all that was required was header pins for their connections. The shield therefore carries an LM7805 linear 5 V regulator in a TO-220 package, supplying 5 V for each servo. Two LEDs were connected to GPIO ports for general use during testing. Molex connectors were used to provide data and power connections for the GPS module and servos, and data connections for the BBB.

Motor control hardware

As the boat simulates a yacht, which cannot reverse, an H-bridge is not required for the motor control. A simple MOSFET switching configuration was chosen to control the drive motor's speed. A high power HUF75339P3 MOSFET was selected for its fast switching time and high current rating [49]. A 1 A freewheel diode was fitted to circulate the back EMF from the motor, protecting the MOSFET from large current and voltage spikes. The PCB design of the shield can be seen in Figure 12.3 of Section 12.7 in the Addenda.

The designed system worked well, with the MOSFET running cool during all low speed tests. However, when the speed of the motor was increased, the MOSFET began to get very hot. This should not occur, as the MOSFET is switched hard on rather than operated in its linear region, so the power dissipated in the MOSFET should be minimal: the switching losses are small given the fast slew rates of this MOSFET, and its on-state resistance is very low. Eventually the external freewheel diode failed into a short circuit, breaking a track on the PCB. The cause was found to be the increased inductive kick created by the motor, which was drawing more current at high speed. Simply replacing the diode would have solved the immediate problem, but the MOSFET would still run hot and the diode could blow again. The solution was to add a second diode in parallel with the original, increasing the number of paths for current to flow, plus a high power diode for very high speed tests. This was tested by running the motor at full speed and increasing the load until a stall occurred. The MOSFET no longer gets hot and the diodes cope at all PWM frequencies and motor speeds. A heat sink was added to the MOSFET to dissipate any remaining heat if necessary.

8.9 Boat control software and sensor requirements

The BBB module is a standalone module that can be fitted to any autonomous ocean vessel. For testing, it requires a control system to interface with, to execute its commands in hardware. The Arduino Mega and shield perform this job on the boat, programmed with firmware to interpret the BBB's commands and execute the necessary manoeuvres.

8.9.1 Arduino Mega firmware

The firmware programmed onto the Arduino Mega simply reads the serial port from the BBB and performs the required operations as commanded. In parallel, it continuously updates its current heading using a GPS and runs a timer interrupt to ensure constant sample times for the on-board controller. A flow diagram of the code can be seen in Figure 8.3.

8.9.2 Collision avoidance algorithm

No dynamic collision avoidance algorithm is implemented in the system at this stage. When an obstacle is detected, a predetermined manoeuvre is executed to move the boat into a free area of water: the boat turns rapidly while the cameras pan ahead, monitoring for oncoming obstacles, and should another obstacle appear, the boat is steered into an area with no obstacles (see the sketch below). The system should be able to interface directly with the path planning system for a dynamic approach. As shown in Section 2.5.1, the Arduino Mega can feed each obstacle's position and range into the path planning polar diagram; the path planner will then recalculate the optimal path using the modified polar diagram, avoiding the areas containing obstacles.
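A minimal sketch of such a predetermined manoeuvre, alternating the turn direction on successive detections as described in the testing chapter; the helper function and constants are illustrative, not the report's firmware:

    // Alternate the escape turn each time an obstacle is reported, so the
    // boat does not keep turning back into the same obstacle.
    static bool turnLeftNext = true;

    void onObstacleDetected(float rangeM) {
        if (rangeM > 3.0f) return;       // outside the 3 m avoidance threshold
        if (turnLeftNext)
            setRudder(-RUDDER_FULL);     // hard to port (hypothetical helper)
        else
            setRudder(+RUDDER_FULL);     // hard to starboard
        turnLeftNext = !turnLeftNext;    // alternate for the next detection
    }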

8.9.3 Software control requirements

For the boat to track various headings and follow set paths, a controller is required. This allows it to maintain a set heading, follow a predetermined course and avoid obstacles with minimal energy. A minimal energy controller is vital to reduce wear on components and to reduce the input energy into the actuator. As shown in Section 2.2.4, the three inputs required to achieve adequate control are:

• Current heading
• Desired heading
• Turn rate (prevents overshoot of the desired heading)

The current heading is measured using a sensor, the desired heading is predetermined by software, and the turn rate is simply dθ/dt, where θ is the current heading and dt is the sample time.


8.9.4 Selection of heading sensor

A sensor is required to measure the current heading of the boat. There are two options for a heading sensor that can be fitted to the boat:

• Magnetometer
• GPS

The GPS sensor was selected for its ease of implementation and its ability to provide a wealth of information beyond the heading. It does not require any calibration and can be used to integrate many more features, such as waypoint tracking, if required. A GlobalTop FGPMMOPA6H standalone GPS module was selected for its small size, low power consumption (66 mW) and built-in 15 × 15 × 2.5 mm ceramic patch antenna [50].

8.9.5 Heading sensor integration

The GPS module comes in a small 16 × 16 mm SMD package; a breakout PCB is therefore required to integrate it with the system. A surface mount PCB was designed and hand-made, as shown in Figure 12.4 in the Addenda. Care was taken when hand soldering the module not to overheat the pins; the temperature profile given in the data sheet was used as a guide. The breakout board features an ultra-bright LED to indicate the GPS 3D fix status and a CR2032 button cell lithium battery used for warm starts during testing. The GPS is not integrated directly into the Arduino Mega shield, so that it can be removed should the path planning system, which features its own GPS, be fitted. Custom code was written to extract the heading information from the GPS.

GPS software

The GPS streams strings using the National Marine Electronics Association (NMEA) protocol [51]. The module is capable of outputting many strings containing a wealth of information; however, the following strings are transmitted automatically:

• GPGGA: Global Positioning System fix data
• GPGSV: GPS satellites in view
• GPGSA: GPS DOP and active satellites
• GPRMC: Recommended minimum specific GPS/Transit data

GPRMC is the string containing the required data. Code was written to read the incoming strings, search for the heading information in the GPRMC string, extract the relevant bytes and convert them into an integer. This integer is then passed to the controller as the current heading.
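A minimal sketch of that extraction is shown below; it counts comma delimiters to reach the course-over-ground field of the GPRMC sentence, and the function name is illustrative rather than the report's actual code:

    #include <stdlib.h>

    // $GPRMC,hhmmss,A,lat,N,lon,E,speed,course,date,...
    // The course over ground (degrees) follows the 8th comma.
    int parseHeading(const char *sentence) {
        int commas = 0;
        while (*sentence) {
            if (*sentence++ == ',' && ++commas == 8)
                return (int)atof(sentence);   // heading in degrees
        }
        return -1;                            // field not found
    }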

8.10 Controller design

A controller was designed to adjust the boat's heading with minimal actuator movement, overshoot and oscillation. The process followed in the design of the heading controller is shown below.

8.10.1 Functional requirements

The controller must be designed to ensure that:

• The closed loop control is stable.
• The closed loop system rejects input and output disturbances effectively.
• The system is sufficiently damped and does not oscillate.
• The system has an adequate settling time.
• The internal system signals are within the range 0-255.
• The sample rate is within hardware limitations, such as processor speed and ADC sample rates.


8.10.2 Control system structure

The rudder control system can be represented by block diagram algebra; the completed block diagram is shown in Figure 8.4. The system is a single input, single output (SISO) feedback configuration with input r(t) and output y(t). The system may experience input disturbances, modelled by v(t).

Figure 8.4: The control architecture can be described by this block diagram linking all the transfer functions in a control loop.

8.10.3 Component blocks

Each component of the system is represented by a block transfer function. Each block’s function is described below.

Pre-filter (p(s)) can be used as a low pass filter to rid the input signal of any high frequency components, such as noise, or to apply a simple gain to the signal if required.

Controller (k(s)) is responsible for the control of the loop. It can take the form of a P, PI or PID controller, a state-space controller or a quantitative feedback theory (QFT) controller.

Feedback element (f(s)) is used in the control law, often as a low pass filter to rid the feedback of sensor noise. In this case it will most likely be a moving average filter to smooth the heading readings from the GPS sensor.


System (g(s)) is a mathematical model representing the handling characteristics of the boat. It relates the input to the servo to the response of the boat.

8.10.4 Design of controllers

Various architectures could be used for the control of the boat. However, methods such as QFT and state-space control require an accurate model of the boat's dynamics. A mathematical model of the boat could not be obtained, as prevailing weather conditions during the design period severely limited access to water testing. A PID controller was therefore selected, as it can be implemented and roughly tuned on land during design and then quickly fine-tuned on the water.

8.10.5 Implementation of the PID controller

The PID controller can be considered as three controllers, a proportional, an integral and a derivative controller summed in parallel as shown in Figure 8.5. The effects of each component are explained below.

Figure 8.5: A PID block diagram showing its three components, a proportional term, integral term and derivative term.

The proportional term is responsible for the rate of turn of the boat. A large proportional gain results in a large change in the output for a given change in error. Should the proportional gain be too large, the system will become unstable; should it be too small, the system will be slow, less sensitive and have poor disturbance rejection.

The integral term accelerates the movement of the boat towards the set-point and ensures accurate set-point tracking. Care must be taken when using the integral term as will be explained in Section 8.10.6.

The derivative term predicts system behaviour, improving settling time and system stability by attracting the loci, and thus the closed loop poles of the system, into the negative s-plane. The derivative term does, however, increase the control loop's sensitivity to measurement noise. To mitigate this effect, the input from the GPS is passed through a moving average filter to smooth the output. The general form of the PID equation is:

Control = (kP × Error) + (kI × Σ Error) + (kD × dP/dT)        (8.1)

where Error is the difference between the actual heading and the desired heading, Σ Error is the accumulation of previous errors, and dP/dT is the rate at which the current heading is changing. The proportional coefficient kP, integral coefficient kI and derivative coefficient kD are gain coefficients which tune the PID controller [52], [53], [45].

8.10.6 Pseudo code of PID control algorithm

The pseudocode for the basic algorithm as implemented is shown below. Note that the derivative is the change in error divided by the sample time:

While (Actual != Setpoint)
    Error = Setpoint - Actual
    Integral = Integral + (Error * dt)
    Derivative = (Error - Error(n-1)) / dt
    Control = (Error * kP) + (Integral * kI) + (Derivative * kD)
    ControlOutput = scalar * Control
    Error(n-1) = Error


The full PID code function as implemented in C can be seen in Section 12.6 of the Addenda. Firstly, the current heading of the boat is retrieved from the GPS using the GetHeading() function. The GPS often produces spurious headings, which are checked for; if the heading is not valid, the function is called again.

The use of an integral term can be problematic. If the boat's set-point is far from its current heading, the integral term grows very large and produces a dominating effect, slowing the response time. This is known as "integral windup". The solution is to zero the integral term while the error is too large, so that the integral term is only applied when the current heading is close to the set-point. The control output must be in the range 0 → 255, so a scalar is applied to the control command to prevent saturation of the servo commands. The requested servo position is then checked to ensure it is within the safe bounds of the servo; if not, the servo is moved to its maximum position in the required direction.
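A minimal sketch of the windup guard and output clamping described above; the gains, threshold and limits are illustrative, not the tuned values from the report:

    #include <math.h>

    // Hypothetical gains, limits and state; not the report's tuned values.
    float kP, kI, kD, scalar;
    float dt = 0.1f, integral = 0, prevError = 0;

    int pidStep(float setpoint, float heading) {
        float error = setpoint - heading;
        if (fabsf(error) > 45.0f)             // far from the set-point:
            integral = 0;                     //   suppress integral windup
        else
            integral += error * dt;
        float control = kP * error + kI * integral
                      + kD * (error - prevError) / dt;
        prevError = error;
        int out = (int)(scalar * control);    // scale into the 0-255 range
        if (out < 0)   out = 0;               // clamp to safe servo bounds
        if (out > 255) out = 255;
        return out;
    }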

8.10.7 Controller tuning methods

Although there are analytical tuning methods using, for example, root locus, Nyquist or Nichols diagrams, the system was tuned manually by observing the system response to a set of coefficients and adjusting the controller through an iterative process. This method was selected as satisfactory results can be obtained in one test session and, with limited access to test water, it was the only viable option. Tuning the control loop is the process of adjusting the control parameters to the optimum values to achieve the desired system response. The coefficients kP, kD and kI are all initially zeroed, and the following procedure is then followed to tune the system.

Tuning the proportional term (P)

The proportional term is increased until the boat turns towards the set-point rapidly and then begins to oscillate around it. The proportional gain is then decreased slightly until the boat is just below the point of oscillation around its set-point. This produces the fastest response while ensuring stability.


Tuning the derivative term (D)

The derivative term is then increased to prevent the start of oscillations. It is also used to mitigate oscillations due to system disturbances such as wind or water ripples.

Tuning the integral term (I)

The integral term is then increased to provide accurate set-point tracking. There is a trade-off between overshoot and fast response time. Overshoot is tolerable in this test platform; however, it may not be tolerable when implemented in a yacht, as it may cause the yacht to stall when beating upwind.

8.11 Power integration and supply

When all the components are integrated into the boat, a large amount of power is required. To ensure each component receives its required voltage and current, a table was created to formalise the requirements. The main components of the boat, their average current consumption and required voltage are shown in Table 8.1.

Table 8.1: The voltage required and current consumed by the major components in the boat during operation.

Component                          Average Current      Rated Voltage
2 × JR servo                       200 mA (typical)     4.8 V - 6 V
Edimax BR-6258n Wifi router        700 mA               5 V
BeagleBone Black                   500 mA - 800 mA      5 V
Arduino Mega                       60 mA - 100 mA       7 V - 12 V
GPS module                         25 mA                3.3 V
BCL51G DC motor                    2 A - 4 A            4 V - 9 V

As can be seen in the table, the components require various input voltages; voltage regulators are therefore required to provide a constant voltage to each component.


8.11.1 Selection of voltage regulator

Ideally, switch mode regulators would be used for the 3.3 V and 5 V components for efficiency, which is of great importance on remote systems. However, for this test platform standard linear regulators were used, as they are readily available and come in an easy to use TO-220 package. This allows them to be used on veroboard and on homemade PCBs with ease.

8.11.2 Voltage regulator integration

For the 5 V supply, LM7805 linear voltage regulators are used. The LM7805 is rated to 1.5 A, so all the 5 V components could not be connected to one regulator. This would in any case not be ideal, as the servos have a 1 A stall current, which could strain a regulator powering too many components. Therefore the BBB, the router and the servos each have their own 5 V regulator, for protection and redundancy. The system's power diagram is shown in Figure 8.6, where the power supply to each major component can be seen.

The Arduino Mega has an on-board 5 V linear regulator used to power the ATMEL processor and on-board sub-systems. External 5 V components attached to the Mega are powered by an LM7805 regulator fitted to the shield, to reduce the strain on the on-board regulator. The Arduino Mega also features a 3.3 V linear regulator, which is used to power the GPS unit, so an external 3.3 V LM317 voltage regulator is not required. The on-board 3.3 V regulator is limited to 50 mA of supply current; the GPS draws at most 35 mA, so the regulator is not at risk.


Figure 8.6: A basic power diagram showing the connections between each main component and the power sources.

8.11.3 Selection of batteries

The boat requires a fair amount of power during operation, up to 6 A with the motor engaged and running. Initially it was fitted with one 7.2 V 4500 mAh NiMH battery to power all the components. This battery was used as it was available and has a relatively large capacity. However, when the motor speed was increased during testing, the battery did not last long and could not supply the required current. This resulted in a brown-out of the Arduino Mega and caused the cameras to malfunction, producing errors during runtime. To solve this, a 3-cell 11.1 V 2200 mAh LiPo battery was added to power the Arduino Mega, motor and servos. This solved the energy capacity problem and reduced noise on the data lines. Ideally, one high capacity LiPo battery would power the whole system, as the technology's high energy density would reduce the boat's weight; however, one could not be sourced, so two batteries were used.

8.11.4 Battery care

The boat features two batteries of different technologies: a 4500 mAh NiMH battery and a 2200 mAh LiPo battery. Care must be taken to ensure they are maintained and charged correctly, as the two technologies require their own chargers and need to be looked after differently.

NiMH battery care

The nickel-metal hydride (NiMH) battery is a six cell battery pack. Care must be taken during its use not to over-discharge the battery: a complete discharge can result in one or more cells going into polarity reversal, which can cause permanent damage to those cells. Fortunately, when the battery is low, the 5 V regulators drop their supply voltage and the BBB automatically shuts down. A fully charged cell supplies an average of 1.25 V/cell during discharge, down to about 1.0-1.1 V/cell when the battery is flat. The LM7805 has a rated drop-out voltage of 2 V, so when the battery drops below 7 V it should theoretically malfunction; during testing, however, it was found that the battery could drop to 6.5 V before the regulators were affected. To charge the battery, it can be trickle charged at C/10 amperes for 15 hours or fast charged at C/3.33 amperes for 5 hours, where C = 4500 mAh. The battery is therefore charged at 1.4 A for 5 hours. An Apex α fuzzy logic charger is used to monitor the battery during charging.

LiPo battery care

LiPo batteries are somewhat more temperamental than NiMH batteries and require care in use. The battery pack contains three LiPo cells rated between 3V and 4.2V. During discharge, it is vital that no cell voltage drops below 3V. To prevent this, an alarm monitoring each cell is fitted to the boat: if any cell drops below 3.3V, an alarm is raised and the boat is brought back to shore at low power. Extremely high discharge currents of up to 70C can be reached with this battery. LiPo batteries cannot be trickle charged; they are charged at a constant current until each cell reaches 4.2V, after which the charger switches to constant-voltage mode until the charge current has decayed well below its initial value. The LiPo battery is charged using a Supermate DC6 charger/balancer.
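For reference, the quoted ratings translate into the following limits for this 3-cell 2200mAh pack: a maximum discharge current of 70C × 2.2Ah = 154A, and a pack voltage window of 3 × 3V = 9V up to 3 × 4.2V = 12.6V. Since the boat draws at most about 6A (≈2.7C), the discharge limit is never approached; the voltage floor is the constraint that matters.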


8.12 System Integration

The complete system diagram, showing the connections of each component, is shown in Figure 8.7. Here, the communication technique and important details of each component are shown. Each time a new component was added to the system, a complete regression test was performed to ensure that the boat operated correctly. The complete system, as fitted in the boat, can be seen in Figure 8.8, and the final boat can be seen in Figures 8.9 and 8.10. To ensure the system functions correctly, a thorough real world test is required. With all the extra components added, the boat has become quite heavy. Should any more components be added, or testing in rougher water conditions be required, the system will need to be transferred to a larger hull: any more weight will push the prop-shaft seal below the waterline, potentially causing a leak. Unfortunately, this hull is limited to flat water testing only, owing to its low bow and exposed circuitry.


Figure 8.3: A UML diagram showing the code flow of the firmware on the Arduino Mega micro-controller.


Figure 8.7: The system diagram of the boat's components showing their connections, communication protocols and data flow directions.

Figure 8.8: A top view of the boat showing the components as fitted.


Figure 8.9: A view of the boat from the front left.

Figure 8.10: A top view of the boat.


Chapter 9

System Testing and Results

Once the system was installed and connected, the boat went through various testing stages. The first of these was land based, to ensure all the electronics, communications and hardware function correctly. As the system does not have path-planning obstacle avoidance built in, a simple manoeuvre is programmed whereby, if an obstacle is detected, the boat turns either left or right depending on its last manoeuvre; a sketch of this logic follows.
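A minimal sketch of this alternating manoeuvre is given below. It is illustrative only; the identifiers (ObstacleInRange, TurnLeft, TurnRight) are hypothetical placeholders and do not appear in the project firmware, which is listed in Section 12.5.

    // Sketch of the simple test manoeuvre: alternate the turn direction on
    // each detection. All identifiers here are hypothetical placeholders.
    bool turnLeftNext = true;

    void avoidIfNeeded() {
        if (ObstacleInRange()) {           // range has fallen below the threshold
            if (turnLeftNext) {
                TurnLeft();                // steer left past the obstacle
            } else {
                TurnRight();               // steer right past the obstacle
            }
            turnLeftNext = !turnLeftNext;  // remember the last manoeuvre
        }
    }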

9.1 Initial bench testing

The boat was placed on a test bench and a full diagnostics test was run. Firstly, the wifi was tested by logging into the system remotely. The BBB's file system could be mounted on the laptop's Xubuntu operating system and the board remotely controlled through the terminal, with a live video feed. The obstacle detection system on the BBB was then run and tested in a large clear room with no obstructions, with an obstacle suddenly placed in the camera's FOV to check the response. Here, the frame response time error detailed in Section 8.5.4 was detected and corrected; it showed up as a large delay between inserting an obstacle into the FOV and the boat responding. Once the corrected obstacle detection algorithm was verified to respond immediately to objects in the camera's FOV, the control servos were tuned and centred. The MOSFET driver for the motor was then checked by running the motor at various speeds and verified to run cool at all speeds. Once the land tests confirmed basic functionality, the boat was tested in a sheltered, stable water environment.


Before the boat was placed in water, a splash resistant cover was made to fit on top of the boat to protect the circuitry.

9.2 Initial water and obstacle avoidance algorithm testing

The boat was then placed in a sheltered swimming pool, where it was observed long enough to ensure no seals leaked and no water entered the hull. The motors were then run at increasing speeds while the boat was controlled manually over wifi. After some testing, the MOSFET problem detailed in Section 8.8.1 was encountered. This had not been picked up during bench testing, as the motor was unloaded on the bench, whereas the propeller load increases significantly in water. The MOSFET problem was corrected and the electronics then ran correctly. After 30 minutes of testing, the obstacle avoidance code began to produce errors while accessing the cameras, and the cameras shut down. After debugging the code and inspecting the circuit, it was found that the battery could not supply the current required when the motor was running at speed, causing the cameras to malfunction as described in Section 8.11.3. After the additional battery was added, this error was corrected. An image of the boat during its first manual control water test is shown in Figure 9.1.

Figure 9.1: The boat during its initial controlled water test. The waterline is below the seals and the boat handles safely.


Once the boat was confirmed to function safely in water, the autonomous control code was tested to ensure the system responds to obstacles. A small buoy was made and used in the pool for testing. The boat was tested at a run speed of 0.7m/s on a sunny, clear-sky day. The weather conditions during this test are summarized in Table 9.1.

Table 9.1: A summary of the weather conditions during the initial water test in the swimming pool.

Swimming pool test conditions
Temperature                20°C
Lighting                   Sunny with shadows
Wind                       ≈2knts
Water surface condition    Flat
Boat speed                 0.7m/s

9.2.1 No obstacle

To ensure the boat was not false triggering, and was in fact detecting the obstacle, a control test was run. The boat was run across the pool five times with no obstacle present to check for false triggers. The boat did not false trigger and ran consistently from one end to the other. A superimposed image showing the boat's path on one run is shown in Figure 9.2.

Figure 9.2: The boat's performance during a control test in the pool. The test was run five times to ensure the boat did not false trigger. This image shows the boat's path during a single test.


9.2.2 Single obstacle

The buoy was then placed in the pool and the test re-run five times. The boat consistently avoided the obstacle each time, with 1.5m to spare. The superimposed image of the boat's performance can be seen in Figure 9.3. Given this margin, the speed of the boat could be increased and its response to multiple obstacles tested; for this, a larger test area was required.

Figure 9.3: The boat's performance during a single obstacle test in the pool. The boat avoided the obstacle consistently in each test. This image shows the boat's path during a single test.


9.3 Final obstacle avoidance algorithm testing

The testing in the pool verified that the system does in fact work; however, the limits of the system needed to be investigated, and for this a larger test area was required. A comprehensive test was performed at Silvermine Reservoir in the Silvermine Nature Reserve, Cape Town. Owing to the prevailing weather conditions in Cape Town at the time, there was a single window for a wind-free, rain-free day in which to test over a three-week period, so all the testing had to be done in one outing. The number of runs for each test was reduced from five to three to conserve the batteries, as the boat's LiPo battery was good for about two hours of run time and there would be no time to recharge. The weather conditions during the test are summarized in Table 9.2.

Table 9.2: A summary of the weather conditions during the final water testing at Silvermine Reservoir.

Silvermine test conditions
Temperature                18°C
Lighting                   Sunny with high-level clouds
Wind                       0knts
Water surface condition    Flat
Boat speed                 0.7m/s → 1.7m/s


9.3.1 No obstacle

A control test was run to ensure the system did not false trigger. The boat consistently did not false trigger over three 20m straight runs, at speeds of ≈0.7m/s, ≈1m/s and ≈1.5m/s. The GPS-recorded data of the run can be seen in Figure 9.4, and an image superimposing the boat's position along the run in Figure 9.5.

Figure 9.4: The GPS data from the control test overlaid onto Google Maps using a GPS visualiser to monitor the speed and distance of the control run.


Figure 9.5: The boat's performance during a control test in the reservoir. The boat did not false trigger in any of the three runs. This image shows the boat's path during a single test.


9.3.2 Single obstacle

The test was then re-run with a single obstacle in the direct path of the boat. Once again, three runs were made at ≈0.7m/s, ≈1m/s and ≈1.5m/s. The boat successfully avoided the obstacle in all three runs, with about 1 → 2m to spare between the boat and the buoy. The GPS data of a run can be seen in Figure 9.6, and a superimposed picture of the boat's position during the run in Figure 9.7.

Figure 9.6: The GPS data from a single obstacle run overlaid onto Google Maps using a GPS visualiser to monitor the speed and distance of the single obstacle run.


Figure 9.7: The boat's performance during a single obstacle test in the reservoir. The boat successfully avoided the obstacle in all three runs. This image shows the boat's path during a single test.


9.3.3 Multiple obstacles

To check the system's tolerance of a crowded image, a multiple obstacle test was performed with three obstacles of varying shapes and sizes. The boat successfully and consistently avoided the obstacles on each run, with about 1 → 2m to spare. The GPS data of a run can be seen in Figure 9.8, and a superimposed picture of the boat's position during the run in Figure 9.9.

Figure 9.8: The GPS data from a multiple obstacle run overlaid onto Google Maps using a GPS visualiser to monitor the speed and distance of the multiple obstacle run.


Figure 9.9: The boat's performance during a multiple obstacle test in the reservoir. The boat successfully avoided the obstacles in all three runs. This image shows the boat's path during a single test.


9.3.4 Crowded water space

To test the system's response time and the camera's ability to pan into the corner, an obstacle was placed in the boat's immediate path after it had avoided a previous obstacle, forcing it to respond rapidly to a sudden obstacle. The boat was directed towards two obstacles, detecting them from about 1.5m. When it turned, it turned towards another obstacle and again successfully avoided this second obstacle, with 1m to spare. The boat avoided all the obstacles in all three runs, at speeds of ≈0.7m/s, ≈1m/s and ≈1.5m/s. The GPS data can be seen in Figure 9.10, a superimposed image of the boat's position along the test run in Figure 9.11, and a frame sample from the on-board camera in Figure 9.12.

Figure 9.10: The GPS data from a crowded water space test overlaid onto Google Maps using a GPS visualiser to monitor the speed and distance of the run.


Figure 9.11: The boat's performance during a crowded water space test in the reservoir. The boat successfully avoided the obstacles in all three runs. This image shows the boat's path during a single test.


Figure 9.12: A frame-by-frame view from the on-board camera of the boat performing the S-bend. The camera can be seen panning hard left once the first obstacle is detected. It then slowly returns to centre, tracking the second obstacle, and repeats the manoeuvre, avoiding the second obstacle in time.


9.4 PID controller testing

The batteries were fast charged after the obstacle avoidance tests to provide enough power for testing and tuning the PID control system. The test was run a total of ten times at ≈1m/s, with the boat initially travelling south; the boat was then commanded over wifi to travel west. Over these tests it was found that the controller did not operate as desired: the boat could not track the set point and would oscillate slowly, with large overshoots, around the north position. On inspection, two factors were found to negatively impact the control algorithm. These factors are:

• The sample rate. The GPS module used cannot provide heading information fast enough, updating only every 1.5s at a baud rate of 9600. The controller's sample period of 1.5s is therefore too long to control the boat's heading accurately.

• GPS heading accuracy. The GPS heading accuracy depends on the speed of the boat and the time it has been travelling in a given direction. The receiver needs time to establish where it is and where it is going, and is only accurate to ≈1.5m. On a small boat moving slowly and executing sharp turns, this method of sensing the boat's heading is therefore not viable for a controller; the rough calculation below illustrates the problem.
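The scale of the problem can be illustrated with a rough worked example, treating the GPS heading as the bearing between successive fixes. At 1m/s the boat covers only

d = v × T_s = 1m/s × 1.5s = 1.5m

between updates, which is the same size as the ≈1.5m position error. The bearing between two fixes can therefore be in error by up to roughly tan⁻¹(1.5/1.5) = 45°, and worse during a sharp turn, which is far too coarse for closed-loop heading control.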


Chapter 10

Conclusions

This research project was undertaken to design and evaluate a reactive collision avoidance system for an autonomous sail boat to be used for oceanic research purposes. The following research questions were asked before development:

1. What sensor technologies are available and currently used on autonomous platforms for obstacle sensing?

2. Which is the most appropriate sensor to use for a reactive collision avoidance system on an autonomous sail boat?

3. How accurate and reliable can a system for reactive collision avoidance be made using a low cost sensor?

4. Is it feasible to reliably detect obstacles in an ocean environment, with such an unpredictable base surface, and avoid them?

5. Is the algorithm suitable for remote deployment?

The project has provided an in-depth study and investigation of the design, development and testing of such a system. Various obstacle sensing technologies were compared to determine the most suitable sensor for the requirements; an optical image sensor in a stereo-vision configuration was determined to be the best sensor for sensing obstacles in this application. Algorithms were developed to reliably detect obstacles in the marine environment and calculate their range from the boat. The system was first tested in the high level Matlab development environment using a data set. The algorithms were then reprogrammed in a low level language and implemented on a


mobile platform for further testing. A small model boat with all the required control circuitry and components was developed for this purpose. The algorithms were then tested in a real world, real time environment using the untethered test platform. This chapter presents the conclusions of the project in terms of both the work conducted and the findings from testing.

10.1 Summary of research findings

10.1.1 Conclusions from testing the mobile platform

The mobile test boat was built to allow the BeagleBone Black to execute its desired commands and to test the collision avoidance algorithms in a real world scenario. After initial troubleshooting and a few modifications, the electrical and mechanical components interfaced and operated correctly, and the boat operated as required, reliably and smoothly executing the desired operations. By delivering on its functional requirements, the boat developed for the mobile test platform is considered a success. A wifi module was added to the boat as an additional feature, allowing on-the-fly tuning, code editing, live video feeds and remote control. This was a successful addition, making the boat truly untethered and allowing rapid tuning and testing. A further feature, a GPS and rudder controller, was also added. The GPS allows the boat to determine its position, speed and current heading, which proved very useful during testing by allowing the boat's speed and path to be tracked and stored for evaluation. The GPS position and speed calculation was a success; however, the heading measurement and rudder controller did not function as required.

Conclusions from testing the PID controller

Following the testing of the PID controller, it can be concluded that the controller did not operate correctly, as its GPS-derived heading feedback did not satisfy the sample rate and accuracy requirements of the controller design (Section 9.4).


10.1.2 Conclusions from testing the code algorithms

The two main code algorithms developed in this research project are the obstacle detection algorithm and the obstacle range calculation algorithm. Each was tested during the algorithm development stage to ensure that it operated correctly before further development. The following conclusions can be made regarding each algorithm.

Conclusions of obstacle detection algorithm

The obstacle detection algorithm was tested in Matlab using a data set of 12 images. The algorithm functioned as required, detecting single and multiple obstacles on an ocean surface, and is considered a success by reliably and consistently detecting obstacles in the ocean environment.

Conclusions of obstacle range calculation algorithm

A thorough test of the range calculation algorithm was conducted, testing its accuracy over the camera's entire FOV and range. The algorithm was highly successful, accurately determining the range of obstacles over the entire FOV and range. The results show that accurate range calculations can be made using low cost, simple cameras if they are calibrated correctly. Furthermore, the system consistently reproduced the same range for an obstacle, as shown by the very low standard deviation in the results.

10.1.3 Conclusions from testing the complete system on a mobile platform

Once the two algorithms were combined, they were implemented on the remote embedded system and tested in a real world scenario. Based on the results shown, the following conclusions can be made regarding the ability of the algorithm. The boat can travel on a water surface without false triggering, that is, without detecting spurious features as obstacles: either the algorithm does not falsely detect obstacles, or, when it does, the range calculation shows the object to be too far away to be treated as one.


The algorithm can accurately detect a single obstacle or multiple obstacles in the camera's FOV. The algorithm identified objects in the boat's path as obstacles and calculated their ranges; when the range fell below the threshold, the boat executed an avoidance manoeuvre to turn away from the obstacle. The boat successfully avoided the obstacles in each test with an average safety margin of approximately 1m, 0.5m larger than designed. By avoiding the obstacles in every run conducted, the system showed itself to be robust and repeatable. It can therefore be concluded that the system allows for the detection and avoidance of single and multiple obstacles with an adequate safety margin. The embedded system's frame processing speed was also adequate, as shown by the obstacles being avoided with a 1m safety margin. On each of the crowded water space tests conducted, the boat successfully avoided all of the obstacles: it avoided the first obstacle, was deliberately turned directly towards another close obstacle, and still managed to detect the sudden obstacle, process its range and execute the avoidance manoeuvre in time. This shows that the frame processing time of 0.35s was adequate for a boat speed of 1.5m/s.

10.2 Limitations of the current study

The following limitations apply to the designed system.

10.2.1 Test limitations

With all of the modules fitted to the test boat, such as the extra battery and additional features, the waterline was higher than expected and the boat could only be used for testing in flat water conditions. The boat struggled to achieve high speeds during testing, as the bow wake became too large and posed a risk of water damage. This limited the system to flat water testing only, and the system could not be tested over multiple days, as near-perfect weather conditions were required.

10.2.2 Algorithm limitations

Although the obstacle detection algorithm is considered a success within its designed scope, like most engineering techniques it does have drawbacks. The proposed method of detecting obstacles imposes a limitation on the size of obstacles that can be detected. The system will not detect obstacles smaller than those calibrated for when setting the Hessian threshold: this threshold is a static variable, set to detect obstacles of a general expected size, so very small obstacles may not be detected. Large obstacles may also be missed if the obstacle floods the entire FOV of the camera and has no texture onto which the algorithm can lock. For example, the side of a very large tanker would flood the FOV and the algorithm would not lock onto any features; this would, however, also be a problem for a depth map approach. Furthermore, the range calculation algorithm has an accurate detection range of only 5.5m. The camera housing was designed this way for small scale testing, limiting the current hardware to testing only and not full scale implementation.

10.3 Significance of the research contribution and success of the project

The approach taken in this research project stepped away from the common depth map approach and proposed a novel method of obstacle detection and avoidance using feature detection algorithms. This method was developed to reduce the system's detection time, and it shows promising potential for a range of future obstacle detection applications; it could, for example, be applied to aeronautical and land based autonomous vehicles. Further development of the algorithm would allow it to identify, as well as detect, the obstacles in its path, an extremely useful feature. This novel algorithm, combined with the accurate range calculation algorithm, creates a powerful, general method for obstacle detection. It provides developers of autonomous sail boats with a new low cost, reliable method for collision avoidance. The system proved to be a success in this application by rapidly detecting obstacles and calculating their ranges during real world, untethered testing, with a 100% success rate across the tests conducted. It is therefore feasible to reliably detect and avoid obstacles on an ocean surface using this system. By satisfying and delivering on the project requirements, the developed system is considered a success within its scope.


Chapter 11

Recommendations

Based on the foregoing conclusions, the following recommendations for further research are made.

11.1 Further testing

The system should be transferred to a larger, more stable hull for testing in an ocean environment while sailing at sea; this would provide results better suited to further analysis of the system. The range of obstacle sizes that can be detected should also be established, by testing with multiple obstacles of increasing size under various weather conditions. For this, the stereo-vision casing will need to be modified with the addition of lenses for waterproofing.

11.2 Comparison of depth map method and feature detection method

It was initially assumed that the method proposed would be faster than the depth map process, as the feature detection algorithm is very fast and only the detected points need to be rectified and used for a range calculation. However, this has not yet been proven, as a depth map algorithm has not been programmed and tested on a common system. It would therefore be highly beneficial to run both algorithms side by side and compare


their speed, accuracy and limitations.
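A common timing harness is all that such a comparison needs. The sketch below uses OpenCV's tick counter; the two pipeline functions are hypothetical stand-ins for implementations that would have to be written on the same platform.

    #include <opencv2/opencv.hpp>

    // Hypothetical stand-ins for the two pipelines to be compared.
    void detectByFeatures(const cv::Mat& left, const cv::Mat& right);
    void detectByDepthMap(const cv::Mat& left, const cv::Mat& right);

    // Time one pipeline over n frame pairs; returns mean seconds per frame.
    double timePipeline(void (*pipeline)(const cv::Mat&, const cv::Mat&),
                        const cv::Mat& left, const cv::Mat& right, int n) {
        int64 start = cv::getTickCount();
        for (int i = 0; i < n; i++) {
            pipeline(left, right);    // run the pipeline on the same frames
        }
        return (cv::getTickCount() - start) / (cv::getTickFrequency() * n);
    }

Running both pipelines through the same harness, on the same stored frame pairs, would give a like-for-like speed comparison to set against their accuracy and limitations.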

11.3 Install a magnetometer for PID controller

A solution to the PID sample rate and accuracy problems would be to fit a magnetometer to the boat, providing a heading reading that is far more sensitive to changes in the boat's heading. This would allow a much faster sampling rate and a much more accurate heading reading, allowing the PID controller to perform as required.
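Deriving a heading from a magnetometer is straightforward on the Arduino. The sketch below assumes a driver exposing calibrated horizontal field components; readMagX and readMagY are placeholders for whichever sensor library is fitted, and tilt compensation and local magnetic declination are ignored for brevity.

    #include <math.h>

    // Placeholder driver calls; substitute the fitted magnetometer's library.
    float readMagX();
    float readMagY();

    // Return the heading in degrees in the range 0-360.
    // Note: this simple form ignores boat tilt and magnetic declination.
    float magHeading() {
        float heading = atan2(readMagY(), readMagX()) * 180.0 / M_PI;
        if (heading < 0) {
            heading += 360.0;    // wrap negative angles into 0-360
        }
        return heading;
    }

Such a sensor can be sampled at tens of hertz, against the GPS's 1.5s updates, so the PID routine of Section 12.6 could run at a sample rate matched to the boat's turn dynamics.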

11.4 Increase processing power

The short-listed embedded systems were selected for their small physical size, to allow use in a small scale test. Should a larger test platform be used, a more powerful computer, such as one with an Intel i7 processor, could be fitted, allowing faster processing, higher image resolution and thus more accurate range calculation results.

11.5 Auto-tune SURF and filter

Currently the SURF algorithm's Hessian threshold is static. Ideally this would be a dynamic variable, allowing the system to detect both larger and smaller obstacles. The variable could be adjusted according to the water surface condition, the weather conditions and known information about the area, increasing the system's resilience to changes in the environment. A dynamic smoothing filter could potentially also be applied to the ocean surface, allowing functionality in rough weather conditions as well as accurate detection in fair weather conditions.
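One simple realisation of this idea is to servo the threshold on the keypoint count of each frame: raise it when the surface is producing clutter, lower it when nothing is found. The sketch below is illustrative only; the target band, step sizes and clamping bounds are not tuned values, and the SURF header location varies between OpenCV versions.

    #include <algorithm>
    #include <vector>
    #include <opencv2/opencv.hpp>
    #include <opencv2/nonfree/features2d.hpp>   // SURF location in OpenCV 2.4

    // Nudge the Hessian threshold towards a target keypoint count per frame.
    double autoTuneHessian(const cv::Mat& grey, double hessian) {
        std::vector<cv::KeyPoint> keypoints;
        cv::SurfFeatureDetector detector(hessian);
        detector.detect(grey, keypoints);
        if (keypoints.size() > 60) {
            hessian *= 1.2;                 // cluttered scene: be stricter
        } else if (keypoints.size() < 10) {
            hessian *= 0.8;                 // empty scene: be more sensitive
        }
        return std::max(100.0, std::min(hessian, 5000.0));  // clamp to sane bounds
    }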


11.6 Implement a cache

Very large obstacles will be detected while they are still far away, as they take up a large portion of the FOV even at a distance; the range calculation will then determine that they are out of range and ignore them. When such an obstacle comes too close, it may flood the FOV and thus result in a collision. A cache tracking the boat's speed and heading, and the positions of previously detected obstacles, would solve this potential problem by ensuring that all obstacles and their positions are accounted for.
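Structurally, such a cache only needs to dead-reckon previously seen obstacles in boat-relative coordinates between frames. A minimal sketch with illustrative types and update rules follows (turning is ignored here; a full version would also rotate entries by the change in heading).

    #include <vector>

    // A remembered obstacle in boat-relative coordinates (metres).
    struct CachedObstacle {
        double x, y;     // position relative to the boat; +y is dead ahead
        double ageSec;   // time since the obstacle was last seen
    };

    // Dead-reckon the cache as the boat moves: shift each entry back by the
    // boat's own travel and drop entries that are stale or already passed.
    void updateCache(std::vector<CachedObstacle>& cache, double speed, double dtSec) {
        for (size_t i = 0; i < cache.size(); ) {
            cache[i].y -= speed * dtSec;    // the boat advances along +y
            cache[i].ageSec += dtSec;
            if (cache[i].ageSec > 30.0 || cache[i].y < -5.0) {
                cache.erase(cache.begin() + i);   // stale or astern: forget it
            } else {
                ++i;
            }
        }
    }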

11.7 Implement a hybrid sensing system

A hybrid of sensors would give the best results, eliminating the failure case of an obstacle flooding the camera's FOV, monitoring subsurface obstacles and allowing night time navigation. The current system should therefore be combined with further sensing techniques, and tested, before it is used in a production ocean vessel.


Chapter 12

Addenda

12.1 Object detection algorithm time comparison results

Table 12.1: The full results from the speed test of each algorithm. The test was run on a 1GHz BeagleBone Black embedded system using a 320×240 RGB24 image with a single obstacle. The test was run five times and the average used to compare the speeds.

Algorithm    Detection time per run (s)                  Average (s)
SIFT         1.0172  0.9809  1.1083  1.0245  1.0166      1.0295
ORB          0.0823  0.0823  0.0822  0.0821  0.0834      0.0824
SURF         0.5939  0.5809  0.5153  0.5868  0.5725      0.5698
FAST         0.0053  0.0056  0.0053  0.0054  0.0053      0.0053
MSER         0.4328  0.4618  0.4573  0.4228  0.3915      0.4332
STAR         0.1998  0.2002  0.1995  0.1996  0.2006      0.1999
GFTT         0.3262  0.3713  0.3713  0.3744  0.3717      0.3630


12.2 Feature point recognition comparison results

Figure 12.1: Feature point recognition comparison: (a) SIFT algorithm, (b) STAR algorithm, (c) FAST algorithm, (d) ORB algorithm, (e) GFTT algorithm, (f) MSER algorithm, (g) SURF algorithm.


12.3 Object detection results using SURF

Panels (a)–(l): object detection results using SURF on the twelve-image data set.

12.4 Range test procedure and results

Figure 12.2: The configuration of the range test. The cameras are mounted in a fixed position in front of the laptop, running the testing algorithm. The target is then moved from the centre position to the left peripheral, then to the right peripheral for each distance, and three samples are taken for each position. The test was run over a range of 1m → 7m with 0.5m intervals.

Table 12.2: A comprehensive range test was performed, in which three samples were taken at positions around the target for each distance and bearing. All readings are in metres.

Distance   Left peripheral             Straight ahead              Right peripheral
(m)        R1      R2      R3          R1      R2      R3          R1      R2      R3
1.0        1.0535  1.0535  1.0932      1.0107  1.0032  1.0102      0.9638  0.8627  0.9655
1.5        1.6112  1.6230  1.6380      1.5412  1.5310  1.5321      1.4335  1.3988  1.4050
2.0        2.1296  2.0705  2.0700      1.9852  1.9602  1.9531      1.7786  1.7802  1.8302
2.5        2.5781  2.5792  2.5031      2.4762  2.4543  2.4857      2.3072  2.3092  2.3070
3.0        2.9890  2.9818  2.9950      2.9820  3.1339  2.9807      2.7465  2.7466  2.7626
3.5        3.4210  3.3979  3.4713      3.4014  3.4030  3.4144      3.2390  3.2474  3.2500
4.0        3.9279  3.9312  3.9353      3.8476  4.0672  3.9876      3.8101  3.8088  3.8067
4.5        4.6431  4.6549  4.7320      4.7308  4.5926  4.5835      4.3343  4.0537  4.3212
5.0        4.8807  4.8774  4.8746      4.8939  4.9494  4.9130      4.6291  4.6304  4.6323
5.5        5.3182  5.3191  5.3185      5.3585  5.7654  5.6782      5.3789  5.4203  5.4208
6.0        -       -       -           7.7279  7.7388  7.7421      -       -       -
6.5        -       -       -           7.6277  8.7185  8.7277      -       -       -
7.0        -       -       -           7.7797  8.7609  7.8721      -       -       -


12.5 Simplified BeagleBone Black main code function

int main() {
    // Create the Mat objects, each containing a header describing the image
    // and a pointer to the pixel matrix.
    Mat GreyL, GreyR, CamL, CamR, img_keypoints_1, img_keypoints_2, Template, result;
    unsigned int Count;
    double t;
    Point CordsL[2], CordsR[2];
    float Range[2];

    // Open the serial link at 9600 baud.
    Ret = LS.Open("/dev/ttyO4", 9600);

    // Create a vector of keypoints to store the locations of the detected objects.
    std::vector<KeyPoint> SurfDetail;

    // Open each camera and return a pointer to its capture structure.
    CvCapture* captureL = cvCaptureFromCAM(0);
    CvCapture* captureR = cvCaptureFromCAM(1);

    // Set the camera resolution.
    cvSetCaptureProperty(captureL, CV_CAP_PROP_FRAME_WIDTH, 160);
    cvSetCaptureProperty(captureL, CV_CAP_PROP_FRAME_HEIGHT, 120);
    cvSetCaptureProperty(captureR, CV_CAP_PROP_FRAME_WIDTH, 160);
    cvSetCaptureProperty(captureR, CV_CAP_PROP_FRAME_HEIGHT, 120);

    // Turn on the motor.
    EngageMotor();

    while (true) {
        for (int K = 0; /* ... loop bound, frame capture, SURF detection and
                           range calculation lost in extraction ... */ ) {
            if (Range[i] > 5) {
                // Remove any errors (readings beyond the accurate range)
                Range[i] = 0;
            }
        }


        // Tell the Mega that a target is in range and turn.
        if (Range[i] /* ... condition, serial command and the remainder of
                        main() lost in extraction ... */

12.6 PID control code

// ... (declarations and opening lines of the PID routine lost in extraction;
// the surviving guard discards invalid heading readings) ...
if (Heading > 360) { return; }
// Calculate the error
Error = SetPt - Heading;
// Prevent integral 'windup': accumulate the error integral only while small
if (abs(Error) < IntThresh) {
    Integral = Integral + Error;
}
// Zero the integral if it is out of bounds
else {
    Integral = 0;
}
// Calculate control terms
P = Error * kP;
I = Integral * kI;
D = (LastHeading - Heading) * kD;
// Sum the components
ServoCommand = P + I + D;
// Scale the drive to be in the range 0-255
ServoCommand = ServoCommand * ScaleFactor;
// Change direction as commanded
ServoCommand = Straight + ServoCommand;
// Ensure the servo limits are not exceeded
if (ServoCommand > LeftPos) {
    RudderServo.write(LeftPos);
}
else if (ServoCommand < RightPos) {
    RudderServo.write(RightPos);
}


else {
    RudderServo.write(ServoCommand);
}
// Save the current value for the next iteration
LastHeading = Heading;
}


12.7 PCB Designs

Figure 12.3: The PCB for the Arduino Mega used to interface the servos, motor and BeagleBone Black

Figure 12.4: The PCB layout of the GPS breakout board.


12.8 Video Summary

A video summary of the boat performing a quick demonstration can be seen at: http://www.youtube.com/watch?v=fXYw5PLu290

