Improving Orchard Efficiency with Autonomous Utility Vehicles

An ASABE Meeting Presentation
Paper Number: 1009415

Bradley Hamner, Carnegie Mellon University, 5000 Forbes Ave, Pittsburgh, PA 15213, [email protected]

Sanjiv Singh, Carnegie Mellon University, 5000 Forbes Ave, Pittsburgh, PA 15213, [email protected]

Marcel Bergerman, Carnegie Mellon University, 5000 Forbes Ave, Pittsburgh, PA 15213, [email protected]

Written for presentation at the
2010 ASABE Annual International Meeting
Sponsored by ASABE
David L. Lawrence Convention Center
Pittsburgh, Pennsylvania
June 20 – June 23, 2010

Abstract. In modern orchards, many maintenance tasks call for a driver to steer a tractor through rows of trees at slow speeds over hundreds of acres as it mows or sprays. Similarly, manually-driven orchard platforms allow a crew of workers to perform tasks such as pruning, training, and thinning. In this paper we report on the development of vehicles capable of autonomous row following in orchards. Such vehicles increase efficiency and reduce production costs by moving a farm worker from an unproductive driving role to a productive one. In the past year, the technologies that enable such autonomous row following have been implemented on an electric utility vehicle capable of continuously driving orchard blocks; to date this vehicle has logged more than 130 km of driverless traversals. The vehicle uses laser range scanners to detect trees and other objects in its vicinity, builds a model of the row of trees, and uses this model to safely steer the vehicle along the row without GPS. It detects when it has reached the end of a row, turns, and enters the next row. This way the vehicle can drive entire orchard blocks autonomously, even if the rows are of varied lengths or trees are missing in the rows. In addition to the laser scanners, the only other sensors necessary are wheel encoders that continuously measure distance traveled and the steering angle. All computation is performed on a rugged laptop onboard the vehicle. We present results of using this autonomous vehicle to tow various types of equipment, mowing an orchard block and spraying weeds. We also show how it was used to deploy a system for apple crop load estimation. Finally, we show how this autonomous navigation technology could aid in thinning, pruning, and harvesting by adapting it to a variety of vehicles.

Keywords. Agricultural robot, automatic steering, autonomous navigation, specialty crops.

The authors are solely responsible for the content of this technical presentation. The technical presentation does not necessarily reflect the official position of the American Society of Agricultural and Biological Engineers (ASABE), and its printing and distribution does not constitute an endorsement of views which may be expressed. Technical presentations are not subject to the formal peer review process by ASABE editorial committees; therefore, they are not to be presented as refereed publications. Citation of this work should state that it is from an ASABE meeting paper. EXAMPLE: Author's Last Name, Initials. 2010. Title of Presentation. ASABE Paper No. 10----. St. Joseph, Mich.: ASABE. For information about securing permission to reprint or reproduce a technical presentation, please contact ASABE at [email protected] or 269-429-0300 (2950 Niles Road, St. Joseph, MI 49085-9659 USA).

Introduction

Specialty crop producers are starting to witness advances in production automation that until now were the sole domain of program crops such as wheat, soy, and rice. Perhaps the most significant of these are vehicles capable of driving autonomously along rows of fruit and nursery trees while carrying a variety of sensors that increase management efficiency and implements that decrease labor cost.

Consider, for example, an autonomous utility vehicle equipped with a boom of cameras and software that can locate, count, and size apples in images. Such a vehicle could provide accurate crop load estimates weeks ahead of the harvesting season, significantly increasing the grower's ability to plan her harvesting labor pool. Consider also an autonomous tractor equipped with mechanical devices that perform gross, pre-thinning operations ahead of the finer, manual thinning; such a machine could decrease the total labor cost associated with this and other steps of the production process. Finally, consider a small, nimble autonomous vehicle equipped with sensors that can automatically measure a tree's caliper; such a vehicle could count and caliper all of a nursery's trees with a speed and precision unattainable by human scouts, again increasing efficiency and decreasing cost.

Such scenarios are under development by the Comprehensive Automation of Specialty Crops (CASC) project. The CASC project focuses on the information, mobility, and manipulation technologies needed to make the specialty crop sector more efficient, profitable, and competitive. As part of CASC we have proposed the concept of the APM, or Autonomous Prime Mover. The APM is a family of autonomous vehicles with a common sensing and computing infrastructure, ranging from small ATVs to large orchard platforms and tractors. We suggest that the APM is analogous to the personal computer. Personal computers vary in terms of computing power and size, but share a common infrastructure (processors, hard disks, memory, operating system, etc.) that allows users to exchange applications and documents. The idea with the APM is the same: irrespective of the vehicle size, features, or carrying capacity, applications developed for one should be seamlessly transferable to others (as long as the appropriate hardware is available). Currently, these vehicles are capable of driving along a row of trees, turning at the end of the row, and entering the next one. Row following is conducted at the center of the row independently of the row width (e.g., for sensing or mowing) or at a pre-defined distance from the trunk line (e.g., for spraying or thinning). In the future we expect a large number of tasks could be executed autonomously without producing customized systems.

While autonomous driving is a well-studied problem in the robotics community, in-orchard and in-nursery driving is not yet fully solved. Whereas autonomous tractors and harvesters operating in wheat fields take advantage of large, open, mostly unobstructed spaces and clear GPS visibility, autonomous vehicles operating in orchard-like environments must deal with significant challenges: tight spaces, especially at the end of the rows; uneven canopy; poor GPS visibility due to the dense canopy; people and bins in their way; and the need to not bruise limbs and fruit. In addition, viable solutions must be inexpensive enough to be attractive to growers.
Our approach is to develop scalable, low-cost navigation systems that do not rely on GPS and still provide reliable vehicle driving and accurate localization. More specifically, we use commercial off-the-shelf laser rangefinder sensors mounted on the APMs to find the trunks and/or canopy, and intelligent processing methodologies to determine open lanes for the vehicle to drive. We believe that laser rangefinders are more appropriate sensors than cameras for the types of environments in which the APMs are expected to operate. For one, they are not susceptible to variations in illumination, which can be dramatic as, for example, clouds occlude the Sun; and, unlike cameras, they work at night. While cameras tend to be cheaper than lasers, the cost of high-performance cameras is not much lower than that of good laser rangefinders; moreover, the increased utilization of lasers by the robotics and automation industries has brought their prices down at least one order of magnitude in the last 10 years.

Coarse vehicle localization along the row is provided by low-cost encoders on the steering wheel and driving axle. Again using an analogy, this is akin to a person walking down a corridor: the person does not need to know her exact latitude and longitude to traverse an office space, but simply needs to sense where the walls are and (roughly) how far she has travelled. The combination of distance travelled and the open space detected by the lasers at the end of the row allows the vehicle to switch from row following mode to turning mode. Again using the lasers, the vehicle finds the next row, enters it, and the process is repeated. With this relatively simple and low-cost infrastructure, we have been able to run our first APM uninterrupted for several hours in commercial orchards in Pennsylvania and Washington. (In a companion paper by Libby and Kantor we describe our work on accurate, centimeter-scale vehicle localization in the row without GPS.)

The first APM is based on a Toro Workman MDE electric vehicle retrofitted with the mechanical, electrical, and software elements necessary for autonomous driving. To date this vehicle has logged more than 130 km of traversals in both research and commercial orchards, in sparse and dense canopies, with row widths between 10 and 20 ft. In this paper we describe its design and the sensors and software that enable autonomous driving. We also present and discuss many of the applications demonstrated in practice with this vehicle, including autonomous mowing and spraying and autonomous crop load scouting. Finally, we briefly present our work on the second APM, based on the N. Blosi orchard platform, and on user-friendly human-computer interfaces that bridge the gap between these sophisticated machines and the agricultural workers that will operate them on a daily basis.

Safety Emphasis

Autonomous vehicles will result in increased safety in orchard operations, since workers will spend less time operating machinery. Our autonomous vehicle was designed with safety in mind. Software processes running on the vehicle look for obstacles in the vehicle's path and send commands to stop the vehicle immediately if a collision seems imminent. Similar health monitor processes stop the vehicle if any of the software or hardware systems fail. As a backup, there are five easily-identified emergency stop buttons on the vehicle, located on the corners and next to the steering wheel. Any one of them, when pressed, immediately triggers the controller to apply the brakes and cut all power to the throttle.

Conducting autonomous orchard operations will require rigorous safety standards to be maintained by managers and workers. Anyone using the vehicle must be fluent in its operation. People should be kept out of the blocks when an autonomous vehicle is in use; but should they need to work near the vehicle, they must understand the blind spots in the vehicle's sensing suite, so as to be sure that they will not be harmed.
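The health-monitor processes described above follow a common watchdog pattern. A minimal sketch of that pattern is shown below; the APM's actual monitor code is not published, and `stop_vehicle` is a hypothetical callback that would engage the brakes and cut throttle power.

```python
import time

class Watchdog:
    """Generic heartbeat watchdog, illustrative of the health monitors
    described above; not the APM's actual implementation."""

    def __init__(self, timeout_s, stop_vehicle):
        self.timeout_s = timeout_s
        self.stop_vehicle = stop_vehicle  # hypothetical e-stop callback
        self.last_beat = time.monotonic()

    def heartbeat(self):
        """Called by the monitored process on every successful cycle."""
        self.last_beat = time.monotonic()

    def check(self):
        """Called periodically; stops the vehicle on a missed heartbeat."""
        if time.monotonic() - self.last_beat > self.timeout_s:
            self.stop_vehicle()
```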

Related Work

Autonomous row following in orchard operations has traditionally been accomplished using computer vision. Billingsley and Schoenfisch (1997) demonstrated a vision guidance system for traversing cotton fields. Belforte et al. (2006) designed a robotic system for navigating in greenhouses, and showed its effectiveness in spraying and precision application of fertilizer. Many groups in Japan use vision for a variety of agricultural applications, including rice planting, tilling, and spraying (Torii, 2000). Ollis and Stentz (1997) used computer vision to guide an autonomous harvester through alfalfa fields. Similar to our work, Astrand and Baerveldt (2005) used Hough transforms for visual row detection on field machinery.

In recent years laser range finders have become increasingly popular in the robotics research community for sensing purposes. Kirchner and Heinrich (1998) demonstrated the use of laser range finders for detecting the boundaries of roads. Numerous groups use lasers for obstacle detection and terrain modeling in off-road environments (Wellington et al., 2006; Hamner et al., 2008; Peynot et al., 2009). In the 2005 DARPA Grand Challenge, an off-road race of autonomous robot vehicles, the top finishers all used laser range sensors for obstacle detection and for detecting the lane boundaries of dirt paths (Urmson et al., 2006; Thrun et al., 2006).

In this paper we show how the computer vision techniques developed for row detection can be combined with the reliability and accuracy of laser range finders to make an autonomous vehicle capable of driving large distances in orchards.

Autonomous Prime Mover

Our test vehicle for this project is a modified Toro MDE eWorkman that we call an Autonomous Prime Mover, or APM (Figure 1). Its purpose is to demonstrate basic autonomous navigation in orchards, and to show how a single vehicle can be used for a variety of applications. The Workman's size is ideal for this: it is easy to transport, and it is agile enough to navigate narrow rows and turn around in small spaces. It has a bin for carrying cargo, as well as roll bars for safety, which also serve as mount points for sensors. Finally, it has a trailer hitch, which allows us to demonstrate how such a vehicle can be used for mowing, spraying, or any towing application.

Figure 1. The Autonomous Prime Mover (APM) is a modified Toro electric Workman. Laser range finders on the front corners of the vehicle provide range and bearing to trees and other objects near the vehicle.


Some modifications were necessary to convert the stock vehicle into an autonomous vehicle. Motors drive the steering wheel and brake pedal. Encoders on the drive axle and on the steering column provide feedback on how far the vehicle has traveled and where the wheels are pointed. A controller board provides the interface for all communication with the vehicle: it sends commands to the motors and to the electronic throttle, and relays feedback to our software system.

The only other sensors used by our system are two Sick LMS laser range finders, located on the front corners of the APM, as seen in Figure 1. Each sensor sweeps a plane parallel to the ground and provides range and bearing information for any object in a 180-degree field of view, up to a range of 80 meters. Each laser is pointed 30 degrees to the side of the vehicle, so that together they provide a 240-degree field of view around the vehicle. Figure 2 shows a sample of the data received from the Sick lasers.

We have installed a high-accuracy GPS localization system on the APM, which is used only to ground-truth our collected data for later analysis. During operation our software system performs dead reckoning using the encoders described above. A ruggedized laptop mounted in front of the passenger seat runs the entire software system, from communication with the laser sensors to data processing and sending commands to the vehicle controller.
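The paper does not spell out the dead-reckoning update itself. A minimal sketch under a standard kinematic bicycle-model assumption follows; the class name and wheelbase parameter are ours, not the APM's actual code.

```python
import math

class DeadReckoner:
    """Integrates drive-axle and steering encoder readings into a 2-D pose.

    A standard kinematic bicycle model is assumed here; the APM's actual
    update equations are not given in the paper.
    """

    def __init__(self, wheelbase_m):
        self.wheelbase = wheelbase_m
        self.x = 0.0        # meters, in the odometry frame
        self.y = 0.0
        self.heading = 0.0  # radians

    def update(self, distance_m, steering_angle_rad):
        """Advance the pose by one encoder reading.

        distance_m: distance traveled by the drive axle since last update.
        steering_angle_rad: current front-wheel angle from the steering
        column encoder.
        """
        # Heading change for an arc of this length at this steering angle.
        dtheta = distance_m * math.tan(steering_angle_rad) / self.wheelbase
        # Integrate position at the midpoint heading for better accuracy.
        mid = self.heading + 0.5 * dtheta
        self.x += distance_m * math.cos(mid)
        self.y += distance_m * math.sin(mid)
        self.heading += dtheta
        return self.x, self.y, self.heading
```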

Figure 2. At left, the vehicle drives down a row in an orchard. At right, a top-down view of the laser range data from a row, with the vehicle facing upwards in the image. The lasers can see objects as far as 30 meters away. When the canopy is not thick, it is possible to get hits from trees in nearby rows.

Autonomous Navigation Software System

The software system comprises multiple modules that read in the sensor data, determine a path for the vehicle to drive, and send commands to the vehicle (Figure 3). An obstacle detection module reads the laser range data, filters out spurious points, and sends out a list of the locations of objects around the vehicle. A safety module reads in these obstacles, determines if any of them is too close to the front of the vehicle, and stops the vehicle if that is the case. At the heart of the system is the navigation module, which reads in the obstacle data and the vehicle's location from dead reckoning and determines where the vehicle should drive.
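The safety module's core check reduces to a point-in-zone test. A sketch follows; the zone dimensions are illustrative, as the paper does not give the actual thresholds.

```python
def should_stop(obstacles, stop_length=3.0, half_width=1.0):
    """Return True if any obstacle point lies in the stop zone ahead.

    obstacles: iterable of (x, y) points in the vehicle frame, with x
    forward and y to the left, in meters. Zone dimensions are
    illustrative assumptions.
    """
    return any(0.0 < x < stop_length and abs(y) < half_width
               for x, y in obstacles)
```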


Figure 3. Software architecture. Range and bearing data from the laser sensors are converted into 3-D points by the obstacle detection program. The driver process uses the point data to find the row edges and steer the vehicle. A safety module checks for obstacles directly in front of the vehicle and sends a stop command if one is found.

The navigation module's behavior is based on the mode of travel. Within the context of driving in an orchard block there are two modes of travel we are interested in: driving between rows of trees, and turning around from the end of one row into the start of the next.
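In code, this amounts to a two-state machine. The sketch below shows the idea; the names and the exact transition conditions are our assumptions.

```python
from enum import Enum, auto

class TravelMode(Enum):
    ROW_FOLLOWING = auto()
    TURNING = auto()

def next_mode(mode, at_row_end, turn_complete):
    """One illustrative mode transition; conditions are simplified."""
    if mode is TravelMode.ROW_FOLLOWING and at_row_end:
        return TravelMode.TURNING
    if mode is TravelMode.TURNING and turn_complete:
        return TravelMode.ROW_FOLLOWING
    return mode
```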

Row Following

There is a clear need for the autonomous navigation system to perform online detection of the row for the vehicle to drive in. GPS is a well-proven technology for tracking paths in open areas, but it can perform poorly in orchard settings. Modern apple orchard plantings are trending towards taller trees in rows spaced closely together. This has a two-fold effect on GPS navigation: the tall trees occlude GPS satellites, making localization estimates less accurate, and the narrow rows give the vehicle less margin for error. Our software system uses the laser range data to detect the lines of trees online; because it does not depend on GPS, it makes the vehicle's navigation more reliable.


Our row detection method works as follows (see Figure 4 for a graphical representation of each step). The trees are planted in straight lines, parallel to each other. We can treat the laser range data as an image and look for two parallel edges, one to each side of the vehicle. To accomplish this we use a Hough transform, a popular technique used in computer vision applications to find edges in an image (Duda and Hart, 1972). The Hough transform begins with the idea that each point in the data could belong to a line. Consider a single point in the laser range data: a number of lines could go through that point. The row detector constructs an empty matrix of votes for lines, where lines are parameterized by their radius and angle to the vehicle. Then, for each point seen by the laser range finder, the system constructs potential lines going through that point, oriented every degree from 0 to 179. If the point is located at position (x, y) and the line's angle is θ, then the line's radius is defined to be r = x cos θ + y sin θ.

Figure 4. (Top) Row detection is performed with the Hough transform, an image processing algorithm that finds lines in images. A line is defined by a radius and an angle from the line's closest point to the vehicle. Here, multiple lines could go through laser point A, like the ones defined by (r1, θ1) and (r2, θ2). (Bottom left) Graphical representation of the process of creating a Hough transform. One point in the laser data is selected, and the system records a number of potential lines ("votes") that could go through that point. (Bottom right) After vote lines have been drawn for all points, two of the vote lines dominate the image.


Figure 4. (Continued) The vote matrix used by the row detector, in which lines are represented by their radius and angle from the vehicle. Height indicates the number of votes for that line. There are two peaks in the matrix, corresponding to the two dominant lines in the image. The angle associated with these two peaks is the same, indicating the dominant lines are parallel. These are the edge lines selected by the row detector.

For each line the corresponding cell in the matrix is incremented, to represent that line receiving a vote. The line orientations range from 0 to 179 degrees, so each laser range point contributes 180 votes to the vote matrix. After all data points have been counted, the system looks for the peaks in the vote matrix. These peaks represent the most likely lines. In our case, since we are looking for parallel lines, we have the system look for the two highest peaks that have the same orientation angle. These two highest peaks are chosen by the row detector to be the tree rows (Figure 5).
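A minimal NumPy sketch of this voting and peak-selection scheme follows. The radius bin size, the range cap, and the exact rule for picking the best angle (score each angle by its two strongest radius peaks) are our assumptions; the paper does not give these details.

```python
import numpy as np

def detect_parallel_rows(points, r_max=30.0, r_step=0.2):
    """Hough-transform row detection as sketched in the text.

    points: Nx2 array of (x, y) laser hits in the vehicle frame, meters.
    Returns (theta_deg, r_left, r_right) for the two strongest parallel
    lines. Bin sizes and peak selection are illustrative assumptions.
    """
    thetas = np.deg2rad(np.arange(180))            # 0..179 degrees
    n_r = int(2 * r_max / r_step)
    votes = np.zeros((180, n_r), dtype=np.int32)

    for x, y in points:
        # Radius of the line through (x, y) at each candidate angle,
        # shifted by r_max so negative radii land in valid bins.
        r = x * np.cos(thetas) + y * np.sin(thetas)
        r_bins = np.round((r + r_max) / r_step).astype(int)
        valid = (r_bins >= 0) & (r_bins < n_r)
        votes[np.nonzero(valid)[0], r_bins[valid]] += 1

    # Parallel rows share an angle: pick the angle whose two strongest
    # radius peaks are jointly highest, then read off those two radii.
    top2 = np.sort(votes, axis=1)[:, -2:]
    best_theta = int(np.argmax(top2.sum(axis=1)))
    r_idx = np.argsort(votes[best_theta])[-2:]
    r_vals = r_idx * r_step - r_max
    return best_theta, float(r_vals.min()), float(r_vals.max())
```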

Figure 5. The autonomous system uses a Hough transform to detect a pair of parallel lines in the laser data. These parallel lines indicate the rows of trees. The system constructs the path for the vehicle to follow, indicated by the white line, located midway between the two edges.


Once the tree rows are detected, the center of the row can be defined as the line midway between the two edges. Since the tree canopies on either side do not make perfect straight lines, the resulting line from the row detection system may jump around from iteration to iteration. To reduce noise we apply a low-pass filter to the line equation. The low-pass filter averages new row center detections with past results to smoothly track the center. If the previous filtered center line is c_prev, the new row detection is d, and the filter gain is α, then the new filtered center line is c = α d + (1 − α) c_prev.
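Applied to the (radius, angle) parameters of the center line, one filter step looks like the sketch below; the gain value is an illustrative assumption.

```python
def filter_center_line(c_prev, d, alpha=0.3):
    """One step of the low-pass filter on the row-center line.

    c_prev, d: (radius, angle) parameters of the previous filtered line
    and the new detection. The gain alpha is an illustrative assumption.
    Note: a production version should handle angle wrap-around when
    averaging; rows are near-straight here, so the naive average suffices.
    """
    r = alpha * d[0] + (1.0 - alpha) * c_prev[0]
    theta = alpha * d[1] + (1.0 - alpha) * c_prev[1]
    return (r, theta)
```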

In addition to dealing with noise from the canopy, the low-pass filter can also counter the effects of bad row detections. Figure 6 shows the results of the row detection on uneven terrain. In this case, from Soergel Orchards near Pittsburgh, the ground is sloped to the side in front of the vehicle, such that the ground intersects the laser plane. Here row detection is disrupted by many spurious points. The low-pass filter resists the bad detection, and the vehicle continues to track the row.

The user can specify to the robot the length of the row to travel. However, we would like our navigation system to be robust to errors in the measurement of that length as well as errors in vehicle position. That is, the robot should not require a highly accurate system to tell it where it is relative to the end of the row. Instead, we detect the end of the row. The dead reckoning localization estimate tells the navigation system when the vehicle is near the end of the row, and the laser data determine precisely where it is. As the vehicle gets close to the end, the navigation module looks for a "gap" in the data to either side of the vehicle: an empty stretch of terrain extending to the left and right, which indicates that there are no more trees. When the system finds such a stretch of terrain it knows the vehicle has reached the end of the row.
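The paper does not give the gap test itself; one plausible sketch, with illustrative dimensions, is:

```python
def at_row_end(points, lateral_extent=5.0, ahead=2.0, gap_depth=4.0):
    """Detect the open gap that marks the end of a row.

    points: (x, y) laser hits in the vehicle frame (x forward), meters.
    Returns True if no hits fall in a band extending to both sides just
    ahead of the vehicle. All dimensions are illustrative assumptions.
    """
    in_band = [(x, y) for x, y in points
               if ahead < x < ahead + gap_depth and abs(y) < lateral_extent]
    return len(in_band) == 0
```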


Figure 6. In this case, the terrain is sloped such that the ground intersects with the laser plane. The ground shows up in the range data (circled in green). These points interfere with the row detection (blue and yellow lines), which no longer aligns with the trees. The white line is the filtered row center, which resists the bad detection.

Turning Between Rows

In order to navigate the entire block, when the vehicle reaches the end of one row it must turn into the next. Our initial approach was for the vehicle to make a prescribed turn, driving in a semicircle that would take it from the center of its current row to the center of the next row. However, tracking inconsistencies, imprecision in dead reckoning localization estimates, and the small margin of error provided by narrow rows made this method unreliable. To achieve reliable row turning we need to detect the row before entering it.


In our current method, when the vehicle reaches the end of the row it begins making a quarter-circle prescribed turn in the direction of the next row. With the next row now to the side of the vehicle, we run the row detector using the laser data only on that side. The row detector finds the precise location of the row to enter, and the navigation system plans an arc for the vehicle to drive from its current location into the row center (Figure 7).
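The arc planner is not specified in the paper; a pure-pursuit rule is one standard way to realize it, sketched here with a point on the detected row center line as the lookahead target.

```python
def arc_into_row(target_x, target_y):
    """Pure-pursuit-style arc toward a point on the detected row center.

    target_x, target_y: lookahead point in the vehicle frame (x forward,
    y left), meters. Returns the arc curvature (1/radius); positive
    curves left. The pure-pursuit rule is our stand-in for the paper's
    unspecified arc planner.
    """
    d2 = target_x ** 2 + target_y ** 2
    if d2 == 0.0:
        return 0.0
    return 2.0 * target_y / d2  # standard pure-pursuit curvature
```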

Figure 7. As the vehicle turns towards the next row, the row detector finds the row. This compensates for vehicle localization errors and errors in the intended layout of the orchard.

Experimental Results

In one summer of operation we tested the APM with row following and turning capability at a number of orchards in Pennsylvania and Washington. During these tests the APM traveled a total of over 130 km autonomously.

We conducted our initial tests at Soergel Orchards, in Wexford, PA. The growing system in use was vertical axis, and the rows were spaced 6 m (20 ft.) apart. The purpose of these tests was to stress-test the vehicle in long, continuous missions. In one test the vehicle traveled 11 km in a single continuous run.

Later tests were executed at the Penn State Fruit Research and Extension Center (FREC) in Biglerville, PA. The focus of these tests was to drive entire blocks of trees and demonstrate the ability of a vehicle like the APM to autonomously perform orchard operations. The trees at the FREC were grown with a trellis system in rows spaced closer together, at 3.5 m (12 ft.). In one of these tests we attached a mower to the APM's trailer hitch. The APM mowed an entire block of 8 rows autonomously (Figure 8). In another test we attached a WeedSeeker selective spraying system (http://www.ntechindustries.com) to the back bin of the APM and used it to autonomously spray the "kill strip" along a row of trees.


Figure 8. (Left) The APM cuts grass by towing a mower attachment. (Right) A WeedSeeker selective spray system sprays weeds in a kill strip as the APM drives through the orchard.

We conducted further experiments in orchards in Washington state. At Sunrise Orchards, near Wenatchee, we again demonstrated autonomous mowing of an entire block. We later coordinated with Vision Robotics Corp. at Valley Fruit Orchards, near Royal City, to demonstrate another application of orchard autonomy: crop load scouting. Vision Robotics has developed the Newton, a vision-based system to locate apples and grade them for size and color. We interfaced with the Newton in two ways. The first was a simple mechanical linkage that allowed the APM to tow the Newton autonomously through the orchard (Figure 9). We also interfaced over Ethernet to provide status to the Newton. This status contained the APM's location, as well as signals when the APM was starting or ending a row. In this way we allowed Vision Robotics not only to find apples in the orchard, but to map their locations as well.
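The wire format of that status feed is not published; the sketch below is purely illustrative, showing one plausible newline-delimited JSON layout for the location and row start/end signals described above.

```python
import json
import socket

def send_status(sock, x, y, heading, event=None):
    """Send one status message to the Newton over a connected TCP socket.

    sock: a connected socket.socket. The message layout here is a
    hypothetical stand-in for the unpublished interface: pose in the
    odometry frame plus an optional row start/end event.
    """
    msg = {"x": x, "y": y, "heading": heading}
    if event in ("row_start", "row_end"):
        msg["event"] = event  # fired when the APM starts or ends a row
    sock.sendall((json.dumps(msg) + "\n").encode("utf-8"))
```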

Figure 9. At Valley Fruit Orchards near Royal City, Washington, we demonstrated autonomous crop load scouting. The APM towed the Newton, a vision-based apple detection platform developed by Vision Robotics. The autonomous system on the APM provided localization information to the Newton, allowing Vision Robotics to map the location of apples in the orchard.


The row following performance was consistent across all of the orchards. The APM stayed to the center of the rows. Occasionally it did brush some branches, but these were branches growing into the middle of the driving path, such that contact was unavoidable. There were also instances, as in Figure 6, in which uncleared weeds or other vegetation growing near the trees would temporarily confuse the row detection. However, the low-pass filter averaged the row center lines, causing these temporary bad detections to have little effect.

The reliability of the turning method varied greatly between orchards. This was primarily due to differing canopy sizes, specifically the extent to which the density of the canopy allowed the sensors to see into the row being entered. Figure 10 shows a screenshot of laser data from three different orchard test sites.

The trees at Soergel Orchards were young and small, with a sometimes almost nonexistent canopy, and therefore provided few points for the row detection algorithm to use. This led to some variance in the rows being detected by the system, but the tree rows were spaced far enough apart that the vehicle could safely track a little to one side without coming near any trees. Row entry reliability, the percentage of successful row entries out of the number of attempts, at Soergel was 100%. The rows at the FREC and Sunrise Orchards were less forgiving in terms of width, but the canopy size was ideal for our method. The system was nearly always able to find a good row. Row entry reliability at Sunrise was above 90%. At Valley Fruit Orchards, the tree canopy was much denser. As shown in Figure 10, the navigation system can only see part of the far trees in the row to be entered, and only the first tree on the near side. All other trees in the row are occluded. With so little information, the Hough row detector was not able to find two parallel lines to define a path for the vehicle. Row entry reliability at Valley Fruit was below 50%.

Figure 10. Entering a row in three different orchards. The green arrow indicates the row the APM is attempting to enter. At left, the trees are young, with small trunks and very sparse canopy; the row detector must work with little information. Also, unmaintained vegetation grows in the vehicle’s path, obscuring the row. In the middle, the trees have sparse canopy, but their trunks are larger and more easily detected. The row detector has no problems here. At right, the trees have a canopy so dense that the APM cannot see into the row it is trying to enter. The row detector cannot find two parallel lines in this data.


Future Work

Our work in the coming year will focus on improving the reliability of row detection and turning. We have already demonstrated the need for a more reliable method of detecting the start of a row. We would also like to improve the row detector's robustness to outlier points, such as those produced on uneven terrain. Early results in this area are promising.

One of the goals of our research is to automate different areas of orchard operation. With the APM we have demonstrated how an autonomous vehicle can take over driving tasks. We have also automated an orchard platform (Figure 11). Tests are planned for the summer of 2010 to show how this platform can be used to improve the efficiency of workers in pruning, thinning, training, and harvesting applications.

Finally, any autonomous vehicle will be used by a variety of workers. These workers may have little computer experience and little proficiency with the English language. We are developing an interface that will allow any worker to operate the vehicle with little training. We have tested this interface in orchards around Biglerville, PA, with good early results. We will continue working in this area.

Figure 11. We are automating an orchard platform as a second instance of an APM. This vehicle is physically different from the electric utility vehicle described above and will be used mostly for tasks such as harvesting, pruning and thinning by people standing on the platform while it drives autonomously.

Conclusion

In recent years the specialty crop industry has met with some of the greatest challenges it has ever faced: uncertainties about labor availability; increasing consumer demand for a safe, affordable, traceable, and high-quality food supply; competition from foreign producers; and the need to minimize its environmental footprint. To remain profitable and competitive, the industry needs to invest in technologies that increase management efficiency and decrease labor cost.


Autonomous vehicles will be one of the tools fruit and nursery tree growers will depend on to automate production processes and the collection of data necessary for decision making. Our proposed family of Autonomous Prime Movers is a strong step in this direction. In the past year we have demonstrated an electric vehicle capable of autonomously traversing entire orchard blocks while mowing, spraying, and towing a system for counting apples on trees. In the near future we expect to extend these results to include augmented harvesting from an autonomous agricultural platform, canopy mapping, and other sensor-based data collection tasks. The ultimate goal is to make APMs commercially available in a variety of form factors and functionalities.

Here we have presented the core capability of GPS-free autonomous row following (including turning) and several management and production applications it enables. In conducting this work we learned that exiting a row and reliably entering the next one is the most challenging part of the process. We were able to make row entry more robust by incorporating explicit row detection in the algorithm, but still cannot guarantee 100% successful entry in very dense canopies. We are in the process of experimenting with new methods in simulation and will test them in the summer of 2010. Another important lesson learned is that to bridge the gap between a sophisticated autonomous vehicle and the agricultural workers that will use it on a daily basis we need to provide functional, easy-to-use interfaces. We have developed the first version of such an interface following a formal design method in cooperation with the CMU Human-Computer Interaction department. Results of this work will be reported in future publications.

Acknowledgements

We would like to thank Ben Grocholsky, George Kantor, Jackie Libby, Stephan Roth, and Jim Teza from Carnegie Mellon for their work in getting the APM and software system operational, as well as Reuben Dise from Penn State University and Jillian Cannons and Tony Koselka from Vision Robotics for their contribution to deploying the APM in various practical applications. We would also like to thank Tara Baugher from Penn State University, Gwen Hoheisel and Karen Lewis from Washington State University, and Tom Auvil from the Washington Tree Fruit Research Commission for arranging and organizing our orchard test visits. Finally, we would like to thank the owners and managers of Soergel Orchards, the orchards at Penn State's Fruit Research and Extension Center, Hollabaugh Brothers Farms, Bear Mountain Orchards, Sunrise Orchards, and Valley Fruit Orchards for allowing us to test in their facilities. This work is supported by the USDA Specialty Crop Research Initiative program under grant no. 2008-51180-04876.

References

Astrand, B., and A. Baerveldt. 2005. A vision based row-following system for agricultural field machinery. Mechatronics 15(2): 251-269.

Belforte, G., R. Deboli, P. Gay, and P. Piccarolo. 2006. Robot design and testing for greenhouse applications. Biosystems Engineering 95(3): 309-321.

Billingsley, J., and M. Schoenfisch. 1997. The successful development of a vision guidance system for agriculture. Computers and Electronics in Agriculture 16(2): 147-163.

Duda, R. O., and P. E. Hart. 1972. Use of the Hough transformation to detect lines and curves in pictures. Comm. ACM 15: 11-15.

Hamner, B., S. Singh, S. Roth, and T. Takahashi. 2008. An efficient system for combined route traversal and collision avoidance. Autonomous Robots 24(4): 365-385.


Kirchner, A., and T. Heinrich. 1998. Model based detection of road boundaries with a laser scanner. In Proceedings of the 1998 IEEE International Conference on Intelligent Vehicles, 93-98.

Ollis, M., and A. Stentz. 1997. Vision-based perception for an autonomous harvester. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robotic Systems, Vol. 3, 1838-1844.

Peynot, T., J. Underwood, and S. Scheding. 2009. Towards reliable perception for unmanned ground vehicles in challenging conditions. In Proceedings of the 2009 IEEE/RSJ International Conference on Intelligent Robots and Systems, 1170-1176.

Thrun, S., M. Montemerlo, H. Dahlkamp, D. Stavens, A. Aron, J. Diebel, P. Fong, J. Gale, M. Halpenny, G. Hoffmann, K. Lau, C. Oakley, M. Palatucci, V. Pratt, P. Stang, S. Strohband, C. Dupont, L. Jendrossek, C. Koelen, C. Markey, C. Rummel, J. van Niekerk, E. Jensen, P. Alessandrini, G. Bradski, B. Davies, S. Ettinger, A. Kaehler, A. Nefian, and P. Mahoney. 2006. Stanley: The robot that won the DARPA Grand Challenge. Journal of Field Robotics 23(9): 661-692.

Torii, T. 2000. Research in autonomous agriculture vehicles in Japan. Computers and Electronics in Agriculture 25(1-2): 133-153.

Urmson, C., C. Ragusa, D. Ray, J. Anhalt, D. Bartz, T. Galatali, A. Gutierrez, J. Johnston, S. Harbaugh, H. Kato, W. Messner, N. Miller, K. Peterson, B. Smith, J. Snider, S. Spiker, J. Ziglar, W. Whittaker, M. Clark, P. Koon, A. Mosher, and J. Struble. 2006. A robust approach to high-speed navigation for unrehearsed desert terrain. Journal of Field Robotics 23(8): 467-508.

Wellington, C., A. Courville, and A. Stentz. 2006. A generative model of terrain for autonomous navigation in vegetation. International Journal of Robotics Research 25(12): 1287-1304.
