Introduction and Motivation

Much image data has already been collected on planetary surfaces, including the Moon. It has been used for operational decision making, science and exploration purposes, and has provided mankind with an outstanding visual impression of ancient landscapes. However, only a small fraction of the data has been fully processed to the highest possible quality, exploiting most of the existing data signatures and making use of available cues (such as views of the same area from different vision sensors, different positions and times of day). The processing of such data has only been performed as and when deemed necessary for the immediate operations and science return of the particular mission. On the other hand, processing means have evolved recently, particularly in internationally focused efforts such as the recently completed FP7-SPACE (European Union’s Seventh Framework Programme, Space theme) project PRoVisG (Planetary Robotics Vision Ground Processing) [21]. It is “low-hanging fruit” for PRoViDE (Planetary Robotics Vision Data Exploitation) to make use of such newly established frameworks in order to complete the suite of available data products from vision sensors placed on planetary surfaces, giving planetary scientists a much wider field of data for geology, biology, morphology and meteorology, and giving members of the public access to a consistent and uniform quality of experience for all landing sites to date and in the near future.
Planetary Surface Image Data: Hundreds of Thousands of Images

More than seven years after landing on Mars, one of the MERs (Mars Exploration Rovers), Opportunity, is still in operation: as of November 1, 2011 (B-2763), Opportunity had traveled 33.28 km (a figure obtained from OSU (Ohio State University) bundle adjustment based on rover images). Before becoming stuck in sand in May 2009 and ceasing operations in March 2010, Spirit had traveled 6.59 km. Topographic maps, rover traverse maps, and updated rover locations have been produced with tremendous manual and processing effort and distributed to the MER science and engineering team members [36]. However, only a small fraction of this data has actually been processed towards a more comprehensive 3D view of the traversed landscape. The same applies to past missions to the Lunar surface, be it hand-held imagery by the Apollo astronauts or images stemming from the robotic probes of the Russian Luna missions (including the two highly successful Lunokhod missions).

In 2008 the US successfully completed the Phoenix mission, a stationary lander that analyzed the surface and subsurface soil and ice, searching for volatiles and organic compounds. Phoenix used cameras to view the area beneath the lander, which led to the discovery of water ice just beneath the surface. Phoenix was followed by the MSL (Mars Science Laboratory) rover mission in 2012/13, landed at a near-equatorial position close to sediments believed to be associated with the action of water. MSL has improved mobility over MER (Mars Exploration Rovers), and its landing site, Gale Crater, contains features of interest both inside and beyond the boundaries of the landing ellipse (25×20 km), requiring much more powerful rover navigation, 3D vision, and long-range localization and mapping capabilities than heretofore achieved. MSL is the first mission to benefit from very high resolution imagery and stereo coverage from the NASA (National Aeronautics and Space Administration) Mars Reconnaissance Orbiter HiRISE (High Resolution Imaging Science Experiment) pushbroom TDI imager, with a ground resolution of about 25 cm/pixel [26].

In order to ensure that we learn from past NASA (National Aeronautics and Space Administration) and other international efforts, and that Europe is able to produce unique new scientific information, we need to understand what can be obtained from such imagery, particularly that obtained from rover missions providing stereo imaging capabilities. We therefore exploit the full range of information available from the imaging instruments of the NASA MER (Mars Exploration Rovers) Spirit and Opportunity missions. Most of the analysis to date within NASA has been focused on navigation and planning on a short-term “day-by-day” basis. We plan to take a “step back” and place all of the relevant imaging data collected to date within the same global coordinate framework, such that we are no longer concerned with the original imaging geometry or the local rover coordinate system.
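The idea of freeing imagery from its original local frame can be sketched as a chain of rigid transforms: camera frame to rover frame, rover frame to local site frame, site frame to global map frame. The following is a minimal illustrative sketch; the frame offsets, yaw angles and point coordinates below are hypothetical numbers, not actual mission values.

```python
import numpy as np

def rigid_transform(yaw_deg, translation):
    """Build a 4x4 homogeneous transform from a yaw angle and a 3-vector."""
    t = np.radians(yaw_deg)
    c, s = np.cos(t), np.sin(t)
    T = np.eye(4)
    T[:3, :3] = [[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]]
    T[:3, 3] = translation
    return T

# Hypothetical chain: camera -> rover -> local site -> global map frame.
cam_to_rover   = rigid_transform(0.0,  [0.5, 0.0, 1.2])      # mast offset on the rover
rover_to_site  = rigid_transform(30.0, [12.0, -4.0, 0.1])    # telemetered rover pose
site_to_global = rigid_transform(0.0,  [1500.0, 820.0, 0.0]) # site origin in global map

# A 3D point observed in the camera frame (homogeneous coordinates).
point_cam = np.array([2.0, 0.0, 0.0, 1.0])

# Chaining the transforms expresses the same point in the global frame,
# independent of the imaging geometry that produced it.
point_global = site_to_global @ rover_to_site @ cam_to_rover @ point_cam
```

An improved localization solution (e.g. from bundle adjustment) then simply replaces `rover_to_site`, and all derived products can be re-expressed globally without touching the raw imagery.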

Critical to this goal is the use of the highest-resolution imaging systems. With its roughly 30 cm ground resolution imaging (and stereo) capability, the NASA (National Aeronautics and Space Administration) Mars Reconnaissance Orbiter HiRISE (High Resolution Imaging Science Experiment) instrument began acquiring data in October 2006. Likewise, the LRO (Lunar Reconnaissance Orbiter) NAC (narrow-angle camera) routinely takes images at 0.5 m ground resolution. The camera allows us to image all Lunar landing sites, including those where rovers have been operating (Lunokhods 1 and 2, and the Apollo rovers, Figure 1):


Figure 1: The Apollo 15 landing site, including the ALSEP (Apollo Lunar Surface Experiment Package), the LRRR (Lunar Laser Ranging Retro Reflector) and the LRV (Apollo Lunar Roving Vehicle), as well as astronaut and rover tracks (Image credit: NASA (National Aeronautics and Space Administration)/Goddard/ASU (Arizona State University))

The Role of Vision-Based Systems: Science and Operations

The MER (Mars Exploration Rovers) mission represents the first implementation of so-called ‘tele-robotic field geology’ on Mars, in which geologists, geochemists and mineralogists on the ground use a remotely operated mobile asset on another planet to systematically study the geology at the rover site in much the same way as they would in the field themselves, but with a significant time delay (usually a day or more). This includes

  • The ability to move around (provided by the rover mobility system)
  • The ability to observe, survey and map the scene to establish scientific context and for planning of movements, of sampling locations and close-up investigations by cameras
  • The ability to approach, ‘touch’ and sample targets of interest identified from stand-off remote observations (provided by robot-mounted in situ analysis and microscopic instruments that determine elemental and mineralogical composition as well as close-up texture)

In a scientific rover mission, imaging therefore has a two-fold function:
  • An engineering function: providing a basis for building three dimensional (3D) models of the rover surroundings to support path planning onboard the vehicle
  • A scientific function: providing high fidelity imaging in selected wavebands (or other signatures such as 3D surface roughness) and at appropriately high resolution – including a precise fusion of different sensors – to support interpretations of the local geology as well as the selection of promising targets for close-up study.

The majority of such processing was dedicated to the (almost) real-time operation of the mission in order to plan short-term actions. In the case of Lunar exploration as conducted by more interactive missions (e.g. Lunokhod), but also the Apollo missions, decisions during the mission were taken in real time by humans, while scientific imagery was recorded (mainly by analogue means) to be evaluated later. Science was performed on these image data, but a formal, unique embedding into a spatio-temporal context never took place.

Vision Strategies so far: Single-Site imagery emphasized

For tactical operations, MER (Mars Exploration Rovers) works mainly with a single site at a time, using a coordinate system unique and local to that site. On the rare occasions when the rover has doubled back or revisited old terrain, using the old data in the context of the new coordinate system has been problematic, relying on ad-hoc and largely manual methods. The telemetered rover positions are stored in a combination of NASA (National Aeronautics and Space Administration) PDS (Planetary Data System) labels and ancillary files in PDS, but there is no systematic storage of updates, either in operations or in PDS, such as those produced by OSU (Ohio State University) [34].

MSL (Mars Science Laboratory) addresses this deficiency by creating a database (named “PLACES”) of rover positions and orientations (“localization” data), storing both telemetered and updated data. This will serve both as a source of localization data for PRoViDE (Planetary Robotics Vision Data Exploitation) and as an avenue for updates created by PRoViDE to be made available to the MSL operations and science teams. MSL has no current plans to systematically re-create products using updated localization data, leaving it to individual users to do so as needed. The services of PRoViDE are therefore useful in this regard as well.
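The internal schema of PLACES is not described here, but the core idea – keeping every telemetered pose alongside any later refinements, and preferring the refined solution when one exists – can be sketched as follows. All field names and the `site`/`drive` keying are hypothetical illustrations, not the actual PLACES interface.

```python
from dataclasses import dataclass

@dataclass
class RoverPose:
    sol: int           # mission day the pose refers to
    site: int          # index of the local site coordinate frame
    drive: int         # drive counter within that site
    xyz: tuple         # position in the site frame, metres
    source: str        # "telemetry" or "updated" (e.g. bundle adjustment)

class LocalizationStore:
    """Keep every version of each pose; prefer updated solutions when available."""

    def __init__(self):
        self._poses = {}  # (site, drive) -> list of RoverPose, in insertion order

    def add(self, pose):
        self._poses.setdefault((pose.site, pose.drive), []).append(pose)

    def best(self, site, drive):
        versions = self._poses[(site, drive)]
        updated = [p for p in versions if p.source == "updated"]
        # Fall back to the raw telemetered pose if no refinement exists yet.
        return updated[-1] if updated else versions[-1]

store = LocalizationStore()
store.add(RoverPose(100, 5, 2, (12.0, -4.0, 0.1), "telemetry"))
store.add(RoverPose(100, 5, 2, (12.3, -3.8, 0.1), "updated"))   # later refinement
store.add(RoverPose(101, 5, 3, (18.1, -2.2, 0.0), "telemetry")) # not yet refined
```

Because nothing is overwritten, downstream users can re-create mapping products with updated localization as needed, exactly the workflow the mission leaves to individual users.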

Local coordinate systems were also established during the Apollo missions by simple measurement means, and occasionally also post-mission by photogrammetry. It is one of the PRoViDE (Planetary Robotics Vision Data Exploitation) targets to utilize such data and bring it into the unique context of a Lunar Image Catalogue, in a format specifically usable for 3D exploitation of such imagery.

Multi-Site imagery: On-Demand only

OSU (Ohio State University) has been mapping Mars for the MER (Mars Exploration Rovers) 2003 mission since the initial landing of the two landers/rovers in 2004. During the mission, OSU has utilized ground images taken by the rover at different locations (i.e. multi-site imagery) to map distant targets. Since accurate terrain maps are essential to design and plan a rover’s route and help it access targets efficiently, OSU applied LBS (long baseline stereo) mapping to map common features in stereo images captured from different rover positions [33]. Compared to the tens of metres of effective mapping area for single-site mapping, multi-site mapping provides a better solution for mapping hundreds of metres from the rover. Throughout the 7 years of mission operations to date, many wide-baseline maps have been generated and provided to NASA (National Aeronautics and Space Administration) scientists for research, rover route planning and winter haven selection. These high-resolution, high-quality products are adopted in PRoViDE (Planetary Robotics Vision Data Exploitation) and provide useful information for research. In particular, there is a need, articulated in PRoVisG, for all rover positions to be in a global coordinate system in order to develop a comprehensive web-GIS (web Geo-Information System), which can be co-registered with high resolution orthorectified HiRISE (High Resolution Imaging Science Experiment) stereo imagery. This will additionally require the fusion of HiRISE imagery with these multi-baseline orthorectified stereo images, all in the same coordinate system.
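The gain from long baseline stereo follows from the standard first-order stereo error model, in which range uncertainty grows with the square of distance but shrinks linearly with baseline: σ_Z ≈ Z²·σ_d / (b·f). The sketch below uses illustrative numbers (the focal length and disparity noise are assumptions, not parameters of a specific MER camera):

```python
def depth_std(range_m, baseline_m, focal_px, disparity_std_px=0.3):
    """First-order stereo range uncertainty: sigma_Z = Z^2 * sigma_d / (b * f)."""
    return (range_m ** 2) * disparity_std_px / (baseline_m * focal_px)

f = 1500.0  # illustrative focal length in pixels

# A fixed mast stereo baseline (~0.3 m) versus a 20 m baseline obtained by
# imaging the same target from two rover positions, for a target at 200 m:
near = depth_std(200.0, 0.3, f)   # tens of metres of uncertainty
far  = depth_std(200.0, 20.0, f)  # sub-metre uncertainty
```

Under these assumptions the 20 m baseline reduces the range uncertainty at 200 m from roughly 27 m to about 0.4 m, which is why wide-baseline maps extend useful mapping from tens to hundreds of metres.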

Besides such “controlled” imaging of distant objects and landscapes by LBS, large parts of the Martian and Lunar landscapes (particularly from MER (Mars Exploration Rovers), Apollo and the Lunokhods) have been incidentally viewed from more than one position. Exploiting such “serendipitous” coverage, hidden within the confines of the PDS (Planetary Data System) catalogue, could reveal a tremendous amount of additional scientific content through new knowledge about morphology and formation processes. For example, another use of multi-site imagery is to map large objects that can only be imaged from different views as the rover moves around.

Contemporaneous to PRoViDE (Planetary Robotics Vision Data Exploitation): MSL (Mars Science Laboratory) as an interactive example

MSL (Mars Science Laboratory), launched in the fall of 2011, arrived at Mars in August 2012. To first order, MSL has been operated much like MER (Mars Exploration Rovers), with similar processes and data products, updated based on 7 years of operational experience. Anything written to work with MER data also works with MSL data with only minor modifications. The primary difference from the PRoViDE (Planetary Robotics Vision Data Exploitation) perspective is that MSL is an active mission during the PRoViDE time frame (and hopefully beyond).

Timely production of mapping products and rover localization data can directly support many of the scientific investigations planned for the MSL (Mars Science Laboratory) mission. As shown in Figure 2, 3D mapping products at a millimetre level of resolution are indispensable for studying the morphology, layering, geological formation and evolution of surface features. Detailed 3D models of craters are very valuable for crater gradation analysis and for study of the history of surface impacts and modifications. Furthermore, rover



Figure 2: Part 1: Opportunity rover traverse overlaid on Pancam image mosaic of the Victoria Crater wall. White dots indicate rover locations. Outcrop layers and target features are marked as lines and larger dots (NASA (National Aeronautics and Space Administration) MER (Mars Exploration Rovers) team / OSU (Ohio State University)). Part 2: Detailed mapping of Larry’s Outcrop at the Spirit rover landing site (DTM (Digital Terrain Model) resolution: 0.005 m; contour interval: 0.01 m).

JPL (contributing two members to the PRoViDE (Planetary Robotics Vision Data Exploitation) AB) realizes a large part of the mission, including image processing.

1 PRoViDE (Planetary Robotics Vision Data Exploitation) Innovation & Solutions

The main innovation that PRoViDE (Planetary Robotics Vision Data Exploitation) presents to the international space community is the first attempt at full exploitation and 3D vision processing of all available (usable) images taken on the surfaces of celestial bodies in the Solar System.

Comprehensive surface image data sets available from robotic probes on Mars, the Moon (including human spaceflight data) and other planetary bodies are embedded into the spatial context of orbiter imagery, and processed into 3D vision products. A real-time rendering tool that gives seamless access to all available image and 3D data resolutions enables immersive data visualisation, letting scientists exploit this novel access to spatial relationships.


Figure 3: PRoViDE (Planetary Robotics Vision Data Exploitation) overall layout. Ground surface vision data from robotic missions and orbiter data covering the operational sites of the probes are collected in a Data Catalogue, with strong input from a process that specifies the use cases for further scientific exploitation. The vision data in the catalogue are processed into various sets of 3D vision products that are stored in a database (PRoDB) and made accessible via a GIS (Geo-Information System) (PRoGIS). The products are presented through a real-time visualisation engine to experts (as the default exploitation, executing the use cases), and through a www visualisation engine to the general public. Furthermore, relevant data are stored in PDS (Planetary Data System) format for subsequent usage by the Space Community.