
Detecting Craters on Mars

The following is adapted from an extended article that accompanied my Judges' Award-winning poster presentation on my 2019-20 summer internship project at the Pawsey Supercomputing Centre and Curtin University.

This project was supported by resources provided by the Pawsey Supercomputing Centre with funding from the Australian Government and the Government of Western Australia.


Mars, like all planetary bodies, is subject to bombardment by asteroids and other impactors (including spacecraft). The craters left behind on the surface are a record of that planet's bombardment history. This record holds important clues about the cratering rate, the size distribution and population of impactors, and the age of the planet's surface (via geochronometry). For planets like Mars that have an atmosphere, the cratered surface can also shed light on how meteors disintegrate within the atmosphere, potentially defining an upper limit on the sizes of asteroids that fragment before impacting the surface.

Traditionally, impact craters have been analysed manually by viewing and zooming in on raw image data. In recent years, automatic techniques have helped to alleviate this painstaking process. Deep learning, a burgeoning sub-field of machine learning, has seen tremendous success in fields requiring image and object recognition. In this project, we aim to test a new machine learning-based Crater Detection Algorithm (CDA) to detect craters in high-resolution images of Mars taken by the HiRISE camera on the Mars Reconnaissance Orbiter (MRO).

Methodology

The CDA utilises the Ultralytics YOLOv3 software, a general inference and training framework for real-time object detection. The CDA is trained with transfer learning, a method of fine-tuning the weights of an existing (pre-trained) network on new data without needing to retrain it entirely from scratch. The core workflow for this project starts with downloading the high-resolution images (in JPEG2000, or JP2, format) taken by HiRISE for several test craters. These are the map-projected black-and-white images (not any of the colour images). Although these images are map-projected, the CDA requires the image to be a GeoTIFF file. In order to obtain the real location of each crater, we need the image file to explicitly include the georeferencing data, e.g. the coordinate system (in Martian coordinates) and the map projection, so that the real coordinates can be mapped to the image coordinates. This is important because the YOLOv3 algorithm operates in image coordinates; without this translation (and a correct interpretation of the map projection) the detected coordinates would be meaningless.
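The link between image and map coordinates can be illustrated with the affine geotransform that GDAL attaches to a georeferenced file such as a GeoTIFF. The function below is a minimal sketch of that mapping; the sample geotransform values are hypothetical, not taken from any actual HiRISE product:

```python
def pixel_to_map(gt, col, row):
    """Apply a GDAL-style affine geotransform to convert image (col, row)
    pixel coordinates into map-projected (x, y) coordinates.

    gt = (origin_x, pixel_width, row_rotation,
          origin_y, col_rotation, pixel_height)
    """
    x = gt[0] + col * gt[1] + row * gt[2]
    y = gt[3] + col * gt[4] + row * gt[5]
    return x, y

# Hypothetical geotransform: 0.25 m/pixel, north-up (zero rotation terms).
gt = (100.0, 0.25, 0.0, -10.0, 0.0, -0.25)
print(pixel_to_map(gt, 0, 0))  # top-left corner -> (100.0, -10.0)
print(pixel_to_map(gt, 4, 4))  # -> (101.0, -11.0)
```

Converting the projected (x, y) pair into a Martian latitude and longitude is then a separate step that depends on the map projection itself.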

Execution

GDAL (in particular the gdal_translate utility) was used to convert the JP2 images to GeoTIFF files. Complicating matters, the pixels of the original JP2 images were stored as unsigned 16-bit integers, while the CDA requires GeoTIFFs with pixels stored as bytes (8-bit). This conversion entails a significant loss of dynamic range (from 65,536 shades of grey to merely 256), leaving the final GeoTIFF looking washed out; however, no detail of the craters was sacrificed (if anything, some of the darker shadows became more visible). The CDA takes the GeoTIFF as input and outputs a .csv file containing the list of detected craters. Each row contains, among other things, the image coordinates of the centre, the minimum and maximum x,y image coordinates of the bounding box, the real (Martian) latitude and longitude of the crater, and an estimate of the crater's diameter. Another program uses Python's Pillow image-manipulation package to overlay the detection boxes on the original image, finally resulting in an annotated JPEG file. Most of the images analysed were initially around 500 MB in size (as JP2s). When converted into an uncompressed GeoTIFF the file size was often in excess of 1 GB, while the final annotated JPEGs were around 100-150 MB.
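The bit-depth reduction is essentially a linear rescale of each pixel value, of the kind gdal_translate performs when given -ot Byte together with -scale. A minimal pure-Python sketch of the mapping (a real conversion operates on the whole raster, not a Python list):

```python
def rescale_16bit_to_8bit(pixels):
    """Linearly map unsigned 16-bit values (0..65535) onto bytes (0..255).

    This is the lossy step: roughly 256 adjacent 16-bit grey levels
    collapse onto each 8-bit level, which is why the output image
    looks washed out.
    """
    return [round(p * 255 / 65535) for p in pixels]

print(rescale_16bit_to_8bit([0, 257, 32768, 65535]))  # -> [0, 1, 128, 255]
```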

The scripts were executed on Pawsey's Zeus cluster, with the CDA requiring the use of (at least) one GPU. The entire workflow (convert JP2 to GeoTIFF, detect craters, annotate data) was containerised, with a separate Docker container for each step. Pawsey's Nimbus cloud-compute platform was also used to prototype the GDAL conversions (especially the handling of the image projection) using different publicly available Docker containers. To test the CDA, we ran it on several craters that were analysed in the Daubar et al. (2013) paper.
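A rough sketch of how the three containerised steps might be laid out in a single batch job on a Slurm-managed cluster like Zeus follows. All file names, container images, and commands inside the containers are hypothetical placeholders, not the actual project scripts:

```bash
#!/bin/bash
#SBATCH --job-name=crater-cda
#SBATCH --nodes=1
#SBATCH --gres=gpu:1        # the detection step needs (at least) one GPU
#SBATCH --time=02:00:00

# Step 1: convert the JP2 into an 8-bit GeoTIFF.
docker run --rm -v "$PWD":/data gdal-container \
    gdal_translate -ot Byte -scale /data/input.jp2 /data/input.tif

# Step 2: run the CDA on the GeoTIFF, producing a CSV of detections.
docker run --rm --gpus all -v "$PWD":/data cda-container \
    detect /data/input.tif /data/craters.csv

# Step 3: overlay the detection boxes, producing an annotated JPEG.
docker run --rm -v "$PWD":/data annotate-container \
    annotate /data/input.tif /data/craters.csv /data/annotated.jpg
```

Keeping each step in its own container means the GDAL, deep-learning, and Pillow dependencies never have to coexist in one environment, and each stage can be swapped out or rerun independently.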

Results and Future Considerations

Ultimately, the CDA was successfully tested with the HiRISE image data. The number of detections also increased with newly trained weights, showcasing the strength of the YOLOv3 algorithm and its use of transfer learning. Each annotated image was visually inspected in order to confirm that new craters had indeed been detected. In cases where a new crater was surrounded by dust (such that the crater boundary could not be visibly discerned), the CDA did not detect the presence of the crater. Nevertheless, the CDA was able to detect craters of varying size (including cases of craters within craters). The initial results are promising: given the compute capability at Pawsey and the containerised workflow, many images can be rapidly analysed at a rate that far exceeds what would ever be possible with manual inspection. The difficulty in detecting dusty craters can be addressed by incorporating such images into future training sets, improving the performance of the CDA in the long run. Novel methods such as this CDA will play an important role in the future study of the Martian cratering rate, leading to new insights into Mars' atmospheric processes and geological history.

Key Resources