Show simple item record

dc.contributor.supervisor: Howell, Kerry
dc.contributor.author: Cross, Eleanor
dc.contributor.other: Faculty of Science and Engineering [en_US]
dc.date.accessioned: 2024-07-01T10:02:04Z
dc.date.issued: 2024
dc.identifier: 10616597 [en_US]
dc.identifier.uri: https://pearl.plymouth.ac.uk/handle/10026.1/22579
dc.description.abstract:

Vast areas of the world’s deep sea are unexplored, and the recent threats of deep-sea mining and trawling make exploration of these habitats a time-sensitive concern. Much of our understanding of the deep sea is based on physical sampling of sediment habitats using trawls and grabs; modern surveys, however, increasingly acquire video footage using Remotely Operated Vehicles (ROVs). Image-based methods are particularly important for sensitive or vulnerable species such as cold-water corals, especially since these species tend to inhabit hard-substrate areas that cannot be sampled with trawls. Annotation of footage is a lengthy process requiring expert knowledge, and it represents a significant bottleneck in our ability to learn more about understudied hard-substrate habitats. Machine learning is a possible solution, but underwater images pose unique challenges on top of the problems these models generally face. This thesis reviews the state of the art in coral identification, machine learning in ecology, and object detection, and asks how machine learning can be applied to the problem of annotating organisms in ROV footage. Using the YOLOv5 model architecture, three machine-learning models were trained to explore how the choice of classes affects performance: Model 1 was trained with 16 classes (the 16 families of black and octocoral present in the training images), Model 2 with 17 classes (the same 16 classes as Model 1 plus an additional class, ‘Other Corals’), and Model 3 with 18 classes (the 17 classes from Model 2 plus an ‘Other’ class). The trained models were applied to validation and independent test sets of images to assess transferability between datasets. All models performed around ten times better than random (55% at best), with Model 1 achieving the highest scores on almost all metrics. Transferability to the independent test set was nevertheless good (a difference of only 0.05), indicating that transfer may be a viable option for future models. This study shows the potential for applying AI in the marine sciences and, more specifically, its possible use in annotating corals in video footage.

[en_US]
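The class configurations described in the abstract could be expressed as YOLOv5 dataset definition files. The sketch below is a hypothetical config for Model 2 (17 classes); the paths and the placeholder names Family01–Family16 are assumptions, since the actual coral family names are not listed in this record:

```yaml
# Hypothetical YOLOv5 dataset config for Model 2 (16 family classes + 'Other Corals').
# Family01-Family16 stand in for the 16 black and octocoral families; paths are
# illustrative, not taken from the thesis.
path: datasets/corals       # dataset root
train: images/train         # training images (relative to path)
val: images/val             # validation images (relative to path)
nc: 17                      # number of classes
names: [Family01, Family02, Family03, Family04, Family05, Family06, Family07,
        Family08, Family09, Family10, Family11, Family12, Family13, Family14,
        Family15, Family16, Other Corals]
```

Model 1 would drop the final class (nc: 16), and Model 3 would append a further 'Other' class (nc: 18).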
dc.language.iso: en
dc.publisher: University of Plymouth
dc.subject: machine learning [en_US]
dc.subject: marine biology [en_US]
dc.subject: coral [en_US]
dc.subject: cold water corals [en_US]
dc.subject: computer vision [en_US]
dc.subject: artificial intelligence [en_US]
dc.subject.classification: ResM [en_US]
dc.title: Application of deep-learning to deep-sea species identification from image-based data [en_US]
dc.type: Thesis
plymouth.version: publishable [en_US]
dc.identifier.doi: http://dx.doi.org/10.24382/5212
dc.rights.embargodate: 2025-01-01T10:02:04Z
dc.rights.embargoperiod: 6 months [en_US]
dc.type.qualification: Masters [en_US]
rioxxterms.funder: University of Plymouth [en_US]
rioxxterms.identifier.project: All [en_US]
rioxxterms.version: NA


Files in this item


This item appears in the following Collection(s)


