Application of deep-learning to deep-sea species identification from image-based data
dc.contributor.supervisor | Howell, Kerry | |
dc.contributor.author | Cross, Eleanor | |
dc.contributor.other | Faculty of Science and Engineering | en_US |
dc.date.accessioned | 2024-07-01T10:02:04Z | |
dc.date.issued | 2024 | |
dc.identifier | 10616597 | en_US |
dc.identifier.uri | https://pearl.plymouth.ac.uk/handle/10026.1/22579 | |
dc.description.abstract |
Vast areas of the world’s deep sea are unexplored, and the recent threats of deep-sea mining and trawling make exploration of these habitats a time-sensitive concern. Much of our understanding of the deep sea has been based on physical sampling of sediment habitats using trawls and grabs; however, modern methods of surveying the deep sea involve the acquisition of video footage using Remotely Operated Vehicles (ROVs). Image-based methods are particularly important for sensitive or vulnerable species such as cold-water corals, especially as these species tend to inhabit hard-substrate areas that cannot be sampled using trawls. Annotation of footage is a lengthy process requiring expert knowledge and represents a significant bottleneck in our ability to learn more about understudied hard-substrate habitats. Machine learning is a possible solution, but underwater images pose their own challenges on top of the problems generally faced when using these models. This thesis reviews the state of the art in coral identification, machine learning in ecology and object detection, and asks how machine learning can be applied to the problem of annotating organisms in ROV footage. Using the YOLOv5 model architecture, three machine-learning models were trained to explore how the choice of classes affects performance: Model 1 was trained with 16 classes (the 16 families of black corals and octocorals present in the training images), Model 2 with 17 classes (the same 16 classes as Model 1 plus an additional ‘Other Corals’ class) and Model 3 with 18 classes (the 17 classes from Model 2 plus an ‘Other’ class). The trained models were applied to validation and independent test sets of images to assess transferability between datasets. All models performed around ten times better than random (55% at best), with Model 1 achieving the highest scores on almost all metrics. Transferability to the independent test set was nevertheless good (a difference of 0.05), indicating that transfer may be a viable option for future models. This study shows the potential for the application of AI in marine sciences and, more specifically, its possible use in the annotation of corals in video footage. | en_US |
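The experimental design described in the abstract (three YOLOv5 models differing only in their class scheme) lends itself to a configuration-driven setup. The sketch below is a minimal, hypothetical illustration of how the three class schemes might be expressed as YOLOv5 dataset YAML files and trained with the standard train.py script from the ultralytics/yolov5 repository; the placeholder family labels, dataset paths and hyperparameters are assumptions for illustration only, not values taken from the thesis.

```python
# Hypothetical sketch: build dataset configs for the three class schemes
# (16, 17 and 18 classes) and launch one YOLOv5 training run per scheme.
# Family labels, paths and hyperparameters are placeholders, not thesis values.
import subprocess
from pathlib import Path

import yaml  # pip install pyyaml

# Placeholder labels standing in for the 16 black coral / octocoral families,
# which are not named in the abstract.
FAMILIES = [f"family_{i:02d}" for i in range(1, 17)]

CLASS_SCHEMES = {
    "model1_16cls": FAMILIES,                              # Model 1
    "model2_17cls": FAMILIES + ["other_corals"],           # Model 2
    "model3_18cls": FAMILIES + ["other_corals", "other"],  # Model 3
}

DATASET_ROOT = Path("datasets/rov_corals")  # assumed images/ and labels/ layout


def write_data_yaml(name: str, classes: list) -> Path:
    """Write a YOLOv5 dataset YAML for one class scheme and return its path."""
    cfg = {
        "path": str(DATASET_ROOT),
        "train": "images/train",
        "val": "images/val",
        "nc": len(classes),
        "names": list(classes),
    }
    out = Path(f"{name}.yaml")
    out.write_text(yaml.safe_dump(cfg, sort_keys=False))
    return out


if __name__ == "__main__":
    for name, classes in CLASS_SCHEMES.items():
        data_yaml = write_data_yaml(name, classes)
        # Standard YOLOv5 training invocation (assumes the ultralytics/yolov5
        # repo is cloned alongside this script); hyperparameters are illustrative.
        subprocess.run(
            [
                "python", "yolov5/train.py",
                "--img", "640",
                "--batch", "16",
                "--epochs", "100",
                "--data", str(data_yaml),
                "--weights", "yolov5s.pt",
                "--name", name,
            ],
            check=True,
        )
```

Keeping the class scheme in a separate data YAML per model means the same images and labels can be reused across all three runs, with only the label mapping changing between experiments.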
dc.language.iso | en | |
dc.publisher | University of Plymouth | |
dc.subject | machine learning | en_US |
dc.subject | marine biology | en_US |
dc.subject | coral | en_US |
dc.subject | cold water corals | en_US |
dc.subject | computer vision | en_US |
dc.subject | artificial intelligence | en_US |
dc.subject.classification | ResM | en_US |
dc.title | Application of deep-learning to deep-sea species identification from image-based data | en_US |
dc.type | Thesis | |
plymouth.version | publishable | en_US |
dc.identifier.doi | http://dx.doi.org/10.24382/5212 | |
dc.rights.embargodate | 2025-01-01T10:02:04Z | |
dc.rights.embargoperiod | 6 months | en_US |
dc.type.qualification | Masters | en_US |
rioxxterms.funder | University of Plymouth | en_US |
rioxxterms.identifier.project | All | en_US |
rioxxterms.version | NA | |
This item appears in the following Collection(s): 01 Research Theses Main Collection