Artificial intelligence is already widely used in marine science, but poor data quality and limited transparency are holding back its full impact, according to a new report from the International Council for the Exploration of the Sea (ICES).
The findings come from a 2025 workshop on AI applications in ICES work. The report shows that AI is now used for tasks such as species detection, catch monitoring, and data analysis. However, experts say progress depends on better data and stricter rules governing AI use.
AI is already used across fisheries science
AI tools are now applied across a wide range of marine science tasks. These include image recognition, acoustic data analysis, and automated monitoring of fishing activity.
The report highlights that computer vision and deep learning are the most common methods. They are mainly used to detect and identify fish species, analyse otolith images, and process video from fishing vessels.
Some systems can track fish through the catch process, count them, and estimate fishing effort. Others analyse large datasets to find patterns or support scientific advice.
AI is also used in text analysis. Large language models can assist with reports, coding, and data processing. However, their use remains limited and requires careful control.
Data quality is the main bottleneck
The biggest barrier to wider AI use is not technology, but data.
Experts point to a lack of suitable training data and issues with data quality. These problems reduce model accuracy and make it hard to scale solutions.
In some cases, large volumes of data exist, but they are not well organised or labelled. This limits their value for machine learning.
The report also notes that AI models are only as good as the data they are trained on: poor input leads to poor results, a challenge that cuts across all use cases.
Transparency and oversight flagged as critical risks
The report raises strong concerns about transparency and accountability.
AI systems can be difficult to explain. This creates problems for scientific work, where results must be reproducible and verifiable.
Experts say all AI outputs must be clearly labelled. Human oversight is essential, especially in areas with economic or environmental impact.
There are also concerns about bias, data security, and misuse of sensitive information. The report warns against relying on AI alone for decision-making.
In scientific publishing, AI-generated text is seen as a risk. It can appear correct but lack depth or contain errors. Without human review, quality may fall.
Monitoring and catch data seen as a high-impact area
One of the most promising areas for AI is remote electronic monitoring (REM) on fishing vessels.
AI can analyse video data to identify species, track catches, and improve reporting. This could reduce the need for manual review and cut costs.
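To illustrate the kind of aggregation such a system performs, here is a minimal sketch of how per-frame model detections might be rolled up into per-species catch counts. The function name `summarise_catch`, the `(track_id, species, confidence)` tuple format, and the confidence threshold are illustrative assumptions, not details from the ICES report.

```python
from collections import Counter

# Hypothetical per-frame output from an onboard species-classification model:
# one (track_id, species, confidence) tuple per detected fish per frame.
# Track IDs let the same fish, seen across many frames, be counted once.
def summarise_catch(detections, min_confidence=0.8):
    """Aggregate frame-level detections into per-species catch counts."""
    best = {}  # track_id -> (confidence, species), keeping the best sighting
    for track_id, species, confidence in detections:
        if confidence >= min_confidence:
            prev = best.get(track_id)
            if prev is None or confidence > prev[0]:
                best[track_id] = (confidence, species)
    return Counter(species for _, species in best.values())

# Example: a few frames of a simulated REM video feed.
frames = [
    (1, "cod", 0.95),      # fish 1 seen in one frame
    (1, "cod", 0.91),      # same fish in a later frame -- counted once
    (2, "haddock", 0.88),
    (3, "cod", 0.55),      # below threshold; would go to human review instead
]
print(summarise_catch(frames))  # Counter({'cod': 1, 'haddock': 1})
```

The confidence threshold reflects the report's emphasis on human oversight: low-confidence detections are excluded from automated counts rather than silently included.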
However, this use case also requires the most effort to implement. It depends on large datasets and complex systems.
If successful, it could deliver major gains in efficiency and data quality.
ICES calls for stronger AI governance
The report concludes that ICES must update its policies to manage AI use.
Key priorities include:
- Clear rules on transparency and disclosure
- Strong human oversight of all outputs
- Protection of sensitive data
- Limits on automated decision-making
ICES also plans further work on AI governance and training. A follow-up workshop is expected to support best practices and future development.
The report states that AI can improve productivity, but only with careful control and investment.