Ship-to-shore “AI CCTV” at sea: safety tool, or governance trap?

We’re not building crewless deep-sea shipping yet. We’re building the human-in-the-loop decade — and deciding, right now, whether “AI on board” assists the watch or audits the crew.

This edition is about the second path: ship-to-shore video + analytics sold as “risk management” and “fleet performance.”

TL;DR

  • Ship-to-shore AI CCTV changes incentives. It’s not just a sensor; it’s a governance model.
  • In a stressed workforce, always-on monitoring is not neutral — it can add pressure and reduce candour.
  • EU rules are already banning some workplace AI practices, including emotion recognition in workplaces (with limited exceptions).
  • “Assist the ship, don’t audit the crew” is a design choice: local processing, minimal retention, clear purpose limits.

Disclosure: I am the founder of ELNAV.AI and the designer of Aware Mate, a bridge watchkeeping monitoring system. I therefore have a commercial interest in this space. The views below are my own, based on more than three decades around ships.

Fairness / right of reply: If any organisation or vendor mentioned here wants to respond or correct the record, I’ll publish the response (or link to it) prominently.


1) The new normal: insurer-backed pilots for ship-to-shore video AI

NorthStandard and ShipIn have publicly described a collaboration to offer members a fully subsidised pilot of ShipIn’s FleetVision platform (BASE package on one vessel per participating member), with enrolment opening in early December 2025.

ShipIn describes FleetVision as an AI risk-management and fleet-performance platform that enables ship-to-shore collaboration using AI-powered maritime CCTV and visual analytics, with real-time alerts about onboard events.

None of this is automatically wrong. Insurers can support technology they believe reduces harm and claims.

But we should call the thing what it is:

When a system’s outputs are designed to be seen ashore, it’s not just “safety tech.” It’s an accountability architecture — and that architecture will shape behaviour on board.


2) What the platform category claims to do (in plain words)

ShipIn’s own “how it works” description is unusually clear about the model:

  • connect to on-vessel cameras “giving visibility” of operations
  • use onboard computer vision to turn large volumes of footage into “real-time intelligence” (ShipIn gives the example of “10,000 hours of monthly footage”)
  • enable “seafarers, fleet managers and owners” to see the same information and collaborate in real time

In ShipIn’s NorthStandard pilot announcement, FleetVision is described as transforming routine shipboard CCTV into “actionable insights,” including fire-prevention capabilities that fuse optical and thermal data to detect heat anomalies, leaks, smoke or haze, delivering alerts to crews and teams ashore.

That can absolutely produce safety value in certain domains (fire, security, operational anomalies).

My concern is not the existence of those capabilities. My concern is how they’re governed, and who the system ultimately serves when trade-offs appear.
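
To make “onboard computer vision” concrete, here is a deliberately minimal sketch (in Python) of the kind of local heat-anomaly check this product category describes. It is not ShipIn’s algorithm: every threshold, array size and name below is an invented assumption, and the only point is that such a check can run, and alert, entirely on board.

    import numpy as np

    # Illustrative thresholds only -- a real system would calibrate per
    # compartment and sensor; this is not any vendor's method.
    BASELINE_C = 40.0       # assumed normal max surface temperature
    ALERT_DELTA_C = 25.0    # rise above baseline treated as an anomaly
    MIN_HOT_PIXELS = 50     # ignore isolated sensor noise

    def heat_anomaly(frame_c: np.ndarray) -> bool:
        """True if a radiometric thermal frame (2-D, deg C) shows a hot
        region large enough to justify a *local* alert to the watch."""
        hot = frame_c > (BASELINE_C + ALERT_DELTA_C)
        return int(hot.sum()) >= MIN_HOT_PIXELS

    # Simulated 120x160 frame with a 10x10 hot spot (say, an overheating pump).
    frame = np.full((120, 160), 30.0)
    frame[40:50, 60:70] = 95.0

    if heat_anomaly(frame):
        print("LOCAL ALERT: heat anomaly -- notify the duty engineer first.")

Nothing in that loop requires shore connectivity. Whether the alert also goes ashore is a governance decision, not a technical necessity; that distinction is the subject of the rest of this piece.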


3) A stressed industry doesn’t need another invisible supervisor

Danica’s 2025 seafarer survey reporting is blunt:

  • 44% reported stress during their last contract (up from 35% in 2024)
  • 16% reported feeling mentally depressed
  • 42% expect to stop sailing before 55

[Chart] Selected indicators from Danica’s 2025 seafarer survey reporting, visualised by the author: stress 44% (2025) vs 28% (2019); rest-hours violations 37%; mental depression 16%.

So here’s the design question:

In a workforce already under strain, does always-on ship-to-shore monitoring reduce risk… or does it quietly add another form of pressure?

Because crews will experience it less as “extra eyes” and more as:

  • “Someone is watching, but not sharing the burden.”
  • “My judgement is recorded, but my context is not.”
  • “The video will outlive the fatigue, the weather, and the commercial pressure that created the moment.”

That is a cultural and human-factors issue — not a bandwidth issue.


4) The cultural physics: how “video truth” can undermine just culture

Video is powerful. That’s exactly why it needs strict boundaries.

Three predictable failure modes (not inevitabilities — risks):

1) Chilling effects on reporting and debriefing. If people believe near-misses can be replayed later for non-learning purposes, they will talk less — and the system loses the very early signals that prevent casualties.

2) Hindsight bias (“it’s obvious on the replay”). Footage makes it easy to focus on the last action (“wrong button”, “late call”, “why didn’t you…”) instead of the conditions that made the error likely: fatigue, workload, manning, schedule, alarm fatigue, bridge resource management dynamics.

3) Reframing the bridge as a monitored workplace. A bridge is already cognitively saturated. Adding an unseen remote audience can increase self-consciousness and “compliance theatre” precisely when calm judgement matters.

A just culture depends on a simple bargain: reporting is safe, learning is real, and blame is not automated.

Ship-to-shore CCTV analytics can support that bargain — but only if governance makes the bargain explicit.


5) Regulation is drawing red lines (and it matters at sea)

The EU AI Act is already in effect in part and explicitly bans certain “unacceptable risk” practices, including emotion recognition in workplaces and education institutions (with limited exceptions). The European Commission states that these prohibitions took effect in February 2025 and lists “emotion recognition in workplaces” among the prohibited practices.

[Photo] European Commission HQ (Berlaymont), Brussels — a reminder that “workplace AI” rules are tightening.

The Commission also published guidelines on prohibited AI practices (non-binding, but a strong signal of interpretation).

Two practical implications for maritime deployments (non-legal, operational reading — consult counsel for decisions):

  • If an onboard system is used in ways that resemble worker-management (evaluation, discipline, performance scoring), you’re moving into a highly sensitive category where fundamental-rights impacts are central.
  • Any vendor marketing that drifts toward stress/emotion inference in workplace contexts is entering a regulatory minefield — and ship operators should ask hard questions in writing before deployment.

This is not “EU bureaucracy.” It’s an attempt to stop exactly the pattern we’re tempted to normalise: monitor first, govern later.


6) Design principle: assist the ship, don’t audit the crew

Here’s the distinction I care about:

Assist means the system is built to help the crew do the job better in the moment, with minimal data and minimal downstream use.

Audit means the system is built to create a persistent record for remote review, scoring, benchmarking, or claims leverage.

Once you accept ship-to-shore streaming as the default, you shift incentives. The system no longer has one customer (the bridge team). It has multiple customers — and their interests diverge.

Good governance is not a “terms and conditions” paragraph. It’s architecture:

  • local processing by default
  • short retention (incident-only, time-boxed, purpose-limited)
  • clear access rules (who sees what, when, and why)
  • explicit separation between safety learning and discipline/claims (a minimal sketch of these defaults in code follows the diagram below)

[Diagram] Concept: a local “assist” architecture (onboard AI triggers local alerts and BNWAS escalation) vs a ship-to-shore “audit” architecture (video or event data flows to dashboards ashore).
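
What does “architecture, not a terms-and-conditions paragraph” look like in practice? Here is a minimal, hypothetical sketch of an assist-by-default policy object. The field names, roles and 72-hour window are my assumptions for illustration, not a standard and not any vendor’s schema.

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class FootagePolicy:
        """Illustrative 'assist the ship' defaults; every field is an assumption."""
        processing: str = "onboard"             # local processing by default
        stream_ashore: bool = False             # no always-on ship-to-shore video
        retention: str = "incident_only"        # keep clips only around defined events
        retention_hours: int = 72               # time-boxed, then auto-delete
        purposes: tuple = ("safety_learning",)  # discipline/claims deliberately absent
        viewers: tuple = ("master", "dpa")      # who sees what, when, and why

    ASSIST_DEFAULT = FootagePolicy()

    def may_access(policy: FootagePolicy, role: str, purpose: str) -> bool:
        """Access requires both a listed viewer and a permitted purpose."""
        return role in policy.viewers and purpose in policy.purposes

    # Refusal is structural, not a favour: "claims leverage" simply isn't a purpose.
    print(may_access(ASSIST_DEFAULT, "claims_handler", "claims_leverage"))  # False
    print(may_access(ASSIST_DEFAULT, "dpa", "safety_learning"))             # True

The value of writing it this way is that the separation between safety learning and discipline/claims lives in a code path an auditor can verify, rather than in a contract clause nobody on board has read.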


7) Before installing another camera, ask these questions

You don’t need a PhD in AI. You need procurement discipline.

[Image] Printable checklist: governance questions to answer before installing ship-to-shore camera analytics.

Ask, in writing (one structured way to capture the answers is sketched after the list):

  • What exactly leaves the ship (continuous video, clips, metadata)?
  • Who can access it, for how long, and under what governance?
  • Is any of it used to train models beyond your fleet?
  • Who owns the data and footage (owner / P&I / vendor)?
  • Is there independent evidence of safety benefit (not just testimonials)?
  • Does this support a just culture — or “blame-by-video”?
  • Will your SMS state that recordings cannot be the sole basis for discipline?
  • How does it fit with GDPR and the EU AI Act (high-risk / prohibited uses)?
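
If “ask, in writing” sounds abstract, one hypothetical way to make it procurement-grade is to hold the questions as data and refuse to proceed while any written answer is missing. The question wording mirrors the checklist above; the structure and keys are my invention, not an industry schema.

    # Question wording mirrors the checklist above; the keys are mine.
    VENDOR_QUESTIONS = {
        "data_leaving_ship": "What exactly leaves the ship (continuous video, clips, metadata)?",
        "access_retention":  "Who can access it, for how long, and under what governance?",
        "model_training":    "Is any of it used to train models beyond our fleet?",
        "data_ownership":    "Who owns the data and footage (owner / P&I / vendor)?",
        "safety_evidence":   "Is there independent evidence of safety benefit?",
        "just_culture":      "Does this support a just culture, or blame-by-video?",
        "sms_discipline":    "Will the SMS bar recordings as the sole basis for discipline?",
        "regulatory_fit":    "How does it fit with GDPR and the EU AI Act?",
    }

    def open_items(written_answers: dict) -> list:
        """Return the questions a vendor has not yet answered in writing."""
        return [q for k, q in VENDOR_QUESTIONS.items() if not written_answers.get(k)]

    # Two answers in hand, six still open: not ready to deploy.
    draft = {"data_leaving_ship": "Event clips and metadata only",
             "data_ownership": "Owner"}
    for question in open_items(draft):
        print("OPEN:", question)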

Closing: safety with dignity — and skills intact

Shipping needs better tools. Nobody serious disputes that.

But we should be careful not to build a future where:

  • humans remain fully responsible,
  • while their work becomes continuously extractable,
  • and learning culture quietly turns into compliance theatre.

If we want to recruit and retain professionals, we must design systems that support the watch, not systems that assume the watch is the problem.

Closing question (engineered for experts): Where would you draw the hard line in an SMS or procurement spec — what ship-to-shore video/analytics use is acceptable for safety, and what should be banned (discipline, claims leverage, benchmarking, etc.)?


ShipIn / NorthStandard pilot announcement (Nov 2025): https://shipin.ai/resources/northstandard-launches-free-pilot-initiative-to-expand-members-access-to-fleetvision/

ShipIn “How it works” (FleetVision): https://shipin.ai/how-shipin-works/

Danica Seafarer Survey 2025 reporting (Nov 2025): https://www.danica-maritime.com/wp-content/uploads/2025/10/Danica-Seafarer-Survey-Report-2025.pdf

European Commission – AI Act overview + prohibited practices list/timeline: https://digital-strategy.ec.europa.eu/en/policies/regulatory-framework-ai

European Commission – Guidelines on prohibited AI practices (Feb 2025): https://digital-strategy.ec.europa.eu/en/library/commission-publishes-guidelines-prohibited-artificial-intelligence-ai-practices-defined-ai-act