AI Lookouts on the Bridge – Assistance, Authority, and the Mental Model

By: a navigating officer who still believes in standing a proper watch

TL;DR

  • “Automated lookout” systems can genuinely improve detection and prioritization in congestion and poor visibility – if they’re built and used as assistants, not replacements.
  • The main risk in the situational-awareness lane isn’t ship‑to‑shore surveillance – it’s automation bias, deskilling, and authority confusion on a tired bridge.
  • Watchkeeping isn’t screen-watching. It’s continuous mental model building: what’s happening now, what’s likely next, and what “smells wrong.”
  • “Good” bridge AI makes uncertainty visible, fails loudly (no silent degraded modes), and supports training that keeps skills sharp.
  • If we want safety with dignity – and skills intact – we need procurement standards that measure human‑system performance, not marketing claims.

Officer seated at a ship’s bridge console with radar and navigation displays, looking out through forward windows over coastal water.

This is where authority lives. Any “automated lookout” changes the job at this chair, even if the hardware looks the same. Image credit: Ibrahim Boran / Pexels.

Disclosure: I am the founder of ELNAV.AI and the designer of Aware Mate, a bridge watchkeeping monitoring system. I therefore have a commercial interest in this space. The views below are my own, based on more than three decades around ships.

Fairness / right of reply: If any organization mentioned wants to respond or correct the record, I will publish the response (or link to it) prominently.


1) Why this matters now

Insurers don’t just reimburse casualties. Increasingly, they influence what gets installed on board.

NorthStandard has publicly announced a partnership with Orca AI as part of its Get SET offering, positioning situational awareness technology as a navigational safety measure.

That matters because the words we use shape the bridge.

When a product is marketed as a “fully automated lookout on the bridge”, the implied message is simple: the human lookout is the weak link.

My view is more careful: these tools can help – but only if we design and govern them so they extend watchkeeping rather than quietly hollow it out.

(Separate topic: ship‑to‑shore CCTV “event” analytics and dashboards. That’s a different governance problem, and I’ll cover it in a later edition.)


2) Steelman: what situational awareness AI can genuinely help with

Let’s be fair. There are real operational pain points these systems target:

  • Reduced visibility / night / backscatter / clutter where the human eye and raw video struggle.
  • High‑density traffic where the attention bottleneck becomes the limiting factor.
  • Cognitive load when the bridge is balancing radar, ARPA, ECDIS, comms, alarms, and paperwork.

If a system improves detection, prioritization, and timely prompting – and the bridge team remains clearly in command – that can be a net safety benefit.

The problem isn’t assistance.

The problem is when “assistance” becomes substitution by drift: the watch changes from active navigation to monitoring an automation layer.


3) The core risk: automation bias and authority drift

There’s a known human-factors pattern called automation bias: people tend to over‑rely on automated outputs, especially under workload, fatigue, or time pressure.

On a bridge, that shows up in familiar ways:

  • A confident label or alert nudges the OOW toward a conclusion too early.
  • Contradictory cues (a “wrong-looking” echo, an odd light pattern, a behavior mismatch) get discounted.
  • Over time, the bridge team adapts its habits: “the system will catch it.”

That’s not a moral failing. It’s predictable human cognition.

Which means the mitigation can’t be “be more vigilant.” It has to be designed in.


4) Vigilance is not screen-watching. It’s mental model building.

View from inside a ship’s bridge through angled windows over a bright sea; an azimuth ring on a gyro compass near the window, with distant snowy coastline under clouds.

Situational awareness starts outside. Tools should support the picture, not replace it. Image credit: Antoine Lamielle, via Wikimedia Commons (CC BY-SA 4.0).

Vigilance on watch is not just “staring at screens.” It is an active, continuous process of building and updating a mental model of the ship’s situation:

What traffic is likely to appear? How are wind and current affecting us? What can our ship and crew do right now? What “smells wrong” in the pattern of lights and echoes outside?

Here’s a very ordinary night in coastal waters:

An AI overlay confidently labels an approaching contact as a fishing vessel and, for a moment, everyone relaxes.

If you have been following that ship’s pattern on radar and ECDIS from the moment she appeared, you will already have a feel for where her gear probably lies and how she is working the grounds. You shape your track early to stay well clear of that gear, and you are not shocked when she later steams up to place herself between you and it – she is defending her livelihood, not starring in an AI demo.

That kind of anticipatory picture is built by a human mind paying attention over time, not by a single classification box on the screen.

If we treat that vigilance as an outdated inefficiency to be patched with neural networks, we will get exactly what we design for: officers who are very good at watching the AI – and less good at watching the sea.


5) What “good” looks like for bridge situational awareness AI

Empty ship’s bridge interior with navigation consoles and chairs facing forward windows; daylight sea visible ahead.

The interface is already crowded. ‘Helpful’ has to be disciplined. Image credit: NAC, via Wikimedia Commons (CC BY-SA 4.0).

If we want situational awareness AI without deskilling, I think these should be non‑negotiables (a short illustrative sketch follows the list):

1) Uncertainty must be visible. If every output looks equally confident, humans will over‑trust it. The system should show confidence/quality in a way that’s usable on watch.

2) No silent degraded modes. Lens fouling, glare, spray, saturation, misalignment, radar/AIS anomalies – when the system’s reliability drops, it should say so clearly (not fail “quietly”).

3) Alerts should prompt re-checking, not replace judgment. A good alert is: “Look again here, now.” A bad alert is: “All clear.”

4) The interface must preserve mental model building. The best designs help the OOW keep the “story” straight: behaviors, trends, intent – not just labels.

5) Training must include “AI wrong” drills. If a system is part of the watch routine, then “AI wrong” has to be part of the drills: misclassification, missed detection, false alarms, AIS errors, clutter, and edge cases.

6) Define the ODD in plain language. ODD (Operational Design Domain) means the conditions this system is intended to work in. If the ODD isn’t explicit, users will assume it’s universal – and that’s how surprises become incidents.
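
To make points 1, 2 and 6 concrete, here is a minimal sketch, in Python, of the shape an alert could take if confidence, sensor health, and ODD status travel as first-class fields. Every name and number in it (TargetAlert, SensorHealth, the 0.8 cut-off) is an illustrative assumption of mine, not any vendor’s API.

```python
from dataclasses import dataclass
from enum import Enum

class SensorHealth(Enum):
    NOMINAL = "nominal"
    DEGRADED = "degraded"        # e.g. lens fouling, glare, spray, saturation
    UNAVAILABLE = "unavailable"

@dataclass
class TargetAlert:
    """Illustrative alert payload: uncertainty and system health are explicit, visible fields."""
    bearing_deg: float
    range_nm: float
    label: str                   # e.g. "fishing vessel"
    confidence: float            # 0.0-1.0, shown to the OOW, never hidden
    sensor_health: SensorHealth  # degraded modes surface here, loudly
    inside_odd: bool             # was this produced inside the stated Operational Design Domain?
    prompt: str = "Look again here, now."  # an alert prompts re-checking; it never declares "all clear"

def should_escalate(alert: TargetAlert) -> bool:
    """A degraded sensor or an out-of-ODD condition escalates to the OOW,
    no matter how confident the classifier claims to be."""
    if alert.sensor_health is not SensorHealth.NOMINAL or not alert.inside_odd:
        return True
    return alert.confidence < 0.8    # illustrative threshold, not a recommendation
```

The specific numbers do not matter; the point is that confidence, health, and ODD status ride with every alert, so the interface cannot quietly drop them.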


6) What would count as evidence (before we scale it)

Testimonials are not evidence. Marketing videos are not evidence.

If we’re serious, “proof” should include at least some of this (a minimal tabulation sketch follows the list):

  • Detection performance by condition (day/night, rain, glare, sea state, congestion levels) – including missed detections, not just success stories.
  • False alarm rate and alert load (because attention is finite).
  • Human‑system performance in simulator and at sea: do OOWs make better decisions, earlier – or do they become passive monitors?
  • Degraded-mode behavior: how does the system behave when sensors are compromised?
  • Independent evaluation where possible (not only vendor-run trials).
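
To illustrate the first two bullets above, here is a minimal sketch of how per-condition detection performance and alert load could be tabulated from trial logs. The field names (“condition”, “detected”, “true_positive”) and the log format are assumptions made for the example, not a proposed standard.

```python
from collections import defaultdict

def detection_by_condition(ground_truth_targets):
    """Missed detections counted per condition (day/night, rain, glare, congestion...).
    Each record is one ground-truth target from a trial log."""
    stats = defaultdict(lambda: {"targets": 0, "detected": 0})
    for t in ground_truth_targets:
        s = stats[t["condition"]]
        s["targets"] += 1
        s["detected"] += 1 if t["detected"] else 0
    return {
        cond: {
            "detection_rate": s["detected"] / s["targets"],
            "missed": s["targets"] - s["detected"],
        }
        for cond, s in stats.items()
    }

def alert_load(alerts, watch_hours):
    """False-alarm rate and total alert load per watch hour - attention is finite."""
    false_alarms = sum(1 for a in alerts if not a["true_positive"])
    return {
        "alerts_per_hour": len(alerts) / watch_hours,
        "false_alarms_per_hour": false_alarms / watch_hours,
    }
```

Even this crude tabulation forces the questions that matter: what is the missed-detection count in the worst condition, and how many alerts per hour is the OOW expected to absorb?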

This isn’t about “gotcha.” It’s about building systems that remain safe when the bridge is tired, busy, and imperfect – because that’s normal operations.


7) AI for good: extend the watch without hollowing it out

None of this means AI-based situational awareness has no place on the bridge. Beyond the already useful gains from better imaging in reduced visibility, there are several areas where it can genuinely extend human watchkeeping instead of trying to replace it:

Humpback whale breaching at the ocean surface with water splashing, forested coastline and hills in the background.

AI for good: earlier mammal detection, fewer strikes, better reporting. Image credit: NOAA (public domain).

  • Marine mammal detection and reporting. Dedicated visual/thermal analytics can help detect large mammals, classify likely species, predict the most likely path, and automatically share sightings with other ships and coastal networks. That makes it easier to avoid strikes and to respect dynamic “no‑go” areas – something that is hard to achieve reliably from a single bridge with unaided human vision.
  • Piracy and small‑craft threat patterns. Behavior‑focused analytics can watch the wider picture for small craft whose approach patterns match known piracy/armed‑robbery profiles – repeated shadowing, high‑speed convergence, loitering in the wrong place at the wrong time – and give the bridge team an earlier, clearer prompt to reassess. The decision to call the master, harden the ship, or change plan still belongs to humans. The system helps spot the needle in the haystack.
  • Traffic evolution and intent support beyond classic ARPA. Traditional ARPA (even with trial maneuver functions) is still built on geometric projections of course and speed, one assumed maneuver at a time. More advanced tools can blend AIS, declared next ports, historical trading patterns, and typical route behavior to highlight which ships are most likely to interact with your passage plan over a chosen time window – not just under the current vector setting. Used as an adviser (not a pilot), this can reduce cognitive load in dense traffic (a toy sketch of what that ranking might look like follows this list).
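
As a toy sketch of that adviser-level ranking – every field, weight, and threshold below is an assumption of mine, not a description of any real product – consider blending the usual geometric CPA/TCPA picture with a simple behavioural prior:

```python
from dataclasses import dataclass

@dataclass
class Track:
    """One observed target. All fields are illustrative assumptions."""
    name: str
    cpa_nm: float               # closest point of approach from the usual geometric projection
    tcpa_min: float             # time to CPA, in minutes
    route_history_prior: float  # 0-1: how often ships on this trade/route end up crossing our plan

def interaction_score(track: Track, window_min: float = 60.0) -> float:
    """Blend the geometric picture with a behavioural prior.
    Purely illustrative weighting: the tool ranks, the human decides."""
    if track.tcpa_min < 0 or track.tcpa_min > window_min:
        geometric = 0.0                                  # outside the chosen time window
    else:
        geometric = max(0.0, 1.0 - track.cpa_nm / 2.0)   # closer CPA -> higher concern
    return 0.7 * geometric + 0.3 * track.route_history_prior

def rank_traffic(tracks: list[Track], window_min: float = 60.0) -> list[Track]:
    """Top of the list = look here first; nothing more."""
    return sorted(tracks, key=lambda t: interaction_score(t, window_min), reverse=True)
```

Note what the sketch does not do: it never takes a maneuver decision, and with no behavioural history available (prior of zero) it falls back to the plain geometric picture the OOW already understands.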

That’s the lane I want bridge AI to live in: assist the watch, preserve the watch.

The sea has not become less dangerous. We should be careful whose eyes we decide to trust – and what those eyes do to the people who still carry the responsibility.


Closing question

Masters/OOWs, pilots, Class, and P&I: If you had to mandate one requirement before approving an “automated lookout” for fleet rollout, which would it be – ODD limits, uncertainty display, degraded‑mode drills, or independent evaluation – and why?


Sources

NorthStandard press release – Orca AI partnership (Get SET): https://north-standard.com/insights-and-resources/resources/press-releases/northstandard-partners-with-orca-ai-to-offer-safety-benefits-of-situational-awareness-platform-as-part-of-get-set-suite

Orca AI – product positioning (“fully automated lookout on the bridge”): https://www.orca-ai.io/

Danica Seafarer Survey 2025 (stress, mental wellbeing, rest hours context): https://www.danica-maritime.com/wp-content/uploads/2025/10/Danica-Seafarer-Survey-Report-2025.pdf

Automation bias (systematic review; concept + evidence base): https://pmc.ncbi.nlm.nih.gov/articles/PMC3240751/