
AI on the Edge: Evaluating Options for Vessel Applications

Deploying AI on vessels means running systems offline with high reliability. We evaluated several compact AI devices for maritime use: analyzing video feeds, monitoring equipment, and supporting basic decision-making, all without an internet connection.

Jetson Orin Nano: Strong but Unstable

We tested the NVIDIA Jetson Orin Nano under real-world maritime conditions to assess its readiness for deployment on vessels. It showed good performance when handling straightforward tasks like analyzing camera feeds or monitoring environmental inputs. The device offered a compact, capable AI platform that, at first glance, seemed well-suited for space-constrained setups.

However, during extended testing, a number of serious limitations became clear. The setup process was complex, involving multiple steps that were easily broken by system updates. Keeping the system running reliably required manual fixes, as the documentation was often missing or outdated. The unit also proved to be unstable under moderate load, with unexpected reboots even when active cooling was in place.

Attempting to run more complex AI models (for example, those used in decision-making or basic AI assistance) quickly pushed the device to its limits. Memory became a bottleneck, and compatibility issues emerged when installing newer software components. These challenges underscored the fact that the device lives up to its classification as a developer kit—suitable for testing and experimentation, but not ready for dependable deployment in real-world maritime operations.

In summary, while the Orin Nano is a powerful tool for very specific and lightweight tasks, it lacks the maturity required for unattended, production-grade deployment at sea.

Verdict: Good for experiments and demos. Not ready for real operations at sea.

Google Coral TPU: Simple and Reliable

The Coral TPU by Google is designed specifically for running small and efficient AI models at the edge. It works well for detecting patterns in images or sound, making it a great option for fixed-function tasks like identifying equipment states, spotting safety risks, or monitoring for predefined anomalies.

One of its biggest strengths is how quickly and reliably it can be deployed. The setup process is straightforward and doesn't require deep technical knowledge, which makes it appealing for teams looking to integrate AI without committing extensive engineering resources. Once deployed, it runs smoothly with very little need for maintenance.

That said, the Coral TPU is limited in what it can do. It only supports specific types of models: they must be pre-trained, quantized to 8-bit integers, and compiled for the Edge TPU to run on Google's TensorFlow Lite framework. It's not built for flexibility or for running more interactive or complex AI logic, such as checklists or support tools that need to respond to changing input.
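To make the fixed-function workflow concrete, here is a minimal sketch of how a compiled model is typically invoked through the `tflite_runtime` package with the Edge TPU delegate, followed by plain-NumPy post-processing. The model path, labels, and threshold are illustrative, and the delegate-loading code is kept inside a function so the post-processing part runs without the hardware.

```python
import numpy as np

def make_interpreter(model_path, use_tpu=True):
    """Build a TFLite interpreter, optionally with the Edge TPU delegate.

    Requires the tflite_runtime package (and libedgetpu when use_tpu=True);
    the import is local so this sketch can be read without the hardware.
    """
    from tflite_runtime.interpreter import Interpreter, load_delegate
    delegates = [load_delegate("libedgetpu.so.1")] if use_tpu else []
    interpreter = Interpreter(model_path=model_path,
                              experimental_delegates=delegates)
    interpreter.allocate_tensors()
    return interpreter

def top_detection(scores, labels, threshold=0.6):
    """Pick the top-scoring class; report it only above a confidence threshold."""
    idx = int(np.argmax(scores))
    if scores[idx] >= threshold:
        return labels[idx], float(scores[idx])
    return None, 0.0

# Post-processing demo with dummy scores (no hardware needed for this part):
label, score = top_detection(np.array([0.1, 0.8, 0.1]),
                             ["ok", "pump_fault", "unknown"])
```

The split mirrors how these deployments tend to look in practice: the accelerator handles one frozen model, and everything around it is ordinary host-side code.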

Verdict: A stable and efficient solution for predefined detection tasks, but not suitable for anything that requires broader AI reasoning or adaptability.

CPUs: Still the Best for Smart Agents

When it comes to running AI systems that help with decisions, provide instructions, or offer onboard support without an internet connection, regular computer processors (CPUs) remain the most reliable and accessible solution.

Unlike specialized chips designed for specific tasks, CPUs are general-purpose and can run a wide variety of software without special setup. This makes them ideal for smart agents that guide crew through checklists, assist with troubleshooting, or adapt to different situations. These agents do not require heavy graphics processing or large amounts of memory—just enough computing power to interpret instructions, respond to events, and maintain consistent behavior.
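A checklist-guiding agent of the kind described above needs surprisingly little machinery. The sketch below is a minimal, hypothetical example (the step names are invented) of the sort of stateful, rule-based logic that runs comfortably on any CPU:

```python
from dataclasses import dataclass, field

@dataclass
class ChecklistAgent:
    """Tiny offline agent that walks a crew through an ordered checklist."""
    steps: list
    done: list = field(default_factory=list)

    def current(self):
        # The next unfinished step, or None when the checklist is complete.
        if len(self.done) < len(self.steps):
            return self.steps[len(self.done)]
        return None

    def confirm(self, note=""):
        # Record completion of the current step and return the next one.
        step = self.current()
        if step is None:
            raise RuntimeError("checklist already complete")
        self.done.append((step, note))
        return self.current()

# Hypothetical pre-departure checklist:
agent = ChecklistAgent(steps=["check bilge pumps", "verify radio", "log fuel level"])
agent.confirm("both pumps nominal")
```

Even with a small language model layered on top for natural-language interaction, the control flow stays this simple, which is exactly why such agents fit on general-purpose processors.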

Another major advantage is that CPUs operate in a stable and familiar environment. Most operating systems and deployment tools are already built for CPU-based systems, eliminating the need for low-level configuration or hardware-specific patches. This significantly reduces setup and maintenance effort, which is especially valuable on vessels where technical support is limited.

In our own tests, CPUs were the only option that could reliably run basic AI agents without needing workarounds or encountering resource constraints. With the use of smaller, optimized models, it is entirely possible to deploy helpful onboard tools that operate independently and support the crew in real time, even in completely disconnected environments.

A further benefit of CPU-based systems is their ability to share resources across multiple services. By running everything in a virtualized environment, we can dynamically allocate CPU power where it is most needed, helping to smooth out performance during temporary demand spikes and making efficient use of available processing capacity.
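As one possible illustration of this resource sharing, a container orchestrator such as Docker Compose can assign each service a hard CPU ceiling plus a relative weight that only matters under contention. The service and image names below are hypothetical:

```yaml
# docker-compose.yml (illustrative): soft CPU shares let the detector burst
# when the assistant is idle, instead of pinning each service to a core.
services:
  camera-detector:
    image: vessel/detector:latest    # hypothetical image name
    cpus: "2.0"                      # hard ceiling
    cpu_shares: 1024                 # relative weight under contention
  checklist-assistant:
    image: vessel/assistant:latest   # hypothetical image name
    cpus: "1.0"
    cpu_shares: 512
```

The practical effect is that brief demand spikes in one service borrow idle cycles from the others, which matches the behavior described above.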

BitNet: New Tech Worth Watching

BitNet is an experimental AI model architecture developed by Microsoft that focuses on extreme efficiency. Unlike traditional AI models, BitNet uses a technique called 1-bit quantization, which dramatically reduces how much memory and processing power the model needs to operate. This makes it especially promising for small, general-purpose processors like those found in many embedded and onboard systems.
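The core idea is easy to demonstrate. The newer BitNet b1.58 variant constrains each weight to {-1, 0, +1} with a single per-tensor scale (so matrix multiplies reduce to additions and subtractions); the NumPy sketch below shows that "absmean" quantization scheme in miniature, with made-up weights:

```python
import numpy as np

def ternary_quantize(w, eps=1e-8):
    """BitNet-b1.58-style absmean quantization: weights become {-1, 0, +1}
    plus one per-tensor scale factor."""
    scale = np.mean(np.abs(w)) + eps
    q = np.clip(np.round(w / scale), -1, 1)
    return q.astype(np.int8), scale

def ternary_matmul(x, q, scale):
    # Dequantization folds into a single multiply at the end:
    # x @ (q * scale) == (x @ q) * scale
    return (x @ q) * scale

w = np.array([[0.4, -0.9], [0.05, 0.7]])
q, s = ternary_quantize(w)        # q holds only -1, 0, and +1
y = ternary_matmul(np.ones(2), q, s)
```

Storing weights this way cuts memory by more than an order of magnitude versus 16-bit floats, which is what makes CPU-only deployment of such models plausible.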

While still in early development, BitNet represents a potentially important breakthrough for maritime use cases. With this kind of model, it might become feasible to run basic AI agents such as voice-assisted checklists or maintenance helpers directly on compact devices, without requiring a cloud connection or specialized hardware. It is not intended for large-scale decision-making or deep reasoning, but for simple and interactive onboard tools, it could prove to be a game-changer.

Currently, BitNet is still evolving. Only a few test models have been released, and the ecosystem is just starting to take shape. There's still work needed before it's ready for production use. However, the direction is exciting and aligns well with the needs of isolated environments like vessels.

Verdict: BitNet is a technology to watch. It’s not ready yet, but it holds strong potential for enabling lightweight AI assistants in future offline maritime systems.


Summary

There is no single solution that fits all scenarios when deploying AI at the edge, especially in maritime environments. Each platform we evaluated (Jetson, Coral, CPU-based setups, and even early-stage models like BitNet) offers different strengths and trade-offs depending on the use case.

Some devices are well suited for handling visual monitoring or predefined detection tasks, while others shine when used for logic-based agents or interactive tools. The key takeaway is that choosing the right approach depends heavily on the specific goals and technical context of the deployment.

The space is evolving quickly. New lightweight models, improved hardware, and better software support are emerging all the time. For that reason, we recommend reevaluating available options when planning any future AI deployment onboard. What isn't feasible today might become viable in the near future.