SPS Networks

2026-04-04 · 7 min read

Why maritime connectivity keeps failing at deployment

The procurement decision gets the attention. Vendors pitch throughput, fleet managers evaluate options, someone signs the contract, and the hardware ships. What happens after the hardware ships receives a fraction of that attention — and accounts for a disproportionate share of the problems that follow.

Deployment failure in maritime connectivity is not usually a hardware failure or a coverage gap. It is an organisational failure: the wrong parties in the room at commissioning, configuration decisions made by people without the context to make them, handover to crew who were not trained on the system, and integration with vessel systems that was never tested before go-live. The result is a system that underperforms against its specification from day one — and frequently continues to underperform because the root cause is never diagnosed.

Configuration is where specifications go to die

A connectivity specification can be technically correct and operationally irrelevant if the configuration that implements it is wrong. Network segmentation between the communications layer and vessel OT systems is a standard recommendation in every maritime cybersecurity framework. It requires VLAN configuration on a managed switch during installation. In practice, the installation team — whose scope covers physical mounting, cable routing, and terminal commissioning — often does not include network engineering. The switch gets connected. The VLANs do not get configured. The specification says segmented network. The delivered system is flat.
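The gap is easy to detect if anyone looks for it: a segmented design assigns OT and crew traffic to distinct VLANs, while a flat delivery leaves every port on the default VLAN. A minimal sketch of that check, where the port names and VLAN IDs are illustrative examples rather than values from any real installation:

```python
# Illustrative sketch: compare a switch's as-built port-to-VLAN map against
# the segmentation the specification called for. Port names and VLAN IDs
# are hypothetical examples.

SPECIFIED = {
    "bridge_ecdis": 10,   # vessel OT systems
    "engine_mon":   10,
    "crew_wifi":    20,   # crew / welfare traffic
    "admin_pc":     30,   # business IT
}

def segmentation_findings(as_built: dict[str, int]) -> list[str]:
    """Return a list of deviations between as-built and specified VLANs."""
    findings = []
    for port, vlan in as_built.items():
        want = SPECIFIED.get(port)
        if want is not None and vlan != want:
            findings.append(f"{port}: on VLAN {vlan}, specified VLAN {want}")
    if len(set(as_built.values())) == 1:
        findings.append("all ports share one VLAN: network is flat, not segmented")
    return findings

# A 'commissioned' switch where the VLANs were never configured:
flat = {port: 1 for port in SPECIFIED}
for finding in segmentation_findings(flat):
    print(finding)
```

Run against the delivered configuration on commissioning day, a check like this turns a silent specification failure into a line item on the acceptance report.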

QoS is the same problem in a different register. A vessel running Starlink as primary and VSAT as fallback needs traffic prioritisation rules that protect operational data streams when the primary link is congested or failing. Those rules are a configuration file. They are not part of a standard terminal installation. If no one writes them, the system defaults to treating all traffic equally — which means a crew member's video call competes with bridge ECDIS data on a degraded link.
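The substance of those missing rules is small. A sketch of the prioritisation logic as data, with hypothetical class names and priority values standing in for whatever scheme the actual platform uses:

```python
# Illustrative sketch: traffic-priority rules expressed as data. Class names
# and priority values are hypothetical examples, not a vendor's QoS schema.

PRIORITY_RULES = [
    ("ecdis",      0),  # bridge navigation data: served first
    ("engine_mon", 1),  # operational telemetry
    ("email",      2),
    ("crew_video", 3),  # crew welfare: served last on a constrained link
]

def priority_of(traffic_type: str) -> int:
    """Return the priority class for a traffic type (lower = served first).
    Unknown traffic falls to the lowest class rather than competing with
    operational streams."""
    for name, prio in PRIORITY_RULES:
        if name == traffic_type:
            return prio
    return max(prio for _, prio in PRIORITY_RULES)

# On a degraded link, queue order decides who gets the bandwidth:
queue = sorted(["crew_video", "ecdis", "email"], key=priority_of)
print(queue)  # ECDIS first, crew video last
```

If no one writes even this much down, the default is the flat queue described above: every packet equal, including the ones the bridge depends on.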

The gap between what was specified and what was configured is rarely visible until the system fails under load.

The multi-party accountability gap

A typical vessel connectivity deployment involves at minimum four parties: the hardware vendor, the satellite carrier, the managed service or integrator layer, and the vessel operator's IT function. In many cases there is also a ship management company between the operator and the vessel, and a flag state or class society that has approved the installation method but not the configuration outcome.

None of these parties owns the end-to-end system outcome. The hardware vendor is responsible for the terminal. The carrier is responsible for the satellite link. The integrator is responsible for what the scope of works says — which was written during procurement and may not reflect what the vessel's network actually looks like. The operator's IT function may never have boarded the vessel. The ship manager co-ordinates the installation window and then disengages.

When something does not work post-installation, each party can correctly identify why their component is not at fault. The problem lives in the gaps: the interface between the terminal and the onboard network, the handoff logic between bearers, the integration between the connectivity platform and the shore-side management system. These are the pieces that no one owned during installation and no one owns during fault resolution.

When four parties are involved and the system does not work, the party with the problem is the fleet manager. The party accountable for fixing it is whoever can be pressured the most.

Physical installation decisions that last for years

Antenna placement is a structural decision. Once a VSAT dome or Starlink terminal is mounted, its position determines its line of sight, its exposure to mast shadow, and its interference profile for the life of the installation. In practice, antenna placement is frequently determined by the shipyard schedule and the convenience of the cable run, not by RF performance analysis. A terminal mounted in the shadow of a funnel can lose 15 to 20 percent of its usable service time on certain headings. That loss persists until the antenna is moved, which requires another haulout and another installation window.
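The service-time cost of a blocked arc can be estimated before the mount point is fixed. A back-of-envelope sketch, where the 60-degree blocked arc and the assumption of uniformly distributed relative satellite azimuths are illustrative simplifications rather than a survey result:

```python
# Back-of-envelope sketch: estimate usable service time lost to a funnel or
# mast shadow. The blocked arc and the uniform-azimuth assumption are
# illustrative simplifications, not measurements from a real vessel.

def blocked_fraction(arc_start_deg: float, arc_end_deg: float) -> float:
    """Fraction of azimuths blocked, assuming the satellite's relative
    azimuth is uniformly distributed over 0-360 degrees. Handles arcs
    that wrap through north (e.g. 350 to 10)."""
    width = (arc_end_deg - arc_start_deg) % 360
    return width / 360.0

# A funnel shadowing a 60-degree arc astern of the antenna:
loss = blocked_fraction(150, 210)
print(f"{loss:.1%} of azimuths blocked")
```

A real RF survey accounts for elevation and constellation geometry, but even this crude model shows why a shadowed mount sits in the loss range cited above, and why the decision deserves more than a cable-run convenience check.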

The same issue applies to cable routing: a long coaxial run with substandard connectors, or a power supply shared with equipment that introduces electrical interference, is a decision made at commissioning that shows up as intermittent degradation for years. The engineer who makes those calls is often the only person who knows they were made. There is typically no as-built documentation that captures them. The vessel underperforms, the cause is unknown, and the operator continues paying for service quality they are not receiving.

Crew handover: the point where institutional knowledge evaporates

The commissioning engineer understands the system. The crew who operate the vessel do not, and the commissioning engineer is on the next flight off. The information that does not transfer is not theoretical — it is operational: how to determine which bearer is active, what the fallback sequence looks like, how to reset the terminal without creating a configuration conflict, who to call when the system degrades and what information they need to diagnose it.

Most crew training on connectivity systems covers how to use the Wi-Fi. It does not cover what to do when the system is underperforming. The result is that vessel crews — who are the first observers of connectivity problems and the first responders to them — have no structured procedure for fault reporting and no baseline to compare against. Shore support receives a ticket that says "internet slow" with no further context, and the diagnostic process starts from scratch every time.
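The fix is a fault report with structure. A sketch of what one could capture, with hypothetical field names and a hypothetical degradation threshold, to show the difference between an impression and a diagnosable report:

```python
# Illustrative sketch: a structured fault report to replace "internet slow".
# Field names and the 50% degradation threshold are hypothetical; the point
# is that the crew records bearer state and a measured baseline, not a feeling.

from dataclasses import dataclass, asdict

@dataclass
class FaultReport:
    vessel: str
    active_bearer: str       # which link is carrying traffic right now
    fallback_engaged: bool   # did the system fail over?
    measured_mbps: float     # from an onboard speed check
    baseline_mbps: float     # recorded at commissioning
    symptom: str

    def is_degraded(self) -> bool:
        """Flag degradation against the commissioning baseline, not a guess."""
        return self.measured_mbps < 0.5 * self.baseline_mbps

report = FaultReport(
    vessel="MV Example", active_bearer="vsat", fallback_engaged=True,
    measured_mbps=1.2, baseline_mbps=8.0, symptom="video calls dropping",
)
print(asdict(report))
print("degraded:", report.is_degraded())
```

A form this simple, filled in by the crew, tells shore support which bearer is active, whether failover has occurred, and how far performance has fallen from the commissioned baseline before the first diagnostic call is made.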

Integration that was never tested

A connectivity system does not exist in isolation. It connects to bridge systems, planned maintenance platforms, remote monitoring tools, and crew welfare portals. In many installations, these integrations are tested at best in a simulated environment before deployment, and often not tested at all. The assumption is that if the connectivity link is up, everything that uses it will work. That assumption fails when bandwidth is constrained, when latency exceeds an application's timeout threshold, or when the routing configuration does not permit the traffic type the application requires.

Remote diagnostic tools for engine monitoring frequently depend on a persistent low-latency connection to a shore-side platform. When that connection is interrupted by a bearer failover — which changes the vessel's IP address — the session drops and must be re-established manually. If the remote diagnostic platform has not been configured to handle reconnection gracefully, and if the crew does not know to expect this behaviour, the tool is effectively unavailable during exactly the conditions when the connectivity system is under stress. The integration failure was present from day one. It was not discovered because it was not tested.
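Handling that reconnection gracefully is a small amount of client logic. A sketch of the pattern, with a simulated connection standing in for the real shore-side session and hypothetical retry parameters:

```python
# Illustrative sketch: re-establish a shore-link session after a bearer
# failover changes the vessel's address, instead of waiting for a manual
# restart. The connect function and backoff values are hypothetical.

import itertools
import time

def reconnect(connect, max_attempts: int = 5, base_delay: float = 1.0):
    """Retry connect() with exponential backoff until it yields a session."""
    for attempt in range(max_attempts):
        try:
            return connect()
        except ConnectionError:
            time.sleep(base_delay * 2 ** attempt)  # 1s, 2s, 4s, ...
    raise ConnectionError("link did not recover within the retry budget")

# Simulate a failover: the first two attempts land mid-switchover and fail.
attempts = itertools.count()
def flaky_connect():
    if next(attempts) < 2:
        raise ConnectionError("address changed during bearer failover")
    return "session-established"

print(reconnect(flaky_connect, base_delay=0.01))
```

Whether this logic belongs in the diagnostic platform, a VPN layer, or the vessel's router is an integration decision — which is exactly the kind of decision that goes unmade when no one owns the interfaces between components.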

What a deployment that works actually requires

The operators whose connectivity deployments work are not using different hardware. They are running the deployment with a different process: a commissioning checklist that covers configuration and integration, not just physical installation; an acceptance test with defined pass criteria that must be satisfied before the commissioning engineer leaves the vessel; a handover document that gives the next person to touch the system enough context to understand what was built and why.

The acceptance test is the single most impactful intervention available to a fleet manager. It converts the commissioning event from a handover on trust to a handover on evidence. If bearer failover does not execute correctly under a simulated primary outage, that failure is found on commissioning day — not three months later in the middle of a North Atlantic transit. The criteria for passing are defined before the installation team boards the vessel. The cost of a failed test is a delayed go-live. The cost of no test is years of unexplained underperformance.
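An acceptance test of this kind reduces to criteria with thresholds, evaluated against what was actually measured on commissioning day. A sketch, where the criteria names and thresholds are hypothetical examples of what a fleet manager might define, not a standard:

```python
# Illustrative sketch: acceptance criteria as executable checks with defined
# pass thresholds, evaluated before the commissioning engineer leaves.
# Criteria names and thresholds are hypothetical examples.

ACCEPTANCE_CRITERIA = {
    "failover_seconds": lambda v: v <= 30,    # under a simulated primary outage
    "vlan_segmented":   lambda v: v is True,  # OT and crew traffic separated
    "qos_rules_loaded": lambda v: v is True,
    "throughput_mbps":  lambda v: v >= 5.0,   # against the contracted minimum
}

def acceptance_result(measurements: dict) -> tuple[bool, list[str]]:
    """Return (passed, failed criteria). A missing measurement fails:
    an untested criterion is an unproven one."""
    failed = [name for name, check in ACCEPTANCE_CRITERIA.items()
              if name not in measurements or not check(measurements[name])]
    return (not failed, failed)

passed, failed = acceptance_result({
    "failover_seconds": 45,   # failover worked, but too slowly
    "vlan_segmented": True,
    "qos_rules_loaded": True,
    "throughput_mbps": 7.2,
})
print("go-live approved:", passed, "| failed:", failed)
```

Note the design choice: an unmeasured criterion fails rather than passes, which is what converts the handover from trust to evidence.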

The deployment process is the part of maritime connectivity that the industry has not standardised. Satellite technology, SLA structures, and cybersecurity frameworks all have published guidance. Commissioning procedures and acceptance criteria are still negotiated individually on every project, or skipped entirely. That is where the fix is — and it does not require a new terminal, a new contract, or a new provider. It requires someone to own the outcome before the vessel leaves port.
