SPS Networks

2026-04-02 · 6 min read

Why your VSAT uptime report is not telling you the truth

The monthly report arrives from your VSAT provider. Uptime: 99.3 percent. No critical incidents. Service within SLA parameters.

Meanwhile, your operations team spent three days last month chasing a vessel that went effectively unreachable during its transit of the Straits of Malacca. The master reported degraded performance for most of the passage. Your crew welfare application barely functioned. But the SLA report says 99.3 percent.

Both things are true. That is the problem.

How vendors measure uptime

Your VSAT provider measures uptime at the network level — typically from their gateway or network operations centre, not from your vessel. The measurement asks: was the satellite link technically active? It does not ask: was the link usable?

A link can be technically active while delivering a fraction of its contracted throughput. Beam contention — the shared nature of geostationary satellite capacity — means that performance degrades precisely when and where you need it most: in busy shipping lanes, near major ports, during peak traffic hours.

The satellite terminal is connected. The report marks it green. Your vessel's applications are struggling.

The carve-outs

SLA documents in maritime VSAT contracts contain exclusion clauses that have become standard practice. Weather events beyond a defined threshold are excluded. Solar interference windows are excluded. Scheduled maintenance windows — which can account for several hours per month — are excluded. Force majeure clauses are written broadly.

These exclusions are not unreasonable in isolation. The problem is cumulative. When you add up the excluded hours and the technically-active-but-degraded hours, the gap between the reported figure and operational experience becomes significant. That figure survives because the measurement methodology is designed to produce it.
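To see how the gap compounds, here is a minimal sketch of that arithmetic. All the hours and percentages are invented for illustration, not taken from any real report:

```python
# Illustrative sketch: how excluded and degraded hours erode a
# headline uptime figure. Every number here is a hypothetical example.

HOURS_IN_MONTH = 720  # 30-day month

reported_uptime = 0.993  # vendor's headline figure
outage_hours = HOURS_IN_MONTH * (1 - reported_uptime)  # ~5 h actually counted

# Hours the SLA excludes from measurement entirely (assumed values)
excluded_hours = {
    "scheduled_maintenance": 4.0,
    "weather_above_threshold": 3.0,
    "solar_interference": 1.5,
}

# Hours the link was "up" but delivered well below the contracted rate,
# e.g. beam contention in busy lanes (assumed value)
degraded_hours = 22.0

unusable = outage_hours + sum(excluded_hours.values()) + degraded_hours
operational_uptime = 1 - unusable / HOURS_IN_MONTH

print(f"Reported uptime:    {reported_uptime:.1%}")     # 99.3%
print(f"Operational uptime: {operational_uptime:.1%}")  # ~95.1%
```

With these (hypothetical) inputs, a 99.3 percent headline becomes roughly 95 percent operational availability: the excluded and degraded hours dwarf the five hours the counter ever saw.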

What beam contention actually does

All terminals on a geostationary beam share the same capacity. When multiple vessels transit the same high-traffic region simultaneously — the Straits of Malacca, the approaches to Rotterdam, the English Channel — they compete for capacity on the same beam. Throughput per vessel drops. Latency rises. Applications that require consistent bandwidth degrade or fail.

None of this registers as an outage. The terminal is connected. The uptime counter keeps running. The report stays green.
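The arithmetic of contention is simple to sketch. Real beam scheduling uses CIR/MIR tiers and QoS weighting rather than an even split, and the capacity figures below are invented, but the shape of the curve is the point:

```python
# Illustrative sketch: per-vessel throughput on a shared GEO beam.
# Beam capacity, contracted rate, and vessel counts are hypothetical;
# real schedulers weight by service tier rather than splitting evenly.

BEAM_CAPACITY_MBPS = 200.0   # total forward capacity on the beam
CONTRACTED_MIR_MBPS = 10.0   # per-vessel maximum information rate

def per_vessel_throughput(active_vessels: int) -> float:
    """Even split of beam capacity, capped at the contracted rate."""
    fair_share = BEAM_CAPACITY_MBPS / active_vessels
    return min(CONTRACTED_MIR_MBPS, fair_share)

for vessels in (10, 40, 100):
    rate = per_vessel_throughput(vessels)
    print(f"{vessels:>3} vessels on beam -> {rate:.1f} Mbps each")
```

At ten vessels everyone gets their contracted rate; at a hundred, each terminal is still "up" while delivering a fifth of it. The uptime counter never notices the difference.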

What your report should show but does not

The standard monthly report demonstrates that the provider met its contractual obligations — and nothing more. That is a different document from one designed to give you operational visibility. Here is what should be in it:

Bearer-level uptime by zone. Not aggregate network uptime, but uptime per bearer (VSAT, LTE, L-band), broken down by operating zone: port, coastal, open ocean. A vessel that loses VSAT in open ocean and falls back to L-band has experienced an event that matters operationally, even if headline uptime stays high.

Throughput against contracted rate. Not just whether the link was active, but the throughput actually delivered relative to what was contracted, by hour and by zone.

Latency percentiles. Average latency hides the spikes that break real-time applications. P95 and P99 matter; the mean does not.

Ticket volume and resolution time. How many fault reports were raised, and how long each took to resolve. This is the operational burden your team carries that never appears in a headline SLA figure.

Bearer failover events. How many times the terminal failed over to a secondary bearer. Each failover is an admission that the primary was insufficient. It should be counted and explained.
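None of these metrics requires exotic tooling; given per-terminal telemetry, each is a few lines of aggregation. The sketch below is illustrative only: the `Sample` record layout, bearer names, and zone labels are assumptions, not any provider's actual export format.

```python
# Illustrative sketch: deriving the report metrics above from raw link
# telemetry. The record layout and field names are assumed, not a
# real provider's export format.
from dataclasses import dataclass
from statistics import quantiles

@dataclass
class Sample:
    """One polled reading from a vessel terminal (hypothetical layout)."""
    bearer: str          # "vsat", "lte", "lband"
    zone: str            # "port", "coastal", "open_ocean"
    link_up: bool
    throughput_mbps: float
    latency_ms: float

def bearer_uptime_by_zone(samples):
    """Fraction of up samples per (bearer, zone), not one aggregate number."""
    buckets = {}
    for s in samples:
        up, total = buckets.get((s.bearer, s.zone), (0, 0))
        buckets[(s.bearer, s.zone)] = (up + int(s.link_up), total + 1)
    return {key: up / total for key, (up, total) in buckets.items()}

def delivered_vs_contracted(samples, contracted_mbps):
    """Mean delivered throughput as a fraction of the contracted rate.
    Caller filters to the bearer of interest first."""
    rates = [s.throughput_mbps for s in samples if s.link_up]
    return sum(rates) / (len(rates) * contracted_mbps)

def latency_percentiles(samples):
    """P95/P99 latency over up samples; an average would hide the spikes."""
    latencies = sorted(s.latency_ms for s in samples if s.link_up)
    cuts = quantiles(latencies, n=100)
    return {"p95": cuts[94], "p99": cuts[98]}

def failover_events(samples, primary="vsat"):
    """Count transitions away from the primary bearer, in sample order."""
    events, prev = 0, None
    for s in samples:
        if prev == primary and s.bearer != primary:
            events += 1
        prev = s.bearer
    return events
```

Run over a month of samples and grouped by vessel, these four functions produce most of the table the standard report omits, which is a useful benchmark for what to demand in the next contract round.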

The accountability gap

No independent audit standard exists for maritime VSAT performance reporting. The Smart Maritime Council — the industry body whose membership includes BIMCO and IMO participants — spent twelve months developing a standardised dataset for noon report data, released in May 2024. It was the first time the industry had agreed on what those fields should contain, for a reporting format vessels have produced for over a century. Satellite performance reporting is further behind still.

No third party publishes benchmarks against which a fleet manager can compare their numbers. The industry has accepted vendor-defined measurement as the norm, and vendors have no commercial incentive to change it.

The fleet manager who reads the monthly report and files it has accepted that arrangement. The one who interrogates it — comparing reported uptime against crew feedback, application performance logs, and failover events — is working with a materially different picture of their fleet.

The question is not whether your VSAT provider is meeting its SLA. It almost certainly is. The question is whether the SLA measures anything your operation depends on. The starting point is specifying in the contract what gets measured, before it is signed.

Orbit measures what your SLA should.

Multi-bearer visibility, incident governance, and monthly SLA packs — independent of your bearer providers.

Book a call

An Integral company

One control plane. One SLA. One partner.