
Throttling That Doesn’t Suck

Years ago, some cell carriers introduced “soft caps” for data. Subscribers who had used up their allotted data could continue accessing the internet at a vastly reduced speed of 128kbps.

In some cases, this was a nice perk. A subscriber to Mint Mobile’s old 3GB plan who had run out of data might still be able to load an important email or boarding pass. In other cases, the reduced-speed data was part of a marketing gimmick. A carrier might offer a soft-capped 15GB plan and market it as an unlimited plan.

At 128kbps, things don’t just load slowly. In many cases, they stop working altogether. Video may not stream. Websites may time out.
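Some back-of-the-envelope arithmetic shows why. The 2 MB page size below is an illustrative assumption, not a figure from any carrier:

```python
# Rough illustration: time to transfer a payload at different throttle speeds.
# The page size is a hypothetical example chosen to show the scale of the gap.

def transfer_seconds(size_mb: float, speed_kbps: float) -> float:
    """Seconds to move `size_mb` megabytes at `speed_kbps` kilobits per second."""
    bits = size_mb * 8_000_000  # 1 MB = 8,000,000 bits (decimal units)
    return bits / (speed_kbps * 1_000)

page_mb = 2.0  # a fairly typical web page today
for label, kbps in [("128 kbps throttle", 128), ("1 Mbps throttle", 1_000)]:
    print(f"{label}: ~{transfer_seconds(page_mb, kbps):.0f} s for a {page_mb} MB page")
```

At 128 kbps, a 2 MB page takes over two minutes to transfer, long enough for most sites and apps to give up and time out; at 1 Mbps, the same page arrives in about 16 seconds.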

Softer Caps

Recently, a handful of carriers started offering soft-capped plans with less aggressive throttles. There are at least five carriers throttling download speeds to 1Mbps or higher:1

  • Xfinity – 1.5Mbps
  • Cox – 1.5Mbps
  • AT&T Prepaid – 1.5Mbps
  • Spectrum – 1Mbps
  • US Mobile – 1Mbps

Props to these carriers. At 1Mbps, you can stream music, browse the internet, and use most apps normally. High-quality video streaming might not work, but almost everything else will.

With the less aggressive throttling, labeling plans unlimited is perhaps a generous framing, but it’s no longer outright bullshit.

Throttled But Prioritized

Ahmed Khattak, CEO of US Mobile, shared the following in a Reddit post (emphasis mine):

We’re also setting the throttle speed of all our Unlimited Plans to 1Mbps after the high-speed data allotment is used. Unless you’re streaming 4K Video, I’m unsure if you will notice any difference if you are throttled. You will also remain on priority data even with throttled speeds.

Network congestion is a common source of the sub-1Mbps speeds that cause lousy user experiences. With priority data, throttled users have some protection from congestion troubles.

I hope we see high-priority data post-throttling become a more common feature.2 Subscribers don’t use a ton of data after getting throttled. MVNOs that pay a per-gig premium for priority data may be able to offer the feature without meaningfully changing their cost structures.

Crowdsourcing Coverage Data

Vehicles with DIMO hardware are driving millions of miles per week and contributing anonymized data to Coverage Critic’s interactive crowdsourced map. Here’s how the new map presents T-Mobile’s coverage around New York City.

Coverage map image showing New York City mostly in green (indicating a strong signal)

Green shading indicates strong signals. As the signals weaken, the shading morphs to yellow and eventually red.
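As an illustration of how that shading could work, here's a toy bucketing function. The LTE RSRP thresholds below are hypothetical round numbers, not Coverage Critic's actual cutoffs:

```python
# Hypothetical sketch of mapping signal readings to map shading.
# The RSRP thresholds are illustrative assumptions, not the map's real cutoffs.

def shade(rsrp_dbm: float) -> str:
    """Bucket an LTE RSRP reading (in dBm) into a map color."""
    if rsrp_dbm >= -95:
        return "green"   # strong signal
    if rsrp_dbm >= -110:
        return "yellow"  # weakening signal
    return "red"         # weak signal

print(shade(-85))   # green
print(shade(-105))  # yellow
print(shade(-118))  # red
```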

While the map is sparser in remote areas, it’s potentially more interesting there. In the heart of the mountain town of Leadville, Colorado, the map shows good coverage from AT&T. Following the highway out of town to the north, signal quality quickly deteriorates.

Map of Leadville showing green (indicating a strong signal) in the center of the town and red (indicating a weak signal) north of the town

Today’s map is built on almost 20 million data points. With over a million more observations coming in each day, the map will grow more complete.

Map users can drill down into the underlying data after selecting a specific region:

Screenshot showing details about the data underlying one of the hexes within the coverage map.

More powerful tools for exploring and visualizing the data are coming soon.

Approaches To Network Testing

There are two standard approaches for assessing cell networks:

  • Drive Testing: Evaluators drive around with several phones on different networks while running tests to assess cellular performance.
  • Conventional Crowdsourcing: Code running in the background of various apps on ordinary consumers’ phones collects data about cell networks.

Pros & Cons

Drive testing is well-controlled. Evaluators use the same device on every network and have detailed information on exactly what’s going on. On the other hand, drive tests assess a limited area and may not reflect real-world usage. RootMetrics, the company behind what is arguably the most respected drive test, covers only a few hundred thousand miles each year.

Conventional crowdsourcing allows for the collection of far more data. However, the data is harder to interpret. Crowdsourced data comes from numerous different devices. It’s often unclear whether a given data point comes from a phone that’s indoors or outdoors. Since consumers aren’t randomly assigned their devices or networks, bias infiltrates assessments.1
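To see why non-random assignment matters, consider a toy simulation. Here two networks perform identically, but one's subscribers happen to skew urban; all of the numbers are invented for illustration:

```python
# Toy simulation of selection bias in crowdsourced speed data. If one network's
# subscribers skew toward cities (where any network performs well), a naive
# comparison of average speeds flatters that network. All numbers are made up.
import random

random.seed(0)

def sampled_speed(urban: bool) -> float:
    # Both networks perform identically; only the user's location matters.
    return random.gauss(50, 5) if urban else random.gauss(10, 2)

# Network A's users are 80% urban; Network B's are 30% urban.
a = [sampled_speed(random.random() < 0.8) for _ in range(10_000)]
b = [sampled_speed(random.random() < 0.3) for _ in range(10_000)]

print(f"Network A naive mean: {sum(a) / len(a):.1f} Mbps")
print(f"Network B naive mean: {sum(b) / len(b):.1f} Mbps")
# Identical networks, yet A looks far faster because of where its users live.
```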

Coverage Critic’s New Approach

Coverage Critic’s approach is a hybrid of the two standard approaches to network testing. Like drive tests, Coverage Critic collects data in a relatively well-controlled manner, relying on in-vehicle data from a tightly constrained set of devices. Since data is crowdsourced from thousands of vehicles, more miles are covered in a week than some conventional drive testers cover in a year.2

Enabled By DIMO

Mozilla recently published a bombshell report titled It’s Official: Cars Are the Worst Product Category We Have Ever Reviewed for Privacy. The report details (1) how much data modern vehicles collect and (2) car companies’ tendency to reserve the right to sell that data.

DIMO is reimagining how vehicle data is collected and used, allowing consumers to share in the value of their vehicles’ data while offering more transparency and control over its use.

Thousands of cars driving tens of millions of miles annually are equipped with DIMO hardware and contribute anonymized data to the new map. When Coverage Critic pays DIMO for data, a share of the payments goes towards rewards for the DIMO community. To everyone participating in the project, I’d like to offer my thanks! If you’d like to join the DIMO Network, you can head here to pick up a device.

The Road Ahead

For the moment, Coverage Critic will offer a coverage map relying on RF modeling data submitted to the FCC’s BDC program and an alternate map based on on-the-ground, crowdsourced data. Eventually, I plan to merge both data sources into a single map.

With an appropriate statistical model, the two data sources can aid one another. Information collected on the ground can be used to forecast the accuracy of networks’ RF models in other locations. Predictions from RF models can inform how likely crowdsourced data is to reflect the typical experience in a given area. In a few months, I’ll have much more to say on this topic.
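One minimal sketch of such a fusion, assuming each source provides a coverage-quality estimate with an uncertainty attached, is a precision-weighted average. The specific numbers and variances below are made up:

```python
# Minimal sketch: fuse an RF-model prediction with a crowdsourced estimate
# by weighting each by the inverse of its variance (its "precision").
# All inputs here are invented for illustration.

def combine(rf_mean: float, rf_var: float,
            crowd_mean: float, crowd_var: float) -> tuple[float, float]:
    """Precision-weighted fusion of two noisy estimates of coverage quality."""
    w_rf, w_crowd = 1.0 / rf_var, 1.0 / crowd_var
    mean = (w_rf * rf_mean + w_crowd * crowd_mean) / (w_rf + w_crowd)
    var = 1.0 / (w_rf + w_crowd)
    return mean, var

# Sparse on-the-ground data (high variance) nudges, but doesn't override,
# the RF model's prediction for an area:
mean, var = combine(rf_mean=0.9, rf_var=0.04, crowd_mean=0.6, crowd_var=0.16)
print(round(mean, 2), round(var, 3))  # → 0.84 0.032
```

The appeal of this kind of scheme is that it degrades gracefully: where on-the-ground data is plentiful, it dominates; where it's sparse, the RF model's prediction carries most of the weight.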

Go ahead and explore the map or check out DIMO. If you have any feedback, please let me know.

Animated image showing Coverage Critic's crowdsourced map in Los Angeles