Don’t Take Ookla Too Seriously

Ookla recently published its Q2 report on the performance of U.S. wireless networks. As I’ve discussed before, I’m not a fan of Ookla’s methodology.[1] Because of my qualms, I’m not going to bother summarizing Ookla’s latest results. However, I do want to draw attention to a part of the recent report.

Ookla’s competitive geographies filter

In the last year or two, Ookla has restricted its main analyses to only account for data from “competitive geographies.” Here’s how Ookla explains competitive geographies:

To meet the definition of ‘competitive’ in the U.S., a zip code must contain samples from at least three top national competitors…but no competitor can have more than 2/3 of the samples in that zip code.

The competitive geographies filter mitigates some of the problems with Ookla’s methodology but also introduces a bunch of new issues.
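
As a rough illustration of how such a filter could work, here’s a short sketch. The sample data and carrier counts are made up, and Ookla hasn’t published its actual implementation:

```python
from collections import Counter

# Hypothetical speed-test samples as (zip_code, carrier) pairs.
samples = [
    ("59101", "Verizon"), ("59101", "Verizon"), ("59101", "AT&T"),
    ("10001", "Verizon"), ("10001", "AT&T"), ("10001", "T-Mobile"),
    ("10001", "Sprint"), ("10001", "Verizon"),
]

def competitive_zip_codes(samples, min_carriers=3, max_share=2 / 3):
    """Keep zip codes with samples from at least `min_carriers` carriers
    where no single carrier contributes more than `max_share` of the samples."""
    by_zip = {}
    for zip_code, carrier in samples:
        by_zip.setdefault(zip_code, Counter())[carrier] += 1

    competitive = []
    for zip_code, counts in by_zip.items():
        total = sum(counts.values())
        if len(counts) >= min_carriers and max(counts.values()) / total <= max_share:
            competitive.append(zip_code)
    return competitive

print(competitive_zip_codes(samples))  # ['10001']: the two-carrier zip code gets thrown out
```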

Availability

Ookla’s latest results for 4G availability illustrate the issues:

Ookla 4G availability scores

Sprint unambiguously has the smallest coverage profile of the four nationwide networks.[2] The competitive geographies filter makes Ookla’s availability metric so meaningless that Sprint can nevertheless tie for the best availability.

Lots of regions only have coverage from Verizon. All those data points get thrown away because they come from non-competitive geographies. Other areas have coverage from only Verizon and AT&T. Again, those data points get thrown out because they’re not from competitive geographies. What’s the point of measuring availability while ignoring the areas where differences in network availability are most substantial?

Giving Ookla credit

Despite my criticisms, I want to give Ookla some credit. Many evaluators develop complicated, poorly thought-out metrics and only share those metrics when the results seem reasonable. I appreciate that Ookla didn’t hide its latest availability information because the results looked silly.

Opensignal’s Report on U.S. 5G Performance – No Big Surprises

Earlier this week, Opensignal released a report on the performance of 5G networks in the United States. Opensignal’s report puts some numbers and data behind two things that were already clear:

  • T-Mobile is destroying the competition in terms of 5G coverage, but T-Mobile’s 5G isn’t very fast.
  • Verizon’s 5G is outrageously fast, but its coverage profile is terrible.

Opensignal primarily crowdsources its performance data from tests that run in the background on regular people’s phones. It looks like the company restricted the data underlying this report to tests run on 5G-compatible phones.

Speeds

Verizon destroyed the competition with an average 5G download speed of 495 Mbps. The other major networks in the U.S. had 5G download speeds averaging around 50 Mbps. Verizon’s dominance in download speeds is due to the company’s focus on rolling out millimeter wave 5G.

Coverage

Unlike Verizon, T-Mobile has focused on deploying sub-6 5G. This type of 5G is great for covering large areas, but less impressive for delivering high speeds. Unsurprisingly, T-Mobile dominated in terms of 5G availability. According to Opensignal’s data, T-Mobile subscribers were able to access 5G 22.5% of the time. Verizon did about fifty times worse with an availability score of 0.4%.

While 0.4% is low, it’s still a better availability score than I would have predicted for Verizon. I wonder if Opensignal’s crowdsourcing approach might give Verizon’s availability score a leg up. If living near a Verizon 5G deployment makes a Verizon customer more likely to purchase a 5G phone, selection bias can creep in and cause Opensignal to overestimate Verizon’s actual 5G availability.
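
Here’s a toy simulation of that selection-bias story. All of the adoption rates below are numbers I made up to illustrate the mechanism, not estimates of real-world behavior:

```python
import random

random.seed(0)

# Toy model: true 5G availability is 1% of subscriber time, but subscribers
# who live inside the 5G footprint are far more likely to own a 5G phone and
# therefore to appear in a crowdsourced panel of 5G-phone users.
TRUE_AVAILABILITY = 0.01
P_5G_PHONE_NEAR_COVERAGE = 0.30   # assumed adoption rate near 5G deployments
P_5G_PHONE_ELSEWHERE = 0.02       # assumed adoption rate everywhere else

panel = []
for _ in range(200_000):
    near_coverage = random.random() < TRUE_AVAILABILITY
    adoption = P_5G_PHONE_NEAR_COVERAGE if near_coverage else P_5G_PHONE_ELSEWHERE
    if random.random() < adoption:                   # only 5G-phone owners join the panel
        panel.append(1.0 if near_coverage else 0.0)  # crude "time on 5G" proxy

print(f"true availability: {TRUE_AVAILABILITY:.1%}")
print(f"panel's estimate:  {sum(panel) / len(panel):.1%}")
```

In this toy model, the panel’s estimate lands at roughly 13% even though true availability is only 1%, simply because 5G-phone owners are concentrated in covered areas.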

Silly press releases

Following Opensignal’s release of its report, T-Mobile published a press release. The company bragged about the network’s excellent 5G coverage without mentioning that the network got demolished in the download speed results.

Verizon published its own press release bragging about ludicrous download speeds. Verizon’s awful 5G availability score was not mentioned.

Download Speed Experience – 5G Users

Opensignal’s report included a metric called Download Speed Experience – 5G Users. The results for this metric were calculated by looking at users with 5G-compatible phones and tracking their average download speeds, even at times when they did not have 5G connections. In a sense, this single metric accounts for both 5G speeds and 5G availability.

Verizon and AT&T tied for the top spot:
Results graph showing AT&T and Verizon tied for the top spot
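
As a rough sketch of how a blended metric like this behaves, consider two hypothetical carriers: one with blazing-fast but rarely available 5G, and one with slower but much more available 5G. The 4G fallback speeds and the exact inputs below are my own illustrative assumptions, not Opensignal’s data:

```python
# Blended average speed for users with 5G phones: time on 5G at 5G speeds,
# plus time on 4G at 4G speeds. All inputs are illustrative assumptions.
def blended_speed(availability_5g, speed_5g_mbps, speed_4g_mbps):
    return availability_5g * speed_5g_mbps + (1 - availability_5g) * speed_4g_mbps

print(blended_speed(0.004, 495, 30))  # ~31.9 Mbps: ultra-fast 5G, barely any uplift
print(blended_speed(0.20, 50, 25))    # ~30.0 Mbps: slower 5G, but used far more often
```

Because 4G fallback dominates the average, the blended numbers land close together even though the underlying 5G experiences are wildly different.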

The metric is interesting, but I don’t think it quite captures how users will feel about the quality of their download speed experiences. The marginal value of a 10Mbps boost in download speeds that moves a subscriber from 5Mbps to 15Mbps is much greater than the marginal value of a 10Mbps boost that moves a subscriber from 500Mbps to 510Mbps. Collapsing a distribution of download speeds into a single, average download speed masks this reality.
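
One way to see the gap is to run the numbers through a concave utility function. The logarithm below is my own illustrative choice, not anything Opensignal uses:

```python
import math

# A concave utility over download speed captures diminishing returns:
# the same 10 Mbps boost matters far more at the low end than at the high end.
def utility(speed_mbps):
    return math.log(speed_mbps)

print(round(utility(15) - utility(5), 2))     # 1.10: a very noticeable improvement
print(round(utility(510) - utility(500), 2))  # 0.02: essentially imperceptible
```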

Proof & Network Evaluations

RootMetrics recently tweeted about how its latest analyses prove which networks are the fastest and most reliable:

RootMetrics' Tweet

I’m a big fan of RootMetrics, but the tweet annoyed me. There’s a ton of flexibility in how network evaluators can approach their work. Will performance data be crowdsourced from consumers or collected via in-house testing? How will data be cleaned and aggregated? What regions will be included? Etc.

Phrases like “fastest network” and “most reliable network” are ambiguous. Do you determine the fastest network based on average download speeds, median download speeds, or something else?

RootMetrics’ tweet is especially odd in light of its latest report. Depending on which speed metric you choose to look at, you could argue that either Verizon or AT&T has the fastest network. AT&T narrowly beats Verizon in terms of median download speed. Verizon narrowly beats AT&T in RootMetrics’ overall speed scores.[1]

RootMetrics’ Results In Dense City Centers

RootMetrics’ most recent report includes information about network speeds in city centers. Here’s how RootMetrics describes the tests:[1]

Capacity is critical for a good mobile experience in highly congested areas. To show you how the carriers performed in high-traffic areas of select cities, we measured each carrier’s median download speed outdoors in the dense urban cores of 13 major cities across the country.

The results are presented in the following graphic, which lists the median download speeds (in megabits per second) of the fastest and slowest networks in each city:[2]

RootMetrics' median download speeds in city centers

Despite Sprint’s lousy performance in RootMetrics’ overall, national-level results, Sprint still managed to offer the fastest speeds in 3 of the 13 city centers RootMetrics considered. The results are consistent with a point I’ve tried to emphasize in the past: if you tend to stay in one area, you shouldn’t worry too much about national-level network performance.

Prioritization

I haven’t seen information about which plans RootMetrics’ test devices use on each network. In fact, I’m not entirely sure RootMetrics uses service plans that are available to regular consumers. My guess is that RootMetrics’ test devices have prioritization and quality of service levels at the high end of what’s available to regular consumers. If my guess is correct, RootMetrics’ test devices have higher priority than many budget-friendly services that run over the Big Four networks. A few examples of those services:

  • AT&T: Cricket Wireless unlimited plans
  • T-Mobile: Metro by T-Mobile plans
  • Sprint: Mobile hotspot plans and Boost Mobile plans
  • Verizon: Verizon Prepaid plans, the Start Unlimited plan, and Visible plans

Subscribers using low-priority services are especially likely to experience slow speeds in congested areas. I’d be interested in seeing RootMetrics rerun its city-center tests with low-priority services.

RootMetrics’ Report For Late 2019

Yesterday, RootMetrics released its latest report on the performance of U.S. wireless networks. I’d been looking forward to this report. RootMetrics’ drive testing methodology has some advantages over the approaches used by other companies that evaluate network performance.

Results

RootMetrics’ results were generally unsurprising. As with the last report, Verizon was the big winner, followed by AT&T in second place, T-Mobile in third, and Sprint in fourth.

Here are the overall, national scores out of 100 for each of the major networks:

  • Verizon – 94.6 points
  • AT&T – 93.2 points
  • T-Mobile – 86.5 points
  • Sprint – 83.2 points

RootMetrics also reports which carriers scored the best on each of its metrics within individual metro areas. Here’s how many metro area awards each carrier won (along with the change in the number of awards received since the last report):

  • Verizon – 660 awards (-12)
  • AT&T – 401 awards (+21)
  • T-Mobile – 217 awards (-20)
  • Sprint – 80 awards (-9)

AT&T’s improvements

RootMetrics’ results align with the results of other recent evaluations suggesting aspects of AT&T’s network are becoming more competitive. AT&T fared particularly well in RootMetrics’ latest speed metrics. While Verizon narrowly beat AT&T in the final speed score out of 100 (90.7/100 for Verizon vs. 90.2/100 for AT&T), AT&T narrowly beat Verizon in aggregate median download speed (33.1 Mbps for AT&T vs. 32.7 Mbps for Verizon).

It appears that RootMetrics’ final speed scores are based on something more than median download speed. That may be a good thing: having consistent speeds is arguably much more important than having high average or median speeds. Still, I’m frustrated that I can’t figure out exactly how the final speed scores are derived. RootMetrics continues to be non-transparent about the math underlying its analyses.
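
As a quick illustration of why consistency and the median can tell different stories, here are two made-up speed distributions with identical medians:

```python
import statistics

# Two hypothetical speed distributions with the same median but very different
# consistency. A ranking based only on the median can't tell them apart.
consistent = [28, 30, 31, 33, 33, 34, 36]
spiky = [2, 5, 9, 33, 60, 80, 120]

for name, speeds in (("consistent", consistent), ("spiky", spiky)):
    share_below_5 = sum(s < 5 for s in speeds) / len(speeds)
    print(f"{name}: median {statistics.median(speeds)} Mbps, "
          f"{share_below_5:.0%} of samples below 5 Mbps")
```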

A section of the latest report suggests that Verizon may do a particularly good job of avoiding sluggish speeds:

Verizon’s ‘slowest’ median download speed of 17.9 Mbps, recorded in Fresno, CA, was still quite strong and would allow end users to complete the majority of data tasks with ease. In fact, Fresno was the only market in which Verizon registered a median download speed below 20 Mbps. No other carrier came close to matching Verizon’s consistency of delivering fast speeds in metros across the US.

5G performance

The new report includes details about RootMetrics’ recent tests on 5G networks. I found the 5G results unsurprising, and I’m not going to comment on them further at this time. I think 5G deployments are still in too early a stage for the results to be of much interest.

Opensignal’s 2020 U.S. Mobile Performance Report

Today, Opensignal released a new report on the performance of U.S. wireless networks. The report details results on seven different metrics.

Here are the networks that took the top spot for each metric at the national level:

  • Video Experience – Verizon
  • Voice App Experience – T-Mobile/AT&T (draw)
  • Download Speed Experience – AT&T
  • Upload Speed Experience – T-Mobile
  • Latency Experience – AT&T
  • 4G Availability – Verizon
  • 4G Coverage Experience – Verizon

It’s important to interpret these results cautiously due to limitations in Opensignal’s crowdsourcing approach. Since performance data is collected from actual subscribers’ devices, factors not directly related to underlying network quality may impact the organization’s results. For example, if subscribers on a network are unusually likely to use low-end devices or live in rural areas, that will affect the outcome of Opensignal’s analyses. Still, Opensignal’s results are interesting; they’re drawn from a huge data set involving primarily automated performance tests.

Download speed findings

The most notable result in the latest report might be AT&T’s first-place finish on the download speed metric. In the previous Opensignal report, T-Mobile won first place for download speeds, and AT&T took third place. I’ve recently been critical of the methodologies used in some other evaluations that suggested AT&T had the nation’s fastest network. While many of those methodological criticisms still stand, the fact that Opensignal’s arguably more reliable methodology also found AT&T to have the fastest network leads me to believe I was too harsh. I’ll be interested to see whether AT&T also takes the top spot for speeds in RootMetrics’ upcoming report.

New metrics

Two new metrics were introduced in this report: Voice App Experience and 4G Coverage Experience. The Voice App Experience metric assesses the quality of voice calls via apps like Skype and Facebook Messenger. I’m not exactly sure how the metric works, but it looks like all four networks received similar scores. Opensignal deemed all of these scores indicative of “acceptable” quality.

The 4G Coverage Experience metric adds a bit of complexity to the previously existing 4G Availability metric. The coverage metric assesses 4G availability across all the areas where Opensignal’s users find themselves, regardless of which network those users are on.

AT&T’s Claim To Being America’s Best Network

AT&T has been running an ad campaign with commercials where the company claims to offer the best network.

These commercials start with a funny skit that leads to the line, “just ok is not ok.” The commercials’ narrator then says something along the lines of: “AT&T is America’s best wireless network according to America’s biggest test.”

Here’s an example:



Alternate versions of the commercial involve ok babysitters, ok sushi, ok surgeons, and more.

AT&T bases its “best network” claim on the results of Global Wireless Solutions’ (GWS) 2018 tests. The claim is at odds with the results of many other companies’ evaluations and with my own view.

The meaning of the word “best” is ambiguous, but I’d guess that a survey of professionals in the wireless industry would find that most people consider RootMetrics to be the best evaluation firm in the wireless industry. Verizon fared far better than AT&T in RootMetrics’s most recent evaluation.

It’s unclear to me what AT&T is claiming when it calls GWS’s test “America’s biggest test.” Is it the biggest test in terms of miles driven, data points collected, area covered, or something else? GWS may have the biggest test according to one metric, but it’s not unambiguously the biggest test in the nation.

GWS’s OneScore Methodology & 2019 Results

Global Wireless Solutions (GWS) evaluates wireless networks according to the company’s OneScore methodology. At the moment, AT&T cites GWS’s results in commercials where AT&T claims to offer the best network.

In an article about performance tests of wireless networks, GWS’s founder, Dr. Paul Carter, writes:[1]

With so many conflicting research reports and with every network touting itself as number one, it’s critical that wireless carriers are transparent about how and what they actually test. If what was tested doesn’t match up with the average consumer experience, then was that test truly worthwhile?

Unfortunately, GWS itself is not especially transparent about its methodology. The public-facing information about the company’s methodology is sparse, and I did not receive a response to my email requesting additional information.

As I understand it, GWS’s methodology has two components:

  • Technical performance testing in about 500 markets
  • Consumer surveying that helps determine how much weight to give different metrics

Technical testing

In 2019, GWS conducted extensive drive testing; GWS employees drove close to 1,000,000 miles as phones in their vehicles performed automated tests of networks’ performance.[2]

The drive testing took place in about 500 markets, including all of the largest metropolitan areas. GWS says the testing represents about 94% of the U.S. population.[3] I expect that GWS’s focus on these markets limits the weight placed on rural and remote areas. Accordingly, GWS’s results may be biased against Verizon (Verizon tends to have better coverage than other networks in sparsely populated areas).

Consumer surveying

In 2019, GWS surveyed about 5,000 consumers to figure out how much they value different aspects of wireless performance.[4] GWS finds that consumers place a lot of importance on phone call voice quality, despite the fact that people are using their phones for more and more activities unrelated to phone calls.[5] GWS also finds that, as I’ve suggested, consumers care a lot more about the reliability of their wireless service than its raw speed.[6]

Combining components

As I understand it, GWS draws on the results of its surveying to decide how much weight to place on different parts of the technical performance tests:

The consumer survey includes questions asking respondents to rank the importance of different tasks they perform on their mobile device, as well as the importance of different aspects of network performance. Our network test results are then weighted according to how consumers prioritize what’s important to them, and evaluated in eleven different network performance areas related to voice, data, network reliability and network coverage.
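
Based on that description, my guess is that the combination step looks something like a weighted average of normalized test results. The sketch below is only a guess at the general shape of such a calculation; the categories, weights, and scores are invented, since GWS hasn’t published its actual math:

```python
# Invented weights and scores to show the general shape of a survey-weighted
# scoring scheme; GWS has not published its actual categories or math.
survey_weights = {          # relative importance, from consumer surveying
    "voice_quality": 0.30,
    "data_reliability": 0.30,
    "coverage": 0.20,
    "download_speed": 0.15,
    "upload_speed": 0.05,
}

test_results = {            # normalized drive-test results on a 0-100 scale
    "Carrier A": {"voice_quality": 88, "data_reliability": 91, "coverage": 90,
                  "download_speed": 85, "upload_speed": 80},
    "Carrier B": {"voice_quality": 90, "data_reliability": 87, "coverage": 84,
                  "download_speed": 92, "upload_speed": 86},
}

for carrier, scores in test_results.items():
    overall = sum(survey_weights[k] * scores[k] for k in survey_weights)
    print(f"{carrier}: {overall:.1f}")
```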

The methodology’s name, OneScore, and the graphic below suggest that the company combines all of its data to arrive at final, numerical scores for each network:[7]

GWS OneScore Visual

Oddly enough, I can’t find anything GWS has published that looks like final scores. That may be a good thing. I’ve previously gone into great detail about why scoring systems that use weighted rubrics to give companies or products a single, overall score tend to work poorly.

2019 Results

In GWS’s 2019 report, the company lists which networks had the best performance in several different areas:

AT&T:

  • Download speed
  • Data reliability
  • Network capacity
  • Video streaming experience
  • Voice accessibility
  • Voice retainability

T-Mobile:

  • Voice quality

Verizon:

  • Upload speed

Open questions

I have a bunch of open questions about GWS’s methodology. If you represent GWS and can shed light on any of these topics, please reach out.

  • Does the focus on 501 markets (94% of the U.S. population) tend to leave out rural areas where Verizon has a strong network relative to other operators?
  • Do operators pay GWS? Does AT&T pay to advertise GWS’s results?
  • What does the consumer survey entail?
  • How directly are the results of the consumer survey used to determine weights used later in GWS’s analysis?
  • What does GWS make of the discrepancies between its results and those of RootMetrics?
  • How close were different networks’ scores in each category?
  • GWS shares the best-performing network in several categories. Is information available about the second, third, and fourth-place networks in each category?
  • Does GWS coerce its raw data into a single overall score for each network?
    • Are those results publicly available?
    • How are the raw performance data coerced into scores that can be aggregated?

FCC Reveals Misleading Coverage Claims

On Wednesday, the FCC released a fascinating report related to the Mobility Fund Phase II (MF-II). The MF-II is a planned program to provide federal funding for network build-outs in rural areas that are underserved by 4G coverage.

To determine which geographic areas were underserved, the FCC requested coverage maps and data from network operators. After reviewing the data and allowing outside entities to challenge the data’s reliability, the FCC became concerned about the accuracy of the information shared by T-Mobile, U.S. Cellular, and Verizon. The FCC decided to conduct its own performance tests and compare the results of its tests to the information the network operators provided. Here’s what the agency found:[1]

Through the investigation, staff discovered that the MF-II coverage maps submitted by Verizon, U.S. Cellular, and T-Mobile likely overstated each provider’s actual coverage and did not reflect on-the-ground performance in many instances. Only 62.3% of staff drive tests achieved at least the minimum download speed predicted by the coverage maps—with U.S. Cellular achieving that speed in only 45.0% of such tests, T-Mobile in 63.2% of tests, and Verizon in 64.3% of tests…In addition, staff was unable to obtain any 4G LTE signal for 38% of drive tests on U.S. Cellular’s network, 21.3% of drive tests on T-Mobile’s network, and 16.2% of drive tests on Verizon’s network, despite each provider reporting coverage in the relevant area.

Incentives

When considering the accuracy of coverage maps, I try to think about the incentives network operators face. When advertising to consumers, network operators often have an incentive to overstate the extent of their coverage. However, incentives can run in the opposite direction in other situations. For example, when trying to get approval for a merger between Sprint and T-Mobile, Sprint had incentives to make its 4G coverage profile look limited and inferior to the coverage profiles of other nationwide networks.[2]

I’m not well-informed about the MF-II, so I don’t feel like I have a good grasp of all the incentives at play. That said, it’s not clear that all network operators would have an incentive to overstate their coverage. By claiming to cover an area it didn’t actually serve, an operator could limit competitors’ access to subsidies in that area; on the other hand, the same erroneous claim could prevent the operator itself from receiving subsidies there.

Challenges

After network operators submitted coverage information to the FCC, a number of entities, including both governments and network operators, were allowed to challenge the validity of coverage information submitted by others. Here’s a bit more detail about the challenge process:[3]

After release of the map of presumptively eligible areas, mobile service providers, state, local, and Tribal government entities, and other interested parties granted a waiver were eligible to submit challenges in the challenge process via an online system operated by USAC. Challengers that requested access to the USAC MF-II Challenge Portal were able to access the provider-specific coverage maps, after agreeing to keep the coverage data confidential, and to file challenges to providers’ coverage claims by submitting speed test data. Challengers were required to conduct speed tests pursuant to a number of standard parameters using specific testing methods on the providers’ pre-approved handset models. The Commission adopted the requirement that challengers use one of the handsets specified by the provider primarily to avoid inaccurate measurements resulting from the use of an unsupported or outdated device—e.g., a device that does not support all of the spectrum bands for which the provider has deployed 4G LTE…During the eight-month challenge window, 106 entities were granted access to the MF-II Challenge Portal. Of the 106 entities granted access to the MF-II Challenge Portal, 38 were mobile service providers required to file Form 477 data, 19 were state government entities, 27 were local government entities, 16 were Tribal government entities, and six were other entities that filed petitions requesting, and were each granted, a waiver to participate.

About a fifth of the participating entities went on to submit challenges:[4]

21 challengers submitted 20.8 million speed tests across 37 states.

The challenge data often showed failed tests and lackluster speeds in areas where network operators claimed to offer coverage.[5]

During the challenge process, some parties entered specific concerns into the record. For example:[6]

Smith Bagley (d/b/a Cellular One) submitted maps of its service area in Arizona overlaid with Verizon’s publicly-stated 4G LTE coverage and the preliminary results of drive tests that Smith Bagley had conducted. Smith Bagley asserted that, for large stretches of road in areas where Verizon reported coverage, its drive testers recorded no 4G LTE signal on Verizon’s network. Smith Bagley argued that the ‘apparent scope of Verizon’s inaccurate data and overstated coverage claims is so extensive that, as a practical matter, the challenge process will not and cannot produce the necessary corrections.’
As part of a public report detailing its experience, Vermont published a map showing its speed test results which contradicted the coverage maps in Vermont of U.S. Cellular, T-Mobile, and Verizon, among others. This map included information on the approximately 187,000 speed tests submitted by Vermont, including download speed, latency, and signal strength. In the report, Vermont detailed that 96% of speed tests for U.S. Cellular, 77% for T-Mobile, and 55% for Verizon failed to receive download speeds of at least 5 Mbps.

After reviewing the challenges, the FCC requested additional information from the five largest network operators (AT&T, T-Mobile, Verizon, Sprint, and U.S. Cellular) to understand the assumptions involved in the networks’ coverage models.

FCC tests

Around the same time the FCC was requesting additional information from network operators, the agency also began its own testing of Verizon, U.S. Cellular, and T-Mobile’s networks. These speed tests took place in 12 states and primarily made use of a drive-testing methodology. As mentioned earlier, analyses of the FCC’s test data suggested that the on-the-ground experience with Verizon, T-Mobile, and U.S. Cellular’s networks was much different from the experience that would be expected based on the information the networks provided to the FCC.

What happened?

A lot of the commentary and news articles I’ve seen in response to the FCC’s report seem to conclude that network operators are bullshitters that intentionally lied about the extent of their coverage. I have reservations about fully accepting that conclusion. Accurately modeling coverage is difficult. Lots of factors affect the on-the-ground experience of wireless subscribers. The FCC largely acknowledges this reality in its report:

Providers were afforded flexibility to use the parameters that they used in their normal course of business when parameters were not specified by the Commission. For example, the Commission did not specify fading statistics or clutter loss values, and providers were required to model these factors as they would in the normal course of business.[7]
Our speed testing, data analyses, and inquiries, however, suggest that some of these differences may be the result of some providers’ models: (1) using a cell edge RSRP value that was too low, (2) not adequately accounting for network infrastructure constraints, including backhaul type and capacity, or (3) not adequately modeling certain on-the-ground factors—such as the local clutter, terrain, and propagation characteristics by spectrum band for the areas claimed to be covered.[8]

Further supporting the idea that assessing coverage is difficult, the FCC didn’t just find that its tests contradicted the initial information submitted by network operators. The FCC data also contradicted the data submitted by those who challenged network operators’ data:

The causes of the large differences in measured download speed between staff and challenger speed tests taken within the same geographic areas, as well as the high percentage of tests with a download speed of zero in the challenger data, are difficult to determine. Discrepancies may be attributable to differences in test methodologies, network factors at the time of test, differences in how speed test apps or drive test software process data, or other factors…Given the large differences between challenger and staff results however, we are not confident that individual challenger speed test results provide an accurate representation of the typical consumer on-the-ground experience.[9]

While the FCC found some of the information submitted by networks to be misleading about on-the-ground service quality, I don’t believe it ended up penalizing any network operators or accusing them of anything too serious.[10] Still, the FCC did suggest that some of the network operators could have done better:

Staff engineers, however, found that AT&T’s adjustments to its model to meet the MF-II requirements may have resulted in a more realistic projection of where consumers could receive mobile broadband. This suggests that standardization of certain specifications across the largest providers could result in coverage maps with improved accuracy. Similarly, the fact that AT&T was able to submit coverage data that appear to more accurately reflect MF-II coverage requirements raises questions about why other providers did not do so. And while it is true that MF-II challengers submitted speed tests contesting AT&T’s coverage data, unlike for other major providers, no parties alleged in the record that AT&T’s MF-II coverage data were significantly overstated.[11]

FCC response

The FCC concluded that it should make some changes to its processes:[12]

First, the Commission should terminate the MF-II Challenge Process. The MF-II coverage maps submitted by several providers are not a sufficiently reliable or accurate basis upon which to complete the challenge process as it was designed.
Second, the Commission should release an Enforcement Advisory on broadband deployment data submissions, including a detailing of the penalties associated with filings that violate federal law, both for the continuing FCC Form 477 filings and the new Digital Opportunity Data Collection. Overstating mobile broadband coverage misleads the public and can misallocate our limited universal service funds.
Third, the Commission should analyze and verify the technical mapping data submitted in the most recent Form 477 filings of Verizon, U.S. Cellular, and T-Mobile to determine whether they meet the Form 477 requirements. Staff recommends that the Commission assemble a team with the requisite expertise and resources to audit the accuracy of mobile broadband coverage maps submitted to the Commission. The Commission should further consider seeking appropriations from Congress to carry out drive testing, as appropriate.
Fourth, the Commission should adopt policies, procedures, and standards in the Digital Opportunity Data Collection rulemaking and elsewhere that allow for submission, verification, and timely publication of mobile broadband coverage data. Mobile broadband coverage data specifications should include, among other parameters, minimum reference signal received power (RSRP) and/or minimum downlink and uplink speeds, standard cell loading factors and cell edge coverage probabilities, maximum terrain and clutter bin sizes, and standard fading statistics. Providers should be required to submit actual on-the-ground evidence of network performance (e.g., speed test measurement samplings, including targeted drive test and stationary test data) that validate the propagation model used to generate the coverage maps. The Commission should consider requiring that providers assume the minimum values for any additional parameters that would be necessary to accurately determine the area where a handset should achieve download and upload speeds no less than the minimum throughput requirement for any modeling that includes such a requirement.

Reflections

The FCC’s report illustrates how hard it is to assess network performance. Assumptions must be made in coverage models, and the assumptions analysts choose to make can have substantial effects on the outputs of their models. Similarly, on-the-ground performance tests don’t always give simple-to-interpret results. Two entities can run tests in the same area and find different results. Factors like the time of day a test was conducted or the type of device that was used in a test can have big consequences.
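
To give a sense of how much a single modeling assumption can matter, here’s a rough calculation using a simple log-distance path loss model. Every number in it is illustrative, and real propagation models account for far more (terrain, clutter, fading, and band-specific behavior):

```python
import math

# Illustrative only: a simple log-distance path loss model showing how the
# assumed cell-edge RSRP threshold changes predicted coverage.
RSRP_AT_1KM_DBM = -70.0      # assumed received power 1 km from the cell site
PATH_LOSS_EXPONENT = 3.5     # assumed environment (roughly suburban clutter)

def predicted_radius_km(cell_edge_rsrp_dbm):
    """Distance at which modeled RSRP drops to the chosen cell-edge value."""
    return 10 ** ((RSRP_AT_1KM_DBM - cell_edge_rsrp_dbm) / (10 * PATH_LOSS_EXPONENT))

for threshold_dbm in (-110.0, -120.0):
    radius = predicted_radius_km(threshold_dbm)
    area = math.pi * radius ** 2
    print(f"cell edge {threshold_dbm} dBm: radius {radius:.1f} km, area {area:.0f} km^2")
```

In this toy model, loosening the assumed cell-edge RSRP from -110 dBm to -120 dBm roughly doubles the predicted radius and nearly quadruples the predicted coverage area.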

If we want consumers to have better information about the quality of service networks can offer, we need entities involved in modeling and testing coverage to be transparent about their methodologies.

Tutela’s October 2019 MVNO Report

In October, the network evaluator Tutela released its USA State of MVNOs report. Most network evaluators only assess the performance of the Big Four carriers (AT&T, T-Mobile, Sprint, and Verizon), so it’s interesting to see Tutela assessing a wider range of carriers.

Near the beginning of the report, Tutela shares some reflections on how the MVNO landscape is changing:[1]

MVNOs and MNO flanker brands in the US carved out a niche largely serving the needs of lower-income customers or those with particular data needs…in 2019, the landscape is rapidly shifting. Technological advancements have made the barrier for operating some kind of network much lower; the entrance of cable companies into the market have pushed MVNO service into the more lucrative postpaid segment; and multi-network MVNOs are innovating on the network side of the equation, rather than solely differentiating on price or customer service.

Methodology

The approach Tutela used to evaluate MVNOs was in line with its usual methodology. The company crowdsourced performance data from typical consumers with the help of code embedded in Tutela’s partners’ apps. In the new report, Tutela primarily considers how well MVNOs performed in regions where at least three of the Big Four networks offer coverage. Tutela calls these core coverage areas.[2]

Within core coverage areas, Tutela calculates the amount of time subscribers have service that exceeds two different quality thresholds. When service exceeds the “excellent” threshold, subscribers should be able to do highly demanding things like streaming high-definition video or downloading large files quickly. When service exceeds the “core” threshold, subscribers should be able to carry out typical activities like browsing or streaming music without trouble, but performance issues may be encountered with demanding activities.
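
Here’s a rough sketch of how crowdsourced samples could be scored against thresholds like these. The threshold values and test results are placeholders I made up; Tutela’s actual “excellent” and “core” definitions involve more than download speed:

```python
# Placeholder thresholds and samples; Tutela's real "excellent" and "core"
# criteria involve more than download speed (e.g., latency and upload speed).
EXCELLENT_MIN_DOWNLOAD_MBPS = 5.0
CORE_MIN_DOWNLOAD_MBPS = 1.5

samples_mbps = [0.8, 2.3, 6.1, 12.0, 0.4, 3.5, 7.9]   # hypothetical test results

excellent_share = sum(s >= EXCELLENT_MIN_DOWNLOAD_MBPS for s in samples_mbps) / len(samples_mbps)
core_share = sum(s >= CORE_MIN_DOWNLOAD_MBPS for s in samples_mbps) / len(samples_mbps)

print(f"time above 'excellent' threshold: {excellent_share:.0%}")
print(f"time above 'core' threshold: {core_share:.0%}")
```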

Results

Here’s Tutela’s visualization of the main results:[3]

Tutela results


A chart of median download speeds shows a similar ranking among carriers:

Tutela Download Speeds

The results aren’t too surprising. Verizon MVNOs come out near the top of the hierarchy, while Sprint MVNOs tend to come out near the bottom. Cricket Wireless has a good score for the core threshold but does poorly in terms of the excellent threshold. That outcome makes sense since Cricket throttles maximum speeds.

Possible selection bias

I often write about how assessments of network performance that use crowdsourced data may be vulnerable to selection bias. These results from Tutela are no exception. In particular, I wonder if the results are skewed by differences in the quality of the phones typically used with each carrier. In general, newer or more expensive phones have better network hardware than older or cheaper phones.

Xfinity Mobile takes the top spot in the rankings. Xfinity Mobile is a new-ish carrier and is restrictive about which phones are eligible for use with the service. I would guess the average phone used with Xfinity Mobile is a whole lot newer and more valuable than the average phone used with TracFone. Similar arguments could be made for why Spectrum or Google Fi may have an advantage.

To Tutela’s credit, the company acknowledges the possibility of selection bias in at least one case:[4]

The second factor explaining Google Fi’s performance compared to Metro or Boost is the device breakdown. Although a broad range of Android and iOS devices work with Google Fi’s service, the network is targeted most heavily at owners of Google’s own Pixel devices…The Pixel devices use top-of-the-line cellular modems, which intrinsically provide a better cellular experience than older or mid-range devices.

Wi-Fi results

Several MVNOs offer access to Wi-Fi hotspots in addition to cellular networks. I’ve been curious how much data carriers send over Wi-Fi, and Tutela’s results give an estimate. While Xfinity Mobile appears to have sent the largest share of its data via hotspots, it’s a smaller share than I expected:[5]

Tutela data suggests that Xfinity Mobile has already succeeded in offloading over 6% of smartphone data traffic onto its Wi-Fi network – far more than any other network.

Tutela also shares a graph comparing hotspot usage among different carriers:[6]

Graph of wi-fi usage share among multiple carriers

Other stuff

There were a few other bits of the report that I found especially interesting. In one section, the report’s authors reflect on the fast growth of MVNOs run by cable companies:[7]

Xfinity Mobile and Spectrum Mobile captured nearly 50% of the postpaid subscriber growth in Q2 2019, and combined added nearly as many postpaid subscribers as host network Verizon.

In another part of the report, Tutela shares a map displaying the most common host network that Google Fi subscribers access. It looks like there are a decent number of areas where Sprint or U.S. Cellular provide the primary host network.[8]