GWS’s OneScore Methodology & 2019 Results

Global Wireless Solutions (GWS) evaluates wireless networks according to the company’s OneScore methodology. AT&T currently cites GWS’s results in commercials claiming that it offers the best network.

In an article about performance tests of wireless networks, GWS’s founder, Dr. Paul Carter, writes:1

With so many conflicting research reports and with every network touting itself as number one, it’s critical that wireless carriers are transparent about how and what they actually test. If what was tested doesn’t match up with the average consumer experience, then was that test truly worthwhile?

Unfortunately, GWS itself is not especially transparent about its methodology. The public-facing information about the company’s methodology is sparse, and I did not receive a response to my email requesting additional information.

As I understand it, GWS’s methodology has two components:

  • Technical performance testing in about 500 markets
  • Consumer surveying that helps determine how much weight to give different metrics

Technical testing

In 2019, GWS conducted extensive drive testing; GWS employees drove close to 1,000,000 miles as phones in their vehicles performed automated tests of networks’ performance.2

The drive testing took place in about 500 markets, including all of the largest metropolitan areas. GWS says the testing represents about 94% of the U.S. population.3 I expect that GWS’s focus on these markets limits the weight placed on rural and remote areas. Accordingly, GWS’s results may be biased against Verizon (Verizon tends to have better coverage than other networks in sparsely populated areas).

Consumer surveying

In 2019, GWS surveyed about 5,000 consumers to figure out how much they value different aspects of wireless performance.4 GWS finds that consumers place a lot of importance on phone call voice quality, despite the fact that people are using their phones for more and more activities unrelated to phone calls.5 GWS also finds that, as I’ve suggested, consumers care a lot more about the reliability of their wireless service than its raw speed.6

Combining components

As I understand it, GWS draws on the results of its surveying to decide how much weight to place on different parts of the technical performance tests:

The consumer survey includes questions asking respondents to rank the importance of different tasks they perform on their mobile device, as well as the importance of different aspects of network performance. Our network test results are then weighted according to how consumers prioritize what’s important to them, and evaluated in eleven different network performance areas related to voice, data, network reliability and network coverage.
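As a rough sketch of how survey-weighted scoring like this could work (the performance areas, weights, and test results below are hypothetical stand-ins I made up, not GWS’s actual figures):

```python
# Hypothetical sketch of survey-weighted scoring. The performance
# areas, weights, and test results below are invented for
# illustration; they are not GWS's actual data or methodology.

# Survey-derived weights for each performance area (sum to 1).
weights = {
    "voice_quality": 0.25,
    "data_reliability": 0.30,
    "download_speed": 0.15,
    "upload_speed": 0.05,
    "coverage": 0.25,
}

# Normalized technical test results for one network (0-100 scale).
test_results = {
    "voice_quality": 88,
    "data_reliability": 92,
    "download_speed": 75,
    "upload_speed": 70,
    "coverage": 85,
}

# A single overall score is the weighted average of the metric scores.
one_score = sum(weights[m] * test_results[m] for m in weights)
print(f"Overall score: {one_score:.1f}")  # Overall score: 85.6
```

Even in this toy version, the final number is extremely sensitive to the choice of weights, which hints at why single-number rubrics can be problematic.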

The methodology’s name, OneScore, and the graphic below suggest that the company combines all of its data to arrive at final, numerical scores for each network:7

GWS OneScore Visual

Oddly enough, I can’t find anything published by GWS that looks like a final score. That may be a good thing. I’ve previously gone into great detail about why scoring systems that use weighted rubrics to give companies or products a single, overall score tend to work poorly.

2019 Results

In GWS’s 2019 report, the company lists which networks had the best performance in several different areas:

AT&T:

  • Download speed
  • Data reliability
  • Network capacity
  • Video streaming experience
  • Voice accessibility
  • Voice retainability

T-Mobile:

  • Voice quality

Verizon:

  • Upload speed

Open questions

I have a bunch of open questions about GWS’s methodology. If you represent GWS and can shed light on any of these topics, please reach out.

  • Does the focus on 501 markets (about 94% of the U.S. population) tend to leave out rural areas where Verizon has a strong network relative to other operators?
  • Do operators pay GWS? Does AT&T pay to advertise GWS’s results?
  • What does the consumer survey entail?
  • How directly are the results of the consumer survey used to determine weights used later in GWS’s analysis?
  • What does GWS make of the discrepancies between its results and those of RootMetrics?
  • How close were different networks’ scores in each category?
  • GWS shares the best-performing network in several categories. Is information available about the second, third, and fourth-place networks in each category?
  • Does GWS coerce its raw data into a single overall score for each network?
    • Are those results publicly available?
    • How are the raw performance data coerced into scores that can be aggregated?

Dawson On The Government’s Role In 5G

I recently stumbled across a fantastic post by Doug Dawson about the government’s role in 5G. Here’s a bit of it (emphasis mine):

It’s been really interesting to watch how much the federal government talks about 5G technology. I’ve not seen anything else like this in my adult lifetime…I’ve been hearing about the 5G war for a few years now and I still don’t know what it means. 5G is ultimately a broadband technology. I can’t figure out how the US is harmed if China gets better broadband. If there is now a 5G war, then why hasn’t there been a fiber-to-the-home war? I saw recently where China passed us in the number of last-mile fiber connections, and there wasn’t a peep about it out of Congress…Cellular carriers worldwide are crowing about 5G deployment, yet those deployments contain none of the key technology that defines 5G performance. There is no frequency slicing. There is no bonding together of multiple frequencies to create larger data pipes. There is no massive expansion of the number of connections that can be made at a cell site. Cellphones can’t yet connect to multiple cell sites. What we have instead, for now, are new frequencies layered on top of 4G LTE…The carriers admit that the 600 MHz and the 850 MHz spectrum being deployed won’t result in faster speeds than 4G LTE…It’s starting to look like the real reason for the talk about a 5G war is to drum up sympathy for the big cellular carriers as a justification for big government giveaways.

I mostly agree with Dawson, and I strongly recommend the full post.

Boost’s “Super Reliable, Super Fast” Network

Boost Mobile is running a new commercial that features Pitbull and pitches the company’s low prices. Towards the end of the ad, a narrator says that Boost has a “super fast, super reliable network.” The narration is accompanied by this image:



In most commercials where carriers make claims about service quality, fine print cites research that backs up the claims. The Boost ad doesn’t include a citation; perhaps that’s because Boost’s claim doesn’t have much substance. “Super fast, super reliable” is super vague. Boost’s service probably is super fast compared to wireless service from 15 years ago. On the other hand, Boost’s service is not super fast compared to most services currently offered by other U.S. carriers.

Boost runs over Sprint’s network. A lot of different companies evaluate network performance, and Sprint tends to do poorly relative to its competitors in rigorous evaluations. Sprint had the lowest speeds among all four major networks in both RootMetrics’ and Opensignal’s most recent national assessments.

Boost’s claim looks even sillier in light of the fact that its subscribers tend to have low priority data access on Sprint’s network. When Sprint’s network is congested, Boost Mobile’s subscribers will tend to experience slower data speeds than those who subscribe directly to Sprint’s service.
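As a toy illustration of how deprioritization plays out during congestion (the capacity figure and the 3:1 priority ratio below are invented; real LTE schedulers are far more sophisticated):

```python
# Toy model of data deprioritization on a congested cell. The capacity
# and weight values are invented for illustration; real schedulers are
# far more sophisticated.

cell_capacity_mbps = 100  # total capacity shared by active users

# Direct subscribers get a higher scheduling weight than deprioritized
# MVNO subscribers (the 3:1 ratio is hypothetical).
users = [
    {"name": "Sprint subscriber A", "weight": 3},
    {"name": "Sprint subscriber B", "weight": 3},
    {"name": "Boost subscriber", "weight": 1},
]

total_weight = sum(u["weight"] for u in users)
for u in users:
    share = cell_capacity_mbps * u["weight"] / total_weight
    print(f"{u['name']}: {share:.1f} Mbps")
# Sprint subscriber A: 42.9 Mbps
# Sprint subscriber B: 42.9 Mbps
# Boost subscriber: 14.3 Mbps
```

When the cell isn’t congested, the weights don’t matter much; deprioritization only bites when demand exceeds capacity.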

A misleading image accompanies Boost’s misleading claims about quality. The image looks like a coverage map showing extensive coverage, but a disclaimer states: “Coverage not available everywhere. Not a depiction of actual coverage.”

Here’s the full commercial:

BOOM! Mobile Launches T-Mobile Plans

The mobile virtual network operator BOOM! Mobile recently launched wireless plans that run over T-Mobile’s network. With this addition, BOOM! now offers three types of plans:

  • Boom! Red – service over Verizon’s network
  • Boom! Blue – service over AT&T’s network
  • Boom! Pink – service over T-Mobile’s network

Many of the Boom! Pink plans have the same allotments of minutes, texts, and data as well as the same price points as previously existing Boom! Red plans. Boom! Blue plans with allotments equivalent to those in Pink and Red plans are sometimes available, but they tend to have higher price points.

BOOM! Mobile is also offering several Boom! Pink plans that are unlike the company’s previous offerings. These plans each offer a certain number of Flex Points. Each point can be redeemed for either 1 minute of calling, 1 text message, or 1MB of data, as illustrated in the sketch after the list below.

  • 450 Flex Points (7 Day Plan) – $5
  • 900 Flex Points (14 Day Plan) – $10
  • 3,000 Flex Points (Yearly Plan) – $60
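Since each point converts one-for-one into a minute, a text, or a megabyte, tracking a balance is simple arithmetic. A quick sketch (the usage mix below is made up):

```python
# Flex Points accounting sketch. One point buys 1 minute of calling,
# 1 text, or 1MB of data; the usage mix below is made up.

points = 450  # the $5, 7-day plan

usage = {"minutes": 120, "texts": 80, "data_mb": 200}

points_spent = sum(usage.values())
print(f"Points remaining: {points - points_spent}")  # Points remaining: 50
```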

My thoughts

It’s great to see BOOM! Mobile expanding its offerings. For most consumers, I think the Red plans will continue to be the best option since (a) they aren’t more expensive than the Pink plans and (b) they run over Verizon’s extensive network. I expect most consumers looking for coverage over T-Mobile’s network could find better deals with an alternative MVNO (e.g., Mint Mobile). Still, I’m glad to see BOOM! Mobile offering access to more networks. The new flex plans are particularly interesting. I’d love to see more carriers come out with plans that use similar structures.

Pink-Out

Deutsche Telekom (DT), the parent company of T-Mobile, has been making legal threats against companies that use the color magenta in their branding. DT has gone after companies outside of the telecom industry. DT has even tried to force companies to stop using shades of magenta that are different from the shade it uses. TechCrunch has a good article covering the ridiculous story in more detail.

In a funny turn of events, Itamar Kestenbaum, a software engineer at one of the companies DT has threatened, released a Google Chrome extension called Pink-out. Here’s how the app is described (emphasis mine):

Experience the web according to trademark trolls. Deutsche Telekom (aka, T-Mobile’s parent) is out here telling other companies they can’t use pink…so this Chrome Extension makes sure you’re pink-compliant and removes it from all your browsing pages on the web…This extension is free – like the color pink should be.

Visible’s Swap Program Now Offering Better Phones

I previously raved about Visible’s swap program. New customers used to be able to trade in almost any Android phone to get a free ZTE R2. In my case, I was able to trade in a phone that was several years old for a much better device.

Visible recently made the swap program much better. The ZTE R2 has been dropped from the program, and customers now get to choose between the ZTE Blade A7 Prime and the Motorola Moto e6. I haven’t gotten my hands on either device yet, but from what I’ve read, both look like solid entry-level phones.

If you have an Android phone that powers on and isn’t already compatible with Visible, it should be eligible for the swap program. You can verify whether a device is compatible by entering its IMEI on Visible’s website. If you get a message that your device is incompatible, hit the “Next” button to continue with the swap program.

Plan Finder Tool Released

Last week, I released a new plan finder tool. Users accessing the tool can answer a few questions about how they use their phones, how budget-sensitive they are, and where they live. They’ll then be matched with a few carriers and plans that are likely to be well-suited for their needs.

Competing Tools

A few other companies have released their own plan finder tools. These tools generally function by assuming the wireless industry is simpler and more commoditized than it is. For example, WhistleOut’s tool appears to assume that cell phone plans have only five features:

  • A host network
  • An allotment of data
  • An allotment of minutes
  • An allotment of texts
  • A price

The allotments are all assumed to take fixed, numerical values. Plans’ prices are also assumed to take simple, fixed values. The host network is simply one of five options (Verizon, AT&T, T-Mobile, Sprint, or U.S. Cellular). Making these assumptions allows many carriers’ plans to be compared, sorted, and filtered with basic math and logic. Unfortunately, the assumptions sweep a lot of important nuances under the rug (I’ve sketched the simplified model after the list below). For example:

  • Carriers may throttle data speeds or ignore data use from certain applications. Complicated data policies can’t be captured when assuming that plans have simple, fixed data allotments.
  • Pricing may not be fixed. E.g., Mint Mobile has one price for subscribers that purchase 3 months of service upfront and another price for those who purchase 12 months of service upfront.
  • Two services that use the same host network could have different levels of priority during congestion.
  • Factors WhistleOut doesn’t account for, like device compatibility and customer service quality, matter to consumers.
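To make the simplification concrete, here is roughly the data model those assumptions imply (the field names are mine, not WhistleOut’s):

```python
# A sketch of the simplified plan model that tools like WhistleOut's
# appear to assume. The field names are mine, not WhistleOut's.
from dataclasses import dataclass

@dataclass
class SimplePlan:
    host_network: str  # one of five fixed options
    data_gb: float     # fixed data allotment
    minutes: int       # fixed minute allotment
    texts: int         # fixed text allotment
    price: float       # fixed monthly price

# Comparing plans becomes trivial under these assumptions...
plans = [
    SimplePlan("Verizon", 5, 1000, 1000, 30.0),
    SimplePlan("Sprint", 10, 500, 500, 25.0),
]
cheapest = min((p for p in plans if p.data_gb >= 5), key=lambda p: p.price)
print(cheapest.host_network)  # Sprint

# ...but the model has nowhere to put throttling rules, multi-month
# pricing, data priority levels, or coverage quality.
```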

While WhistleOut’s plan finder has a feature for checking coverage, WhistleOut appears to treat coverage as a binary thing—either you have coverage or you don’t. In reality, coverage quality is much richer. You can have mediocre coverage or strong coverage. You can have good coverage at your house but problematic coverage where you work.

CoverageCritic’s Tool

While building CoverageCritic’s plan finder, I tried to account for things like prices, resource allotments, and coverage quality, but I kept in mind that these aspects of wireless service are complicated and often difficult to fully capture in simple models. While I can’t claim my tool is exclusively driven by hard data, I think my approach makes the tool more useful than competitors’ tools.

CoverageCritic’s tool predicts coverage quality by drawing on geographic information provided by users. At the moment, state-level estimates of coverage quality are combined with user-provided information about population density. Density serves as a proxy for coverage quality and is used to adjust the state-level estimates into location-specific predictions. In the future, I hope to refine these predictions by drawing on much larger data sets from carriers and network evaluators.
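Here’s a simplified sketch of that kind of adjustment (the state-level estimates, density thresholds, and adjustment size are illustrative stand-ins, not the tool’s actual parameters):

```python
# Simplified sketch of density-adjusted coverage prediction. The
# state-level estimates, density thresholds, and adjustment size are
# illustrative stand-ins, not the tool's actual parameters.

# Hypothetical state-level coverage quality estimates (0-10 scale).
state_estimates = {"VT": {"Verizon": 7.5, "T-Mobile": 4.0}}

def predict_coverage(state: str, network: str, people_per_sq_mile: float) -> float:
    """Adjust a state-level estimate using local population density."""
    base = state_estimates[state][network]
    if people_per_sq_mile > 1000:   # dense areas: nudge the estimate up
        return min(base + 0.5, 10.0)
    if people_per_sq_mile < 50:     # sparse areas: nudge it down
        return max(base - 0.5, 0.0)
    return base                     # otherwise keep the state average

print(predict_coverage("VT", "T-Mobile", 25))  # 3.5
```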

At the moment, the tool considers services from about ten carriers, and I plan to add more soon. The tool isn’t perfect, but it should be able to provide most consumers with a good starting point as they search for wireless providers.

Ryan Reynolds Acquires An Ownership Stake In Mint Mobile

Actor Ryan Reynolds recently announced that he has acquired an ownership stake in Mint Mobile. I expect that Reynolds only owns part of Mint Mobile rather than the entire company, but I’m not entirely sure. In many places, Reynolds is described as Mint Mobile’s owner in a way that seems compatible with complete or near-complete ownership of the company.

From Reynolds’ Twitter bio:

Reynolds’ Twitter Bio Screenshot

From Reynolds’ tweet announcing involvement with Mint:

Ryan Reynolds tweet screenshot

From a banner on Mint’s website:

Image from Mint Mobile’s website

However, Mint Mobile’s press release makes it sound like Reynolds only acquired partial ownership:

Mint Mobile, the wireless company offering carrier-grade service for a fraction of the cost, today announced actor, writer, producer and mobile phone enthusiast Ryan Reynolds has purchased an ownership stake in the company.

The press release suggests that Reynolds will become involved with Mint’s marketing and communications efforts. I’d love to see Mint come up with ads similar to this one that Reynolds used to promote his gin brand:

FCC Reveals Misleading Coverage Claims

On Wednesday, the FCC released a fascinating report related to the Mobility Fund Phase II (MF-II). The MF-II is a planned program to provide federal funding for network build-outs in rural areas that are underserved by 4G coverage.

To determine which geographic areas were underserved, the FCC requested coverage maps and data from network operators. After reviewing the data and allowing outside entities to challenge the data’s reliability, the FCC became concerned about the accuracy of the information shared by T-Mobile, U.S. Cellular, and Verizon. The FCC decided to conduct its own performance tests and compare the results of its tests to the information the network operators provided. Here’s what the agency found:1

Through the investigation, staff discovered that the MF-II coverage maps submitted by Verizon, U.S. Cellular, and T-Mobile likely overstated each provider’s actual coverage and did not reflect on-the-ground performance in many instances. Only 62.3% of staff drive tests achieved at least the minimum download speed predicted by the coverage maps—with U.S. Cellular achieving that speed in only 45.0% of such tests, T-Mobile in 63.2% of tests, and Verizon in 64.3% of tests…In addition, staff was unable to obtain any 4G LTE signal for 38% of drive tests on U.S. Cellular’s network, 21.3% of drive tests on T-Mobile’s network, and 16.2% of drive tests on Verizon’s network, despite each provider reporting coverage in the relevant area.

Incentives

When considering the accuracy of coverage maps, I try to think about the incentives network operators face. When advertising to consumers, network operators often have an incentive to overstate the extent of their coverage. However, incentives can run in the opposite direction in other situations. For example, when trying to get approval for a merger between Sprint and T-Mobile, Sprint had incentives to make its 4G coverage profile look limited and inferior to the coverage profiles of other nationwide networks.2

I’m not well-informed about the MF-II, so I don’t feel like I have a good grasp of all the incentives at play. That said, it’s not clear that all network operators would have an incentive to overstate their coverage. A network operator that claimed to offer coverage in an area it didn’t cover might limit competitors’ access to subsidies in that area. On the other hand, the same erroneous claim might prevent the operator itself from receiving subsidies in that area.

Challenges

After network operators submitted coverage information to the FCC, a number of entities, including both governments and network operators, were allowed to challenge the validity of coverage information submitted by others. Here’s a bit more detail about the challenge process:3

After release of the map of presumptively eligible areas, mobile service providers, state, local, and Tribal government entities, and other interested parties granted a waiver were eligible to submit challenges in the challenge process via an online system operated by USAC. Challengers that requested access to the USAC MF-II Challenge Portal were able to access the provider-specific coverage maps, after agreeing to keep the coverage data confidential, and to file challenges to providers’ coverage claims by submitting speed test data. Challengers were required to conduct speed tests pursuant to a number of standard parameters using specific testing methods on the providers’ pre-approved handset models. The Commission adopted the requirement that challengers use one of the handsets specified by the provider primarily to avoid inaccurate measurements resulting from the use of an unsupported or outdated device—e.g., a device that does not support all of the spectrum bands for which the provider has deployed 4G LTE…During the eight-month challenge window, 106 entities were granted access to the MF-II Challenge Portal. Of the 106 entities granted access to the MF-II Challenge Portal, 38 were mobile service providers required to file Form 477 data, 19 were state government entities, 27 were local government entities, 16 were Tribal government entities, and six were other entities that filed petitions requesting, and were each granted, a waiver to participate.

About a fifth of the participating entities went on to submit challenges:4

21 challengers submitted 20.8 million speed tests across 37 states.

The challenge data often showed failed tests and lackluster speeds in areas where network operators claimed to offer coverage.5

During the challenge process, some parties entered specific concerns into the record. For example:6

Smith Bagley (d/b/a Cellular One) submitted maps of its service area in Arizona overlaid with Verizon’s publicly-stated 4G LTE coverage and the preliminary results of drive tests that Smith Bagley had conducted. Smith Bagley asserted that, for large stretches of road in areas where Verizon reported coverage, its drive testers recorded no 4G LTE signal on Verizon’s network. Smith Bagley argued that the ‘apparent scope of Verizon’s inaccurate data and overstated coverage claims is so extensive that, as a practical matter, the challenge process will not and cannot produce the necessary corrections.’
As part of a public report detailing its experience, Vermont published a map showing its speed test results which contradicted the coverage maps in Vermont of U.S. Cellular, T-Mobile, and Verizon, among others. This map included information on the approximately 187,000 speed tests submitted by Vermont, including download speed, latency, and signal strength. In the report, Vermont detailed that 96% of speed tests for U.S. Cellular, 77% for T-Mobile, and 55% for Verizon failed to receive download speeds of at least 5 Mbps.

After reviewing the challenges, the FCC requested additional information from the five largest network operators (AT&T, T-Mobile, Verizon, Sprint, and U.S. Cellular) to understand the assumptions involved in the networks’ coverage models.

FCC tests

Around the same time the FCC was requesting additional information from network operators, the agency also began its own testing of Verizon, U.S. Cellular, and T-Mobile’s networks. These speed tests took place in 12 states and primarily made use of a drive-testing methodology. As mentioned earlier, analyses of the FCC’s test data suggested that the on-the-ground experience on Verizon’s, T-Mobile’s, and U.S. Cellular’s networks was much different from the experience that would be expected based on the information the networks provided to the FCC.

What happened?

A lot of the commentary and news articles I’ve seen in response to the FCC’s report seem to conclude that network operators are bullshitters who intentionally lied about the extent of their coverage. I have reservations about fully accepting that conclusion. Accurately modeling coverage is difficult. Lots of factors affect the on-the-ground experience of wireless subscribers. The FCC largely acknowledges this reality in its report:

Providers were afforded flexibility to use the parameters that they used in their normal course of business when parameters were not specified by the Commission. For example, the Commission did not specify fading statistics or clutter loss values, and providers were required to model these factors as they would in the normal course of business.7
Our speed testing, data analyses, and inquiries, however, suggest that some of these differences may be the result of some providers’ models: (1) using a cell edge RSRP value that was too low, (2) not adequately accounting for network infrastructure constraints, including backhaul type and capacity, or (3) not adequately modeling certain on-the-ground factors—such as the local clutter, terrain, and propagation characteristics by spectrum band for the areas claimed to be covered.8
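To get intuition for how sensitive predicted coverage is to a single parameter like the assumed cell-edge RSRP, consider a toy log-distance path-loss model (all of the constants here are illustrative; real propagation models are far more elaborate):

```python
import math

# Toy log-distance path-loss model. All constants are illustrative;
# real propagation models account for terrain, clutter, fading, and
# much more.

TX_POWER_DBM = 46     # assumed effective transmit power
PATH_LOSS_EXP = 3.5   # assumed propagation exponent for the environment
REF_LOSS_DB = 120     # assumed path loss at a 1 km reference distance

def coverage_radius_km(cell_edge_rsrp_dbm: float) -> float:
    """Distance at which received power falls to the cell-edge RSRP."""
    margin_db = TX_POWER_DBM - REF_LOSS_DB - cell_edge_rsrp_dbm
    return 10 ** (margin_db / (10 * PATH_LOSS_EXP))

for threshold in (-100, -110):
    r = coverage_radius_km(threshold)
    print(f"Cell-edge RSRP {threshold} dBm -> radius ~{r:.1f} km, "
          f"area ~{math.pi * r**2:.0f} sq km")
# Cell-edge RSRP -100 dBm -> radius ~5.5 km, area ~96 sq km
# Cell-edge RSRP -110 dBm -> radius ~10.7 km, area ~358 sq km
```

In this toy model, assuming a cell edge 10 dB weaker nearly quadruples the claimed coverage area, which gives a sense of how much an overly optimistic RSRP threshold alone could overstate coverage.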

Further supporting the idea that assessing coverage is difficult, the FCC didn’t just find that its tests contradicted the initial information submitted by network operators. The FCC’s tests also contradicted the data submitted by those who challenged the operators’ coverage claims:

The causes of the large differences in measured download speed between staff and challenger speed tests taken within the same geographic areas, as well as the high percentage of tests with a download speed of zero in the challenger data, are difficult to determine. Discrepancies may be attributable to differences in test methodologies, network factors at the time of test, differences in how speed test apps or drive test software process data, or other factors…Given the large differences between challenger and staff results however, we are not confident that individual challenger speed test results provide an accurate representation of the typical consumer on-the-ground experience.9

While the FCC found some of the information submitted by networks to be misleading about on-the-ground service quality, I don’t believe it ended up penalizing any network operators or accusing them of anything too serious.10 Still, the FCC did suggest that some of the network operators could have done better:

Staff engineers, however, found that AT&T’s adjustments to its model to meet the MF-II requirements may have resulted in a more realistic projection of where consumers could receive mobile broadband. This suggests that standardization of certain specifications across the largest providers could result in coverage maps with improved accuracy. Similarly, the fact that AT&T was able to submit coverage data that appear to more accurately reflect MF-II coverage requirements raises questions about why other providers did not do so. And while it is true that MF-II challengers submitted speed tests contesting AT&T’s coverage data, unlike for other major providers, no parties alleged in the record that AT&T’s MF-II coverage data were significantly overstated.11

FCC response

The FCC concluded that it should make some changes to its processes:12

First, the Commission should terminate the MF-II Challenge Process. The MF-II coverage maps submitted by several providers are not a sufficiently reliable or accurate basis upon which to complete the challenge process as it was designed.
Second, the Commission should release an Enforcement Advisory on broadband deployment data submissions, including a detailing of the penalties associated with filings that violate federal law, both for the continuing FCC Form 477 filings and the new Digital Opportunity Data Collection. Overstating mobile broadband coverage misleads the public and can misallocate our limited universal service funds.
Third, the Commission should analyze and verify the technical mapping data submitted in the most recent Form 477 filings of Verizon, U.S. Cellular, and T-Mobile to determine whether they meet the Form 477 requirements. Staff recommends that the Commission assemble a team with the requisite expertise and resources to audit the accuracy of mobile broadband coverage maps submitted to the Commission. The Commission should further consider seeking appropriations from Congress to carry out drive testing, as appropriate.
Fourth, the Commission should adopt policies, procedures, and standards in the Digital Opportunity Data Collection rulemaking and elsewhere that allow for submission, verification, and timely publication of mobile broadband coverage data. Mobile broadband coverage data specifications should include, among other parameters, minimum reference signal received power (RSRP) and/or minimum downlink and uplink speeds, standard cell loading factors and cell edge coverage probabilities, maximum terrain and clutter bin sizes, and standard fading statistics. Providers should be required to submit actual on-the-ground evidence of network performance (e.g., speed test measurement samplings, including targeted drive test and stationary test data) that validate the propagation model used to generate the coverage maps. The Commission should consider requiring that providers assume the minimum values for any additional parameters that would be necessary to accurately determine the area where a handset should achieve download and upload speeds no less than the minimum throughput requirement for any modeling that includes such a requirement.

Reflections

The FCC’s report illustrates how hard it is to assess network performance. Assumptions must be made in coverage models, and the assumptions analysts choose to make can have substantial effects on the outputs of their models. Similarly, on-the-ground performance tests don’t always give simple-to-interpret results. Two entities can run tests in the same area and find different results. Factors like the time of day a test was conducted or the type of device that was used in a test can have big consequences.

If we want consumers to have better information about the quality of service networks can offer, we need entities involved in modeling and testing coverage to be transparent about their methodologies.

Tutela’s October 2019 MVNO Report

In October, the network evaluator Tutela released its USA State of MVNOs report. Most network evaluators only assess the performance of the Big Four carriers (AT&T, T-Mobile, Sprint, and Verizon), so it’s interesting to see Tutela assessing a wider range of carriers.

Near the beginning of the report, Tutela shares some reflections on how the MVNO landscape is changing:1

MVNOs and MNO flanker brands in the US carved out a niche largely serving the needs of lower-income customers or those with particular data needs…in 2019, the landscape is rapidly shifting. Technological advancements have made the barrier for operating some kind of network much lower; the entrance of cable companies into the market have pushed MVNO service into the more lucrative postpaid segment; and multi-network MVNOs are innovating on the network side of the equation, rather than solely differentiating on price or customer service.

Methodology

The approach Tutela used to evaluate MVNOs was in line with its usual methodology. The company crowdsourced performance data from typical consumers with the help of code embedded in Tutela’s partners’ apps. In the new report, Tutela primarily considers how well MVNOs performed in regions where at least three of the big four networks offer coverage. Tutela calls these “core coverage areas.”2

Within core coverage areas, Tutela calculates the amount of time subscribers have service that exceeds two different quality thresholds. When service exceeds the “excellent” threshold, subscribers should be able to do highly demanding things like streaming high-definition video or downloading large files quickly. When service exceeds the “core” threshold, subscribers should be able to carry out typical activities like browsing or streaming music without trouble, but performance issues may be encountered with demanding activities.
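Mechanically, that amounts to classifying each measurement window against the two thresholds and reporting the share of time each is met. A sketch (the threshold values and samples below are invented, not Tutela’s actual criteria):

```python
# Sketch of time-above-threshold scoring in the style Tutela describes.
# The threshold values and measurement samples are invented; they are
# not Tutela's actual criteria.

EXCELLENT = {"down_mbps": 5.0, "up_mbps": 1.5, "max_latency_ms": 50}
CORE = {"down_mbps": 1.5, "up_mbps": 0.5, "max_latency_ms": 100}

def meets(sample: dict, t: dict) -> bool:
    return (sample["down_mbps"] >= t["down_mbps"]
            and sample["up_mbps"] >= t["up_mbps"]
            and sample["latency_ms"] <= t["max_latency_ms"])

samples = [  # hypothetical measurement windows for one carrier
    {"down_mbps": 12.0, "up_mbps": 3.0, "latency_ms": 40},
    {"down_mbps": 2.0, "up_mbps": 0.8, "latency_ms": 80},
    {"down_mbps": 0.5, "up_mbps": 0.1, "latency_ms": 200},
]

excellent = sum(meets(s, EXCELLENT) for s in samples) / len(samples)
core = sum(meets(s, CORE) for s in samples) / len(samples)
print(f"Excellent: {excellent:.0%}, Core: {core:.0%}")
# Excellent: 33%, Core: 67%
```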

Results

Here’s Tutela’s visualization of the main results:3

Tutela results


A chart of median download speeds shows a similar ranking among carriers:

Tutela Download Speeds

The results aren’t too surprising. Verizon MVNOs come out near the top of the hierarchy, while Sprint MVNOs tend to come out near the bottom. Cricket Wireless has a good score for the core threshold but does poorly in terms of the excellent threshold. That outcome makes sense since Cricket throttles maximum speeds.

Possible selection bias

I often write about how assessments of network performance that use crowdsourced data may be vulnerable to selection bias. These results from Tutela are no exception. In particular, I wonder whether the results are skewed by differences in the quality of the phones typically used with each carrier. In general, newer or more expensive phones have better network hardware than older or cheaper phones.

Xfinity Mobile takes the top spot in the rankings. Xfinity Mobile is a new-ish carrier and is restrictive about which phones are eligible for use with the service. I would guess the average phone used with Xfinity Mobile is a whole lot newer and more valuable than the average phone used with TracFone. Similar arguments could be made for why Spectrum or Google Fi may have an advantage.

To Tutela’s credit, the company acknowledges the possibility of selection bias in at least one case:4

The second factor explaining Google Fi’s performance compared to Metro or Boost is the device breakdown. Although a broad range of Android and iOS devices work with Google Fi’s service, the network is targeted most heavily at owners of Google’s own Pixel devices…The Pixel devices use top-of-the-line cellular modems, which intrinsically provide a better cellular experience than older or mid-range devices.

Wi-Fi results

Several MVNOs offer access to Wi-Fi hotspots in addition to cellular networks. I’ve been curious how much data carriers send over Wi-Fi, and Tutela’s results give an estimate. While Xfinity Mobile appears to have sent the largest share of its data via hotspots, it’s a smaller share than I expected:5

Tutela data suggests that Xfinity Mobile has already succeeded in offloading over 6% of smartphone data traffic onto its Wi-Fi network – far more than any other network.

Tutela also shares a graph comparing hotspot usage among different carriers:6

Graph of wi-fi usage share among multiple carriers

Other stuff

There were a few other bits of the report that I found especially interesting. In one section, the report’s authors reflect on the fast growth of MVNOs run by cable companies:7

Xfinity Mobile and Spectrum Mobile captured nearly 50% of the postpaid subscriber growth in Q2 2019, and combined added nearly as many postpaid subscribers as host network Verizon.

In another part of the report, Tutela shares a map displaying the most common host network that Google Fi subscribers access. It looks like there are a decent number of areas where Sprint or U.S. Cellular serves as the primary host network.8