FCC Reveals Misleading Coverage Claims

On Wednesday, the FCC released a fascinating report on the Mobility Fund Phase II (MF-II), a planned program to provide federal funding for network build-outs in rural areas that are underserved by 4G coverage.

To determine which geographic areas were underserved, the FCC requested coverage maps and data from network operators. After reviewing the data and allowing outside entities to challenge its reliability, the FCC became concerned about the accuracy of the information shared by T-Mobile, U.S. Cellular, and Verizon. The FCC decided to conduct its own performance tests and compare the results to the information the network operators provided. Here’s what the agency found:1

Through the investigation, staff discovered that the MF-II coverage maps submitted by Verizon, U.S. Cellular, and T-Mobile likely overstated each provider’s actual coverage and did not reflect on-the-ground performance in many instances. Only 62.3% of staff drive tests achieved at least the minimum download speed predicted by the coverage maps—with U.S. Cellular achieving that speed in only 45.0% of such tests, T-Mobile in 63.2% of tests, and Verizon in 64.3% of tests…In addition, staff was unable to obtain any 4G LTE signal for 38% of drive tests on U.S. Cellular’s network, 21.3% of drive tests on T-Mobile’s network, and 16.2% of drive tests on Verizon’s network, despite each provider reporting coverage in the relevant area.

Incentives

When considering the accuracy of coverage maps, I try to think about the incentives network operators face. When advertising to consumers, network operators often have an incentive to overstate the extent of their coverage. However, incentives can run in the opposite direction in other situations. For example, when trying to get approval for a merger between Sprint and T-Mobile, Sprint had incentives to make its 4G coverage profile look limited and inferior to the coverage profiles of other nationwide networks.2

I’m not well-informed about the MF-II, so I don’t feel like I have a good grasp of all the incentives at play. That said, it’s not clear that every network operator would have an incentive to overstate its coverage. A network operator that claims coverage in an area it doesn’t actually serve may block competitors from receiving subsidies there, but it also disqualifies itself from receiving subsidies in that same area.

Challenges

After network operators submitted coverage information to the FCC, a number of entities, including both governments and network operators, were allowed to challenge the validity of coverage information submitted by others. Here’s a bit more detail about the challenge process:3

After release of the map of presumptively eligible areas, mobile service providers, state, local, and Tribal government entities, and other interested parties granted a waiver were eligible to submit challenges in the challenge process via an online system operated by USAC. Challengers that requested access to the USAC MF-II Challenge Portal were able to access the provider-specific coverage maps, after agreeing to keep the coverage data confidential, and to file challenges to providers’ coverage claims by submitting speed test data. Challengers were required to conduct speed tests pursuant to a number of standard parameters using specific testing methods on the providers’ pre-approved handset models. The Commission adopted the requirement that challengers use one of the handsets specified by the provider primarily to avoid inaccurate measurements resulting from the use of an unsupported or outdated device—e.g., a device that does not support all of the spectrum bands for which the provider has deployed 4G LTE…During the eight-month challenge window, 106 entities were granted access to the MF-II Challenge Portal. Of the 106 entities granted access to the MF-II Challenge Portal, 38 were mobile service providers required to file Form 477 data, 19 were state government entities, 27 were local government entities, 16 were Tribal government entities, and six were other entities that filed petitions requesting, and were each granted, a waiver to participate.

About a fifth of the participating entities went on to submit challenges:4

21 challengers submitted 20.8 million speed tests across 37 states.

The challenge data often showed failed tests and lackluster speeds in areas where network operators claimed to offer coverage.5

During the challenge process, some parties entered specific concerns into the record. For example:6

Smith Bagley (d/b/a Cellular One) submitted maps of its service area in Arizona overlaid with Verizon’s publicly-stated 4G LTE coverage and the preliminary results of drive tests that Smith Bagley had conducted. Smith Bagley asserted that, for large stretches of road in areas where Verizon reported coverage, its drive testers recorded no 4G LTE signal on Verizon’s network. Smith Bagley argued that the ‘apparent scope of Verizon’s inaccurate data and overstated coverage claims is so extensive that, as a practical matter, the challenge process will not and cannot produce the necessary corrections.’
As part of a public report detailing its experience, Vermont published a map showing its speed test results which contradicted the coverage maps in Vermont of U.S. Cellular, T-Mobile, and Verizon, among others. This map included information on the approximately 187,000 speed tests submitted by Vermont, including download speed, latency, and signal strength. In the report, Vermont detailed that 96% of speed tests for U.S. Cellular, 77% for T-Mobile, and 55% for Verizon failed to receive download speeds of at least 5 Mbps.

After reviewing the challenges, the FCC requested additional information from the five largest network operators (AT&T, T-Mobile, Verizon, Sprint, and U.S. Cellular) to understand the assumptions involved in the networks’ coverage models.

FCC tests

Around the same time the FCC was requesting additional information from network operators, the agency also began its own testing of the Verizon, U.S. Cellular, and T-Mobile networks. These speed tests took place in 12 states and primarily used a drive-testing methodology. As mentioned earlier, analyses of the FCC’s test data suggested that the on-the-ground experience on these networks was much different from the experience one would expect based on the information the operators had provided to the FCC.
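To give a rough sense of what that comparison involves, here’s a minimal sketch of scoring drive-test records against map-predicted minimum speeds, in the spirit of the pass rates quoted earlier. The DriveTest schema and all numbers are hypothetical; the FCC’s actual analysis pipeline isn’t public in code form.

```python
from dataclasses import dataclass

@dataclass
class DriveTest:
    """One drive-test measurement (hypothetical schema)."""
    provider: str
    predicted_min_mbps: float  # minimum speed the coverage map predicts here
    measured_mbps: float       # measured download speed (0.0 if no signal)
    got_lte_signal: bool

def score_tests(tests: list[DriveTest]) -> dict[str, float]:
    """Share of tests meeting the map-predicted minimum speed, and share
    with no 4G LTE signal at all, mirroring the report's two headline stats."""
    met = sum(t.measured_mbps >= t.predicted_min_mbps for t in tests)
    no_signal = sum(not t.got_lte_signal for t in tests)
    n = len(tests)
    return {"met_prediction": met / n, "no_lte_signal": no_signal / n}

# Toy data: one passing test, one slow test, one dead zone.
tests = [
    DriveTest("CarrierA", 5.0, 7.2, True),
    DriveTest("CarrierA", 5.0, 1.1, True),
    DriveTest("CarrierA", 5.0, 0.0, False),
]
print(score_tests(tests))  # both ratios come out to ~0.33 with this toy data
```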

What happened?

A lot of the commentary and news articles I’ve seen in response to the FCC’s report seem to conclude that network operators are bullshitters who intentionally lied about the extent of their coverage. I have reservations about fully accepting that conclusion. Accurately modeling coverage is difficult, and lots of factors affect the on-the-ground experience of wireless subscribers. The FCC largely acknowledges this reality in its report:

Providers were afforded flexibility to use the parameters that they used in their normal course of business when parameters were not specified by the Commission. For example, the Commission did not specify fading statistics or clutter loss values, and providers were required to model these factors as they would in the normal course of business.7
Our speed testing, data analyses, and inquiries, however, suggest that some of these differences may be the result of some providers’ models: (1) using a cell edge RSRP value that was too low, (2) not adequately accounting for network infrastructure constraints, including backhaul type and capacity, or (3) not adequately modeling certain on-the-ground factors—such as the local clutter, terrain, and propagation characteristics by spectrum band for the areas claimed to be covered.8
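Point (1) is worth making concrete. Below is a toy log-distance path-loss model (a textbook form, not the FCC’s or any provider’s actual model) showing how the assumed cell-edge RSRP threshold sets the predicted coverage radius. Every parameter value here is illustrative:

```python
def coverage_radius_km(eirp_dbm: float, rsrp_edge_dbm: float,
                       pl_ref_db: float = 120.0, exponent: float = 3.5) -> float:
    """Distance (km) at which received power falls to the cell-edge RSRP
    threshold, under a log-distance path-loss model:
        PL(d) = pl_ref_db + 10 * exponent * log10(d / 1 km)
    All default values are illustrative, not taken from the FCC report.
    """
    max_path_loss_db = eirp_dbm - rsrp_edge_dbm
    return 10 ** ((max_path_loss_db - pl_ref_db) / (10 * exponent))

# A modeler assuming a 6 dB weaker cell edge...
r_conservative = coverage_radius_km(eirp_dbm=46.0, rsrp_edge_dbm=-114.0)
r_optimistic = coverage_radius_km(eirp_dbm=46.0, rsrp_edge_dbm=-120.0)

# ...predicts a substantially larger covered *area*:
area_ratio = (r_optimistic / r_conservative) ** 2
print(f"{r_conservative:.1f} km -> {r_optimistic:.1f} km, "
      f"{area_ratio:.2f}x the area")  # ~13.9 km -> ~20.6 km, ~2.20x
```

In this toy setup, assuming a cell edge just 6 dB weaker more than doubles the predicted covered area. That sensitivity is exactly why an RSRP value that is "too low," in the FCC’s words, can quietly inflate a coverage map.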

Further supporting the idea that assessing coverage is difficult, the FCC didn’t just find that its tests contradicted the initial information submitted by network operators. The FCC’s measurements also contradicted the data submitted by the challengers:

The causes of the large differences in measured download speed between staff and challenger speed tests taken within the same geographic areas, as well as the high percentage of tests with a download speed of zero in the challenger data, are difficult to determine. Discrepancies may be attributable to differences in test methodologies, network factors at the time of test, differences in how speed test apps or drive test software process data, or other factors…Given the large differences between challenger and staff results, however, we are not confident that individual challenger speed test results provide an accurate representation of the typical consumer on-the-ground experience.9

While the FCC found that some of the information submitted by network operators was misleading about on-the-ground service quality, I don’t believe the agency ended up penalizing any of them or accusing them of anything too serious.10 Still, the FCC did suggest that some of the network operators could have done better:

Staff engineers, however, found that AT&T’s adjustments to its model to meet the MF-II requirements may have resulted in a more realistic projection of where consumers could receive mobile broadband. This suggests that standardization of certain specifications across the largest providers could result in coverage maps with improved accuracy. Similarly, the fact that AT&T was able to submit coverage data that appear to more accurately reflect MF-II coverage requirements raises questions about why other providers did not do so. And while it is true that MF-II challengers submitted speed tests contesting AT&T’s coverage data, unlike for other major providers, no parties alleged in the record that AT&T’s MF-II coverage data were significantly overstated.11

FCC response

The FCC concluded that it should make some changes to its processes:12

First, the Commission should terminate the MF-II Challenge Process. The MF-II coverage maps submitted by several providers are not a sufficiently reliable or accurate basis upon which to complete the challenge process as it was designed.
Second, the Commission should release an Enforcement Advisory on broadband deployment data submissions, including a detailing of the penalties associated with filings that violate federal law, both for the continuing FCC Form 477 filings and the new Digital Opportunity Data Collection. Overstating mobile broadband coverage misleads the public and can misallocate our limited universal service funds.
Third, the Commission should analyze and verify the technical mapping data submitted in the most recent Form 477 filings of Verizon, U.S. Cellular, and T-Mobile to determine whether they meet the Form 477 requirements. Staff recommends that the Commission assemble a team with the requisite expertise and resources to audit the accuracy of mobile broadband coverage maps submitted to the Commission. The Commission should further consider seeking appropriations from Congress to carry out drive testing, as appropriate.
Fourth, the Commission should adopt policies, procedures, and standards in the Digital Opportunity Data Collection rulemaking and elsewhere that allow for submission, verification, and timely publication of mobile broadband coverage data. Mobile broadband coverage data specifications should include, among other parameters, minimum reference signal received power (RSRP) and/or minimum downlink and uplink speeds, standard cell loading factors and cell edge coverage probabilities, maximum terrain and clutter bin sizes, and standard fading statistics. Providers should be required to submit actual on-the-ground evidence of network performance (e.g., speed test measurement samplings, including targeted drive test and stationary test data) that validate the propagation model used to generate the coverage maps. The Commission should consider requiring that providers assume the minimum values for any additional parameters that would be necessary to accurately determine the area where a handset should achieve download and upload speeds no less than the minimum throughput requirement for any modeling that includes such a requirement.
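Some of those parameters interact in ways that are easy to see in a link budget. As one illustration (my own sketch using textbook log-normal shadowing, not a formulation taken from the report), a standard fading statistic and a cell-edge coverage probability combine into a fade margin that tightens the effective RSRP threshold:

```python
from statistics import NormalDist

def fade_margin_db(edge_coverage_prob: float, shadowing_sigma_db: float) -> float:
    """Extra margin (dB) needed so that, under log-normal shadowing with the
    given standard deviation, a cell-edge location meets the RSRP target with
    the given probability. Textbook formulation; values below are illustrative.
    """
    return NormalDist().inv_cdf(edge_coverage_prob) * shadowing_sigma_db

# Requiring 90% (rather than 50%) cell-edge coverage with 8 dB shadowing:
margin = fade_margin_db(edge_coverage_prob=0.90, shadowing_sigma_db=8.0)
effective_threshold_dbm = -120.0 + margin  # on top of a -120 dBm RSRP target
print(f"fade margin: {margin:.1f} dB -> "
      f"effective threshold: {effective_threshold_dbm:.1f} dBm")  # ~-109.7 dBm
```

Standardizing values like the shadowing sigma and the edge coverage probability, as the staff recommends, removes knobs a provider could otherwise turn to shrink or inflate its predicted footprint.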

Reflections

The FCC’s report illustrates how hard it is to assess network performance. Assumptions must be made in coverage models, and the assumptions analysts choose to make can have substantial effects on the outputs of their models. Similarly, on-the-ground performance tests don’t always give simple-to-interpret results. Two entities can run tests in the same area and find different results. Factors like the time of day a test was conducted or the type of device that was used in a test can have big consequences.

If we want consumers to have better information about the quality of service networks can offer, we need entities involved in modeling and testing coverage to be transparent about their methodologies.