
Consumer Reports’ Broken Cell Service Rankings

Several months ago, I published a blog post arguing that Consumer Reports’ cell phone rankings were broken. This month, Consumer Reports updated those rankings with data from another round of surveying its subscribers. The rankings are still broken.

Consumer Reports slightly changed its approach this round. While Consumer Reports used to share results on 7 metrics, it now uses 5 metrics:

  1. Value
  2. Customer support
  3. Data
  4. Reception
  5. Telemarketing call frequency

Of the 19 carriers Consumer Reports assesses, only 5 operate their own network hardware.1 The other 14 carriers resell access to other companies’ networks while maintaining their own customer support teams and retail presences.2

Several of the carriers that don’t run their own network offer service over only one host network:

  • Cricket Wireless – AT&T’s network
  • Page Plus Cellular – Verizon’s network
  • MetroPCS – T-Mobile’s network
  • CREDO Mobile – Verizon’s network
  • Boost Mobile – Sprint’s network
  • GreatCall – Verizon’s network
  • Virgin Mobile – Sprint’s network

To test the validity of Consumer Reports’ methodology, we can compare scores on metrics assessing network quality between each of these carriers and their host network. At first glance, it looks like the reception and data metrics should both be exclusively about network quality. However, the scores for data account for value as well as quality:3

Data service indicates overall experience (e.g., cost, speed, reliability) with the data service.

I think it was a methodological mistake to account for value within the data metric and then to account for value again in the value metric. That leaves us with only the reception scores.4 Here are the scores the four host operators get for reception:

  • Verizon – Good
  • T-Mobile – Fair
  • AT&T – Poor
  • Sprint – Poor

How do those companies’ scores compare to scores earned by carriers that piggyback on their networks?

  • Cricket Wireless has good reception while AT&T has poor reception.
  • Page Plus and Verizon both have good reception.
  • MetroPCS has good reception while T-Mobile has fair reception.
  • CREDO and Verizon both have good reception.
  • Boost has very good reception while Sprint has poor reception.
  • GreatCall and Verizon both have good reception.
  • Virgin has good reception while Sprint has poor reception.

In the majority of cases, carriers beat their host networks. The massive differences between Cricket/AT&T and Boost/Sprint are especially concerning. In no case does a host operator beat the carriers that piggyback on its network. I would have expected the opposite outcome, since host networks generally give higher priority to their direct subscribers when networks are busy.
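
To make the “majority of cases” claim concrete, here is a minimal sketch in Python that maps the reception ratings listed above onto an ordinal scale and tallies the carrier-versus-host comparisons. The ratings are the ones reported above; the numeric scale is my own assumption and matters only for ordering.

```python
# Minimal sketch: compare single-network carriers to their host networks on
# Consumer Reports' reception ratings (ratings copied from the lists above;
# the numeric scale below is an assumption used only for ordering).
SCALE = {"Poor": 1, "Fair": 2, "Good": 3, "Very good": 4, "Excellent": 5}

host_reception = {"Verizon": "Good", "T-Mobile": "Fair", "AT&T": "Poor", "Sprint": "Poor"}

carrier_reception = {
    "Cricket Wireless": ("AT&T", "Good"),
    "Page Plus Cellular": ("Verizon", "Good"),
    "MetroPCS": ("T-Mobile", "Good"),
    "CREDO Mobile": ("Verizon", "Good"),
    "Boost Mobile": ("Sprint", "Very good"),
    "GreatCall": ("Verizon", "Good"),
    "Virgin Mobile": ("Sprint", "Good"),
}

wins = ties = losses = 0
for carrier, (host, rating) in carrier_reception.items():
    diff = SCALE[rating] - SCALE[host_reception[host]]
    if diff > 0:
        wins += 1
    elif diff == 0:
        ties += 1
    else:
        losses += 1

# Prints: carriers beat hosts 4 times, tie 3 times, and never lose.
print(f"Carrier beats host: {wins}, ties: {ties}, host beats carrier: {losses}")
```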

The rankings are broken.

What’s the problem?

I see two especially plausible explanations for why the survey results aren’t valid for comparison purposes:

  • Non-independent scoring – Respondents may take prices into account when assessing metrics other than value. If that happens, scores won’t be valid for comparisons across carriers.
  • Selection bias – Respondents were not randomly assigned to carriers. Accordingly, respondents who use a given carrier probably differ systematically from respondents who use another carrier. Differences in scores between two carriers could reflect either (a) genuine differences in service quality or (b) differences in the type of people who use each service.

Consumer Reports, please do better!

My earlier blog post about Consumer Reports’ methodology is one of the most popular articles I’ve written. I’m nearly certain staff at Consumer Reports have read it. I’ve tried to reach out to Consumer Reports through two different channels. First, I was ignored. Later, I got a response indicating that an editor might reach out to me. So far, that hasn’t happened.

I see three reasonable ways for Consumer Reports to respond to the issues I’ve raised:

  • Adjust the survey methodology.
  • Cease ranking cell phone carriers.
  • Continue with the existing methodology, but mention its serious problems prominently when discussing results.

Continuing to publish rankings based on a broken methodology without disclosing its problems is irresponsible.

Consumer Reports’ Fundraising Gimmicks

“You better cut the pizza in four pieces because I’m not hungry enough to eat six.”
– Yogi Berra (allegedly)

The other day, I received a mailing from Consumer Reports. It was soliciting contributions for a raffle fundraiser. The mailing had nine raffle tickets in it. Consumer Reports was requesting that I send back the tickets with a suggested donation of $9 (one dollar for each ticket). The mailing had a lot of paper:

The raffle’s grand prize was the winner’s choice of an undisclosed, top-rated car or $35,000. A number of smaller prizes brought the total amount up for grabs to about $50,000.

The materials included a lot of gimmicky text:

  • “If you’ve been issued the top winning raffle number, then 1 of those tickets is definitely the winner of a top-rated car — or $35,000 in cash.”
  • “Why risk throwing away what could be a huge pay day?”
  • “There’s a very real chance you could be the winner of our grand prize car!”

Consumer Reports also indicates that they’ll send a free, surprise gift to anyone who donates $10 or more. It feels funny to donate money hoping that I might win more than I donate, but I get it. Fundraising gimmicks work. That said, I get frustrated when fundraising gimmicks are dishonest.

One of the papers in the mailing came folded with print on each side. Here’s the front:

On the other side, I found a letter from someone involved in Consumer Reports’ marketing. The letter argues that it would be silly for me not to find out if I received winning tickets:

It amazes me that among the many people who receive our Consumer Reports Raffle Tickets — containing multiple tickets, mind you, not just one — some choose not to mail them in. And they do this, despite the fact there is no donation required for someone to find out if he or she has won…So when people don’t respond it doesn’t make any sense to me at all.

The multiple tickets bit is silly. It’s like the Yogi Berra line at the opening of the post; cutting a pizza into more slices doesn’t create more food. It doesn’t matter how many tickets I have unless I get more tickets than the typical person.

Come on. Consumer Reports doesn’t care if a non-donor decides not to turn in tickets. What’s the most plausible explanation for why Consumer Reports includes the orange letter? People who would otherwise ignore the mailing sometimes end up feeling guilty enough to make a donation. Checking the “I choose not to donate at this time, but please enter me in the Raffle” box on the envelope doesn’t feel great.

Writing my name on each ticket, reading the materials, and mailing the tickets takes time. My odds of winning are low. Stamps cost money.

Let’s give Consumer Reports the benefit of the doubt and pretend that the only reason not to participate is that stamps cost money. The appropriate stamp costs 55 cents at the moment.1 Is the expected reward for sending in the tickets greater than 55 cents?

Consumer Reports has about 6 million subscribers.2 Let’s continue to give Consumer Reports the benefit of the doubt and assume it can print everything, send mailings, handle the logistics of the raffle, and send gifts back to donors for only $0.50 per subscriber. That puts the promotion’s cost at about 3 million dollars. The $50,000 of prizes is trivial in comparison. Let’s further assume that Consumer Reports runs the promotion expecting that additional donations the promotion brings in will cover the promotion’s cost.

The suggested donation is $9. Let’s say the average additional funding brought in by this campaign comes out to $10 per respondent.3 To break even, Consumer Reports needs to have 300,000 respondents.

With 300,000 respondents, nine tickets each, and $50,000 in prizes, the expected return is about 1.7 cents per ticket.4 Sixteen cents per person.5 Not even close to the cost of a stamp.
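
For anyone who wants to check the arithmetic, here is a rough back-of-the-envelope sketch in Python using only the round numbers stated above (6 million subscribers, $0.50 per subscriber, $10 per respondent, roughly $50,000 in prizes, nine tickets per mailing). With a flat $50,000 prize total it comes out to roughly 1.9 cents per ticket and 17 cents per person; the slightly lower figures above presumably reflect the exact prize amounts behind the “about $50,000” estimate. Either way, the conclusion is the same.

```python
# Back-of-the-envelope raffle math using the round numbers assumed above.
subscribers = 6_000_000            # approximate Consumer Reports subscriber count
cost_per_subscriber = 0.50         # assumed all-in promotion cost per subscriber
avg_donation_per_respondent = 10   # assumed average additional funding per respondent
prize_total = 50_000               # approximate total value of all prizes
tickets_per_person = 9

promotion_cost = subscribers * cost_per_subscriber                      # $3,000,000
break_even_respondents = promotion_cost / avg_donation_per_respondent   # 300,000

tickets_in_play = break_even_respondents * tickets_per_person           # 2,700,000
value_per_ticket = prize_total / tickets_in_play                        # ~$0.019
value_per_person = prize_total / break_even_respondents                 # ~$0.17

print(f"Promotion cost: ${promotion_cost:,.0f}")
print(f"Respondents needed to break even: {break_even_respondents:,.0f}")
print(f"Expected value per ticket: {value_per_ticket * 100:.1f} cents")
print(f"Expected value per person: {value_per_person * 100:.0f} cents")
# Far below the 55-cent cost of the stamp needed to mail the tickets back.
```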


4/12/2019 Update: I received a second, almost-identical mailing in early April.

10/3/2019 Update: I received a few more of these mailings.

Issues with Consumer Reports’ 2017 Cell Phone Plan Rankings


Consumer Reports offers ratings of cellular service providers based on survey data collected from Consumer Reports subscribers. Through subscriber surveying in 2017, Consumer Reports collected data on seven metrics:1

  1. Value
  2. Data service quality
  3. Voice service quality
  4. Text service quality
  5. Web service quality
  6. Telemarketing call frequency
  7. Support service quality

The surveys collected data from over 100,000 subscribers.2 I believe Consumer Reports would frown upon a granular discussion of the exact survey results, so I’ll remain vague about exact ratings in this post. Consumer Reports subscribers who would like to see the full results of the survey can do so here.

Survey results

Results are reported for 20 service providers. Most of these providers are mobile virtual network operators (MVNOs). MVNOs don’t operate their own network hardware but make use of other companies’ networks. For the most part, MVNOs use networks provided by the Big Four (Verizon, Sprint, AT&T, and T-Mobile).

Interestingly, the Big Four do poorly in Consumer Reports’ evaluation. Verizon, AT&T, and Sprint receive the lowest overall ratings and take the last three spots. T-Mobile doesn’t do much better.

This is surprising. The Big Four do terribly, even though MVNOs are using the Big Four’s networks. Generally, I would expect the Big Four to offer network access to their direct subscribers that is as good or better than the access that MVNO subscribers receive.

It’s possible that the MVNOs’ good ratings can be explained by their offering prices and customer service far better than the Big Four’s, making them deserving of high ratings for reasons separate from network quality.

Testing the survey’s validity

To test the reliability of Consumer Reports’ methodology, we can compare MVNOs to the Big Four using only the metrics about network quality (ignoring measures of value, telemarketing call frequency, and support quality). In many cases, MVNOs use more than one of the Big Four’s networks. However, several MVNOs use only one network, allowing for easy apples-to-apples comparisons.3

  • Boost Mobile is owned by Sprint.
  • Virgin Mobile is owned by Sprint.
  • Cricket Wireless is owned by AT&T.
  • MetroPCS is owned by T-Mobile.
  • GreatCall runs exclusively on Verizon’s network.
  • Page Plus Cellular runs exclusively on Verizon’s network.

When comparing network quality ratings between these MVNOs and the companies that run their networks:

  • Boost Mobile’s ratings beat Sprint’s ratings in every category.
  • Virgin Mobile’s ratings beat Sprint’s ratings in every category.
  • Cricket Wireless’s ratings beat or tie AT&T’s ratings in every category.
  • MetroPCS’s ratings beat or tie T-Mobile’s ratings in every category.
  • GreatCall doesn’t have a rating for web quality due to insufficient data. GreatCall’s ratings match or beat Verizon in the other categories.
  • Page Plus Cellular doesn’t have a rating for web quality due to insufficient data. Page Plus’ ratings match or beat Verizon in the other categories.
[Photo caption: World’s best stock photo.]

Taken at face value, these are odd results. There are complicated stories you could tell to salvage the results, but I think it’s much more plausible that Consumer Reports’ surveys just don’t work well for evaluating the relative quality of cell phone service providers.

Why aren’t the results reliable?

I’m not sure why the surveys don’t work, but I see three promising explanations:

  • Metrics may not be evaluated independently. For example, consumers might take a service’s price into account when providing a rating of its voice quality.
  • Lack of objective evaluations. Consumers may not provide objective evaluations. Perhaps consumers are aware of some sort of general stigma about Sprint that unfairly affects how they evaluate Sprint’s quality (but that same stigma may not be applied to MVNOs that use Sprint’s network).
  • Selection bias. Individuals who subscribe to one carrier are probably, on average, different from individuals who subscribe to another carrier. Perhaps individuals who have used Carrier A tend to use small amounts of data and are lenient when rating data service quality. Individuals who have used Carrier B may get more upset about data quality issues. Consumer Cellular took the top spot in the 2017 rankings. I don’t think it’s coincidental that Consumer Cellular has pursued branding and marketing strategies to target senior citizens.4

Consumer Reports’ website gives the impression that its cell phone plan rankings will be reliable for comparison purposes.5 They won’t be.

The ratings do capture whether survey respondents are happy with their services. However, the ratings have serious limitations for shoppers trying to assess whether they’ll be satisfied with a given service.

I suspect Consumer Reports’ ratings for other product categories that rely on similar surveys will also be unreliable. However, the concerns I’m raising only apply to a subset of Consumer Reports’ evaluations. A lot of Consumer Reports’ work is based on product testing rather than consumer surveys.