2,000,000 AT&T Phones Were Unlocked Illegally

It recently came out that around 2,000,000 AT&T phones were unlocked by hackers who bribed AT&T employees. Muhammad Fahd and his co-conspirators allegedly paid a handful of AT&T employees to make the unlocks possible.

As I understand it, around 2012, lists of IMEI numbers were provided to bribed employees so that devices could be fraudulently unlocked. Eventually, the scheme became more involved. Bribed employees installed malware on AT&T systems and set up rogue wireless access points in AT&T facilities.

It’s a crazy story. Several years ago, I wondered how so many third-party services managed to offer device unlocking. I suppose this story is part of the explanation.

For more details, check out Ars Technica’s article.


Consumer Reports’ Broken Cell Service Rankings

Several months ago, I published a blog post arguing that Consumer Reports’ cell phone rankings were broken. This month, Consumer Reports updated those rankings with data from another round of surveying its subscribers. The rankings are still broken.

Consumer Reports slightly changed its approach this round. While Consumer Reports used to share results on 7 metrics, it now uses 5 metrics:

  1. Value
  2. Customer support
  3. Data
  4. Reception
  5. Telemarketing call frequency

Of the 19 carriers Consumer Reports assesses, only 5 operate their own network hardware.1 The other 14 carriers resell access to other companies’ networks while maintaining their own customer support teams and retail presences.2

Several of the carriers that don’t run their own network offer service over only one host network:

  • Cricket Wireless – AT&T’s network
  • Page Plus Cellular – Verizon’s network
  • MetroPCS – T-Mobile’s network
  • CREDO Mobile – Verizon’s network
  • Boost Mobile – Sprint’s network
  • GreatCall – Verizon’s network
  • Virgin Mobile – Sprint’s network

To test the validity of Consumer Reports’ methodology, we can compare scores on network-quality metrics between each of these carriers and its host network. At first glance, it looks like the reception and data metrics should both be exclusively about network quality. However, the scores for data account for value as well as quality:3

Data service indicates overall experience (e.g., cost, speed, reliability) with the data service.

I think it was a methodological mistake to account for value within the data metric and then account for value again in the value metric. That leaves us with only the reception scores.4 Here are the scores the four host operators get for reception:

  • Verizon – Good
  • T-Mobile – Fair
  • AT&T – Poor
  • Sprint – Poor

How do those companies’ scores compare to scores earned by carriers that piggyback on their networks?

  • Cricket Wireless has good reception while AT&T has poor reception.
  • Page Plus and Verizon both have good reception.
  • MetroPCS has good reception while T-Mobile has fair reception.
  • CREDO and Verizon both have good reception.
  • Boost has very good reception while Sprint has poor reception.
  • GreatCall and Verizon both have good reception.
  • Virgin has good reception while Sprint has poor reception.

In the majority of cases, carriers beat their host networks. The massive differences between Cricket/AT&T and Boost/Sprint are especially concerning. In no case does a host operator beat a carrier that piggybacks on its network. I would have expected the opposite outcome: host networks generally give higher priority to their direct subscribers when networks are busy.

The rankings are broken.

What’s the problem?

I see two especially plausible explanations for why the survey results aren’t valid for comparison purposes:

  • Non-independent scoring – Respondents may take prices into account when assessing metrics other than value. If that happens, scores won’t be valid for comparisons across carriers.
  • Selection bias – Respondents were not randomly assigned to carriers. Accordingly, respondents who use a given carrier probably differ systematically from respondents who use another carrier. Differences in scores between two carriers could reflect either (a) genuine differences in service quality or (b) differences in the type of people who use each service.

Consumer Reports, please do better!

My earlier blog post about Consumer Reports’ methodology is one of the most popular articles I’ve written. I’m nearly certain staff at Consumer Reports have read it. I’ve tried to reach out to Consumer Reports through two different channels. First, I was ignored. Later, I got a response indicating that an editor might reach out to me. So far, that hasn’t happened.

I see three reasonable ways for Consumer Reports to respond to the issues I’ve raised:

  • Adjust the survey methodology.
  • Cease ranking cell phone carriers.
  • Continue with the existing methodology, but mention its serious problems prominently when discussing results.

Continuing to publish rankings based on a broken methodology without disclosing its problems is irresponsible.

Markets Are Honest

I’ve been reading a ton of articles with commentators’ takes on whether a merger between Sprint and T-Mobile will be good or bad for consumers. Almost everything I’ve read has taken a strong position one way or the other. I don’t think I’ve seen a single article that expressed substantial uncertainty about whether a merger would be good or bad.

It could be that everyone is hugely biased on both sides of the argument. Or maybe the deal is so bad that only incredibly biased people would consider making an argument that the merger will be good for consumers. I’m not sure.


I like to look at how markets handle situations I’m uncertain about. In the last few years, I’ve regularly seen liberal politicians and liberal news outlets arguing that we’re about to see the end of Trump’s presidency because of some supposedly impeachable action that just came to light. I’m not Trump’s biggest fan, but I’ve found many of the arguments that he’s about to be impeached far-fetched. I have a habit of going to the political betting market PredictIt when I see new arguments of this sort. PredictIt has markets on lots of topics, including whether Trump will be impeached.

Politicians and newspapers have an incentive to say things that will generate attention. A lot of the time, doing what gets attention is at odds with saying what’s true. People putting money in markets have incentives that are better aligned with truth.

Most of the time I’ve seen articles about Trump’s impending impeachment, political betting markets haven’t moved much. On the rare occasions when markets moved significantly, I’ve had a good indication that something major actually happened.


Wall Street investors have a strong incentive to understand how the merger will actually affect network operators’ success. Unsurprisingly, T-Mobile’s stock increased substantially when key information indicating likely approval of a merger came out. Sprint’s stock also increased in value.

What’s much weirder is that neither Verizon’s stock nor AT&T’s stock seemed to take a negative hit on the days when important information about the merger’s likelihood came out. In fact, the stocks look like they may have increased slightly in value.1

You could tell complicated stories to explain why a merger could be good for competing companies’ stock prices and also good for consumers. I think the simpler story is much more plausible: Wall Street is betting the merger will be bad for consumers.

Maybe none of this should be surprising. There were other honest signals earlier in the approval process. As far as I can tell, neither Verizon nor AT&T seriously resisted the merger.2


Disclosure: At the time of writing, I have financial relationships with a bunch of telecommunications companies, including all of the major U.S. network operators except T-Mobile.


Moto G7 Play – The Ultimate Budget Phone

Earlier this year, Motorola released its G7 Play. Despite a full price of only $199.99, it’s an amazing phone.

If you’re interested in diving deeply into the phone’s technical specs, I recommend Digital Trends’ review. The phone’s hardware isn’t as impressive as what’s in today’s $1,000 flagship devices, but the phone still packs plenty of power. I haven’t had any trouble with the G7 Play’s performance in a month or two of use. Even the battery life is good. I expect the majority of smartphone users would be highly satisfied with the phone’s performance. That said, those looking for optimal performance on high-end mobile games or the best camera possible should consider other devices.

The G7 Play is one of a limited number of phones that qualify for my list of universal unlocked phones. When the phone is purchased directly from Motorola, it should have the radio hardware and whitelisting necessary for compatibility and solid performance on all four major U.S. networks. Versions of the G7 Play purchased from carriers and third-party retailers may have less extensive compatibility than devices purchased directly from Motorola.

The phone runs Android 9 and can do nearly everything I expect higher-end Android phones to be capable of. It even has a handful of clever features Motorola added—e.g., I’ve enjoyed the convenience of being able to toggle the flashlight on and off by shaking the phone side to side.

Here are the most meaningful negative aspects of the phone I can come up with:

  • There’s a notch on the front of the phone that houses a camera and a microphone. The phone would be more aesthetically appealing without the notch.
  • The camera isn’t as good as many of the cameras found on high-end devices.
  • NFC is not supported.
  • The phone only has 2GB of RAM. This may limit the phone’s performance when multitasking, but I haven’t had any problems yet.

These limitations don’t really bother me. I don’t think they’ll bother most other people either.

Motorola offers two other models in the same series of phones that cost slightly more but come with more powerful hardware: the G7 Power and G7.

Verizon Pushes Back Deadline For 3G Retirement

Verizon has updated a web page about the company’s plans for retiring its 3G network. Previously, the web page indicated that (a) Verizon planned to retire its 3G network by the end of 2019 and (b) Verizon would no longer activate devices that were CDMA-only or did not support HD Voice:

Verizon Wireless is retiring its CDMA (3G) network at the end of 2019. As a result, we are no longer allowing activation of CDMA-only devices, including CDMA-only basic phones and smartphones, or 4G LTE smartphones that do not support HD Voice service.

The updated web page suggests Verizon plans to keep its 3G network available to customers until the end of 2020. It also looks like some CDMA-only phones and phones without HD Voice may be eligible for activation until the end of the year:

Starting January 1, 2020, Verizon will no longer allow any CDMA (3G and 4G Non-HD Voice) ‘Like-for-Like’ device changes.

The page also indicates that bringing your own CDMA device to activate on an existing line will be prohibited starting 1/1/2020.

As networks change their deadlines, I plan to update my earlier blog post covering each major network’s plans for phasing out its 3G network.

Location, Location, Location

In my opinion, major wireless networks can be ranked pretty clearly in terms of their current, nationwide reliability:

  1. Verizon (best)
  2. AT&T
  3. T-Mobile
  4. Sprint (worst)

I get frustrated when network operators make misleading statements about nationwide quality, and I sometimes write articles calling out bullshit claims. That said, a network’s typical reliability throughout the U.S. may be very different from that network’s quality in a given area. When deciding which carrier to use, all that matters is how carriers perform where you actually use your phone.

In the last year, I’ve run speed tests in Boulder, Colorado with a bunch of carriers (using all four of the major U.S. networks). A few days ago, I ran a speed test on a phone with service from Tello, a carrier that runs over Sprint’s network. While Sprint has the worst nationwide network, the speed test found a download speed far faster than I’ve seen in Boulder with any other carrier:

129 Mbps speed test result

As a general rule, service is more expensive on networks with better nationwide performance. If you live where an underdog network performs well, you might be able to get great service at a bargain price.

DOJ Clears T-Mobile’s Merger With Sprint

As expected, the Department of Justice made an announcement today approving a merger between Sprint and T-Mobile. While the merger isn’t officially closed, DOJ approval was the largest hurdle T-Mobile and Sprint needed to clear before making their merger a reality.

As far as I can tell, the terms of the merger were consistent with what most commentators were expecting:

  • Most of Sprint’s prepaid business will be divested to DISH1
  • DISH will get Sprint’s 800 MHz spectrum
  • DISH will receive access to the New T-Mobile’s network for at least 7 years2
  • DISH will have the option to take over leases on some retail stores and cell sites

I don’t think mergers between telecom companies have a good track record of benefiting consumers. I hope this merger will be different, but I’m not betting on it. As many others have pointed out, something is odd about the whole arrangement. The divestitures to DISH are ostensibly intended to allow DISH to create a viable, facilities-based carrier (i.e., a carrier that has its own hardware and doesn’t just piggyback off other companies’ networks). If DISH is likely to succeed, it’s hard to explain why Sprint couldn’t remain a viable force. Maybe I’m misunderstanding something important.

I expect the merger-related transitions to take a few years, and I plan to write about new developments as they occur. Should be interesting.


For those interested, here are a few excerpts from T-Mobile’s announcement:

The proposed New T-Mobile, will divest Sprint’s prepaid businesses and Sprint’s 800 MHz spectrum assets to DISH. Additionally, upon the closing of the divestiture transaction, the companies will provide DISH wireless customers access to the New T-Mobile network for seven years and offer standard transition services arrangements to DISH during a transition period of up to three years. DISH will also have an option to take on leases for certain cell sites and retail locations that are decommissioned by the New T-Mobile, subject to any assignment restrictions.

The New T-Mobile will be committed to divest Sprint’s entire prepaid businesses including Boost Mobile, Virgin Mobile and Sprint-branded prepaid customers (excluding the Assurance brand Lifeline customers and the prepaid wireless customers of Shenandoah Telecommunications Company and Swiftel Communications, Inc.), to DISH for approximately $1.4 billion. These brands serve approximately 9.3 million customers in total.

With this agreement, Boost Mobile, Virgin Mobile, and Sprint-branded prepaid customers, as well as new DISH wireless customers, will have full access to the legacy Sprint network and the New T-Mobile network in a phased approach. Access to the New T-Mobile network will be through an MVNO arrangement, as well as through an Infrastructure MNO arrangement enabling roaming in certain areas until DISH’s 5G network is built out.

The companies have also committed to engage in good faith negotiations regarding the leasing of some or all of DISH’s 600 MHz spectrum to T-Mobile.

New RootMetrics Report – Verizon Wins Again

Yesterday, RootMetrics released its report on mobile network performance in the first half of 2019. Here are the overall, national scores for each network:1

  • Verizon – 94.8 points
  • AT&T – 93.2 points
  • T-Mobile – 86.9 points
  • Sprint – 86.7 points

While Verizon was the overall winner, AT&T wasn’t too far behind. T-Mobile came in a distant third with Sprint just behind it.

RootMetrics also reports which carriers scored the best on each of its metrics within individual metro areas. Here’s how many metro-area awards each carrier won, along with the change in the number of awards received since the last report:2

  • Verizon – 672 awards (+5)
  • AT&T – 380 (+31)
  • T-Mobile – 237 (-86)
  • Sprint – 89 (+9)

My thoughts

This report wasn’t too surprising since the overall results were so similar to those from the previous report. The decline in the number of metro-area awards T-Mobile won is large, but I’m not sure I should take the change too seriously. There may have been a big change in T-Mobile’s quality relative to other networks, but I think it’s also possible the change can be explained by noise or a change in methodology. In its report, RootMetrics notes the following:3

T-Mobile’s performance didn’t necessarily get worse. Rather, AT&T, Sprint, and Verizon each made award gains in the test period, which corresponded with T-Mobile’s decreased award count.

I continue to believe RootMetrics’ data collection methodology is far better than Opensignal’s methodology for assessing networks at the national level. I take this latest set of results more seriously than I take the Opensignal results I discussed yesterday. That said, I continue to be worried about a lack of transparency in how RootMetrics aggregates its underlying data to arrive at final results. Doing that aggregation well is hard.

A final note for RootMetrics:
PLEASE DISCLOSE FINANCIAL RELATIONSHIPS WITH COMPANIES YOU EVALUATE!


Opensignal Released a New Report – I’m Skeptical

Opensignal just released a new report on the performance of U.S. wireless networks. The report ranks major U.S. networks in five categories based on crowdsourced data:

  • 4G availability
  • Video experience
  • Download speed experience
  • Upload speed experience
  • Latency experience

Verizon took the top spot for 4G availability and video experience. T-Mobile came out on top for both of the speed metrics. T-Mobile and AT&T shared the top placement for the latency experience metric.

Selection bias

I’ve previously raised concerns about selection bias in Opensignal’s data collection methodology. Opensignal crowdsources data from typical users. Crowdsourcing introduces issues since there are systematic differences between the typical users of different networks. Imagine that Network A has far more extensive coverage in rural areas than Network B. It stands to reason that Network A likely has more subscribers in rural areas than Network B. Many other subscriber attributes vary between networks in similar ways. For example, expensive networks likely have wealthier subscribers.

Analyses of crowdsourced data can capture both (a) genuine differences in network performance and (b) differences in how subscribers on each network use their devices. Opensignal’s national results shouldn’t be taken too seriously unless Opensignal can make a compelling argument that either (a) its methodology doesn’t lead to serious selection bias or (b) it’s able to adequately adjust for the bias.
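
To make the selection-bias concern concrete, here’s a minimal simulation. Every number in it is invented: it assumes two networks that perform identically in any given location, with subscriber bases that differ only in how rural they are.

    import random

    random.seed(0)

    # Both hypothetical networks deliver the same speeds in a given setting;
    # only the makeup of their subscriber bases differs.
    TRUE_SPEED = {"urban": 30.0, "rural": 8.0}  # Mbps, identical for both networks

    def crowdsourced_average(rural_share, n=10_000):
        """Average speed from simulated crowdsourced tests by a network's users."""
        total = 0.0
        for _ in range(n):
            setting = "rural" if random.random() < rural_share else "urban"
            total += max(0.0, random.gauss(TRUE_SPEED[setting], 3.0))  # per-test noise
        return total / n

    # Network A has more rural subscribers than Network B.
    print(f"Network A: {crowdsourced_average(rural_share=0.5):.1f} Mbps")
    print(f"Network B: {crowdsourced_average(rural_share=0.1):.1f} Mbps")
    # Network A looks several Mbps slower even though both networks perform
    # identically in like-for-like locations; the gap comes from who tests where.

A real analysis would involve many more confounders than geography, but the basic problem is the same.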

Speed metrics

Opensignal ranks carriers based on average download and upload speeds. In my opinion, average speeds are overrated. The portion of time where speeds are good enough is much more important than the average speed a service offers.

Opensignal’s average download speed results are awfully similar between carriers:

  • Verizon – 22.9 Mbps
  • T-Mobile – 23.6 Mbps
  • AT&T – 22.5 Mbps
  • Sprint – 19.2 Mbps

Service at any of those speeds would be sufficient for almost anything people typically use their phones for. Without information about how often speeds were especially low on each network, it’s hard to draw conclusions about differences in the actual experience on each network.
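
To illustrate the point about averages (with made-up numbers), compare a hypothetical network that is steadily adequate against one that is fast on average but frequently unusable:

    # Hypothetical speed-test results in Mbps; neither set is real data.
    network_x = [18, 20, 22, 19, 21, 20, 18, 22, 20, 20]   # steady
    network_y = [60, 70, 1, 0.5, 65, 1, 80, 0.5, 1, 55]    # fast but flaky

    GOOD_ENOUGH = 5  # Mbps; a rough threshold where common activities work fine

    for name, speeds in [("Network X", network_x), ("Network Y", network_y)]:
        avg = sum(speeds) / len(speeds)
        usable = sum(s >= GOOD_ENOUGH for s in speeds) / len(speeds)
        print(f"{name}: average {avg:.1f} Mbps, usable in {usable:.0%} of tests")

    # Network Y wins on average (33.4 vs. 20.0 Mbps) but falls below the
    # threshold in half of the tests; Network X never does.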

PCMag’s 2019 Network Tests — My thoughts

Summary

On June 20, PCMag published its latest results from performance testing on the major U.S. wireless networks. Surprisingly, AT&T rather than Verizon took the top spot in the overall rankings. I expect this was because PCMag places far more weight on network performance within cities than on performance in less-populated areas.

In my opinion, PCMag’s methodology overweights average upload and download speeds at the expense of network reliability. Despite my qualms, I found the results interesting to dig into. PCMag deserves a lot of credit for its thoroughness and unusual level of transparency.

Testing methodology

PCMag claims to be more transparent about its methodology than other entities that evaluate wireless networks.1 I’ve found this to be true. PCMag’s web page covering its methodology is detailed. Sascha Segan, who leads the testing, quickly responded to my questions with detailed answers. I can’t say anything as positive about the transparency demonstrated by RootMetrics or Opensignal.

To measure network performance, PCMag used custom speed test software developed by Ookla. The software was deployed on Samsung Galaxy S10 phones, which collected data as they were driven to 30 U.S. cities.2 In each city, stops were made in several locations for additional data collection. PCMag only recorded performance on LTE networks. If a phone was connected to a non-LTE network (e.g., a 3G network) during a test, the phone would fail that test.3 PCMag collected data on six metrics:4

  • Average download speed
  • Percent of downloads over a 5Mbps speed threshold
  • Average upload speed
  • Percent of uploads over a 2Mbps speed threshold
  • Reliability (percent of the time a connection was available)
  • Latency

The Galaxy S10 is a recent flagship device and has essentially the best technology available for high performance on LTE networks. Accordingly, PCMag’s tests are likely to show better performance than consumers using lower-end devices will experience. PCMag’s decision to use the same high-performance device on all networks may prevent the selection bias that sometimes creeps into crowdsourced data when subscribers on one network tend to use different devices than subscribers on another network.5

In my opinion, PCMag’s decision not to account for performance on non-LTE networks somewhat limits the usefulness of its results. Some network operators still use a lot of non-LTE technologies.

Scoring

PCMag accounts for networks’ performance on several different metrics. To arrive at overall rankings, PCMag gives networks a score for each metric and assigns specific weights to each metric. Scoring multiple metrics and reasonably assigning weights is far trickier than most people realize. A lot of evaluation methodologies lose their credibility during this process (see Beware of Scoring Systems).

PCMag shares a pie chart describing the weights assigned to each metric.6

The pie chart doesn’t tell the full story. For each metric, PCMag gives the best-performing network all the points available for that metric. Other networks are scored based on how far they are from the best-performing network. For example, if the best-performing network has an average download speed of 100Mbps (a great speed), it will get 100% of the points available for average download speed. Another network with an average speed of 60Mbps (a good speed) would get 60% of the points available for average download speed.
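
Here’s a rough sketch of that scoring step as I understand it. The function, the weights, and the speeds below are all made up for illustration; PCMag’s actual implementation may differ.

    def proportional_points(values, weight, higher_is_better=True):
        """Give the best network the metric's full weight; score the rest
        proportionally to how close they are to the best performer."""
        if higher_is_better:
            best = max(values.values())
            return {net: weight * v / best for net, v in values.items()}
        best = min(values.values())
        return {net: weight * best / v for net, v in values.items()}

    # Hypothetical average download speeds and a made-up 30-point weight
    speeds = {"Network A": 100.0, "Network B": 60.0}
    print(proportional_points(speeds, weight=30))
    # {'Network A': 30.0, 'Network B': 18.0} -> B gets 60% of the available points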

The importance of a metric is determined not just by the weight it’s assigned. The variance in a metric is also extraordinarily important. PCMag measures reliability in terms of how often a phone has an LTE connection. Reliability has low variance. 100% reliability indicates great coverage (i.e., a connection is always available). 80% reliability is bad. Networks’ reliability barely affects PCMag’s rankings since reliability measures are fairly close to 100% even on unreliable networks.

The scoring system is sensitive to how reliability numbers are presented. Imagine there are only two networks:

  • Network A with 99% reliability
  • Network B with 98% reliability

Using PCMag’s approach, networks A and B would get a very similar number of points for reliability. However, it’s easy to present the same metric differently:

  • Network A has no connection 1% of the time
  • Network B has no connection 2% of the time

If PCMag put the reliability metric in this format, network B would only get half of the points available for reliability.
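
A quick sketch shows how much the framing matters under this kind of proportional scoring (again, a simplified model with made-up point values, not PCMag’s exact formula):

    WEIGHT = 10  # made-up number of points available for reliability

    # Framing 1: percent of time with an LTE connection (higher is better)
    uptime = {"Network A": 99.0, "Network B": 98.0}
    best_up = max(uptime.values())
    uptime_points = {net: WEIGHT * v / best_up for net, v in uptime.items()}

    # Framing 2: percent of time with no connection (lower is better)
    downtime = {"Network A": 1.0, "Network B": 2.0}
    best_down = min(downtime.values())
    downtime_points = {net: WEIGHT * best_down / v for net, v in downtime.items()}

    print(uptime_points)    # {'Network A': 10.0, 'Network B': ~9.9}
    print(downtime_points)  # {'Network A': 10.0, 'Network B': 5.0}
    # Same underlying data; the point gap between the networks changes dramatically.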

As a general rule, I think average speed metrics are hugely overrated. What matters is whether speeds are good enough for people to do what they want to do on their phones. Speeds far beyond that minimum don’t benefit people much.

I’m glad that PCMag put some weight on reliability and on the proportion of tests that exceeded modest upload and download speed thresholds. However, these metrics just don’t have nearly as much effect on PCMag’s final results as I think they should. The scores for Chicago provide a good illustration:

Despite having the worst reliability score and by far the worst score for downloads above a 5Mbps threshold, T-Mobile still manages to take the top ranking. Without hesitation, I’d choose service with Verizon or AT&T’s performance in Chicago over service with T-Mobile’s performance in Chicago. (If you’d like to get a better sense of how scores for different metrics drove the results in Chicago, see this Google sheet where I’ve reverse engineered the scoring.)

To create rankings for regions and final rankings for the nation, PCMag combines city scores and scores for suburban/rural areas. As I understand it, PCMag mostly collected data in cities, and roughly 20% of the overall weight is placed on data from rural/suburban areas. Since a lot more than 20% of the U.S. population lives in rural or suburban areas, one could argue the national results overrepresent performance in cities. I think this puts Verizon at a serious disadvantage in the rankings. Verizon has more extensive coverage than other networks in sparsely populated areas.
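
To see how the city-heavy weighting can tilt a national result, consider a toy comparison with invented scores. The 80/20 split below reflects my reading of the weighting; the carrier scores are purely hypothetical.

    # Invented city and rural/suburban scores for two hypothetical carriers
    scores = {
        "City-focused carrier": {"city": 96, "rural": 60},
        "Consistent carrier":   {"city": 88, "rural": 88},
    }

    def national_score(s, city_weight):
        return city_weight * s["city"] + (1 - city_weight) * s["rural"]

    for w in (0.8, 0.5):  # ~80/20 as I understand the split, vs. an even split
        results = {name: round(national_score(s, w), 1) for name, s in scores.items()}
        print(f"city weight {w:.0%}: {results}")
    # At an 80% city weight the city-focused carrier edges ahead (88.8 vs. 88.0);
    # at an even split the consistent carrier wins comfortably (88.0 vs. 78.0).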

While I’ve been critical in this post, I want to give PCMag the credit it’s due. First, the results for each metric in individual cities are useful and interesting. It’s a shame that many people won’t go that deep into the results and will instead walk away with the less-useful conclusion that AT&T took the top spot in the national rankings.

PCMag also deserves credit for not claiming that its results are the be-all-end-all of network evaluation:7

Other studies may focus on downloads, or use a different measurement of latency, or (in Nielsen’s case) attempt to measure the speeds coming into various mobile apps. We think our balance makes the most sense, but we also respect the different decisions others have made.

RootMetrics and Opensignal are far less modest.