Earlier this month, AT&T published a news release claiming: “AT&T is the ‘Nation’s Fastest Wireless Network’ in 2019”. The claim is based on data collected by Ookla, the company behind Speedtest.net.
In the piece, AT&T writes:
With over 10 million consumer-initiated tests taken daily with Speedtest, Ookla provides invaluable insight into the performance, quality and accessibility of networks worldwide.
Ookla’s reliance on consumer-initiated tests has serious downsides. For example, consumers on different networks use different kinds of phones. Premium phones tend to have hardware that supports faster connections than cheap phones. If subscribers on Network A tend to run tests using premium phones while subscribers on Network B tend to run tests from budget phones, Network A is going to have an advantage in consumer-initiated tests that’s unrelated to underlying network performance.
Ookla’s methodology is also prone to selection bias. Consumer-initiated tests don’t occur among a randomly selected sample of subscribers on each network. Consequently, test results likely aren’t representative of typical network performance. There was a clear case of this problem when AT&T began misleadingly labeling some of its 4G service “5GE.” Here’s an excerpt from a Speedtest.net blog post:
In the final week of Q1, we also observed an increase in faster tests taken on AT&T’s network. Upon investigation, we discovered that this correlated with the release of iOS 12.2 and the roll out of AT&T’s 5G E icon. We also found that the increase in tests was coming from device models that would have started to display the 5G E icon, such as the newer generations of iPhone (XR, XS Max, XS, X, 8, 8 Plus), indicating that consumers were seeing the new icon and taking a test to see what speeds they were getting.
While no method for evaluating mobile network performance is perfect, I tend to think Opensignal and RootMetrics use methodologies that are more reliable than Ookla’s. AT&T didn’t take the top spot for speeds in RootMetrics’ most recent report or Opensignal’s most recent report.
AT&T’s news release includes ridiculous digs at competitors:
Speedtest results show we increased our speeds by 45.7% year-over-year, while one of our competitors never ‘checked the box’ on speed with only a 16.5% increase year over year and the other defined what it meant to be ‘Un-speedy’ with only a 9.4% increase year over year.
Year-over-year changes in average speeds don’t on their own indicate whether networks are fast. A visual in the news release is illuminating:
AT&T performed poorly relative to T-Mobile and Verizon in early 2018. In a sense, AT&T was able to get a 45.7% year-over-year increase in speeds because it performed so poorly in 2018.
I’d argue that AT&T’s news release ignores the most important part of Speedtest’s 2019 report. In my opinion, average speed is overrated. For most consumers, it’s far more important to consistently have decent speeds than to have high average speeds.
Ookla reports a consistency metric based on the proportion of tests that exceed a threshold of 5Mbps. Verizon takes the top spot on this metric, followed by T-Mobile, with AT&T coming in third. Verizon also beats out AT&T for coverage availability, another metric that can act as a proxy for consistency.
Bias against Verizon
In Ookla’s main analyses, data is only included from “competitive geographies.” Competitive geographies only include areas where Ookla has a substantial number of test results from at least three major networks. There are defensible reasons for Ookla to use the competitive geographies filter. However, it should be acknowledged that Verizon has the nation’s most extensive network and likely outperforms AT&T and other networks in non-competitive geographies.
Boost Mobile is running a new commercial that features Pitbull and pitches the company’s low prices. Towards the end of the ad, a narrator says that Boost has a “super fast, super reliable network.” The narration is accompanied by this image:
In most commercials that involve carriers making claims about service quality, carriers will use fine print to cite research that backs up their claims. The Boost ad doesn’t include a citation; perhaps that’s because Boost’s claim doesn’t have much substance. “Super fast, super reliable” is super vague. Boost’s service probably is super fast compared to wireless service from 15 years ago. On the other hand, Boost’s service is not super fast compared to most services currently offered by other U.S. carriers.
Boost runs over Sprint’s network. There are a lot of different companies that evaluate network performance, and Sprint tends to do poorly relative to its competitors in the rigorous evaluations. Sprint had the lowest speeds among all four major networks in both RootMetrics’ and Opensignal’s most recent national assessments.
Boost’s claim looks even sillier in light of the fact that its subscribers tend to have low priority data access on Sprint’s network. When Sprint’s network is congested, Boost Mobile’s subscribers will tend to experience slower data speeds than those who subscribe directly to Sprint’s service.
A misleading image accompanies Boost’s misleading claims about quality. The image looks like a coverage map showing extensive coverage, but a disclaimer states: “Coverage not available everywhere. Not a depiction of actual coverage.”
AT&T has settled with the Federal Trade Commission (FTC) and agreed to pay out $60 million to current and past customers who may have been affected by misleading claims about unlimited data. The settlement is in response to the FTC’s 2014 accusation that AT&T failed to adequately disclose that customers on unlimited data plans could have their speeds throttled substantially. Here are a few bits from the 2014 FTC complaint:
The FTC’s complaint alleges that the company failed to adequately disclose to its customers on unlimited data plans that, if they reach a certain amount of data use in a given billing cycle, AT&T reduces – or “throttles” – their data speeds to the point that many common mobile phone applications – like web browsing, GPS navigation and watching streaming video – become difficult or nearly impossible to use…AT&T’s marketing materials emphasized the ‘unlimited’ amount of data that would be available to consumers who signed up for its unlimited plans…AT&T, despite its unequivocal promises of unlimited data, began throttling data speeds in 2011 for its unlimited data plan customers after they used as little as 2 gigabytes of data in a billing period. According to the complaint, the throttling program has been severe, often resulting in speed reductions of 80 to 90 percent for affected users. Thus far, according to the FTC, AT&T has throttled at least 3.5 million unique customers a total of more than 25 million times…consumers in AT&T focus groups strongly objected to the idea of a throttling program and felt ‘unlimited should mean unlimited.’
Here’s an excerpt from the FTC’s press release from today (emphasis mine):
As part of the settlement, AT&T is prohibited from making any representation about the speed or amount of its mobile data, including that it is “unlimited,” without disclosing any material restrictions on the speed or amount of data. The disclosures need to be prominent, not buried in fine print or hidden behind hyperlinks. For example, if an AT&T website advertises a data plan as unlimited, but AT&T may slow speeds after consumers reach a certain data cap, AT&T must prominently and clearly disclose those restrictions.
I’m glad to see the FTC cracking down on misleading practices. Bogus “unlimited” plans seem to be much more common today than they were in 2014.
Altice Mobile just launched with a tempting offer. Altice’s only plan, its “unlimited everything” plan, is only $30 per line each month. A lot of technology websites have been writing about the new offering, and most of them aren’t mentioning how many limits Altice puts on its subscribers. Altice Mobile is at fault here. The company has been unusually non-transparent about the limitations it imposes.
In my previous post, I was critical of Total Wireless for marketing one of its plans as an unlimited plan, even though it involved a significant limitation:
Total Wireless is at least transparent in letting customers know that some limits do exist despite labeling the plan as unlimited. Altice Mobile doesn’t put a disclaimer or an asterisk next to its claims:
Altice’s press release is even more misleading:
Altice Mobile offers one simple plan with unlimited everything:
unlimited data, text, and talk nationwide,
unlimited mobile hotspot,
unlimited video streaming,
unlimited international text and talk from the U.S. to more than 35 countries, including Canada, Mexico, Dominican Republic, Israel, most of Europe, and more, and
unlimited data, text and talk while traveling abroad in those same countries.
Potential customers wanting to understand Altice Mobile’s limitations need to find their way to a web page full of legalese titled Broadband Disclosure Information. As it turns out, Altice has lots of limitations:
Mobile hotspot is typically throttled to a maximum of 600Kbps (a fairly slow speed).
Video is typically throttled to a maximum of 480p.
After 50GB of use in a month, video traffic and hotspot traffic are throttled to 128Kbps.
As I discussed in my last post, it’s silly to call a service unlimited while throttling it to especially low speeds. The claim in the press release that Altice offers “unlimited video streaming” is particularly misleading. 128Kbps can’t support stable streaming of even low-resolution, 240p video. It turns out the claim of unlimited international data in 35 countries is also misleading: international data after the first gigabyte is throttled to 128Kbps.
Despite the limitations, there’s a lot that’s exciting about Altice Mobile. It might be a good option for people who live in the limited set of regions where it’s available. Even with its limitations, the service still has a competitive price. I hope we’ll see Altice move towards being more transparent with consumers. If you’re looking for alternatives to Altice, consider checking out my list of recommended low-cost carriers.
It’s becoming more common for carriers to offer additional data at 2G speeds after subscribers use up all of the regular-speed data that they’ve been allotted. In most cases, this means subscribers who’ve run out of regular data are throttled to a maximum speed of 128Kbps. It’s a great perk. Imagine you’ve run out of regular data, but really need to use the internet for a moment to pull up a boarding pass, look up directions, or view an email. At 2G speeds, it will probably be frustratingly slow to do any of those things, but that’s a much better scenario than being unable to use data at all.
Most consumers have little clue what 2G speeds amount to in practice. Let me be clear: 2G speeds are really slow for most things people want to do. Music streaming probably won’t work well. Video streaming at low, 240p resolution won’t be possible. Most websites will take a long time to load.
Carriers vary in how they present the perk of extra data at 2G speeds. In my opinion, Mint Mobile and Verizon handle the perk in a commendable way. Both carriers generally describe their plans and data allotments based on the amount of regular data allotted. In contrast, Total Wireless and Tello offer “unlimited” plans. These plans have caps on regular data use. After the cap is reached, subscribers continue to have data at 2G speeds. I think it’s misleading, bordering on outright lying, to call these unlimited plans. It’s just not possible to use data in a normal manner once speeds are throttled to 128Kbps.
In fact, imposing a throttle creates a limit on how much data can be used in a month. If a subscriber manages to transmit 128 kilobits of data every second for an entire month, they’ll use about 40GB of data. While almost no subscribers will come close to reaching it, there is a theoretical limit on these supposedly unlimited plans. It’s roughly: amount of regular data + 40GB.
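The ceiling implied by a 128Kbps throttle can be sketched with quick arithmetic. This is a rough illustration assuming a 30-day month and decimal (SI) gigabytes:

```python
# Maximum data transferable at a constant 128Kbps throttle,
# assuming a 30-day month and decimal (SI) gigabytes.
THROTTLE_KBPS = 128

seconds_per_month = 30 * 24 * 60 * 60              # 2,592,000 seconds
bits_per_month = THROTTLE_KBPS * 1000 * seconds_per_month
gb_per_month = bits_per_month / 8 / 1e9            # bits -> bytes -> gigabytes

print(round(gb_per_month, 1))  # → 41.5
```

So a subscriber saturating the throttle around the clock tops out at roughly 40GB beyond their regular-data allotment, which is the theoretical cap described above.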
Disclosure: I have financial relationships with Verizon, Mint Mobile, Tello, and Total Wireless (more details).
I’ve been reading a ton of articles with commentators’ takes on whether a merger between Sprint and T-Mobile will be good or bad for consumers. Almost everything I’ve read has taken a strong position one way or the other. I don’t think I’ve seen a single article that expressed substantial uncertainty about whether a merger would be good or bad.
It could be that everyone is hugely biased on both sides of the argument. Or maybe the deal is so bad that only incredibly biased people would consider making an argument that the merger will be good for consumers. I’m not sure.
I like to look at how markets handle situations I’m uncertain about. In the last few years, I’ve regularly seen liberal politicians and liberal news agencies arguing that we’re about to see the end of Trump’s presidency because of some supposedly impeachable action that just came to light. I’m not Trump’s biggest fan, but I’ve found a lot of arguments about how he’s about to be impeached too far-fetched. I have a habit of going to the political betting market PredictIt when I see new arguments of this sort. PredictIt has markets on lots of topics, including whether or not Trump will be impeached.
Politicians and newspapers have an incentive to say things that will generate attention. A lot of the time, doing what gets attention is at odds with saying what’s true. People putting money in markets have incentives that are better aligned with truth.
Most of the time I’ve seen articles about Trump’s impending impeachment, political betting markets haven’t moved much. On the rare occasions when markets moved significantly, I’ve had a good indication that something major actually happened.
Wall Street investors have a strong incentive to understand how the merger will actually affect network operators’ success. Unsurprisingly, T-Mobile’s stock increased substantially when key information indicating likely approval of a merger came out. Sprint’s stock also increased in value.
What’s much weirder is that neither Verizon’s stock nor AT&T’s stock seemed to take a negative hit on the days when important information about the merger’s likelihood came out. In fact, it actually looks like the stocks may have increased slightly in value.
You could tell complicated stories to explain why a merger could be good for competing companies’ stock prices and also good for consumers. I think the simpler story is much more plausible: Wall Street is betting the merger will be bad for consumers.
Maybe none of this should be surprising. There were other honest signals earlier on in the approval process. As far as I can tell, neither Verizon nor AT&T seriously resisted the merger:
Disclosure: At the time of writing, I have financial relationships with a bunch of telecommunications companies, including all of the major U.S. network operators except T-Mobile.
“You better cut the pizza in four pieces because I’m not hungry enough to eat six.” (Yogi Berra, allegedly)
The other day I received an envelope from Consumer Reports. It was soliciting contributions for a raffle fundraiser. The mailing had nine raffle tickets in it. Consumer Reports was requesting that I send back the tickets with a suggested donation of $9 (one dollar for each ticket). The mailing had a lot of paper:
The raffle had a grand prize that would be the choice of an undisclosed, top-rated car or $35,000. There were a number of smaller prizes bringing the total amount up for grabs to about $50,000.
The materials included a lot of gimmicky text. Things like:
“If you’ve been issued the top winning raffle number, then 1 of those tickets is definitely the winner of a top-rated car — or $35,000 in cash.”
“Why risk throwing away what could be a huge pay day?”
“There’s a very real chance you could be the winner of our grand prize car!”
Consumer Reports also indicates that they’ll send a free, surprise gift to anyone who donates $10 or more. It feels funny to donate money with the thought that I might win more than I donate, but I get it. Fundraising gimmicks work. That said, I get frustrated when fundraising gimmicks are dishonest.
One of the pieces of paper in the mailing came folded with print on each side. Here’s the front:
Unfolding that paper and looking on the other side, I found a letter from someone involved in Consumer Reports’ marketing. The letter argues that it would be silly for me not to find out if I received winning tickets. Here’s a bit of it:
It amazes me that among the many people who receive our Consumer Reports Raffle Tickets — containing multiple tickets, mind you, not just one — some choose not to mail them in. And they do this, despite the fact there is no donation required for someone to find out if he or she has won…So when people don’t respond it doesn’t make any sense to me at all.
The argument in the letter is ridiculous.
First, the multiple tickets bit is silly. It’s like the Yogi Berra line at the opening of the post; cutting a pizza into more slices doesn’t create more food. It doesn’t matter how many tickets I have unless I get more tickets than the typical person.
Second, Consumer Reports doesn’t care if a non-donor decides not to turn in tickets. The most plausible explanation for why Consumer Reports includes the orange letter is that people who would otherwise ignore the mailing may end up feeling guilty enough to make a donation. Checking the “I choose not to donate at this time, but please enter me in the Raffle” box on the envelope doesn’t feel great.
Finally, it makes perfect sense why I might not want to participate. Writing my name on each ticket, reading the materials, and mailing the tickets takes time. My odds of winning are low. I’d also have to pay for a stamp.
Let’s give Consumer Reports the benefit of the doubt and pretend that the only reason not to participate is that stamps cost money. The appropriate stamp costs 55 cents at the moment. Is the expected reward for sending in the tickets greater than 55 cents?
Consumer Reports has about 6 million subscribers. Let’s continue to give Consumer Reports the benefit of the doubt and assume it can print everything, send mailings, handle the logistics of the raffle, and send gifts back to donors for only $0.50 per subscriber. That puts the promotion’s cost at about 3 million dollars. The $50,000 of prizes is trivial in comparison. Let’s further assume that Consumer Reports runs the promotion based on the expectation that additional donations brought in will cover the promotion’s cost.
The suggested donation is $9. Let’s say the average, additional funding brought in by this campaign comes out to $10 per respondent. To break even, Consumer Reports needs to have 300,000 respondents.
With 300,000 respondents, nine tickets each, and $50,000 in prizes, the expected return is under 2 cents per ticket, or about 17 cents per person. Not even close to the cost of a stamp.
4/12/2019 Update: I received a second, almost-identical mailing in early April.
10/3/2019 Update: I received a few more of these mailings.
Warning: This post is a rant and involves some foul language. Enjoy!
There’s a ridiculous amount of research supporting the idea that humans engage in an incredible amount of deception and even self-deception. People are biased. People respond to the incentives they face. Of course, anyone who has ever interacted with another human knows these things.
Despite this, pretty much every website offering third-party evaluation makes claims of objectivity or independence. Not claims that they try to minimize bias. Claims that they actually are unbiased.
Let’s take TopTenReviews, a site I criticized in a previous post. TopTenReviews says things like:
To be clear, these methods of monetization in no way affect the rankings of the products, services or companies we review. Period.
Bullshit. Total bullshit.
I’ve complained enough in the past about run-of-the-mill websites offering bogus evaluations. What about the websites that have reasonably good reputations?
NerdWallet publishes reviews and recommendations related to financial services.
Looking through NerdWallet’s website, I find this (emphasis mine):
The guidance we offer, info we provide, and tools we create are objective, independent, and straightforward. So how do we make money? In some cases, we receive compensation when someone clicks to apply, or gets approved for a financial product through our site. However, this in no way affects our recommendations or advice. We’re on your side, even if it means we don’t make a cent.
NerdWallet meets Vanguard
Stock brokerages are one of the types of services that NerdWallet evaluates.
One of the most orthodox pieces of financial advice—with widespread support from financial advisors, economists, and the like—is that typical individuals who invest in stocks shouldn’t actively pick and trade individual stocks. This position is often expressed with advice like: “Buy and hold low-cost index funds from Vanguard.”
Vanguard is a firm that has optimized for keeping fees low and giving its clients a rate of return very close to the market’s rate of return. In fact, the firm’s founder, John Bogle, is famous for creating the first low-cost index funds.
Since Vanguard keeps costs low, it cannot pay NerdWallet the kind of referral commissions that high-fee investment platforms offer. So what happens when NerdWallet evaluates brokers? NerdWallet uses a silly evaluation methodology that results in a shitty rating for Vanguard.
Vanguard gets 3 out of 5 stars. It’s the worst rating for a broker I’ve seen on the site.
How does Vanguard’s mediocre rating get rationalized? NerdWallet slams Vanguard for not offering the sort of stuff Vanguard’s target audience doesn’t want. Vanguard gets the worst possible ratings for the “Promotions” and “Trading platform” categories. Why? Vanguard doesn’t offer those things.
Imagine a friend went to a nice restaurant and came back complaining that her steak didn’t come with cake frosting. NerdWallet is doing something kind of like that.
The following excerpt is found on NerdWallet’s Vanguard review under the heading, “Is Vanguard right for you?” (emphasis mine):
Ask yourself this question: Are you part of Vanguard’s target audience of retirement investors with a relatively high account balance? If so, you’ll likely find no better home. You really can’t beat the company’s robust array of low-cost funds.
Investors who fall outside of that audience — those who can’t meet the fund minimums or want to regularly trade stocks — should look for a broker that better caters to those needs.
This is silly. Vanguard’s minimum is $1,000. You shouldn’t buy stocks as an investment strategy if you have less than $1,000 to put into stocks! If you invest in stocks, you shouldn’t regularly trade individual stocks!
From my perspective, NerdWallet is saying that if you are (a) the typical kind of person that should be buying stocks and (b) you don’t use a stupid strategy, then “you really can’t beat the company’s [Vanguard’s] robust array of low-cost funds.”
So there we have it. Despite the lousy review, NerdWallet correctly recognizes that Vanguard is awesome.
NerdWallet didn’t really lie, but it’s biased.
To be clear, I’m being hard on NerdWallet. NerdWallet does a good job aggregating information about financial services and offers decent financial advice in some areas. The evaluation methodology I’m criticizing may not have been maliciously engineered to optimize for profits. NerdWallet may have stumbled into the current methodology. Still, there’s a big problem. NerdWallet’s current methodology is good for its bottom line, so it has a strong incentive not to correct obvious issues. On the other hand, if NerdWallet stumbles into a silly methodology that’s bad for its bottom line, it has a huge incentive to change the methodology.
Sometimes evaluators aim to create divisions between editorial content (e.g., review writing) and revenue generation. I think divisions of this sort are a good idea, but they should not be thought of as magic bullets that eliminate bias. WireCutter is one of my favorite review sites, but I think it makes the mistake of overemphasizing how much divisions can do to reduce bias:
We pride ourselves on following rigorous journalistic standards and ethics, and we maintain editorial independence from our business operations. Our recommendations are always made entirely by our editorial team without input from our revenue team, and our writers and editors are never made aware of any business relationships.
I believe WireCutter takes actions to encourage editorial independence. However, I’m skeptical of how the commitment to editorial integrity is described. Absent extreme precautions, people talk. Information flows between coworkers. Even if editors aren’t intentionally informed about financial arrangements, it’s easy to make educated guesses. Commission rates offered by Amazon’s affiliate program are publicly available.
Bias is sneaky
Running CoverageCritic, I face all sorts of decisions unrelated to accuracy or honesty where bias still has the potential to creep in. For example, which industries should I cover? Industries where companies almost always offer commissions, or industries where it’s hard to get a cut of any sales I generate? In what order should the providers I recommend be displayed? Alphabetically? Randomly? One of those options will probably be better for my bottom line than the other.
I don’t have perfect introspective access to what happens in my head. A minute ago I scratched my nose. I can’t precisely explain exactly how or why I chose to do that. It just happened. Similarly, I don’t always know when and how biases affect my decisions.
I have conflicts of interest. Companies I recommend offer commissions. You can take a look at the arrangements here.
I try to align my incentives with consumers by building my brand around commitments to transparency and rigor. I don’t make these commitments for purely altruistic reasons. If the branding strategy succeeds, I stand to benefit a lot.
Even with my branding strategy, my alignment with consumers will never be perfect. I’ll still be biased. If you ever think I could be doing better, please let me know.