AT&T Wins in GWS’s New Report – Reservations Remain

Global Wireless Solutions (GWS) released its latest report ranking the performance of cellular networks in the U.S. AT&T again took the top spot in GWS’s rankings.

I previously wrote about my reservations around the methodology GWS used in 2019. My reservations stand nearly unchanged. GWS continues to assess about 500 markets rather than the U.S. at large. I think this biases GWS's results against Verizon, the network that indisputably leads in coverage.

In its latest report, GWS boasts about having the largest and most comprehensive assessment of cellular networks. The claims seem to be based on the large number of data points GWS collects. In my view, the extra data points don’t make up for the fact that GWS’s underlying methodology isn’t as good as RootMetrics’ methodology.

Network operators pay evaluators to license their awards. Is GWS using a funky methodology because the company stands to earn more from declaring AT&T the best network than it would earn from declaring Verizon the best network?

Beware of Scoring Systems

When a third-party evaluator uses a formal scoring system or rubric, it’s a mistake to assume that the evaluator is necessarily being objective, rigorous, or thoughtful about its methodology.

I’ll use Forbes’ college rankings to illustrate.

Forbes argues that most college rankings (e.g., U.S. News) fail to focus on what “students care about most.” Forbes’ rankings are based on what it calls “outputs” (e.g., salaries after graduation) rather than “inputs” (e.g., acceptance rates or SAT scores of admitted applicants).1

Colleges are ranked based on weighted scores in five categories, illustrated in this infographic from Forbes:2

This methodology requires drawing on data to create scores for each category. That doesn’t mean the methodology is good (or unbiased).

Some students are masochists who care almost exclusively about academics. Others barely care about academics and are more interested in the social experiences they’ll have.

Trying to collapse all aspects of the college experience into a single metric is silly—as is the case for most other products, services, and experiences. If I created a rubric to rank foods based on a weighted average of tastiness, nutritional value, and cost, most people would rightfully ignore the results of my evaluation. Sometimes people want salad. Sometimes they want ice cream.

To be clear, my point isn't that Forbes' list is totally useless—just that it's practically useless. My food rubric would give salads a better score than rotten steak. That's the correct conclusion, but it's an obvious one. No one needed my help to figure that out. Ranking systems are only useful if they help people make good decisions when they're uncertain about their options.

Where do the weights for each category even come from? Forbes doesn’t explain.

Choices like what weights to use are sometimes called researcher degrees of freedom. The choice of weights has a big effect on the final results, yet many alternative, equally reasonable sets of weights could have been chosen instead.
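To make the point concrete, here's a minimal sketch with invented numbers: two hypothetical colleges, three categories, and two weighting schemes that both seem defensible. The ranking flips depending on which weights you pick.

```python
# Hypothetical category scores (0-100) for two made-up colleges.
# All numbers are invented purely for illustration.
scores = {
    "College X": {"salary": 90, "debt": 60, "graduation": 70},
    "College Y": {"salary": 70, "debt": 85, "graduation": 80},
}

def overall(college, weights):
    """Weighted average of a college's category scores."""
    return sum(weights[cat] * scores[college][cat] for cat in weights)

# Two weighting schemes; both sound like something a ranker could defend.
weights_a = {"salary": 0.5, "debt": 0.25, "graduation": 0.25}
weights_b = {"salary": 0.25, "debt": 0.5, "graduation": 0.25}

for w in (weights_a, weights_b):
    ranked = sorted(scores, key=lambda c: overall(c, w), reverse=True)
    print(ranked)  # the #1 spot flips between the two weightings
```

Neither weighting is wrong; that's exactly the problem. The final ordering is an artifact of a choice the ranker never has to justify.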

When researchers have lots of degrees of freedom, it’s advisable to be cautious about accepting the results of their analyses. It’s possible for researchers to select a methodology that gives one result while other defensible methodologies could have given different results. (See the paper Many Analysts, One Data Set: Making Transparent How Variations in Analytic Choices Affect Results for an excellent demonstration of this phenomenon.)

Creating scores for each category introduces additional researcher degrees of freedom into Forbes’ analysis. Should 4-year or 6-year graduation rate be used? What data sources should be drawn on? Should debt be assessed based on raw debt sizes or loan default rates? None of these questions have clear-cut answers.

Additional issues show up in the methods used to create category-level scores.

A college ranking method could assess any one of many possible questions. For example:

  • How impressive is the typical student who attends a given school?
  • How valuable will a given school be for the typical student who attends?
  • How valuable will a school be for a given student if she attends?

It’s important which question is being answered. Depending on the question, selection bias may become an issue. Kids who go to Harvard would probably end up as smart high-achievers even if they went to a different school. If you’re trying to figure out how much attending Harvard benefits students, it’s important to account for students’ aptitudes before entering. Initial aptitudes will be less important if you’re only trying to assess how prestigious Harvard is.

Forbes’ methodological choices suggest it doesn’t have a clear sense of what question its rankings are intended to answer.

Alumni salaries get 20% of the overall weight.3 This suggests that Forbes is measuring something like the prestige of graduates (rather than the value added from attending a school).4

Forbes also places a lot of weight on the number of impressive awards received by graduates and faculty members.5 This again suggests that Forbes is measuring prestige rather than value added.

When coming up with scores for the debt category, Forbes considers default rates and the average level of federal student debt per student.6 This suggests Forbes is assessing how a given school affects the typical student who chooses to attend it. That introduces selection bias. The typical level of student debt is not just a function of a college's price and financial aid. It also depends on how wealthy the attending students are. Colleges that attract students from rich families will tend to do well in this category.

Forbes switches to assessing something else in the graduation rates category. Graduation rates for Pell Grant recipients receive extra weight. Forbes explains:

Pell grants go to economically disadvantaged students, and we believe schools deserve credit for supporting these students.7

Forbes doubles down on its initial error. First, Forbes makes the mistake of aggregating a lot of different aspects of college life into a single metric. Next, Forbes makes a similar mistake by mashing together several different purposes college rankings could serve.

Many evaluators using scoring systems with multiple categories handle the aggregation from category scores to overall scores poorly.8 Forbes’ methodology web page doesn’t explain how Forbes handled this process, so I reached out asking if it would be possible to see the math behind the rankings. Forbes responded telling me that although most of the raw data is public, the exact process used to churn out the rankings is proprietary. Bummer.

Why does Forbes produce such a useless list? It might be that Forbes or its audience doesn’t recognize how silly the list is. However, I think a more sinister explanation is plausible. Forbes has a web page where schools can request to license a logo showing the Forbes endorsement. I’ve blogged before about how third-party evaluation can involve conflicts of interest and lead to situations where everything under the sun gets an endorsement from at least one evaluator. Is it possible that Forbes publishes a list using an atypical methodology because that list will lead to licensing agreements with schools that don’t get good ratings from better-known evaluators?

I reached out to the licensing contact at Forbes with a few questions. One was whether any details could be shared about the typical financial arrangement between Forbes and colleges licensing the endorsement logo. My first email received a response, but the question about financial arrangements was not addressed. My follow-up email did not get a response.

While most students probably don’t care about how many Nobel Prizes graduates have won, measures of prestige work as pretty good proxies for one another. Schools with lots of prize-winning graduates probably have smart faculty and high-earning graduates. Accordingly, it’s possible to come up with a reasonable, rough ranking of colleges based on prestige.

While Forbes correctly recognizes that students care about things other than prestige, it fails to provide a useful resource about the non-prestige aspects of colleges.

The old College Prowler website did what Forbes couldn’t. On that site, students rated different aspects of schools. Each school had a “report card” displaying its rating in diverse categories like “academics,” “safety,” and “girls.” You could even dive into sub-categories. There were separate scores for how hot guys at a school were and how creative they were.9

Forbes’ college rankings were the first college rankings I looked into in depth. While writing this post, I realized that rankings published by U.S. News & World Report and Wall Street Journal/Times Higher Education both use weighted scoring systems and have a lot of the same methodological issues.

To its credit, Forbes is less obnoxious and heavy-handed than U.S. News. In the materials I’ve seen, Forbes doesn’t make unreasonable claims about being unbiased or exclusively data-driven. This is in sharp contrast to U.S. News & World Report. Here’s an excerpt from the U.S. News website under the heading “How the Methodology Works:”

Hard objective data alone determine each school’s rank. We do not tour residence halls, chat with recruiters or conduct unscientific student polls for use in our computations.

The rankings formula uses exclusively statistical quantitative and qualitative measures that education experts have proposed as reliable indicators of academic quality. To calculate the overall rank for each school within each category, up to 16 metrics of academic excellence below are assigned weights that reflect U.S. News’ researched judgment about how much they matter.10

As a general rule, I suggest running like hell anytime someone says they’re objective because they rely on data.

U.S. News’ dogmatic insistence that there’s a clear dichotomy separating useful data from unscientific, subjective data is misguided. The excerpt also contradicts itself. “Hard objective data alone” do not determine the schools’ ranks. Like Forbes, U.S. News uses category weights. Weights “reflect U.S. News’ researched judgment about how much they matter.” Researched judgments are absolutely not hard data.

It’s good to be skeptical of third-party evaluations that are based on evaluators’ whims or opinions. Caution is especially important when those opinions come from an evaluator who is not an expert about the products or services being considered. However, skepticism should still be exercised when evaluation methodologies are data-heavy and math-intensive.

Coming up with scoring systems that look rigorous is easy. Designing good scoring systems is hard.

Thoughts on TopTenReviews

I’m not a fan.

TopTenReviews ranks products and services in a huge number of industries. Stock trading platforms, home appliances, audio editing software, and hot tubs are all covered.

TopTenReviews’ parent company, Purch, describes TopTenReviews as a service that offers, “Expert reviews and comparisons.”1

Many of TopTenReviews’ evaluations open with lines like this:

We spent over 60 hours researching dozens of cell phone service providers to find the best ones.2

I’ve seen numbers between 40 and 80 hours in a handful of articles. It takes a hell of a lot more time to understand an industry at an expert level.

I’m unimpressed by TopTenReviews’ rankings in industries I’m knowledgeable about. This is especially frustrating since TopTenReviews often ranks well in Google.

A particularly bad example: indoor bike trainers. These devices can turn regular bikes into stationary bikes that can be ridden indoors.

I love biking and used to ride indoor trainers a fair amount. I suspect the editor who came up with the trainer rankings at TopTenReviews couldn’t say the same.

The following paragraph is found under the heading “How we tested” on the page for bike trainers:

We’ve researched and evaluated the best roller, magnetic, fluid, wind and direct-drivebike [sic] trainers for the past two years and found the features that make the best ride for your indoor training. Our reviewers dug into manufacturers’ websites and engineering documents, asked questions of expert riders on cycling forums, and evaluated the pros and cons of features on the various models we chose for our product lineup. From there, we compared and evaluated the top models of each style to reach our conclusions.3

There’s no mention of using physical products.

The top overall trainer is the Kinetic Road Machine. It’s expensive but probably a good recommendation. I know lots of people with either that model or similar models who really like their trainers.

However, I don’t find TopTenReviews credible. TopTenReviews has a list of pros and cons for the Kinetic Road Machine. One con is: “Not designed to handle 700c wheels.” It is.

It’s a big error. 700c is an incredibly common wheel size for road bikes. I’d bet the majority of people using trainers have 700c wheels.4 If the trainer wasn’t compatible with 700c wheels, it wouldn’t deserve the “best overall” designation.

TopTenReviews even states, “The trainer’s frame fits 22-inch to 29-inch bike wheels.” 700c wheels fall within that range. A bike expert would know that.


TopTenReviews’ website has concerning statements about its approach and methodology. An excerpt from their about page (emphasis mine):

Our tests gather data on features, ease of use, durability and the level of customer support provided by the manufacturer. Using a proprietary weighted system (i.e., a complicated algorithm), the data is scored and the rankings laid out, and we award the three top-ranked products with our Gold, Silver and Bronze Awards.5

Maybe TopTenReviews came up with an awesome algorithm no one else has thought of. I find it much more plausible that—if a single algorithm exists—the algorithm is private because it’s silly and easy to find flaws in.

TopTenReviews receives compensation from many of the companies it recommends. While this is a serious conflict of interest, it doesn’t mean all of TopTenReviews’ work is bullshit. However, I see this line on the about page as a red flag:

Methods of monetization in no way affect the rankings of the products, services or companies we review. Period.6

Avoiding bias is difficult. Totally eliminating it is almost always unrealistic.

Employees doing evaluations will sometimes have a sense of how lucrative it will be for certain products to receive top recommendations. These employees would probably be correct to bet that they’ll sometimes be indirectly rewarded for creating content that’s good for the company’s bottom line.

Even if the company is being careful, bias can creep up insidiously. Someone has to decide what the company’s priorities will be. Even if reviewers don’t do anything dishonest, the company strategy will probably entail doing evaluations in industries where high-paying affiliate programs are common.

Reviews will need occasional updates. Won’t updates that could shift high-commission products to higher rankings take priority?

TopTenReviews has a page on foam mattresses that can be ordered online. I’ve bought two extremely cheap Zinus mattresses on Amazon.7 I’ve recommended these mattresses to a bunch of people. They’re super popular on Amazon.8 TopTenReviews doesn’t list Zinus.9

Perhaps it’s because other companies offer huge commissions.10 I recommend The War To Sell You A Mattress Is An Internet Nightmare for more about how commissions shadily distort mattress reviews. It’s a phenomenal article.

R-Tools Technology Inc. has a great article discussing its software’s position in TopTenReviews’ rankings, misleading information communicated by TopTenReviews, and conflicts of interest.

The article suggests that TopTenReviews may have declined in quality over the years:

In 2013, changes started to happen. The two principals that had made TopTenReviews a household name moved on to other endeavors at precisely the same time. Jerry Ropelato became CEO of WhiteClouds, a startup in the 3D printing industry. That same year, Stan Bassett moved on to Alliance Health Networks. Then, in 2014, the parent company of TopTenReviews rebranded itself from TechMediaNetwork to Purch.

Purch has quite a different business model than TopTenReviews did when it first started. Purch, which boasted revenues of $100 million in 2014, has been steadily acquiring numerous review sites over the years, including TopTenReviews, Tom’s Guide, Tom’s Hardware, Laptop magazine, HowtoGeek, MobileNations, Anandtech, WonderHowTo and many, many more.11

I don’t think I would have loved the pre-2013 website, but I think I’d have more respect for it than today’s version of TopTenReviews.

I’m not surprised TopTenReviews can’t cover hundreds of product types and consistently provide good information. I wish Google didn’t let it rank so well.

Third-party Evaluation: Trophies for Everyone!

A lot of third-party evaluations are not particularly useful. Let’s look at HostGator, one of the larger players in the shared web hosting industry, for some examples. For a few years, HostGator had an awards webpage that proudly listed all of the awards it “won.”

Many of the entities issuing awards were obviously affiliate sites that didn’t provide anything even vaguely resembling rigorous evaluation.

Fortunately, HostGator’s current version of the page is less ridiculous.

Even evaluations carried out by serious, established entities often have problems. Rigorous evaluation tends to be difficult. Accordingly, third-party evaluators generally use semi-rigorous methodologies—i.e., methodologies that have merit but also serious flaws.

In many industries, there will be several semi-rigorous evaluators using different methodologies. When an evaluator enters an industry, it will have to make a lot of decisions about its methods:

  • Should products be tested directly or should consumers be surveyed?
  • What metrics should be measured? How should those metrics be measured?
  • If consumers are surveyed, how should the surveyed population be selected?
  • How should multiple metrics be aggregated into an overall rating?

These are tough questions that don’t have straightforward answers.

Objective evaluation is often impossible. Products and services may have different characteristics that matter to consumers—for example, download speed and call quality for cell phone services. There’s no defensible, objective formula you can use to assess how important one characteristic’s quality is versus another.

There’s a huge range of possible, defensible methods that evaluators can use. Different semi-rigorous methods will lead to different rankings of overall quality. This can lead to situations where every company in an industry can be considered the “best” according to at least one evaluation method.

In other words: Everyone gets a trophy!

This phenomenon occurs in the market for cell phone carriers. At the time of writing, Verizon, AT&T, T-Mobile, and Sprint all get at least one legitimate evaluator’s approval. (More details in The Mobile Phone Service Confusopoly.)
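The trophy-for-everyone dynamic is easy to reproduce with a toy model. The carrier names and scores below are invented; the point is only that with two metrics and one tunable weight, every contender can be made the winner under some defensible weighting.

```python
# Invented scores for three hypothetical carriers on two metrics.
carriers = {
    "A": {"speed": 9.0, "coverage": 5.0},   # fast, thin coverage
    "B": {"speed": 5.0, "coverage": 9.0},   # slow, broad coverage
    "C": {"speed": 7.5, "coverage": 7.5},   # balanced
}

def winner(speed_weight):
    """Carrier with the highest weighted score for a given speed weight."""
    coverage_weight = 1.0 - speed_weight
    def score(m):
        return speed_weight * m["speed"] + coverage_weight * m["coverage"]
    return max(carriers, key=lambda name: score(carriers[name]))

# Sweep over defensible weightings; each carrier wins somewhere.
winners = {winner(w / 10) for w in range(11)}
print(sorted(winners))  # every carrier is someone's "best"
```

An evaluator doesn’t need to fabricate anything to hand out a particular trophy. It just needs to land on a weighting that happens to favor one contender.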

Evaluators are often compensated in exchange for permission to use their results and/or logos in advertisements. Unfortunately, details on the specific financial arrangements between evaluators and the companies they recommend are often private.

Here are a few publicly known examples:

  • Businesses must pay a fee before displaying Better Business Bureau (BBB) logos in their advertisements.1
  • J.D. Power is believed to charge automobile companies for permission to use its awards in advertisements.2
  • AARP-approved providers pay royalties to AARP.3

An organization that is advertising an endorsement from the most rigorous evaluator in its field probably won’t be willing to pay a lot to advertise an endorsement from a second evaluator. A company with no endorsements will probably be much more willing to pay for its first endorsement.

Since there are many possible, semi-rigorous evaluation methodologies, maybe we should expect at least one evaluator to look kindly upon each major company in an industry. This phenomenon could even occur without any evaluator deliberately acting dishonestly. For example, lots of evaluators might try their hand at evaluation in a given industry. Each evaluator would use its own method. If an evaluator came out in favor of a company that didn’t have an endorsement, the evaluator would be rewarded monetarily and continue to evaluate within the industry. If an evaluator came out in favor of a company that already had an endorsement, the evaluator could exit the industry.
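This selection story can be sketched as a crude simulation. Everything here is invented: each evaluator is modeled as favoring one company at random (a stand-in for landing on one defensible methodology among many), evaluators whose favorite is still un-endorsed strike a licensing deal and stay, and the rest quietly exit. No individual evaluator behaves dishonestly, yet the process only stops once every company holds a trophy.

```python
import random

random.seed(0)  # fixed seed so the toy run is reproducible
companies = ["A", "B", "C", "D"]
endorsed = set()
rounds = 0

# Each round, a new evaluator enters with its own methodology (modeled
# here as simply favoring a random company). If its favorite has no
# endorsement yet, a licensing deal follows and the endorsement sticks;
# otherwise the evaluator finds no buyer and exits the industry.
while endorsed != set(companies):
    pick = random.choice(companies)
    if pick not in endorsed:
        endorsed.add(pick)
    rounds += 1

print(f"All {len(companies)} companies endorsed after {rounds} evaluators")
```

The model is deliberately simplistic, but it captures the mechanism: market incentives alone can guarantee that every major player eventually gets a trophy.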