
Why DR and DA Aren’t Authority, and What User-Governed Search Might Look Like

Why backlink scores are proxies, not trust, and how search could become more accountable to users

Most people treat metrics like Domain Rating and Domain Authority as if they measured real-world authority. A tool shows a number from 0 to 100, and it feels like a direct readout of how much Google trusts a site.

That is not what these metrics actually are.

In this post I will explain what DR and DA really measure, why they are structurally biased toward big brands and old sites, and why even Google itself has moved far beyond simple link-based thinking. I will also explore a thought experiment for a different kind of search system, one where users have explicit power over rankings instead of everything being controlled by a black box.

What DR and DA really measure

Domain Rating from Ahrefs and Domain Authority from Moz are both proprietary scores. They look similar on the surface: you get a number between 0 and 100, and higher is better. Both scales are also logarithmic, which means that moving from 10 to 20 is much easier than moving from 70 to 80.
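
Neither Ahrefs nor Moz publishes its formula, so here is a toy log mapping, purely to illustrate why the top of the scale is so much harder to climb. The function and its constants are invented for this example.

```python
import math

def toy_score(raw_link_strength):
    """Map raw link strength onto a 0-100 scale, log-style.

    Purely illustrative: the real DR and DA formulas are
    proprietary. Here every additional 10 points requires
    roughly 10x the raw strength of the previous 10.
    """
    return min(100.0, 10 * math.log10(raw_link_strength))

# Going from 10 to 20 takes 10x the raw strength...
print(toy_score(10), "->", toy_score(100))                  # 10.0 -> 20.0
# ...but going from 70 to 80 takes 10x of an already huge number.
print(toy_score(10_000_000), "->", toy_score(100_000_000))  # 70.0 -> 80.0
```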

Under the hood, both metrics are basically trying to answer one question.

How strong is this site's backlink profile compared to everything else in our index?

They do not look at your content quality. They do not know how fast your site is. They do not see your conversion rate. They do not know how real users behave on your pages. They are only modeling links.

The tools crawl the web and build their own independent link graphs. Then they estimate the relative strength of each domain based on how many other domains link to it, how strong those domains are, and how generously they link out. That last piece matters more than most people think.

If you read the original PageRank paper you will find a simple idea at the core of the algorithm. Each page can pass on only a limited amount of value through its outgoing links. The more links it has, the less value each individual link can carry.

Most modern link-based metrics behave in a similar way, because the idea is still sound.

A link from a page that has three hand-picked outbound links is worth much more than a link from a page that links out to fifty random sites. The first is selective and editorial. The second looks like a directory or a link farm.
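
To make the dilution mechanic concrete, here is a minimal power-iteration PageRank sketch over an invented toy graph. The damping factor, iteration count, and site names are illustrative, not anyone's production values.

```python
def pagerank(graph, damping=0.85, iterations=50):
    """Tiny power-iteration PageRank; graph maps each page
    to its list of outbound links. Illustrative only."""
    pages = list(graph)
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iterations):
        new_rank = {p: (1 - damping) / len(pages) for p in pages}
        for page, links in graph.items():
            if not links:
                continue
            # The core dilution idea: a page's value is split
            # evenly across its outbound links.
            share = damping * rank[page] / len(links)
            for target in links:
                new_rank[target] += share
        rank = new_rank
    return rank

# "selective" links to 2 pages; "directory" links to the same
# 2 pages plus 48 others. Both are otherwise identical.
graph = {
    "selective": ["you", "peer"],
    "directory": ["you", "peer"] + [f"site{i}" for i in range(48)],
    "you": [],
    "peer": [],
    **{f"site{i}": [] for i in range(48)},
}
ranks = pagerank(graph)
for source in ("selective", "directory"):
    per_link = 0.85 * ranks[source] / len(graph[source])
    print(f"{source} passes {per_link:.6f} per outbound link")
# The selective page passes roughly 25x more value per link.
```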

In practice this means that the best links often come from sites that rarely link to anyone. Think of small but respected niche publications, professional associations, or individual experts in a field. They may not have the highest DR in your tool, but when they do link to you it sends a strong signal.

The limitations of DR and DA

Once you understand that DR and DA are just relative backlink strength scores, their limitations become obvious.

First, they are biased by the tool's index. Ahrefs sees a different portion of the web than Moz does. If a link is not in a tool's index, it does not exist for the metric.

Second, they are one-dimensional. They collapse everything into a single number that ignores on-page quality, user behavior, technical health, brand, and intent matching.

Third, they are relative and moving. If the overall web graph changes, your number can move even if you do nothing. When another site gains or loses powerful links, it shifts the scale a bit for everyone.

Fourth, they are easy to misinterpret. People celebrate small jumps as major wins and feel stuck when the metric does not move, even while their real traffic and rankings do.

None of this makes DR or DA useless. It simply means they are rough diagnostic tools rather than a measure of true authority.

How Google actually uses links today

Google still uses ideas that look like PageRank, but they are embedded in a much more complex system.

Links are no longer treated as generic votes.

A modern link signal is probably weighted by several factors at once; a hypothetical sketch after this list shows how they might combine.

Who is linking. Is the source part of a trusted seed set, and is it close to other known authorities?

About what. How topically relevant are the linking page and its domain to the target? Two sites in the same niche likely reinforce each other more than a random cross-niche link.

From where in the page. An in-content editorial link carries more weight than a footer or sidebar link.

How many other links are present. The selective-versus-generous behavior appears again: the more a page links out, the less each individual link carries.

How users behave. Do people actually click the link? Do they stay on the destination page, or do they bounce right back to the search results? Does the link appear on pages that users generally trust?

How stable the link is. A link that exists for years and keeps sending traffic looks more natural than a short-lived spike that appears and then disappears.
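
No one outside Google knows how these factors are combined, but a purely hypothetical scoring function makes the shape of the idea concrete. Every factor name, weight, and multiplier below is invented for this sketch.

```python
from dataclasses import dataclass

@dataclass
class Link:
    """One inbound link, scored 0-1 on invented factor scales."""
    source_trust: float       # proximity to a trusted seed set
    topical_relevance: float  # linking page's topic vs. the target's
    in_content: bool          # editorial in-content vs. footer/sidebar
    outbound_links: int       # how generously the source page links out
    click_through: float      # share of viewers who actually click
    age_years: float          # how long the link has existed

def link_value(link: Link) -> float:
    """Hypothetical composite weight for one link. The multipliers
    are invented; the point is that the factors compound, so a link
    that is weak on any one dimension is sharply discounted."""
    placement = 1.0 if link.in_content else 0.3
    dilution = 1.0 / max(1, link.outbound_links)  # PageRank-style split
    stability = min(1.0, link.age_years / 2)      # ramps up over ~2 years
    behavior = 0.5 + 0.5 * link.click_through     # nudges, never zeroes out
    return (link.source_trust * link.topical_relevance
            * placement * dilution * behavior * stability)

editorial = Link(0.9, 0.8, True, 3, 0.2, 4.0)
footer = Link(0.9, 0.1, False, 50, 0.0, 0.2)
print(f"{link_value(editorial):.6f}")  # 0.144000
print(f"{link_value(footer):.6f}")     # 0.000027
```

Because the factors multiply rather than add, one editorial, topically relevant link can be worth thousands of footer links, which matches how modern link earning is usually described.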

On top of this, Google has moved to entity-based and semantic models. Instead of just matching text strings, it tries to understand which entities and topics a site is about and how they relate to each other across the web.

Why this favors big brands and incumbents

All of these layers create structural advantages for large, established brands.

Big brands are often part of the trusted seed sets in link-based models. Government sites, major media companies, large universities, and leading associations all sit very close to the center of the trust graph.

They collect links naturally simply by being known. Journalists link to them. Bloggers reference them. People search for the brand by name, which further trains the system that this site is trustworthy and relevant.

They also have enormous amounts of behavioral data behind them. Many users click their results by default. They rarely look suspicious in terms of link velocity or anchor text, because their mentions are organic.

New sites have the opposite starting position.

They sit far away from the trust center. They lack branded searches. They have very little historical behavior. And any active link building they do looks more like promotion than like passive earning, which makes it easy for algorithms to discount it as low quality.

From the point of view of a risk-averse search engine, this bias is rational. If the choice is between sending users to a known brand with a long track record and a fresh unknown domain with a handful of links, the safe bet is obvious.

The price is that it becomes much harder for new players to break through, even when they are objectively better.

Is there a search model where users really decide?

If Google and similar engines are structurally biased toward incumbents, it is natural to ask whether there is a different model where users have more explicit control over what ranks.

Social platforms are one partial answer. Reddit, Stack Overflow, YouTube, and TikTok all use some form of user feedback to rank content.

Votes, likes, watch time, comments, and shares all shape what surfaces. This makes it easier for new content to break through and for communities to curate what matters to them.

However, these systems work inside closed platforms. They do not operate on the open web at the scale that Google does.

A thought experiment: user-governed search with one scarce vote

Imagine a search system that tries to bring some of this community power into web search.

Every user has a verified profile. There are no anonymous accounts and no easy bot swarms.

Every user gets one meaningful vote per month. Not per query and not per page, but one unit of scarce approval they can assign anywhere on the web.

If you use your vote on a page, you are saying that this page really helped you and that you want more people to find it.

Over time some pages and sites would accumulate these scarce votes. The search system could combine its own relevance and quality models with this thin but highly trusted signal.
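
To show what that combination might look like, here is one hypothetical blending function. The vote weight, the log damping, and the saturation point are all arbitrary assumptions made for this sketch, not a description of any real system.

```python
import math

def blended_score(relevance, verified_votes, vote_weight=0.3):
    """Hypothetical blend of an engine's own relevance model with
    a scarce, verified vote signal. Every constant is an arbitrary
    choice for the thought experiment."""
    # Log damping: the 10th vote matters less than the 1st, which
    # limits how far pure popularity can carry a page. Saturates
    # at 100 votes so the signal stays on a 0-1 scale.
    vote_signal = min(1.0, math.log1p(verified_votes) / math.log1p(100))
    return (1 - vote_weight) * relevance + vote_weight * vote_signal

# A slightly less relevant page with real community support can
# edge out a slightly more relevant page with none.
print(blended_score(relevance=0.80, verified_votes=0))   # 0.56
print(blended_score(relevance=0.72, verified_votes=12))  # ~0.67
```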

Pros of this model

Scarcity forces seriousness. With only one vote per month, most people will not waste it on mediocre content or on manipulative campaigns.

Verification makes spam much harder. Votes would be tied to real humans, or at least to strongly verified identities, which removes whole categories of cheap manipulation.

It creates a public web of trust. Everyone can see which pages and sites have earned support across many independent users.

It rewards depth over volume. Publishers cannot brute-force their way to visibility with hundreds of thin pages. They need a smaller number of truly excellent resources that people feel compelled to support.

Cons and challenges

Verification is hard. A global system that verifies everyone raises privacy, political, and logistical problems. Someone has to decide what counts as a real person.

Cold start is severe. New sites begin with nothing and might struggle to get early visibility in order to even earn votes. Early winners could lock in their advantage.

Herd behavior is real. Even with scarce votes, people might follow influencers or social pressure instead of their own judgment. Popularity could still dominate quality.

Most users will not participate deeply. Many people simply want answers and will never bother to spend their vote, or will use it randomly at the end of the month.

Global search scale is brutal. This kind of governance is far more realistic for smaller vertical search engines than for a universal one that indexes the entire web.

Where a user-governed model could work

Although this system is unlikely to replace Google, it could work very well in narrower domains where expertise and trust matter more than breadth.

Examples include medical information, legal research, scientific literature, niche professional communities, and internal company knowledge bases.

In those contexts verified participants have strong incentives to protect quality and reputation. The index is smaller. The community is more coherent. A scarce vote has more meaning.

Practical takeaways for people doing SEO today

First, treat DR and DA as useful but limited diagnostic tools. They can help you compare backlink profiles at a glance and spot large movements, but they are not the score that Google cares about.

Second, assume that Google sees much more than your link tools do. It has its own link graph, its own behavior data, and its own semantic understanding of content and entities.

Third, accept that the system favors incumbents and design around it. New sites need sharper positioning, better content in narrower niches, and a focus on earning a few truly excellent links from highly selective and relevant sources.

Fourth, remember that at the end of the chain there is still a human. If you consistently create pages that solve real problems for real people, and do enough distribution for those people to discover and remember you, then over time every metric, including the proprietary ones, will reflect that reality.

It just takes longer than most dashboards suggest.