SEOs have long complained about the web’s obsession with “toolbar PageRank”, the little green bar that can lead to so many bad decisions for link builders, marketers, and webmasters. While the Google of 10 years ago may have used a fairly simple system of calculating link value, today’s search engines employ very complex and nuanced systems to evaluate inbound links and judge the corresponding authority. This post lays out 3 factors that search marketers need to know about link valuation.
Linking Domain Diversity
In a nutshell, 10 links from 10 different domains are more valuable than 10 links from 1 domain. Link diversity is usually measured in 2 ways – the number of linking root domains and the number of linking (class-C) IP addresses. Both measurements aim to estimate the number of independent agents referencing your content.
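Both metrics boil down to deduplicating a raw backlink list. Here's a minimal sketch in Python – the backlink data, the two-label domain heuristic, and the variable names are all illustrative assumptions (real tools use the Public Suffix List to handle domains like ".co.uk"):

```python
from urllib.parse import urlparse

# Hypothetical backlink data: (linking URL, IP address of the linking host).
# In practice this would come from a backlink index or a crawl.
backlinks = [
    ("http://blog.example.com/post-1", "93.184.216.34"),
    ("http://blog.example.com/post-2", "93.184.216.34"),
    ("http://news.example.org/story", "93.184.216.40"),
    ("http://forum.example.net/thread", "198.51.100.7"),
]

def root_domain(url):
    # Simplified: keep the last two host labels ("blog.example.com" -> "example.com").
    # Real implementations consult the Public Suffix List instead.
    return ".".join(urlparse(url).hostname.split(".")[-2:])

def class_c(ip):
    # The first three octets identify the /24 ("class-C") block.
    return ".".join(ip.split(".")[:3])

linking_domains = {root_domain(url) for url, _ in backlinks}
linking_class_cs = {class_c(ip) for _, ip in backlinks}

# 4 raw links, but only 3 root domains and 2 class-C blocks "voting".
print(len(backlinks), len(linking_domains), len(linking_class_cs))
```

The gap between the raw link count and the two deduplicated counts is exactly what diversity metrics are designed to expose.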
In the old days of PageRank, this was never a factor. Links were seen at the page level only, and a numeric PageRank value was passed irrespective of the linking domain. This domain-agnostic measurement spurred the popularity of the sitewide links in footers and sidebars that SEOs used to love. These days, however, search engines recognize that having several different websites “vote” for yours is more important than a single website voting repeatedly.
Link Positioning on Page
In the early days of Google, PageRank was described like this:
“PageRank can be thought of as a model of user behavior. We assume there is a “random surfer” who is given a web page at random and keeps clicking on links, never hitting “back” but eventually gets bored and starts on another random page. The probability that the random surfer visits a page is its PageRank.”
This model would imply that all links on a page are equally valuable – the “random surfer” is a robot, clicking on hypertext links without any consideration of how or where a link appears on a page. This simple system is far too black-and-white for a modern engine.
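The random-surfer model can be sketched in a few lines: split each page's score evenly across its outlinks, mix in a small chance of jumping to a random page, and iterate until the scores settle. The tiny three-page graph and damping value below are illustrative assumptions, not anything Google publishes:

```python
# Minimal power-iteration PageRank over a hypothetical link graph.
# Every outlink on a page is treated identically -- exactly the
# assumption the "reasonable surfer" model later relaxes.
DAMPING = 0.85  # probability the surfer follows a link rather than jumping

links = {  # hypothetical graph: page -> pages it links to
    "A": ["B", "C"],
    "B": ["C"],
    "C": ["A"],
}

pages = list(links)
rank = {p: 1 / len(pages) for p in pages}  # start with a uniform score

for _ in range(50):  # iterate until the scores converge
    new_rank = {p: (1 - DAMPING) / len(pages) for p in pages}
    for page, outlinks in links.items():
        share = DAMPING * rank[page] / len(outlinks)  # value split evenly
        for target in outlinks:
            new_rank[target] += share
    rank = new_rank

# Page C, with two inlinks, ends up with the highest score.
print({p: round(r, 3) for p, r in rank.items()})
```

Note the key line: `rank[page] / len(outlinks)` – the page's value is divided equally among its links, no matter where they sit on the page.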
Google’s newer “reasonable surfer” system takes into account how a link actually appears to a user. The reasonable surfer is more likely to click on larger links, links that appear in the body of a page and not a sidebar, links more relevant to the topic of the page, and links that aren’t bogged down with highly commercial anchor text – among many other factors.
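In code terms, the reasonable-surfer change is small but significant: each outlink gets a feature-based weight before the page's value is divided up. The features and multipliers below are invented for illustration – Google's actual signals and weights are not public:

```python
# Sketch of "reasonable surfer" weighting: a page's value is split
# across its outlinks in proportion to how likely a real user is to
# click each one. All weights here are made-up assumptions.
def link_weight(link):
    weight = 1.0
    if link["in_body"]:            # body links beat footer/sidebar links
        weight *= 3.0
    if link["on_topic"]:           # topically relevant links count more
        weight *= 2.0
    if link["commercial_anchor"]:  # spammy commercial anchors count less
        weight *= 0.3
    return weight

outlinks = [  # hypothetical links found on one page
    {"url": "/guide", "in_body": True, "on_topic": True, "commercial_anchor": False},
    {"url": "/ad", "in_body": False, "on_topic": False, "commercial_anchor": True},
]

total = sum(link_weight(link) for link in outlinks)
for link in outlinks:
    share = link_weight(link) / total  # fraction of the page's value passed
    print(link["url"], round(share, 2))
```

Under this toy model, the in-body, on-topic link captures nearly all of the page's value while the commercial sidebar link gets a sliver – the opposite of the even split the random surfer would produce.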
Linking to Bad Neighborhoods
The concept of “bad neighborhoods” isn’t new to SEO – the idea is that a page linking to webspam has a good chance of being penalized. Google assumes that a good quality page won’t link to junk – if you’re linking to spam, you must be spamming yourself. SEOGadget has an interesting anecdotal post about penalties for linking to spam – just a few days of low quality outlinks can cripple your rankings.
Although this concept is generally well understood as it applies to keeping a clean link profile on your own site, it is often ignored when evaluating link prospects. If a webpage is linking to spam – “Online Poker” and “Mesothelioma Lawyer”-type sites – that webpage’s potential to pass quality link juice is probably minimal. I see this link building mistake often – you’ll sometimes find high PR pages, with comment forms programmed before “nofollow” was invented, spammed to death with made-for-AdSense links. Although a high PR, followed comment link is often the holy grail for (arguably unscrupulous) link builders, in this environment even a link to a good site will be devalued due to the surrounding spam links.
These 3 factors are only a few examples of the complexity behind the modern search engine’s link valuation algorithms. There are more link weighting factors – some that come to mind are temporal elements, linking domain trust and authority, anchor and alt text specifics, and much more.
When building links, toolbar PageRank is just one of many things to consider – understanding the complexity of backlink values will help any search engine marketer see Google’s Internet more clearly.