Averting Negative SEO

Turn Negative SEO Around

Bringing down your competition has never been easier thanks to negative SEO. All you have to do is:

[custom_list style="list-1"]
  • Influence multiple spam annotations
  • Take down competitor links
  • Duplicate competitor content across multiple domains
  • Purchase multiple bad links
[/custom_list]

And many companies know this. This post will give you an inside look at just how easy it would be for a company to take you down in this manner, and what you can do to avoid it. It is not meant to be a guide for carrying out these practices in any way, as they are purely black-hat, unethical, and unfair. We do not condone these practices at all; this post is meant to educate about the dangers of a negative SEO attack and to help businesses avoid it at all costs.


While these negative SEO practices are popular, they can backfire and actually work in your competitor's SEO favor. Ironically, a knowledgeable attacker would not rely on them; sabotaging the competition effectively takes more refined methods. Finding vulnerabilities in a site's SEO requires a certain tact, and a full audit of your competition is the more suitable approach: look for the weak spots and exploit them.

If you have a site and want to rank for the keyword phrase ‘SEO Consulting’, you will stumble on various competitive sites. We’ll go with the example of the fake website, and assume that it ranks 2nd. If this is a WordPress-powered site, it could have various vulnerabilities. But suppose the site is in great shape, fully up to date, with no rogue plug-ins. What happens next? We explain below.

Duplicate SEO Duke’s Content

[custom_frame_right]duplicate content[/custom_frame_right]Most people don’t know this, but duplicating content is one of the easiest ways to go about negative SEO. There is not much the victim can do, and Google is quite terrible at discovering the original source. It is the responsibility of the site’s owner to ensure that no technical problems on their site could encourage duplicate content.

There are two ways to produce duplicate content on-site. The first is appending an arbitrary query string to a URL. The page renders with the same content at a new URL, creating duplicate content. This technique works if the site does not have a well-implemented rel=canonical tag. It is also important to note that a page serving duplicate content at a different URL should not return an HTTP 200 header.
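As an illustrative sketch (www.example.com is a placeholder domain, not from the original post), a well-implemented canonical tag sits in the page’s head and points every URL variant at the one preferred URL:

```html
<!-- In the <head> of every rendering of this page, including any
     query-string variant, declare the single preferred URL -->
<link rel="canonical" href="https://www.example.com/page/">
```

With this in place, the query-string copy is consolidated into the canonical URL instead of being treated as a separate duplicate page.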

Wildcard Subdomains

The second approach is to exploit wildcard subdomains. You can easily identify a site that tolerates wildcard subdomains by placing a string of random characters in front of the site’s FQDN. If the server is configured correctly, the response will be a 404 Not Found code. If, however, replacing www with any other subdomain returns the same results, you have duplicate content: every page of the SEO Duke blog that returns an HTTP 200 response under such a subdomain produces a duplicate. This creates the perfect opportunity to conduct negative SEO, and you could de-index your competition for a specific keyword. Once you are done, you will need search engines to see the duplicate site, and this means linking to it. To link the site, you would use its keyword phrase, in this case ‘SEO Consulting’. Again, all of this only works if the site does not have rel=canonical set up appropriately. If it does, your hard work may be in vain.
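The detection step described above can be sketched in a few lines of Python. This is a minimal sketch, not a turnkey tool: the `fetch` callable is a placeholder you would back with real HTTP requests (urllib, for instance), and the function names are our own.

```python
import random
import string


def random_subdomain_url(domain):
    """Build a URL with a random label in place of www."""
    label = "".join(random.choices(string.ascii_lowercase, k=12))
    return f"http://{label}.{domain}/"


def tolerates_wildcard_subdomains(domain, fetch):
    """Return True if a nonsense subdomain serves the same page as www.

    `fetch` is any callable returning (status_code, body) for a URL;
    it is injected so the sketch stays self-contained and testable.
    """
    status, body = fetch(random_subdomain_url(domain))
    if status != 200:
        return False  # random host correctly returns 404: no wildcard
    www_status, www_body = fetch(f"http://www.{domain}/")
    return www_status == 200 and body == www_body
```

A 200 response with an identical body on a random subdomain is the signal that every page of the site is being duplicated under arbitrary hostnames.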

What Happens If The Site Is Protected?

The fact that one page has a well set up rel=canonical tag is not the end. If the server tolerates wildcard subdomains, your competition is still vulnerable; it simply means coming up with a new strategy. The first step is to identify pages within the site that do not have rel=canonical in their source code. Configure your spider to flag the omission of the rel=canonical tag, and a few pages should turn up. This indicates that the negative SEO tactic could still work in your favor.

You can now proceed to devalue those pages by creating duplicate content on-domain. A tool such as Search Metrics can identify what the pages without rel=canonical rank for, which lets you pick the best keyword phrases to attack; in these vulnerable pages, a phrase such as ‘SEO Blog’ would be ideal. Simply place your links within their text to encourage the Panda algorithm to find the duplicate content. The placement is enough to confuse Google and force it not to rank your competitor’s page.

Duplicate content at the domain level can be detrimental to the site: while it largely affects the page that is attacked, it can negatively impact the entire site. If this were replicated at scale, it would waste significant crawl equity and create ranking problems for SEO Duke as a whole.
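The spider’s check for a missing canonical tag can be sketched with the standard library alone. This is a minimal sketch, assuming you already have each page’s HTML in hand; the function names and sample markup are ours, not from the original post.

```python
from html.parser import HTMLParser


class _CanonicalParser(HTMLParser):
    """Record the href of any <link rel="canonical"> tag encountered."""

    def __init__(self):
        super().__init__()
        self.canonical = None

    def handle_starttag(self, tag, attrs):
        if tag == "link":
            attr_map = dict(attrs)
            if (attr_map.get("rel") or "").lower() == "canonical":
                self.canonical = attr_map.get("href")


def canonical_href(html):
    """Return the canonical URL declared in the page, or None."""
    parser = _CanonicalParser()
    parser.feed(html)
    return parser.canonical


def pages_missing_canonical(pages):
    """pages: mapping of URL -> HTML source; returns the vulnerable URLs."""
    return [url for url, html in pages.items() if canonical_href(html) is None]
```

Any URL this returns is a page a duplicate-content attack could target, and a page you should fix.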

Linking to any kind of duplicate content with keyword-rich anchor text signals to search engines that the page or pages are noteworthy for the query used in the anchor text, even though the URL is serving an identical page.

Unfortunately for the attacker, these techniques can only be used against pages that do not have a rel=canonical tag.

…and Here is the Important Part – How Can SEO Duke Prevent This?

Online competition for rankings is fierce, and any one of you could be SEO Duke. You can protect yourself from such attacks in two ways. The first is to correct your server configuration so that it rejects wildcard subdomains. The second is to ensure that all the pages on your site have correct rel=canonical tags.
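As an illustrative sketch of the first fix (nginx syntax, with example.com as a placeholder domain; adapt the same idea to Apache or your own server), serve only the hostnames you intend to serve and let a catch-all block refuse everything else:

```nginx
# Canonical site: answers only for the exact hostnames you intend to serve
server {
    listen 80;
    server_name www.example.com example.com;
    # ... normal site configuration ...
}

# Catch-all: any other (wildcard) subdomain gets a 404 instead of the site
server {
    listen 80 default_server;
    server_name _;
    return 404;
}
```

With the catch-all in place, a random subdomain can no longer mirror your pages, which closes the duplicate-content door described above.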

You can identify pages within your site that are externally linked and indexed without a rel=canonical tag by using a simple search technique: run a site: query for your domain on the search engine to reveal duplicate installations. The pages and subdomains that are vulnerable to such an attack will appear in the results. Once you have identified your vulnerable pages, protect them by disallowing wildcard subdomains and ensuring that each page has a rel=canonical tag containing its fully qualified preferred URL.