Google Penguin Update? Get Real!

Penguin jumping off an iceberg image

Penguin Targeting Spam & Krill For Breakfast

Google’s Penguin Update

If you haven’t heard of Google’s Penguin Update, you must have either been stuck in a crevasse for weeks or have no connections with the SEO industry.

Before you waddle back to your igloo, skim what follows to see if anything ruffles your feathers, piques your interest or even pricks your conscience!

The Penguin update has had business owners and the SEO industry in a flap throughout May, as waves of businesses have reported their website visitor numbers being decimated by Google’s latest algorithmic update, which specifically targets spammy link building tactics. It’s an ongoing clean-up process, i.e. not a one-off exercise, so if you are thus far unaffected, that, in itself, is no guarantee of safety.

This topic has been covered comprehensively by others and I doubt I could improve on some of the fantastic material I’ve read, but no SEO blog would be complete without a few Penguin mentions. So this post encapsulates, in the form of tips and a reading list, some of the highlights of the 50 or so articles I’ve read on this subject over the last few weeks.

In short, the update has, at the centre of its target, overly aggressive, manipulative link building practices, for example links created by article spinning, spammy blog commenting or automated processes. Businesses that have ended up with unnatural link profiles, for one reason or another, have also been hit. An example of an unnatural link profile is where an unnaturally high proportion of the links to a website use link text (anchor text) that is an exact match for the business’s main keyword targets (money keywords). So if an extraordinarily high proportion of the links on other websites pointing to this website read, say, “SEO Companies” instead of “www.adjuice.co.uk” or “AdJuice”, then we would be more at risk of a Penguin slap, because Google might consider those links unnatural (i.e. manipulated solely for the purpose of trying to rank higher for the keyword in the link).
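To make that concrete, here is roughly what those three kinds of links look like in the HTML of a linking page. This is just an illustrative sketch (the linking page is hypothetical; only the URL is ours):

<!-- Exact-match "money keyword" anchor text: risky in high proportions -->
<a href="http://www.adjuice.co.uk/">SEO Companies</a>

<!-- Naked URL anchor text: looks natural -->
<a href="http://www.adjuice.co.uk/">www.adjuice.co.uk</a>

<!-- Branded anchor text: looks natural -->
<a href="http://www.adjuice.co.uk/">AdJuice</a>

A natural link profile tends to be dominated by the second and third kinds; Penguin takes aim where the first kind dominates.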

Our Tips For Penguin-Proof, Authentic SEO

1. Keep It Real (Or Get Real)!

You can’t outswim a Penguin so get out of the water and back onto dry land. Build your SEO on something solid. Be authentic. That means there should be some justifiable purpose, over and above just improving rankings, for each and every aspect of your SEO activity.

2. Don’t Knee-Jerk

If you got buried by an ice fall, then thrashing around is likely to get you buried deeper. The chances are you didn’t spot the warning signs or understand the risks so it’s better to take some time to reflect and really understand the components of your recovery plan before you try and execute it.

3. Practise Balanced SEO

Diversify. It’s only natural to spend more time on the areas where you feel more comfortable or where you may have developed an expertise, but if you spend all your time blogging then maybe it’s time to front up to social media or some other aspect you’ve been neglecting. This approach means you’re not placing all your eggs in one basket and makes your SEO investment less risky by making it more resilient against future algorithmic updates.

4. Add Value

When surrounded by the ice-floes that are your SEO program, it’s easy to forget why we’re in business in the first place. That’s to add value in some way, shape or form. If you keep that ultimate objective in your sights, in the calm water beyond the bow wave, your SEO is more likely to be sailing in a safer direction as a natural consequence of taking a long-sighted view.

5. Get Real

Be authentic. I’ve already said that but I can say it again. It’s my blog. :-)

Our Top 10 Favourite Reads Covering The Penguin Update

(Listed in order of date of publication.)

1. Another Step To Rewarding High Quality Sites – Matt Cutts of Google heralds the next attack on spam.
2. Google Launches “Penguin Update”; Targeting Webspam In Search Results – Danny Sullivan of Search Engine Land describes the Penguin update.
3. Google Penguin Update: 5 Types of Link Issues Harming Some Affected Websites – Danny Goodwin of Search Engine Watch details some of the kinds of links that can give Penguins indigestion.
4. Google Penguin Update Recovery Tips & Advice – Danny Sullivan of Search Engine Land offers some hope.
5. Penguins, Pandas, and Panic at the Zoo – Dr Pete of SEOmoz provides extensive advice on how to do zoo-keeping calmly.
6. Anti-Penguin Link Building Plan – Jason Acidre of Kaiser The Sage with an epic post on link profile health checks and healthy link building.
7. Two Weeks In, Google Talks Penguin Update, Ways To Recover & Negative SEO – Danny Sullivan of Search Engine Land with some scary advice from Google (like “Start Again”).
8. 3 Hard Lessons to Learn From Penguin: Be Relevant, Be Balanced, Keep it Real – Guillaume Bouchard of NVI on Search Engine Watch.
9. How WPMU.org Recovered From The Penguin Update – Ross Hudgens of Full Beaker Inc on the SEOmoz blog with an authentic case study of recovering from a Penguin slap.
10. Recovering from an Over Optimization Penalty – A True Story – Nick Eubanks of Factor Media on the SEOmoz blog with another authentic case study of recovering from a Penguin slap.

Please feel free to kick off a discussion below. You can find all our contact details here or more about authentic SEO on our home page.

Come back soon for our next post or, better still, why not subscribe using the RSS button in the footer to get automatic notification of all future updates?

Google Venice Update Update

Curiouser and Curiouser Search Results Yesterday.

Gondola rocking image

Google's Venice Update Rocks the Gondola

Whilst scrambling to complete my final preparations for a client meeting at 10:00 am yesterday morning (whatever happened to those days when I could get to my former company’s office in Oslo, Norway before 10:00 am?), I did a few test searches and saw some interesting results that would seem to be related to the Google Venice update. This is a follow-up to my previous post Google’s Venice Update Rocks The Gondola.

Here are three examples closely connected with our own County of Surrey and our two closest neighbours – Hampshire and Berkshire.

Example 1.

With cookies and search history cleared, I changed my set location to Basingstoke (Hampshire). You can see where to do that in the screenshots below.

A search on the broad term “SEO” returned our “Hampshire” page in position five, with only Wikipedia and three bigger agencies above us.

Search on SEO location is Basingstoke screenshot

A Search for SEO when location is set to Basingstoke

[By the way, I hate sites that have tonnes of “SEO landing pages”, one for every combination of service plus town and/or county. But we do have three at the moment. I can walk from our office in Surrey across the border into Hampshire and then into Berkshire and back to the office all in my lunch break so I’m not beating myself up too much for having a separate page optimised for Berkshire, Hampshire and our home town, Camberley, each of which has wholly unique content.]

Anyway, it’s our Hampshire page showing, and that is a change from my previous post. I also tested the same search term with the location set to Southampton (a bit further away) and found we did not rank on page one. In the previous post, it was only our Camberley page that was ranking on page 1 for “SEO”, and Camberley is a small town with a population of about 30,000. Now it is pages optimised for county-level searches that are ranking. In very round figures, the population of Hampshire is 2 million, with 1 million each for Berkshire and Surrey, so to rank on page 1 in those locations for very competitive terms is potentially very attractive because of the population numbers.

Might proximity and a clearly geo-targeted page be the dominant factors? There is little else to associate our business with Hampshire. We do have some minor associations with “Hampshire” but the most obvious factor is probably proximity.

Example 2.

This example is very similar to the first. Our Berkshire page ranked sixth for a search on “SEO” when the location was set to Maidenhead. However, when I set the location to Reading, also in Berkshire but a closer and bigger town, we didn’t rank on page one. Why? Other factors must be coming into play which I haven’t detected (yet).

Search on SEO location is Maidenhead screenshot

A Search for SEO when location is set to Maidenhead

Example 3.

This example is a bit more tricky.

At the date of my last post, we had a page at adjuice.co.uk/seo-camberley/ which I have since deleted. It was that page that was ranking fifth on a search for “SEO”. On deleting it, I set up a 301 redirect to the home page and submitted a URL removal request (which took effect) in Google Webmaster Tools.

In the comments of my last post, I stated that I expected to lose the ranking of that page. Obviously the page that was deleted could not rank, but I expected no other page to rank either. That did turn out to be the case in the days shortly afterwards but, today, our home page is ranking fifth for “SEO” with the location set to nearby Guildford.

Search on SEO location is Guildford screenshot

A Search for SEO when location is set to Guildford

So why is the home page now ranking 5th to nearby searchers when it wasn’t a few days ago? Is this due to testing by Google, fine-tuning, or the other changes I have made to our site in the last few days?

After I deleted our page adjuice.co.uk/seo-camberley/, I created an improved version of that page at a (similar but) new address: adjuice.co.uk/camberley-seo/.

This page has not yet appeared in Google’s search index. But could it still have been an influencing factor in our home page ranking in the “localised organic results” (a phrase coined by Nifty Marketing)? It will be interesting to see whether the new page at /camberley-seo/ will replace our home page at number five in a few days’ time in the “localised organic results” on a search for “SEO”!

What Implications Might The Venice Update Have?

In all three of the above examples, AdJuice has new page one rankings in the localised organic results for very competitive broad search terms like “SEO”.

The pages that have those rankings seem to be highly relevant to the searcher, not only by virtue of the relevance between the search term and on-page SEO factors, but also by the physical proximity suggested by the combination of the searcher’s location (as perceived by Google) and the locations attributed to the ranking page, whether through office location, on-page optimisation or a mixture of both.

It does not seem necessary to have an office address or physical presence in the same town or county as the searcher, but it does seem necessary to be physically close. When we set our ‘searching location’ more than 30 or 40 miles from our office, we no longer appeared on page one.

The Venice update may therefore diminish the value of huge numbers of “service + town” landing pages where the searcher is not in close proximity to the physical presence suggested by the pages in question.

However, the use of SEO landing pages that are optimised for locations within close proximity of the searcher might just have become an even more effective tactic!

Is the gondola still rocking or will the waters calm this week?

Google’s Venice Update Rocks The Gondola

Significant Changes in Local Search Results.

Rocking gondola image

Google's Venice Update Rocks the Gondola

Yesterday, I picked up a post in Search Engine Land by Matt McGee about 40 Google search quality updates in February 2012, also announced in the official Google search blog.

At number 26 in Google’s list of 40 items is an update dubbed “the Venice update”, and Google’s summary announcement on their blog is as follows.

“Improvements to ranking for local search results. [launch codename “Venice”] This improvement improves the triggering of Local Universal results by relying more on the ranking of our main search results as a signal.”

The very next time I logged into Google Analytics, I saw some of the effects of that update had already started to manifest themselves in our search results.

I noticed a visit arising from the search term “SEO”. Since it’s a broad, general term, I have never bothered to try to target it, and AdJuice does not rank particularly highly for that query. So I carried out a few searches myself and have shown two of the results below.

You can click on these images to get bigger pictures.

Google Venice update example 1

New Google search results for "SEO" after the Venice update

Google Venice update example 2

New Google search results for "SEO services" after the Venice update

I was amazed to see that a minor page on our site, dedicated to Camberley, was ranking today at number 5 in the main results on a search for “SEO” (by the way, it was number 5 and not number 4 as I’ve incorrectly annotated the screenshot). We also appeared at number 5 on a search for “SEO services”, a term for which we usually appear on page 2 nationally. You can see from the left-hand column of the second screenshot that my location was set to Camberley. Prior to the Venice update, our Camberley-specific page really only appeared on page 1 when a searcher specified Camberley in their search query. That’s all changed.

With local results being intermingled with national results, rank tracking becomes a big issue, as Linda Buquet explains (see the first related link below).

Interestingly, our Camberley page did not appear for other very closely related searches like “SEO specialists”, “SEO agencies” or “SEO companies” although other pages of our site rank well nationally for those terms.

Opportunities For Businesses Operating in Local Markets

The Venice update, at first sight, would therefore appear to offer some great opportunities for local businesses to appear on page 1 in the main results to customers who are searching within their locality. Up until now, the focus for local businesses has very much been on Google Place Page Results, Google Maps and the blended local business results introduced in the UK in 2010.

This change suggests that a shift towards optimising for appearances in the main organic results may have become commercially viable again for some small businesses.

Not only does it now look like local businesses will be able to appear on page 1 for major terms, but if they have their Google authorship sorted out, they are also likely to have their result accompanied by their photo, as in my first example above. That’s a big opportunity.

More places in the main organic search results for small businesses must mean fewer places for the big brands / big budgets that dominate some of the most competitive search terms on a national level. Will they have to recoup their lost leads by upping their AdWords budgets or will we see an increase in the proliferation of “service” + “town” pages?

Venice Update Experiment

The page that appeared in the above searches was this one: http://www.adjuice.co.uk/seo-camberley/. Today I deleted this page, set up a 301 redirect to the home page and submitted a URL removal request in Google Webmaster Tools for the deleted page.

The page was really a legacy page which served as a landing page for hyper-local searches in our home town. Unlike many “service” + “town” landing pages, it did actually have 100% unique content, but it was not providing any real user value, so I did not want it suddenly appearing on page 1 for major search terms. No, I’m not mad. It was a crummy page.

It will therefore be interesting to see whether the deletion of the above page will cost us the rankings in the screenshots or whether our home page will now appear instead. If it’s the former, then I won’t be disappointed but if it’s the latter, then so much the better. Will I have removed the vital signals (a highly specific location-based page) in order to appear in these positions or were they not necessary in the first place?

I’ll watch out over the next few days and update this post (in the comments below) as soon as I have the answer!

Please see the comments below for updates. I intend to continue this experiment by making further changes over the next few days as soon as I can find time. Pop back or subscribe.

Related posts:
Google Venice Update – New Ranking Opportunities for Local SEO – Local SEO Blog
The Google Venice Update – Another Nail in the IYP Coffin? – Local SEO Guide
Google Venice update showing locally targeted organic results – Blogstorm
Google Venice Update – Big Changes Based On Locality – PPC Blog

Please feel free to kick off our discussion below or visit our home page for an overview of our services.

Eliminate Duplicate Content to Profit From Thin Air

Silver bangles

Silver Bangles

Duplicate Content Case Study

In this article, I’ll explain how to make profit from thin air by eliminating duplicate content from your website. No, this is not some shady ‘get-rich-quick scheme’ full of hollow promises. This is a real example of how the identification, diagnosis and elimination of duplicate content became the biggest single contributory factor in achieving an increase of 145% in the number of website visitors from free search (aka organic search or natural search) for one of our clients. Just to be clear, an increase of 145% means that the number of visitors was 2.45 times the number of visitors in the same period in the previous year. We followed Google’s guidelines on this topic.

The Benefits of Eliminating Duplicate Content

The graph below shows the number of organic visits in the period January 2010 to December 2011 inclusive.
You can click on the image to see a bigger picture.

Increase in number of organic visits small image

Increase in number of organic visits

AdJuice was engaged by Cavendish French towards the end of March 2011. Cavendish French specialise in a wide range of handcrafted silver jewellery, including stone set silver jewellery with contemporary, classic and vintage inspired collections. We started addressing the issue of duplicate content in May 2011 and the effects started to show almost immediately. Since the benefit really only started to be felt in the second half of the year, we have calculated the 145% increase by comparing the number of visits in the period 1st July to 31st December 2011 with the number of visits in the same 6 months of 2010. If you compare the whole of 2011 with the whole of 2010, the increase is still 70%. What might that growth have been but for the dire economic circumstances we face?

Content May Be King But More is Not Always Better

Sounds like a contradiction, I know. In the world of SEO, content is king. As a rough rule of thumb, more content brings more visitors to your website. Where this does not apply is where the content does not meet the requirements of being useful, unique and relevant. So more content of a duplicate nature not only brings no incremental benefit but may adversely impact the performance of your site as a whole. In the past, it was just the duplicates that got ignored by Google. Since Google’s ‘Panda Update’ in 2011, duplicate content and other content with low utility value may also impair the performance of the good content (in search marketing terms).

It has never been more important therefore to attend to these kinds of issues. Fortunately Google provides lots of guidance on this problem to help website owners to get the best performance from their sites.

In the summer of 2011, our review led to the elimination of 90% of the pages that google.co.uk had in its index for www.cavendishfrench.com. The number of pages of their website indexed by Google was reduced from over 22,000 to just over 2,000. You can see the number of pages in Google’s index for any website by entering “site:www.anywebsite.com” into Google’s search box. Thanks go to Cavendish French for trusting our recommendations and for giving permission to publish this.

So, What is Duplicate Content?

Duplicate content comes in many different forms. To the layman, it may suggest that a web page has been copied, i.e. the content on the page has been copied and reproduced on another page of another website or on another page of the same website. Such copying may be legitimate, unauthorised, plagiarised, etc.

Ironically, there are vast numbers of cases where website performance is being adversely impacted by duplicate content issues even though nobody has knowingly copied anything. Duplicate content exists for a whole variety of technical reasons and I can’t cover them all here. The scope of this article extends only to the circumstances that directly impacted this project. For those who are interested in reading up on all the forms of duplicate content that can exist and how to address them, I have included a couple of links at the bottom of this article to excellent resources.

URL Issues

The most common instances of duplicate content in this project related to URL issues. These problems arise where exactly the same or substantially the same content can be accessed in different ways and found at different URL addresses. This problem is very common with online shopping sites.

To get specific, when browsing the Cavendish French site, the shopper has the option to display 12, 20 or 40 products on a page. So, when you arrive on the ‘silver bangles’ page, this is the URL (page address) you first see in your browser’s address bar.

http://www.cavendishfrench.com/jewellery/silver-bangles

The default is set to display 12 products on the above page. If you then choose to display 20 or 40 items per page, the URL addresses change to these, respectively.

http://www.cavendishfrench.com/jewellery/silver-bangles?limit=20
http://www.cavendishfrench.com/jewellery/silver-bangles?limit=40

And if you change back to display 12 products, the page URL is not back where you started but is this one.

http://www.cavendishfrench.com/jewellery/silver-bangles?limit=12

So there are 4 page URLs (so far) for ‘silver bangles’. These are all perfectly valid options for the shopper, so there is no fault with the design from a usability or UX (user experience) point of view. The problem is that, as far as search engines are concerned, these are all technically different pages with the same content and may therefore be considered copies or duplicates of each other. The task the search engines face is deciding which one of these pages to show for a search on ‘silver bangles’.

Read on because it gets more interesting.

The shopper also has the option of sorting the products into descending order of price or ascending order of price (this is the default). Sorting the default page into descending order of price creates this additional URL.

http://www.cavendishfrench.com/jewellery/silver-bangles_desc

And for 20, 40 and (back again to the default of) 12 items per page, these URLs.

http://www.cavendishfrench.com/jewellery/silver-bangles_desc?limit=20
http://www.cavendishfrench.com/jewellery/silver-bangles_desc?limit=40
http://www.cavendishfrench.com/jewellery/silver-bangles_desc?limit=12

You can start to see how the number of technically different page addresses, for essentially the same products, is beginning to escalate. In addition to being able to select the number of items on each page and the price order in which they are sorted, the shopper can also select which page they want to view, i.e. 1st, 2nd, 3rd, etc. These options create yet more URLs.

These options can all be combined with each other so the number of URLs compounds. The 100 bangles that currently exist could therefore be displayed as follows.

100 products displayed 12 per page = 9 pages or
100 products displayed 20 per page = 5 pages or
100 products displayed 40 per page = 3 pages.

That makes 9 + 5 + 3 = 17 page addresses so far. All of these could be sorted into descending or ascending price order so the number of pages increases to 17 x 2 = 34. I’ve ignored other combinations involving the default page addresses because I think that’s enough to illustrate the point! One page existing in 34 different forms, each with a different page address, but all displaying what could be considered to be essentially the same content.

Rel=Canonical

There are a number of different approaches to dealing with duplicate content, depending on the exact circumstances. We opted to make use of the “rel=canonical” link attribute. This means, firstly, making a decision about which page address, out of all the possible alternatives, is the ‘preferred’ or ‘canonical’ version.

Once that decision had been made, we had to include a small code snippet in the head section of all the relevant pages (i.e. all 34 pages above and any other duplicates that may exist for ‘silver bangles’).
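For our ‘silver bangles’ example, the snippet is a single link element placed in the head section of each duplicate page. What follows is a sketch of the standard rel=canonical markup rather than a copy of the site’s actual source code:

<!-- Placed in the <head> of every variant of the page -->
<link rel="canonical" href="http://www.cavendishfrench.com/jewellery/silver-bangles" />

The same line, pointing at the same preferred address, goes into the ?limit, _desc and paginated variants alike.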

This is not interpreted by Google as a command but rather a request or a signal that we wish this page address to be treated as the primary or canonical version and that all other similar pages are effectively copies and should therefore be subordinated to this one.

The page of this blog post you are reading has a rel=canonical link in the code.

Now Google and the other search engines know which page we intend to be shown, and that is what happened. The effects are several. It increases the conversion rate of this product category by increasing the chances that the visitor is shown the best page, i.e. the one that we have chosen. It also concentrates the link equity of the site by focusing it on fewer pages. That means better rankings.

Better rankings coupled with better conversion rates means more profit. Out of thin air.

There is sound logic behind why this outcome is reasonable and plausible. Google’s primary aim is to provide the best user experience by returning the most relevant and useful results as fast as possible to surfers. Those website owners that help Google to achieve this by setting up their websites in accordance with Google’s guidelines are bound to fare better.

De-indexation of Other Low Value Pages

Whilst we were at it, we also endeavoured to de-index some other categories of pages which added no value by being in Google’s index. Some of these pages were de-indexed using the straightforward ‘meta noindex’ command. For others, we used the URL parameter settings in Google Webmaster Tools for a ‘belt and braces’ approach, and this angle dealt with de-indexing another 6,000-odd pages.
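For reference, the ‘meta noindex’ command is itself just one line in the head section of each page to be de-indexed. Here is a minimal sketch; the “follow” value, which lets search engines continue to follow the links on the page, is the usual choice, but whether it suits a given page is a judgement call:

<!-- Placed in the <head>: keep the page out of the index but keep following its links -->
<meta name="robots" content="noindex, follow" />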

Where is The Proof?

In the strictest sense, there is no absolute proof that it was the elimination of the duplicates from the index that led directly to the improved website performance. However, the circumstantial evidence is overwhelming in terms of timing. Although there was some of the other usual on-site work going on (meta tags, etc.), this was not extensive, so I’m not in any doubt about cause and effect.

Resources

Google Webmaster Tools on Canonicalisation. Advice on this subject straight from the horse’s mouth.
Duplicate Content in a Post Panda World. The ‘go to’ article on all aspects of duplicate content.

Did you find this post useful or have you had any similar experience? If so, please feel free to add your comment below, help others find this post by using the social sharing buttons below, or subscribe for future updates using the RSS Posts feed at the top of the page.

Don’t forget to visit our home page for an overview of our SEO services or contact us for more information about how we can help you get your products and services seen by more people.

Plagiarism Checks – Check This

Copyright symbol image

Do Not Copy

Before I start, the image on the left of a copyright symbol is, itself, subject to copyright rules. Yup, I paid a couple of quid to use it here.

The other day I stumbled across an example of another SEO service provider that had copied my content and published it on their website as though it were their own. They shall remain nameless. However, anybody who knows anything about SEO can find out if they want to from the images below! :-)

There are lots of free and paid tools, like plagiarism checkers, and other techniques you can use to try to detect instances where your content has been reproduced without your permission. But the way I found this one was rather unusual and made me laugh. I was looking through a list of all AdJuice’s back links using one of my favourite link checking tools and came across one that caught my eye. I visited the linking website to see where and why they had included a link to this website.

It turned out that they had copied the majority of my 100% original (but no longer unique!) content on our page describing our organic SEO services and simply published it as their own. The funny thing was that they had not removed the link from within my content to our other page about SEO specialists!

Hey, you guys, and anybody else out there who might copy content straight from this website: please leave all the links in! We at least get some credit that way for spending all those long hours creating our own 100% original content! One of the golden rules on the internet for using other people’s content for your own purposes is that you should retain all existing links in the content and/or add your own link to the original source, so that all visitors, humans and search engines alike, can readily identify the original author.
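And if you are the one doing the reproducing (with permission, of course), a one-line credit containing a link back to the source is all it takes. Something along these lines, where the wording is purely illustrative:

<!-- An attribution line with a link back to the original source -->
<p>Originally published by <a href="http://www.adjuice.co.uk/">AdJuice</a>. Reproduced with permission.</p>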

Plagiarism – Example 1 (click image for bigger picture)

This is the one where no attempt has been made by the plagiarist to disguise the copying of my content. I don’t really understand why anyone would prefer to copy my content rather than create their own. It’s not as though it’s a masterpiece or some kind of authoritative document. It’s not that difficult to create your own either. Although that does pre-suppose you have a good grasp of the subject you’re writing about. Mmmm … now there’s a thought. I guess it might be nothing more than laziness on the part of some. Or possibly an inability to write. But if that’s the case, they shouldn’t be meddling with SEO, because SEO is not for the lazy and not for the illiterate. And there are big issues about trust on the internet, so it’s best to play by the rules or not play at all if you want to be taken seriously.

Plagiarism example 1 image

Plagiarism example 1 screenshot

Plagiarism – Example 2 (click image for bigger picture)

In this example, the publisher has made a weak attempt to disguise this content as their own. They have substituted a few synonyms, changed some of the headings, changed the order of the headings and so on but the original source is unmistakeable since there are some whole sentences that are an exact match for mine, including punctuation.

Plagiarism example 2 image

Plagiarism example 2 screenshot

Ways to Detect Plagiarism

There are probably many techniques but here are four good ones.

1. Plagiarism checkers

There are a variety of paid and free plagiarism checkers, the best known of which is probably Copyscape. However, when I entered the URL of our organic SEO services page, Copyscape couldn’t find any copies. I then tried this free plagiarism checker and it came up with a couple of cases. If copyright protection is high on your agenda, then I guess you would need to review a few thoroughly. I don’t have enough experience of them to recommend any. I found the one I used in this list of plagiarism checkers on Sarah Lam’s blog.

2. Check your back links

If the back links in your content have been preserved, i.e. the copies still contain links to your website / original content, then you may trace the copies via your back links. Depending on whether you allow your content to be reproduced, you might be happy with these kinds of copies or still left snarling.

3. Google search

Just copy a reasonably long string of text from your content and paste it into Google’s search box. The text you copy only needs to be sufficiently long for you to be pretty sure that it should be unique on the internet.

Firstly, try searching with the text in quotes “like this”. If there are any other instances of that text in Google’s index, you’ll find them in the search results. These results will only show cases where there are one or more exact matches for your search query, so this is the quickest and surest way of finding exact copies. This would have revealed the case in my first example above. It may not have revealed the second example; that would depend on whether I had selected a piece of text that had been left untouched in the copied version or whether it had been altered.

Then try repeating the search without any quotes. This may reveal instances that do not show up in the first test, but it may also produce lots of other web pages that do not include copies of your content, so sifting through them might be fruitful or pointless. You won’t find out until you carry out the search.

Be aware that Google will sometimes not display all the results where there are many web pages with the same content, so you need to know how to spot this and also how to get all the results displayed. You’ll get the idea from the two screenshots below, which you can click on to get bigger images.

In this example, I copied and pasted into Google’s search box a string of text from item 5 under the “Core SEO Services” section on our main page about “Organic SEO Services”.

Google search results showing only most relevant results

Google search results showing only most relevant results

Google search results showing all results

Google search results showing all copies

Doooohhhh! I said they would remain nameless but I’ve just gone and let the cat out of the bag. Well, I reckoned that, since you’ve taken the trouble to read this far, I owe it to you to satisfy your curiosity! Remember, if you go and check these sites out, they may have changed their content by the time you get there. Or should have.

4. Plant a trap in your content

I would not have known about this method had I not, some time ago, stumbled across and remembered Ian Lurie’s humorous blog post “Stop Plagiarism in 3 Easy Steps”. Ian shows you how to have a bit of sport and turn those snarls into sniggers.

What Constitutes Plagiarised Content?

We all gain inspiration from the talents and work of others so how close to the original does a copy have to be in order to be labelled as a copy or plagiarised version? I don’t know the answer to that. In the case of an exact copy of all or part of your text, then it’s straightforward. But if the plagiarist goes to great trouble to modify your content, does it ever become their own? How far do they need to modify it in order to establish a claim as the original author? I’d need to consult a lawyer to get an answer on legal rights but for my own purposes, if I read it and recognise it immediately as an exact copy or even weakly disguised derivative of my content, then it is a plagiarised copy.

Why Does it Matter?

It may matter to you. It may not. In the two examples above, it is unlikely to have a detrimental financial impact on AdJuice so the consequences are not severe and amount to little more than irritation. However, if I invest time and effort in creating something, then that is a real investment so why should somebody else use it for free without my permission? If I were asked then, in most cases, I would agree to my content being reproduced elsewhere, provided that there is a link from within that content back to this website. If my content is copied and stripped of its links, then that reflects really badly on the plagiarist. I would far rather spend my time creating the next piece than looking over my shoulder to see who ripped off the last.

Other Ways to Protect Your Content

1. Link to your content from other web pages and sources

The more you can establish links from other sites to your content, the more you are likely to establish your content as the original source, for search engine purposes, over and above copies of it.

2. Google Authorship

Implement Google’s advice on authorship to increase the chances of being credited as the originator. This may also mean that details such as your photo could appear in Google’s search results alongside the summary of your content.
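At the time of writing, Google’s authorship markup amounts to linking your content to your Google+ profile. Here is a minimal sketch of the two usual methods; the profile address shown is a placeholder, not a real profile:

<!-- Method 1: a link element in the head section of the page -->
<link rel="author" href="https://plus.google.com/your-profile-id" />

<!-- Method 2: a visible byline link carrying rel=author -->
<a href="https://plus.google.com/your-profile-id" rel="author">About the Author</a>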

Need help or advice about content creation and marketing to increase the chances of your content being found at the top of Google? If so, then please get in touch using any of the options on our contact page. Alternatively, please visit our home page for an overview of our SEO services.