In February 2011, Google rolled out an update to its search ranking algorithm that had a dramatic effect on many websites and their ranking positions. It was the first in a series of updates released under the working name Google Panda, and its arrival forced search engine optimisation (SEO) practices to change and adapt from then on. Around 12% of web searches were affected, and it signalled a permanent shift in the kinds of websites able to succeed in search results.
Prior to Google Panda, many web searches were plagued with thin, poor-quality content that had managed to “game the system”. By being stuffed with keywords and linked to from an extremely high number of often disreputable websites, these pages fulfilled Google’s two main requirements: 1) plenty of keywords and phrases, and 2) a high number of backlinks that “proved” the site was authoritative. Users, however, were sick of clicking through to article directories that published duplicate content indiscriminately, and to “content farms” that offered them very little value.
Panda update 1.0 focused on websites written in U.S. English; two months later, Panda 2.0 was released, impacting all English-language queries, including British English sites. From that point on, new Panda updates were unleashed on a regular basis, each one eliminating more spam-based websites by applying a new search algorithm filter to an increasing share of web results. Webmasters despaired when their search positions and visitor numbers dropped dramatically, and the update prompted many to rethink their content strategies in order to regain their “number one” search result status.
Over time, the Panda updates refined and streamlined their focus, with each new release finding new ways to spot websites that did not warrant a top search spot. Google’s ability to recognise content that offered little value to visitors steadily improved, and although some reputable websites were accidentally caught in Panda’s web, the majority of the sites being de-listed were ones that web searchers did not miss.
Website owners could no longer deny that they needed to buck up their ideas. Poor-quality content and excessive numbers of “above the fold” advertisements were removed and replaced with original, high-quality writing, in the hope that the site’s rankings would improve the next time the Google Panda filter was applied. This was far from a straightforward process, and many site owners needed several rounds of upgrades to regain their search positions.
While many people believe that Google Panda was a single event, it remains an ongoing process, updated and applied most months. Google’s spam team are constantly refining and perfecting the algorithm to ensure that it correctly identifies low-quality sites without accidentally snaring great websites in the process. Just over two years after the original Panda 1.0 update, Panda #25 was rolled out in March 2013, making it clear that Panda was not a one-off scourge but rather a foundation for the anti-spam team to build upon.
For businesses whose websites were hit hard by Google Panda, improving quality at every opportunity is the only way to have a chance of regaining their previous status. The changes in SEO practice that Panda forced benefit human visitors as much as search crawlers, so making positive changes to develop the best-quality site content is worthwhile on every level.