
First unleashed in April 2012, the Google Penguin algorithm is a webspam algorithm designed to target websites that use low-quality link schemes to gain high rankings in Google SERPs. Penguin had an immediate effect when it launched in 2012.
The original 2012 Penguin update targeted web spam and affected many websites and businesses that were unaware of the risks of web spam.
Google’s announcement post for the first Penguin update drew a large number of complaints.
The Relationship between Penguin & Panda
Google’s algorithms seem to be focused on quality, as Google defines it, and Google has published advice on creating high-quality websites that will rank high in SERPs:
What counts as a high-quality website?
Our site quality algorithms are aimed at helping people find “high-quality” sites by reducing the rankings of low-quality content. The recent “Panda” change tackles the difficult task of algorithmically assessing website quality. Taking a step back, we wanted to explain some of the ideas and research that drive the development of our algorithms.
Below are some questions that one could use to assess the “quality” of a page or an article. These are the kinds of questions we ask ourselves as we write algorithms that attempt to assess site quality. Think of them as our way of encoding what we think our users want.
Of course, we aren’t disclosing the actual ranking signals used in our algorithms because we don’t want folks to game our search results; but if you want to step into Google’s mindset, the questions below provide some guidance on how we’ve been looking at the issue:
Would you trust the information provided in this article?
Is this article written by an expert or enthusiast who knows the topic well, or is it more shallow in nature?
Does the site have duplicate, overlapping, or redundant
articles on the same or similar topics with slightly different keyword
variations?
Would you be comfortable giving your credit card details to this site?
Does this article contain spelling, stylistic, or factual errors?
Are the topics driven by the genuine interests of the site’s readers, or does the site generate content by attempting to guess what might rank well in search engines?
Does the article provide original content or information,
original reporting, original research, or original analysis?
Does the page provide substantial value when compared with other pages in search results?
How much quality control is performed on the content?
Does the article describe both sides of a story?
Is the site a recognized authority on its topic?
Is the content mass-produced by or outsourced to a large
number of creators, or spread across a large network of sites?
Was the article edited well, or does it appear sloppy or hastily produced?
Would you recognize this site as a reliable source when
mentioned by name?
Does this article present a complete or thorough
description of the topic?
Does this article contain insightful analysis or interesting information that is beyond the obvious?
Is this the sort of page you’d want to bookmark, share with a friend, or recommend to others?
Does this article have an excessive amount of ads that
distract from or interfere with the main content?
Would you expect to see this article in a printed
magazine, encyclopedia or book?
Are the articles short, unsubstantial, or otherwise lacking in helpful specifics?
Are the pages produced with great care and attention to detail, or with less attention to detail?
Would users complain when they see pages from this site?
Now it’s Penguin 4.0
Google updates Penguin, says it now runs in real time
within the core search algorithm
After a wait of nearly two years, Google’s Penguin algorithm has finally been updated again. It’s the fourth major release, making this Penguin 4.0. It’s also the last release of this type, as Google now says Penguin is a real-time signal processed within its core search algorithm. A special Penguin 4.0 infographic gives an introduction to the algorithm update and recommended SEO techniques.
Penguin goes real-time
Penguin is a filter designed to catch sites that are spamming Google’s search results in ways that Google’s regular spam-detection systems might not notice. Introduced in 2012, it has operated on a periodic basis.
In other words, the Penguin filter would run periodically and catch sites deemed spammy. Those sites would remain penalized, even if they improved and changed, until the next time the filter ran, which could take months.
The previous Penguin update, Penguin 3.0, happened on October 17, 2014. Any sites hit by it waited nearly two years for the chance to recover.
According to Google, those long delays are now a thing of the past. With this latest release, Penguin becomes real-time. As Google recrawls and reranks pages, which happens on an ongoing basis, those pages will be checked by the Penguin filter. Pages will be caught and/or freed by Penguin as part of this regular process.
As Google stated in its post:
With this change, Penguin’s data is refreshed in real time, so changes will be visible much faster, typically taking effect shortly after we recrawl and reindex a page.
Penguin Update History:
Penguin 1.0 – April 24, 2012 (3.1% of searches)
Penguin 1.2 – May 26, 2012 (0.1% of searches)
Penguin 1.3 – October 5, 2012 (0.3% of searches)
Penguin 2.0 – May 22, 2013 (2.3% of searches)
Penguin 2.1 – October 4, 2013 (1% of searches)
Penguin 3.0 – October 17, 2014 (less than 1% of searches)
Penguin 4.0 – September 23, 2016
For Penguin 4.0, Google did not give a specific figure for the percentage of queries affected, because the update is rolling out gradually and the percentage will be constantly changing.