Google has been harping on quality content for a while now. They want great content and, in theory, will serve the best search results to the people who have it. Of course, there are millions of examples of garbage content at the top of the SERPs, but they are slowly improving and are better than the alternatives.
And for every example of a low-quality site occupying a top result, you can often find quality results, many of which come from sites whose owners perform zero search engine optimization. Google is far from perfect, but they are good and getting better.
But the problem for most website owners is defining quality. The concept is simple enough, yet we know Google's spiders can't truly judge content, and Google doesn't have enough bodies to manually read every page on the web and rank content that way.
So they rely on the data their spiders can collect and do what they can with it. Let's start by going over that data.
On-Page Factors
Spiders can measure a few things pretty easily. These are by far the easiest to improve on your site and should be where you start.
- Broken links
- Page load time
- Spelling and grammatical errors
- Amount of content and pages
- Content’s relevance to the page title and meta description
These are all common sense. Broken links are bad. Slow-loading pages are bad. Broken English and spelling mistakes are bad. Sites with thin content are now proving to be bad. And if you try to deceive Google by writing a title and meta description about one topic while the content is about something completely different, they aren't going to like it.
All of this can be found using Webmaster Tools and various other free tools around the web. This should be obvious, so I'll move on.
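If you'd rather script the basics yourself, here is a minimal sketch that checks a single page for broken links and measures its load time, using only the Python standard library. The URL, timeout values, and the simple "starts with http or /" filter are my own assumptions for illustration, not anything Google prescribes.

```python
import time
import urllib.error
import urllib.request
from html.parser import HTMLParser
from urllib.parse import urljoin

class LinkCollector(HTMLParser):
    """Collects the href of every <a> tag on a page."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                # Keep absolute and root-relative links; skip fragments, mailto:, etc.
                if name == "href" and value and value.startswith(("http", "/")):
                    self.links.append(value)

def check_page(url):
    """Return (load_time_seconds, list_of_broken_links) for one page."""
    start = time.time()
    with urllib.request.urlopen(url, timeout=10) as resp:
        html = resp.read().decode("utf-8", errors="replace")
    load_time = time.time() - start

    parser = LinkCollector()
    parser.feed(html)

    broken = []
    for link in parser.links:
        full = urljoin(url, link)  # resolve relative links against the page URL
        try:
            req = urllib.request.Request(full, method="HEAD")
            urllib.request.urlopen(req, timeout=10)
        except (urllib.error.HTTPError, urllib.error.URLError):
            broken.append(full)
    return load_time, broken
```

Calling `check_page("https://yoursite.com/")` would give you a rough load time and any links that fail to resolve; a real audit tool also follows redirects and crawls the whole site, but this covers the first two bullets above.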
This is where we start to get into some grey areas. We don't know exactly what Google can track, or how much of the data they collect they are able to use. Regardless of what Google can do, we can use this data ourselves to analyze our own sites and make improvements where needed, because what our users consider quality is quality. You should assume Google takes the same stance.
What Google users find useful is what Google considers quality content.
Here are the major traffic statistics to monitor:
- Bounce Rate
- Page Views
- Return Visitors
- Time on Page
- Signups, Subscribers, Comments
- Social Factors
Bounce rate is a number you should try to reduce whenever possible, yet the numbers vary greatly depending on the type of site. A great information site could have a 90% bounce rate because visitors find exactly what they need and move on. So it does vary between sites, but if you combine a high bounce rate with a low time on page, there is a good chance the user didn't find what they were looking for or didn't find the content useful.
If you have a high bounce rate on your homepage, you are doing something seriously wrong; on a deep internal page or, say, a contact page, the rate can and should be higher.
Return visitors are an excellent way to measure the quality of your content. Regardless of whether Google can see this stat, it should be your goal to turn every visitor into a repeat visitor.
Let's think about this from Google's point of view. They send a searcher a results page with your site on it; the visitor hits your page and then hits the back button within 30 seconds. That seems like a red flag that the content on your page was not what the searcher was looking for. On the other hand, any traffic that comes from Google and doesn't return to Google means the user found something they liked: quality.
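The stats above are exactly what an analytics package computes for you, but it's worth seeing how simple they are. Here is a toy sketch with made-up session records; the data format and thresholds are my own assumptions for illustration.

```python
# Each session: (visitor_id, pages_viewed, seconds_on_site). Invented data.
sessions = [
    ("a", 1, 15),   # bounced quickly: likely didn't find what they wanted
    ("a", 4, 320),  # same visitor came back and dug in
    ("b", 1, 240),  # bounced, but long time on page: found the answer and left
    ("c", 3, 180),
]

# Bounce rate: share of sessions that viewed only one page.
bounces = sum(1 for _, pages, _ in sessions if pages == 1)
bounce_rate = bounces / len(sessions)

# Return-visitor share: visitors with more than one session.
visits_per_visitor = {}
for visitor, _, _ in sessions:
    visits_per_visitor[visitor] = visits_per_visitor.get(visitor, 0) + 1
return_share = sum(1 for n in visits_per_visitor.values() if n > 1) / len(visits_per_visitor)

# Average time on site across sessions.
avg_time = sum(secs for _, _, secs in sessions) / len(sessions)

print(f"Bounce rate: {bounce_rate:.0%}")          # 50%
print(f"Returning visitors: {return_share:.0%}")  # 33%
print(f"Avg time on site: {avg_time:.0f}s")       # 189s
```

Notice how visitor "b" illustrates the point about information sites: a bounce combined with a long time on page can still be a satisfied reader.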
Off-Page Factors
This is the stuff most SEOers spend their time on. I went over some post-Penguin link building in a post the other week if you are interested in that.
It is all about inbound links. Google measures the quality of a link by:
- determining its relevance
- where the link appears in the page
- anchor text
- link juice
The links pointing to your pages determine how your pages rank and have a big effect on what you rank for. Google has also been getting better at analyzing pages to see where a link appears in the copy. A link within the body of a page tends to be better than a link in the sidebar or footer, where most ads and low-quality links happen to live.
The web is a hub of interconnected pages. The more connected your pages are within a specific niche, the more Google will see you as an authority in that niche.
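To make the idea concrete, here is a toy scoring function that weights a link by where it appears on the page. The weights and the anchor-text bonus are entirely invented for illustration; Google's actual scoring is not public.

```python
# Invented weights: a body link counts far more than a sidebar or footer link.
POSITION_WEIGHT = {"body": 1.0, "sidebar": 0.3, "footer": 0.1}

def score_link(position, relevant_anchor_text):
    """Combine link position and anchor-text relevance into a rough score."""
    score = POSITION_WEIGHT.get(position, 0.1)
    if relevant_anchor_text:
        score *= 2  # assume relevant anchor text roughly doubles the value
    return score

print(score_link("body", True))     # 2.0
print(score_link("footer", False))  # 0.1
```

The exact numbers don't matter; the point is that where a link lives and how it's labeled both feed into its value.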
How Does Google Get Its Data
Google has so many ways of collecting data it isn't funny. At the same time, they more than likely do not use most of it when determining search results. I do think they use a lot of it when creating algorithm changes, though.
Google can collect user data from:
- Chrome - browsing history and bookmarks
- Google Reader - what sites users are subscribed to
- Feedburner - how many subscribers a site has and how many are actively using the feed
- Gmail - sites users view from email
- Analytics - pretty much everything about sites they have access to; personally, I think it is better to give Google this access unless you have something to hide
- Webmaster Tools - same as above, except they have this information regardless and are sharing it with you, the site owner; it's pretty dumb not to have an account
- Google Plus - the one everyone is talking about, and one that can affect your search results
- Youtube - the 2nd largest search engine; if you aren't doing video, you should be
- Blogger - any sites built with Blogger are owned by the Big G
Now most of these may not have an effect on search results. A lot of your data is used to serve up ads selected just for you when you are logged into your Google account. But I wouldn't put it out of the realm of possibility that the people responsible for making algorithm changes can use it in some ways.
How Do You Define Quality?
If you are still having trouble figuring out what makes a page quality, look no further than the sites you frequent. Any site you visit 4-5 times a week is a quality site in your eyes.
For me, one example is Giants Extra (formerly Extra Baggs). I am a San Francisco Giants fan and check this blog at least four times a week. It is written by one of the beat writers following the team so you know the information is current and most days there are 1-2 posts, which means I can see the lineup and the post game analysis if I missed the game.
The site also has a very active community of commenters, usually a couple hundred comments per post from avid fans. So I can read people's opinions and share my own if I feel like it, knowing it will be seen and possibly discussed or criticized.
So with this example, I find the site to be of high quality because:
- Frequent updates - new posts before each game (the lineup) and after each game with a post game wrap-up
- Consistency - I know what to expect and this makes it a go-to spot to get quick Giants updates
- Expert knowledge - written by a writer who is with the team and knows the players/coaches
You must have certain sites you visit all the time. What makes those sites valuable to you?
It can be really helpful to analyze your favorite sites and break down what it is they do well. After that, take a look at the most popular sites in your niche and analyze them. See what they do that creates a quality experience for users. Then apply those same principles to your own site, or develop a strategy that allows you to do it even better.
What Do Your Visitors Want?
I know this post was about how Google determines quality, but that happens to be the same thing your users deem quality. Google is looking for what its users like, and looking inward at our own sites and our own users is the best way to stay on the right track. Focus on building a loyal following and more traffic will find you, from Google and elsewhere.
If you made it to the end of this post and have a second, I'd be curious to know what you find useful about this and other blogs in the web design niche. I've been neglecting this blog lately and need to take a look at what I've been doing and find ways to make it better.