
Why Panda Might Be Taking So Long to Come Back Around

Q: Why hasn’t my site come back after I made all the changes recommended for post-Panda SEO recovery?
A: Because panda bears are fat and slow.

Q: Why does Google hate me?
A: Because they want to take over your niche.

All kidding aside, it can be frustrating when you pay attention to all of the issues pertaining to site quality and nothing happens. When you spend weeks or months improving your website, getting rid of all of your duplicate or thin content, removing, redirecting or nofollowing hundreds or thousands of pages, stripping the AdSense ads off the top of your page template, changing the text from “More Resources” to “Helpful Ads”, paying thousands of dollars to have the best possible, most-useful copy written for every page on your site, throwing salt over your shoulder, consulting the chicken bones, praying, crossing your fingers and sacrificing lambs – (just kidding on the chicken bones part) – and… Nothing. Happens. Your search engine rankings are still in the toilet.

I have a hypothesis on why that might be. It certainly isn’t original (is anything these days?) and certainly isn’t scientific or tested with double-blind studies. But it is a hunch based on many years of working in this industry and paying attention to what is, what has been, and what the smartest people I can find say “will be”.

What Has Been is that LINKS – glorious, coveted, page-rank-giving, random-surfer-calling, Anatomy-of-a-Large-Scale-Google-foundation-making links have been the major user-feedback signal and indicator of site quality for over a decade. What Is, unfortunately for some, is that other user-feedback signals, probably dozens of them, have risen to the point of collectively eclipsing the power of the link. What Will Be is that the indicators of site quality will be so difficult to game that most SEOs will find it easier to just improve their website’s quality, rather than focusing on all the different minor factors, including things like alt tags, header tags, meta descriptions, anchor text, keyword usage, sitemaps, and even title tags and links. Most of those won’t go away. But getting ALL of them down still doesn’t mean you have a GOOD site. It only means you have a search-engine optimized site (as “optimized” is defined now, not in a few years).

But what is “Good”? Quality is qualitative, right? Ahhh, but that’s the big change. The Panda update is proof that Google has started to figure out how to quantify quality better. After all, they can’t look at every website and decide if it’s “good” or not, so – in the end – this is all data-driven. And that is why there has been so much collateral damage. If the data says your site sucks, it sucks. But let’s say you make it not suck anymore. Maybe you’ve done all that I mentioned above and more, but your rankings still aren’t back. Wasn’t the Caffeine update all about improving infrastructure so Google could make changes faster? Shouldn’t you get back into the rankings soon after you make those changes? Here’s that hypothesis that I mentioned earlier:

The newly-emphasized user-feedback signals indicating site quality take time to reach the point of statistical significance necessary for Google to be confident of the results. That would go some way toward explaining such a seemingly archaic lag in a caffeinated, asynchronous world of Google updates.

They could just be “punishing” us and making us suffer long enough that we get the point and never stray again. Or maybe it’s a devious plan by Google to get into the most-affected niches. Or they could be holding off so we can’t run small, isolated tests to figure it out. But I think the above hypothesis is the simplest explanation and, using Occam’s Razor, I’ll have to assume for now that it is the correct one. Let’s examine this thought before going on to discuss what these new or newly-powerful signals are:

Things like title tags are easy to process. You change it, Google recrawls the page, updates their index, does the calculation and, Voilà! The SERPs have changed. The same is true for most other on-page factors. Links are a little more complicated because Google has to take into account all of the sites that link to you, and the sites that link to them, and so on and so forth until the whole web is taken into account and they figure out how long it is likely to take some blindfolded weirdo who has nothing better to do but sit at his desk clicking links all day to land on your web page. Oh, and what was the word that was hyperlinked, and how far down the page was it, and how many other links were there and yadda yadda… it sounds really complicated. As I’m sure it is! But all of that is available to them right away. It’s all right there in their massive buildings holding massive servers with massive amounts of data. What isn’t available to them right now is what Bob Smith in Cincinnati, Ohio thinks of your website. They won’t know that until Bob Smith goes to your site, makes a determination, and tells Google by performing some sort of action. Even then they won’t know if your site is “good” or not. They need thousands of people to do that from all kinds of different channels. Those thousands of “actions” are what I’m calling user-feedback signals of quality. And it takes time to collect them in the same way that it takes time to get thousands of people to answer a survey.

I don’t know much about statistics, but I know enough to tell you that asking ten people (out of millions) if they like A or B better won’t let you draw a conclusion, even if 9 out of 10 of them say A is the best. That 90% means nothing when your sample size is so small, because the margin of error is big enough to make it almost as likely that B would come out the winner if you kept on asking more people. So Google needs more data and, unlike their previous signals of quality, this data can’t just be plucked from their servers right now. It will – in time – be plucked from their servers on an ongoing basis, but the ball has to get rolling first. They ran Panda with the data they had, tweaked it with some smaller updates later, and now they are collecting data from user feedback signals to find out if your quality has improved. At least, that’s what I’m thinking right now.
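To make that sample-size point concrete, here is a rough back-of-the-envelope sketch (my own illustration, nothing Google has published) using the standard normal approximation for a proportion’s margin of error. The numbers are invented, but they show why 9 out of 10 tells you almost nothing while 900 out of 1,000 tells you quite a lot:

```python
# Rough illustration: the same 90% "A is better" result, at three sample sizes.
# Uses the standard normal approximation for a proportion's 95% margin of error.
import math

def margin_of_error(successes: int, n: int, z: float = 1.96) -> float:
    """Approximate 95% margin of error for an observed proportion."""
    p = successes / n
    return z * math.sqrt(p * (1 - p) / n)

for successes, n in [(9, 10), (90, 100), (900, 1000)]:
    p = successes / n
    moe = margin_of_error(successes, n)
    print(f"{successes}/{n}: {p:.0%} +/- {moe:.0%}")

# Prints (roughly):
#   9/10:     90% +/- 19%   <- so wide the approximation itself barely holds
#   90/100:   90% +/- 6%
#   900/1000: 90% +/- 2%    <- now you can be fairly confident A really wins
```

The exact percentages matter less than the trend: the error bars shrink with the square root of the sample size, which is exactly why Google has to wait for enough of those user actions to pile up before it can trust them.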

So Let’s Take A Look At Some of Those Fancy New User-Feedback Signals of Quality…

Here’s how I’m defining them to myself and my team at this point:

Any way a user might tell Google (either directly or indirectly) that they are happy or unhappy with the quality of a web page.

So what are some of the ways a user might tell Google that they are happy or unhappy with your site or page? And what can you do about them? The first thing you’ll notice is that most of them aren’t “new”. They may have been given more weight, but they aren’t new. Let’s start this incomplete list out with a few old ones:

  • Back-Clicks from SERPs
    (i.e. clicking the “back” button after a user visits a URL found in the search results. Note this is slightly different from Bounce Rate).
    Examples of what we can do about it:

#1 Remove all “thin” content pages from the index (see the quick audit sketch after this list for one way to find them).
    #2 Put our best content highest on the page (instead of ads and intro filler crap)
    #3 Just have a better site that makes it less likely a user will click the back button. Example: NO pages that say “This item is currently unavailable”.
  • Bounce Rate
(Leaving after seeing only one page. They could close the browser, click “Back”, use a bookmark, type something else into the URL bar, or leave your site via a link on your page.)
    Examples of what we can do about it:
    #1 Put an interstitial page between your affiliate link and the exit so they visit another “page” before leaving. This is a good opportunity to give them a call-to-action, like “Y’all come back now, ya hear!”
    #2 See all of the tips from “Back-Clicks….” section above.
Average Time Spent on Site
    (You need to be better than industry average here. Take a benchmark and try to improve it by doubling the time, even if the ultimate “goal” is to get them to leave via an affiliate link.)
    Examples of what we can do about it:
#1 See the interstitial page tip for bounce rate if you’re an affiliate site.
    #2 Add video content, which captures attention for at least 30 seconds. Same with great images (uh… pot calling the kettle black here).
    #3 Get them to interact: Comments, Polls, Games and any form of user-generated content.
    #4 See tip above RE: Putting your best content at the top of the page
    #5 Get visitors to communicate on Facebook via your site. See: www.worldviewmedia.tv…allow-integrated-discussions-on-your-website/
  • Average Pages Per Visit
(Think Google doesn’t have access to this kind of data just because you don’t use Google Analytics? Want to buy a bridge in San Francisco?)
    Examples of what we can do about it:
    #1 See interstitial page tip above for affiliate sites.
    #2 More / Better interlinking of related pages
#3 The Power of Pagination (i.e. I should have broken up this long-ass article into a series and linked to the next part at the bottom. Pot calls kettle again.)
  • Conversions
    (This is data we can give to Google via Google Analytics if we choose to, though they may have other ways of getting it. I’d choose to give it to them…)
    Examples of what we can do about it:
#1 Set up more goals, conversions and conversion funnels in Google Analytics. Show Google that people “convert” on your website – in other words, they found what they were looking for. Examples: Goal 1 = Newsletter Signup. Goal 2 = Tweet Button. Goal 3 = FB Like. Goal 4 = FB Follow. Goal 5 = FB Comment (See #5 under Avg. Time on Site above). Goal 6 = Comment on a page/post. Possible Goal = Bookmarks. CONVERSION = Clicking on an affiliate link, buying something, or filling out a lead generation form.
    #2 After finding your baseline on all of these goals you can use conversion-rate optimization (CRO) tools to improve your CR. Not only will this look good to Google, but it will make you more $$$!
  • Bookmarks
    (Especially for Chrome, Firefox and Mobile)
    Examples of what we can do about it:
#1 Having a better site is always going to be the best way to get more bookmarks, along with PR and all of the other things you can do to improve user experience.
    #2 Test a “bookmark this page” button. It may not work, but you’ll never know until you test it. See “Conversion” goals above.
    #3 Everyone on your team should have all of your sites bookmarked. So should their family and friends. Every little bit helps.
    #4 Is this an appropriate call to action for social media channels?
    #5 Is this an appropriate call to action for your email list?
  • Plus One Use
    (This one’s an IF. As in IF this ever gets out of testing mode and is widely adopted, unlike most of Google’s other social endeavors)
    Examples of what we can do about it:
    #1 Is this an appropriate call to action for social media channels? Maybe not yet, but we should keep our eye on it.
#2 Every person on your team should “Plus 1” your sites (and some other sites), and so should family and friends. Every drop in the bucket counts. If you’re feeling a little risky, check out Mechanical Turk.
    #3 Is this an appropriate call to action for our email newsletter subscribers?
    #4 And, of course, having a better quality site will result in more “Plus 1s” organically.
  • Feed Subscribers
(As you probably know, Google owns Feedburner and Google Reader, among many other data sources they could use for this, including the data they collect from Firefox users.)
    Examples of what we can do about it:
    #1 Use Feedburner if you have a really popular feed. Make it easy for Google to see how many subscribers you have.
#2 Increase feed subscribers. One way to do this for an eCommerce site is to have product feeds in every category, including for brands. Let people subscribe (by email if they want) to find out when new products come in. This is especially useful for those “Sorry we’re out of stock” pages mentioned earlier. But there are many ways to increase your feed subscriber numbers, including putting the RSS icon on your FB page, and making it more prominent on your site. You could even show an annoying pop-up like Shoemoney. If you’re feeling risky again and want to ask those helpful Turks to create a Gmail/Hotmail account (which they’ll probably just forget about) and use it to subscribe to your feed, remember that subscribers who never open or read your feed are probably not a good signal of quality.
  • Email Newsletter Subscribers
    (Google has access via Gmail to millions of email accounts. They know who subscribes to what, even if your newsletters don’t get published.)
    Examples of what we can do about it:
    #1 More cross-promotion of your email newsletters if you have more than one.
    #2 Use Facebook for growing your newsletter list by putting your signup box on your FB page and telling people why they should sign up. Running some FB ads might be worth a test too.
    #3 Add a pre-checked “subscribe to Newsletter” box on feed subscribe boxes (See #2 on Feeds above)
    #4 Do more co-registration with non-competitive businesses in your industry – preferably with larger email lists than you have.
  • Social Media
(Google has openly stated that they are using mentions/likes on sites like FB and Twitter as ranking factors. Other possibilities are Digg, Reddit, StumbleUpon, Buzz [owned by Google], LinkedIn…)
    Examples of what we can do about it:
    #1 Give your social media team the creative, content and budget resources needed to grow your following on FB and Twitter.
#2 Try growing your community on other sites like those mentioned above, as well as Tumblr, Flickr [owned by Yahoo], YouTube [owned by Google], Quora, etc…
    #3 Establish someone in your company as an expert / thought leader. CEOs are great for this, but it could be someone else. Matt Cutts is an example of this for Google.
    #4 I’m no social media expert but they’re not difficult to find. Pay attention to what these guys and gals have to say for more tips on how to improve your SMM.
  • Brand Mentions and Searches
    (These are really two different things: “Brand Mentions” and “Brand Searches”. But they both require the same tactics in order to improve them.)
    Examples of what we can do about it:
    #1 PR. Seriously, and I don’t mean distributing a crappy press release for some crappy links on PR Newswire. I mean REAL PR. If you don’t know it, hire someone. Seriously.
    #2 Continue to improve your social media presence (see above).
    #3 Apps and Widgets: iPhone, Android, Other mobile platforms, WordPress… Name it after your brand.
    #4 Just good ol’ fashioned marketing and PR. Non-Developer SEOs, including myself, need to start learning from books like this or find a new career in a few years.
  • Sentiment
(It’s possible that Google does brand sentiment analysis. Things like “Blue Widget Inc. Sucks” or “Somedomain.com Scam” are reputation management issues, but could they be ranking factors too?)
    Examples of what we can do about it:
    #1 Work positive sentiment phrases into articles, reviews, testimonials and press releases when appropriate: “Somedomain.com is a great website for…” OR “I love Blue Widgets Inc! It’s my favorite widget website because…”
    #2 Can you leverage social media for this? “Tell us why you love Our Brand, Inc. and you may win a free something-or-other”.
    #3 Study up on how the social media folks (see links above) perform sentiment analysis and what they do about it.
  • SERP Blocking
    (People have the power, when logged into Google, to block certain domains from search listings if they don’t like them)
    Examples of what we can do about it:
    #1 Just making a better site and using all of the tips above will go a long way in helping this.
    #2 Some starter tips: Don’t spam, don’t show annoying pop-ups that are difficult to get rid of, and don’t show ads as your first content block.
  • User Generated Content!
    (This gets an exclamation point because it’s really the ONLY tactic that I’ve seen noticeably and consistently, if only temporarily, improve rankings that were affected by the Panda updates.)
    Examples of what we can do about it:
    #1 Make it as easy as possible for people to comment on your site (without opening yourself up to uncontrollable bot spam)
    #2 ASK! Sometimes it’s that simple. When you finish a blog post or article ask people to comment.
    #3 Use something like Gravatar to allow people to show their avatar/icon when commenting. Don’t dismiss “ego” as a major driving force in getting someone to comment.
#4 Encourage people to ask questions. Example: a Q&A box in the sidebar of an eCommerce site that promises an average response time of X-minutes (in lieu of Live Chat). As they submit their question they will also be giving you their email address and subscribing to the comments area (See Feeds section above) so that they’ll be emailed as soon as their question is answered. You could even have a pre-checked “subscribe to our newsletter” button there too. Their question, and your answer, will go into the comments area on the page somewhere (try this on an eCommerce product page). This gets you more free content, engages users, and grows your email list all at the same time.
    #5 If you run a WordPress blog use a plugin like this to allow people to subscribe to your comments. This is how you get back-and-forth conversations going.
    #6 Don’t be afraid to “seed” the first comment or two. It might be a good place to try out alternative spellings of your keywords. Just don’t overdo it. People can usually tell a fake comment when they see one, especially on eCommerce and affiliate websites.
#7 If appropriate, consider sending out a call-to-action on FB and Twitter asking them a question. You should let them know that their comments may be featured on your site, but you shouldn’t make them come to your site to leave the comment. Just take their comments off Twitter/FB and put them on your site under their first name. Example for a clothing store’s FB page: “Tell us why you want this dress! *One lucky Friend will get a free one! **Your answer may be featured on website.com.”
  • Links
(The oldest user-feedback signal in the book, and what Google’s algorithm was based on from the beginning)
    Examples of what we can do about it:
    #1 Get more. Seriously, I’m not going to post two-dozen bullets about link building when there are countless great articles out there that you can read. I will say this though…
    #2 You can get more links from traditional and new-media PR efforts than you could ever “build” manually, and far better ones (probably for less money) than you could ever buy.
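One practical follow-up on the “remove all thin content pages from the index” tip in the Back-Clicks section above: before you can noindex (or rewrite, or redirect) thin pages, you have to find them. Here is a minimal sketch of that kind of audit in Python. The word-count threshold and the URL list are placeholders I made up, it assumes the requests and BeautifulSoup libraries, and it only flags candidates; the noindex-vs-rewrite-vs-301 call is still a human decision:

```python
# Minimal thin-content audit sketch. THIN_WORD_COUNT and URLS_TO_AUDIT are
# placeholders -- swap in your own threshold and your own list of URLs
# (e.g. pulled from your XML sitemap).
import requests
from bs4 import BeautifulSoup

THIN_WORD_COUNT = 250  # arbitrary cutoff; tune it for your niche
URLS_TO_AUDIT = [
    "http://www.example.com/category/page-1",
    "http://www.example.com/category/page-2",
]

def visible_word_count(html: str) -> int:
    """Word count of the rendered text, ignoring scripts, styles, and chrome."""
    soup = BeautifulSoup(html, "html.parser")
    for tag in soup(["script", "style", "nav", "header", "footer"]):
        tag.decompose()
    return len(soup.get_text(separator=" ").split())

for url in URLS_TO_AUDIT:
    response = requests.get(url, timeout=10)
    words = visible_word_count(response.text)
    if words < THIN_WORD_COUNT:
        # Candidate for <meta name="robots" content="noindex,follow">,
        # a 301 to a stronger page, or a rewrite into real content.
        print(f"THIN ({words} words): {url}")
    else:
        print(f"OK   ({words} words): {url}")
```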

You made it to the end? WOW. You have stamina! I bet you also have some great ideas for other “examples of what we can do” about these user-feedback signals (and others) that may be indicators of quality to an algorithm that needs to quantify the qualitative. Or maybe you just disagree with me. That’s OK, I’m just thinking out loud. That’s what personal blogs are for. Either way, let’s hear your comments…

Angry Panda image from iferneinez’s photo stream

12 Responses to “Why Panda Might Be Taking So Long to Come Back Around”

  1. Donna says:

I’m with ya. Lately, I’ve been thinking about a similar scenario, and I posted about it on a forum, but no one is paying much attention to it. Maybe that’s because it’s stupid, but I’ll stupidly repeat it here anyway. :) One of my first thoughts was that maybe Google was using some sort of WOT API (Web of Trust – mywot.com) or similar public trustrank metric, to see what kind of trust ranking ordinary people give a site, and if not so good, pandalize it. If good, or no data available, leave it alone. This wouldn’t have to come from WOT itself, Google may have its own that we don’t know about, or perhaps Google’s herd of quality raters have this kind of toolbar system as well. Whatever/however, this kind of crowdsourced rating system might be in play. In addition, they could be using something similar to how crowdflower connects with mturk to crowdsource small tasks. For instance, someone has hired crowdflower to crowdsource a question “How relevant are these 33 search results (UK only)?” on mechanical turk. If Google is crowdsourcing both trust rankings like WOT does, and various other crowdsourced signals such as relevance, etc., then I can see how Panda could be so hard to figure out, and also why it can’t be run all the time. So yeah, what you said, and what I said… or some of the above or none of the above. Dang.

  2. Everett says:

    LOL @Donna… Dang. My sentiments exactly. :-D

  3. Anna says:

    Wow. So complex, and it makes so much sense! Why couldn’t this change have happened in the winter when I have time away from the garden to ponder it long and hard?

    Google’s going to love my reaction to this page — clicked on the link to read the whole thing from my RSS feed reader, left a comment, and bookmarked it so I can peruse it again and again. :-)

  4. Marty Martin says:

    As usual, a very thorough post. Bookmarked. ;)

    As I read down and down the page, I kept thinking of ideas I could contribute to the list, but then ultimately you’d mention it. (Citations, Sentiment, UGC, Amazon Turk)

    I tend to agree that Google is taking more and more into account making it increasingly difficult to focus on a handful of factors as a comprehensive strategy.

    Some other user signals I think Google is probably looking at include:

    1. On page scroll depth
    2. User authority in social media mentions (ie: how authoritative is the guy/gal who just tweeted about you)
    3. Mouse clicks/movements while on page (tin foil hat on)
    4. User engagement (ie: are they in fact watching the video(s) on your page [and for how long?] or interacting at all on page?)
    5. Page load speed
    6. Backlink neighborhood (This goes without saying but it wasn’t mentioned in your post and still plagues some sites so thought I’d mention it)
    7. How does your backlink profile (and other metrics) look compared to industry averages of the same data? (IE: Do you have WAY more deep links compared to home page links than most sites?)
    8. Logged in user (Google Accounts) tracking (again, tin foil hat on). But if Google knows a user and their individual “trust” it may pay more attention to signals from subsets of users than others.

    So yeah, those are a few off the top of my head. I know there are others!

  5. Everett says:

Anna, thanks! It’s too bad I don’t blog about gardening at 1 a.m., huh?

Marty, those are some great additions to the list! We’re actually doing something about scroll depth already by using named-anchor links and jumping people down from the top to our premium, unmonetized content at the bottom of certain pages.

    I think I did mention user authority, but probably in a different way when discussing that we should create experts / thought leaders. Shows we’re on the same page though. ;-)

    User engagement is sort of a combination of commenting/UGC, converting on goals, and – as you mentioned – watching videos. A lot of this can also be summed up with Avg. time spent on page.

    Page load speed is a known factor, however small, but I’m missing how that can be a user-feedback signal.

    Backlink neighborhood and profile: I agree. I sort of skimmed over the link stuff because it’s what we’ve all been looking at for years and wanted to explore some new feedback features. But you’re absolutely correct IMHO.

    Your last point is the best and I read some recent posts about this issue that discussed a few patent applications. Not sure if I can find them again easily, but they suggest that Google is definitely looking at the individual “trust” worthiness of social media folks who share your content, content authors and – possibly – just plain old visitors. I’m sure Google trusts you so thanks for bookmarking. :-D

  6. Everett says:

    PS, I just sent the following email to NPR’s Morning Show:

    Subject: Missing Comment From NPR Interview with Google’s Matt Cutts

    Hello,

    Mr. Cutts has said that you left out a comment he made during an interview. NPR may not have thought that comment was of value, but the SEO community thinks it is critical information.

    Please see this conversation from Twitter:
    http://twitter.com/#!/mattcutts/statuses/66665671207038976

    “@tomcritchlow answered that for NPR, but they didn’t use it. Answer not as amenable to 140 chars.”

    And the follow-up with his “short” version of that answer:

    http://twitter.com/#!/mattcutts/statuses/66679295564722177

    “@tomcritchlow short version is that it’s not data that’s updated daily right now. More like when we re-run the algorithms to regen the data.”

    This is very important to our industry because it could mean that Google is collecting more data from various sources. The comment could help us get a clue as to what those sources are and why Google doesn’t have the data yet that would allow them to do another update.

    Can we get access to that missing quote from Matt Cutts?

    Thank you very much for considering this request. Please let me know if there is anything I can do to facilitate.

    Kind regards,

    Everett Sizemore

  7. Robert Bacal says:

This is SO good. That is, it’s good to find intelligent analysis of this issue. I have a lot to say on much of your post, Everett, so I’ll cut it into snips. I think some of what you wrote is dead on. Some of it can actually be supported by evidence or hints from Google. On other points, I don’t know. And a few, I think might be wrong.

After all, only Google knows, and I’m not sure even they do (such is how algos work: complex ones are so complex that even the builders can’t tell what they will do in any particular case without modeling the algo, or releasing it into the wild).

    With your permission I’ll post a number of replies because otherwise the points will be lost.

  8. Robert Bacal says:

How Google does it, or at least probably. I’m a former social science researcher and data analyst, and I have more than a passing familiarity with how you measure something like “quality” without actually measuring quality. You read that right. Here’s how it’s done in psych. assessment and research.

1) You hire and train a number of humans to review and categorize something (in this case, sites) into various categories (i.e. honest looking, solid information, etc.). You get a bunch of sites that end up in the WOW pile, and sites that fall into the lower categories, so you know what sites are considered good and not, as determined by real humans. However, we don’t necessarily know why they got categorized in their categories, and we do NOT need to know (that’s key).

    2) You use a statistical process called discriminant analysis that looks at hundreds and hundreds of POSSIBLE variables that distinguish the sites in different piles from sites in the other piles. It’s a correlative process. Here’s the weird part.

    The variables that best discriminate between sites in the various piles need not have ANYTHING to do logically with “quality”. They need only correlate with the human categorizations statistically such that their use discriminates.

So, it could turn out that sites with dark backgrounds would “tend” to be rated as worse than those in the top category. IF site background does discriminate between good and bad, you have a good variable to use, even though it really doesn’t make a whole lot of sense, or fit with OUR notion of quality.

    Understand here that Google isn’t concerned with any one site, but only that the collective search results get better. The statistical techniques are designed to work to that goal, and NOT to ensure fairness, or even appear logical.

3) Test. You do simulation runs of the algo categorizing sites, then you have humans look at the results to assess the quality of the algo OVERALL. However, because you use simulation models and not production data per se, the results may be a bit off. The simulation component may or may not be used by Google.

4) Into the Wild. You get close enough and you release the algo into the wild, and once again, you test, tweak, test until your algo STATISTICALLY discriminates well, reducing false positives and negatives.

This process, for something like Google, never stops, and it’s likely the same process is used for page quality for AdWords and page quality for AdSense pages: the same process, even though the algos themselves may be similar or quite different.

    Ok. So, implications.

The factors used are derived mathematically and need not be logical or sensible, which is why your ideas are somewhat flawed. Literally, they could find that if the site owner’s name begins with “B”, the likelihood of it being a good site is better than if the site owner’s name begins with “S”. IF that discriminates, it often will get thrown into the algo. You can’t figure it out. Period.

Because the process MUST BE iterative (it’s actually iterative on the statistical side, because the math involves successive tweaking of the weightings of the discriminating factors), it continues for quite some time, much as in any multiple regression process that is run over and over until you get the “best fit” possible with a specific data set.

However, here the data is always changing, which means that the best-fit process will never end.

    During the period of time when the algo is tested and continuously improved, it makes sense not to apply it to everything all at once, because you will get bad tweaks. So, the upshot is that any site changes probably wouldn’t be reflected in results for months and months. Which also has the benefit for google of making it hard to game the system.

    I’m oversimplifying a bit, and not always being completely accurate here, cause I’m trying to keep this comprehensible.

I’m the first to admit I’m doing some speculation, but the process I’ve described pretty much fits. Google DID hire humans to review sites. It has provided additional guidelines recently but clearly has not disclosed the variables they use, except to talk in generalities about logically linked discriminants. They will NEVER tell us what the non-logical ones are.

    End of part one. So IF what I’ve described is accurate, and frankly, I know of no other practical way for Google to do what it does, what OTHER implications are there?
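If it helps make steps 1 and 2 concrete, a toy version (invented features, invented rater labels, and scikit-learn’s off-the-shelf discriminant analysis standing in for whatever Google actually runs) would look something like this:

```python
# Toy illustration of steps 1-2: human raters sort sites into piles, then a
# discriminant analysis finds whichever candidate features best separate the
# piles -- whether or not those features "make sense" as quality.
# Every number below is invented for illustration only.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# Candidate features per site:
# [word count, ad density, dark background?, owner's name starts with "B"?]
X = np.array([
    [1800, 0.05, 0, 1],
    [2400, 0.02, 0, 0],
    [1200, 0.10, 0, 0],
    [300,  0.40, 1, 0],
    [450,  0.35, 1, 1],
    [350,  0.45, 0, 0],
])
# Step 1: the human raters' piles (1 = "WOW" pile, 0 = a lower category)
y = np.array([1, 1, 1, 0, 0, 0])

# Step 2: fit the discriminant and see which features carry the weight
lda = LinearDiscriminantAnalysis().fit(X, y)
print("discriminant weights:", lda.coef_)
print("predicted pile for a new site:", lda.predict([[500, 0.30, 1, 0]]))
```

The point of the toy example is the same one made above: the weights fall wherever the correlations fall, not where our intuitions about quality would put them.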

  9. Robert Bacal says:

    Part 2: The logical variables used and on which you’ve commented/speculated.

Many are very good. Keep in mind that there are probably correlated variables that are not logically linked to what you or I would consider a good site. And that some of the ones you suggest might NOT discriminate between good and bad sites as categorized by the “humans”. It could be, though, that even something as logical as time spent on a site could actually be characteristic of POOR websites in terms of the discriminant analysis.

I agree that it’s logical to look at bounce rate, time on page, etc. if we are considering only variables we think should work. They might not work the way we think statistically, or they might work in exactly the opposite direction.

One you mentioned that I’m particularly doubtful of is user generated content. Sites that have succeeded with user generated stuff have done so on the basis of links. In fact, IMHO, most user generated content is quite poor, information-wise. Does Google want to steer people to sites with great, accurate information? Or does it want to steer people to sites where people just gab, and say all kinds of unsubstantiated junk?

I don’t know, but I’m guessing they are moving to the former and not the latter. There is some anecdotal evidence to suggest that sites with user gen’ed content got hit hard with Panda. eHow, for example. It’s “thin content” they have said they DO NOT want, and most thin content involves user generated content OR the capability to have user generated content.

    Which is one reason I’m shutting down all our blogs and going back to static html articles with depth and length.

    Remember that in Google’s recent guidelines, they mentioned the expert status of the author. (Again, they must do this in a correlative, discriminant way). User gen content doesn’t fit that.

    Anyway, my point is this. If you and I sat down and drew up a number of logical “indicators” that would suggest people find a site useful, we’d choose many, even perhaps most in your list, which is amazingly comprehensive.

Except that isn’t how Google does it; or at least, that could be part of it, and part of it ONLY.

It’s all cold number crunching done on a massive scale and driven by a set of mathematical goals applied to the problem.

    So, once again, it’s conceivable, though not likely, that the correlates of good sites versus bad could be exactly OPPOSITE to what you and I would figure out logically over a brew.

    I think that’s about it. I probably have more to say, but I’m getting old.

  10. Everett says:

    Robert,

    I see you, like me, are not afraid of the keyboard. :-D

I’m in total agreement that the process is more logical in the mathematical sense than in the philosophical sense. But I’m more of a philosopher than a mathematician, so that is the lens through which I view the world.

    All that you say makes very good sense when you think about the massive scale at which Google operates. And you are correct that they hired human reviewers, and those documents even “leaked” out so we know some of the things they were looking at and making human, qualitative judgements on.

    I do, however, have to refuse to believe that there is NO logic (as in how you and I would logically think about quality over a beer) involved. If I were to let myself believe that then I’d just have to find another job. ;-)

And I also respectfully disagree with your assessment on the power of user-generated content, but that might be an issue of semantics. As mentioned in the post, adding user-generated content (in the form of comments) was the one and only thing that I was able to do to achieve, and reproduce, results in bringing back some keyword search rankings. The content, in many cases, didn’t even include the keyword I was going for. The mere fact that new, user-generated content appeared on the page seemed to be enough to temporarily boost rankings for those pages (so thanks for your comments! Just kidding; this site wasn’t affected by Panda.). So whether it was something they logically added, or something that just happened to be correlated with quality based on their samples, my limited testing does indeed show user generated content as being a factor… one of the strongest, in fact.

    But back to semantics – eHow had “user generated” content in as much as they hired laymen to write articles that they were woefully unqualified to write. But when I refer to “user generated” content, I mean the type of stuff you see here in my comments section, or the type of restaurant reviews you’ll see on Yelp.

    Pay attention to the sites that did well in niches like local listings (restaurants, business listings, etc..) or coupons and you’ll see a pattern there.

    Thank you for your comprehensive answer. I’m glad you enjoyed reading my analysis and that you left such a polite, thoughtful comment.

    Cheers,

    Everett

    PS: “Meet The Needs of Your Customers”
    Your site’s motto has it right either way. That is what any high quality website, or business, is going to have to do. Period.

  11. Rick Ramos says:

You give a lot of ideas, and a lot of them are based on theories and practices that I think you’ve been developing for your whole career. I say that because the examples you offer at the end are, in my mind, simple, standard best practices and fall in line with a lot of what I learned from you during my internship almost two years ago. There is no workaround for quality. Unfortunately, only those who really cared about quality were employing it, while many times those who did not care about quality were achieving better rankings.

Being just a small handful of years into this practice myself, it is clear that many of the tricks and shortcuts that my predecessors used to get higher rankings don’t work anymore. New school SEOs have a new set of rules and standards to live up to that will be largely contingent upon users’ actions. While technical standards like architecture, markup and keyword density may not change, the importance of links could give way to tweets, likes, shares, Plus Ones or whatever user-action ends up being adopted as a standard.

    From my perspective SEO is moving away from the Wild West and it is becoming more integrated with marketing and public relations than ever. This is especially true for websites that do not rely on ad impressions and click through rate. While rankings have always been very important, I have always put more importance on Bob in Cincinnati. It’s encouraging to see more weight being put on quality standards. I believe that it is a boon for digital marketers in every niche of the business who take pride in the products that they support.

  12. brett says:

There are a billion articles on Twitter by “experts” stating how to make SEO better. Most of these articles are filled with info so obvious or outdated that it makes me feel bad that I invested even 60 seconds to click their link.

    Your article is the first one I’ve read in a long time that is actually GREAT and provides real-life actionable things you can DO – right now – to improve your page ranking.

Well done, and kudos to you. Not sure why you gave this info away for free though… Gotta work on your salesmanship. Maybe that can be your next article: “How to sell digital expertise when everybody gives it away free on Twitter.”

  13. [...] are some reasons I think what I think. As I said, it’s a hypothesis based on some educated guesses, lots of experience, and some [...]

  14. [...] could still be leveraged for internal linking to prop-up your better content. Then comes Panda, and now your cheap content can pull down your good content, as well as category pages, home page [...]
