I’ll be live blogging the “So You Want to Test SEO” panel at 10:30 Pacific time. Check back here for live updates. This should be a good session.
Actionable, testing, etc. These aren’t words you normally hear when people talk about SEO. So glad to hear them.
Several people in the room admit to testing SEO. One guy admits that he’s perfect and doesn’t need to test his SEO.
Conrad Saam – director at Avvo – is speaking now.
He’s talking about statistical sampling and the term “statistically relevant” – I feel this is something that many SEOs fail at.
The average person in this room has 1 breast and 1 testicle. A good example of how averages can be misleading.
He’s now talking about sampling, sample size, variability, and confidence intervals. Also the difference between continuous and binary tests. This is very similar to my college statistics class.
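A binary test of the kind he describes can be sketched as a two-proportion z-test. The click-through numbers below are invented for illustration, not from the talk.

```python
# Sketch of a "binary" SEO test: did page B's click-through rate beat A's?
# All counts here are hypothetical; this is a standard two-proportion z-test.
from math import sqrt

clicks_a, trials_a = 120, 1000   # hypothetical control page
clicks_b, trials_b = 160, 1000   # hypothetical changed page

p_a, p_b = clicks_a / trials_a, clicks_b / trials_b
p_pool = (clicks_a + clicks_b) / (trials_a + trials_b)

# Standard error under the pooled proportion
se = sqrt(p_pool * (1 - p_pool) * (1 / trials_a + 1 / trials_b))
z = (p_b - p_a) / se

# |z| > 1.96 corresponds to 95% confidence
print(f"z = {z:.2f}, significant at 95%: {abs(z) > 1.96}")
```

With these made-up numbers the lift is real at the 95% level; with smaller samples the same 4-point difference could easily be noise, which is the panel's whole point about sample size.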
His example of “bad analysis” looks awfully similar to some of the stuff I’ve seen on many SEO blogs. It would have been real easy for him to use a real example from somebody’s blog.
Bad analysis: showing average rank change in google based on control.
Good analysis: run a two-sample t-test (Excel’s TTEST function with type = 2 does this).
It’s all about the sample size when doing continuous testing
http://abtester.com/calculator – good resource for calculating confidence.
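The two-sample t-test Conrad recommends doesn’t require Excel. Here’s a minimal stdlib Python sketch; the helper function and the rank data are hypothetical, purely to show the mechanics.

```python
# Minimal sketch of a two-sample (pooled, equal-variance) t-test -
# the same test Excel's TTEST runs with type = 2. Data is made up.
from statistics import mean, stdev

def two_sample_t(a, b):
    """Two-sample t statistic with pooled variance."""
    na, nb = len(a), len(b)
    pooled_var = ((na - 1) * stdev(a) ** 2 + (nb - 1) * stdev(b) ** 2) / (na + nb - 2)
    return (mean(a) - mean(b)) / (pooled_var * (1 / na + 1 / nb)) ** 0.5

# Hypothetical daily Google rank for a control page vs. a changed page
control = [12, 14, 13, 15, 12, 14, 13, 16, 14, 13]
treated = [10, 11, 12, 10, 9, 11, 10, 12, 11, 10]

t = two_sample_t(control, treated)
# With 18 degrees of freedom, |t| > 2.10 is significant at the 95% level
print(f"t = {t:.2f}, significant: {abs(t) > 2.10}")
```

The critical value 2.10 is the 95% two-tailed cutoff for 18 degrees of freedom; for other sample sizes you’d look up (or compute) the matching cutoff instead.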
Non-representative sample.
Non-bell-curve distribution.
Not isolating variables. This one is huge in SEO, as there are over 200 variables considered.
Eww… he’s talking about the Google sandbox. He just lost some cred with me, as I don’t believe in a Google sandbox – but it does make his point about testing SEO: we can’t be 100% sure our changes actually caused the results.
Next up, John Andrews
An SEO wants:
to rank better
to avoid penalties and protect against the competition.
As an agency, one wants actionable data to help make the case for those changes.
Claims that need to be tested:
PR sculpting does/doesn’t work.
Title tags should be 165 characters
only the first link on a page counts.
there is no -30 penalty
John says that authors of studies and blogs place more value on the claims than on the data. The difference is that marketers tell stories and make claims, while scientists deal with data.
Problems with SEO studies
Remarkable claims get the most attention.
Studies are funded by sponsors who have something to gain.
There’s virtually no peer review.
Success is based on attention not validity.
“citations” are just links – and not as valid as real citations.
Note to self: buy a copy of “The Manga Guide to Statistics”.
So how can we contribute?
Science is slow, boring, and not easy.
Most experiments don’t produce significant results
scientists learn by making mistakes
As SEOs, we’re stat checkers. We’re too busy seeing how much we just made and how many visitors we just got to deal with experiments. That’s so true.
Tips: Publish your data without making claims. Be complete and transparent. Say “this is what I did and this is what I saw” and people will email you, cite you, or repeat your experiment. Invite discussion about your test.
A good example of this was Rand’s .org vs .com test, where he didn’t account for Wikipedia bias and also didn’t account for the fact that most .com domains were brand names (which he excluded).
When it comes to SEO testing, just say what you saw. Let the data tell the story and let others come up with the same analysis that you did. That’s science. Publishing claims is often just a push for attention. Man, that’s so true.
Next up, Jordan LeBaron.
Don’t trust Matt Cutts, test your own shit. Different things work in different situations.
Plan. Execute. Monitor. Share. Maintain Consistency.
Branko Rihtman – a molecular biologist who runs seo-scientist.com
Define question, gather info, form hypothesis, experiment, analyze and interpret data, publish results, retest. That’s the scientific method.
Choose your testing grounds. Don’t use real or purely made-up keywords; use nonsensical phrases built from real words (like “translational remedy” or “bacon polenta”).
How to interpret data:
Does the conclusion agree with expectations? Does it have an alternative explanation? Does it agree with other existing data? Bounce the findings off of somebody. Don’t draw definitive conclusions.
Statistical analysis is hard – get help from somebody who knows statistics. Understand correlation vs. causation, and understand significance. Don’t rely on averages.
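“Don’t rely on averages” echoes Conrad’s point from earlier. A quick sketch with invented daily-visitor counts shows how one outlier drags the mean away from what a typical day looks like:

```python
# Why "don't rely on averages": with skewed data, the mean looks
# healthy while most days are actually mediocre. Numbers are made up.
from statistics import mean, median

daily_visitors = [40, 35, 45, 38, 42, 37, 41, 36, 39, 2500]  # one viral day

print(f"mean   = {mean(daily_visitors):.1f}")   # dragged way up by the outlier
print(f"median = {median(daily_visitors):.1f}")  # what a typical day looks like
```

The mean suggests hundreds of visitors a day; the median shows the typical day is around forty. Reporting only the average here would be exactly the kind of bad analysis the panel is warning about.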
Avoid personal bias. Don’t report what you want to see or what you thought you saw, report what you actually saw.
You can learn a lot from buying Branko a Beck’s. It’s a known fact that scientists can’t hold their alcohol.