Ryan Jones Blog – dotCULT.com Ryan Jones Blogs About Internet Culture, Marketing, SEO, & Social Media

August 31, 2011

Black Hat Technique turned white: Posting to Social Networks

Filed under: Main — Ryan Jones @ 2:28 pm

You always see posts talking about evil black hat methods and automation. You’ve been warned about tools like Xrumer and Scrapebox and how black hats use them to dominate search rankings – but has anybody ever thought about using them for white hat techniques?

The technique I want to talk about today is submitting to social sites like Digg, Fark, Delicious, Reddit, StumbleUpon, etc. Done wrong, this can be a shady black hat technique. Done right, however, it can be a very valuable form of PR. After all, if you aren’t willing to tell people about your own sites, why would anybody else be?

One of my old favorite black-hat tools was something called Social Poster. It’s basically a giant list of social media submission pages that you can use as shortcuts when submitting your site to social media sites. Sure, you can use it for spam, but when you’re genuinely submitting quality content to a few sites, it’s not really spam – it’s promotion. The key is to use tools like this in moderation and use them genuinely.

Lately though, I’ve been playing with another tool called OnlyWire. OnlyWire works like Social Poster above, but it saves you time by doing the submitting for you. You choose which networks to post to (you still need an account on each network) and it does the hard work for you. They even have bookmarklets, WordPress plugins, a blog button for your users, and an API – the features are a bit more robust than Social Poster’s. The pro version starts at only $10 per month, but there’s also a free version with limited submissions. For most white hat SEOs not wanting to submit a ton of stuff to a ton of sites, the $10 version is more than sufficient. (Remember, the key to staying white hat is to only submit relevant stuff to relevant sites, not everything to every site.)

So check out OnlyWire – it’s a useful tool that blurs the line between black hat and white hat SEO to help you increase links and visits. And remember: most black hat tools started as a way to automate white hat ideas, then quickly turned up the volume. Black hat tools can be super useful for legit SEO – as long as you keep the volume at a reasonable level and use some common sense.

August 30, 2011

Google+ Won’t Be Abused Like PageRank

Filed under: Main — Ryan Jones @ 11:02 am

I was reading a post by @rustybrick about Google+ being a ranking factor and noticed how the twittersphere seemed to collectively think it opens the door for abuse. Many SEOs were drawing comparisons to link spam and anticipating the same for pluses. They point to the already available services that sell pluses as proof. (Which, by the way, is a waste of money when you can get them by other methods, like cross-site + buttons in iframes or paying as little as $0.01 each on Turk.)

We can talk about better ways to game the system later, but for now I’d like to say that I don’t think Google+ will present anywhere near the spam problem that pagerank/link spamming does.

I’m not saying that people won’t try to spam pluses – they will. Plus counts will surely be affected by spammers and the companies that sell them aren’t going away. I’m just saying that the spam pluses won’t be effective at all.

We all know that paid and spammy links still work, and work well. The reason they work is that it’s awfully difficult for Google to detect unnatural linking patterns. They’ve gotten pretty good at it, but a lot still slip through the cracks.

Spam pluses will be easily found and ignored in the algorithm

With Google+ and pluses, however, they won’t have that problem. There’s one key difference between PageRank and plus: with plus, your real name is associated with everything you plus, and your history and patterns are all stored in Google’s system. That’s one of the benefits of Google’s real name policy (flawed as it may be).

See, when you plus something you do it publicly – with your real name. Unlike links, selling pluses requires one account per plus, and those accounts require real people. It won’t take long for patterns to emerge and for Google to figure out which accounts are doing the spammy pluses. Once they do, I imagine they’ll take the same scalpel approach they use for paid links, where they simply “cut” them out of the link graph.

Google and Bing already do this pretty well with their Twitter-as-a-ranking-factor implementations. They look at your followers and followees, your tweet history, link history, etc., and come up with something similar to a trust rank. Well, guess what? Google+ knows way more about you than Twitter does – so deciding whom to trust is much easier than it is with Twitter data. And I don’t see Twitter spam accounts working that well. In fact, your Google+ profile is probably tied to your Twitter account anyway, so there’s no reason they couldn’t use the reputation you’ve already established there.

I think that’s the beauty of plus. Having your pluses and history readily available allows Google to form patterns and decide just who to trust in rankings. Regardless of how much of a factor it is (I don’t buy that it’s as small a factor as Barry claims) Google will get pretty good at knowing whose pluses count and whose don’t.

August 17, 2011

Google Analytics Changes Session Definitions

Filed under: Main — Ryan Jones @ 2:48 pm

If you weren’t paying close attention yesterday you may have missed a small announcement from Google about an update to sessions in Google Analytics. It’s a small update that will affect less than 1% of total users, but if you’re in that 1% of users (or have your analytics set up wrong) you may see some crazy effects.

Here’s what changed:
Sessions used to end at the end of the day, after 30 minutes of inactivity, or when the user closed the browser. Now, a session will no longer end when a user closes their browser, and it can also end when a user’s utm campaign value changes. That last one’s the important one.

Basically, if I come to your site from a paid search ad, that’s a session. Now let’s say 20 minutes later I click on a link in a tweet tagged with ?utm_source. Previously, that click would be part of the same open session. Now, however, Google Analytics will end the old session and start a new one.
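To make the new rules concrete, here’s a rough sketch of the session logic as described above – my own illustration of the behavior, not Google’s actual code:

```javascript
// Rough sketch of the updated session rules (illustrative only):
// a session ends after 30 minutes of inactivity, at the end of the day,
// or – new – when the campaign (utm) value changes. Closing the
// browser no longer ends it.
function startsNewSession(lastHit, hit) {
  const THIRTY_MIN = 30 * 60 * 1000;
  if (hit.time - lastHit.time > THIRTY_MIN) return true;                    // inactivity timeout
  if (hit.time.toDateString() !== lastHit.time.toDateString()) return true; // day boundary
  if (hit.utmSource && hit.utmSource !== lastHit.utmSource) return true;    // campaign changed
  return false;
}
```

So a paid-search visit followed 20 minutes later by a click on a ?utm_source=twitter link now counts as two sessions instead of one.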

This change provides better attribution for campaigns – basically, it’s GA moving toward a last-click attribution model. There are plenty of arguments about whether last-click, first-click, or weighted attribution is best, but I’ll leave that for another post.

Let’s talk about potential issues
According to Google, 99% of users won’t see a problem. If you’re seeing your visits rise significantly for no reason though, you might be in that other 1%. You’re bound to see some small increase in visits due to the browser-closing change, but some sites will see huge changes. Here’s how that can happen:

Suppose you set up your Google Analytics lazily or just plain wrong. For some reason (and there are a lot of websites doing this that I won’t pick on) you put ?utm_source on your internal site links. Perhaps it’s a promo box on your homepage that links to one of your other pages and you want to track it as a campaign – an easy mistake for a novice to make.

Under the old system, there was no problem here. Under the new system, though, you’ve got an issue: anybody who clicks on that promo box is essentially ending their current session and starting a new one! That could throw off your total visits/sessions and any calculations that use them.

The solution here is to make sure you’re not using any ?utm_ parameters on your internal site links – that’s not what they were made for. There are plenty of better ways to track that sort of thing in Google Analytics (all beyond the scope of this blog post and better answered by the Google Analytics help files).
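If you want to audit a site for this mistake, a quick check like the one below will flag links that point at your own host and carry utm_ parameters. The helper is my own sketch for auditing, not an official tool:

```javascript
// Returns true when a link points at your own site AND carries a
// utm_ parameter – exactly the setup that now splits sessions.
// Hypothetical audit helper, not part of Google Analytics.
function hasInternalUtm(href, siteHost) {
  const url = new URL(href, 'http://' + siteHost + '/');
  return url.hostname === siteHost && /[?&]utm_/.test(url.search);
}
```

In a browser console you could run this over every link on the page and list the offenders before they skew your session counts.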

In light of the recent change though, now is a great time to re-check your analytics implementation and make sure everything’s working as it should.

August 16, 2011

I am the #1 most handsome man

Filed under: Main — Ryan Jones @ 3:51 pm

I’m writing this post to update the interwebs about the number 1 most handsome man on the internet. Despite beliefs to the contrary, I am the #1 most handsome man on the web, not Andrew Sprung. I’m way more handsome than him. I guess we’ll have to leave this up to the voters though – or simply rig the vote.

I know it’s tough competition, but how tough can it be to win most handsome over this:

Most Handsome Man

Number One Most Handsome Man

Dude, WTF is this?

This is just me having some fun with search. A few months ago one of my colleagues noticed that his blog post was getting traffic for some handsome-man-related terms. He then bragged that he ranked #1 for “#1 most handsome man” – so I did what any good SEO would do and set out to crush his dreams and outrank his site in the SERPs. Mission accomplished! (Oh, and the picture? Do an image search for the terms above and you’ll see why it was included.)

August 9, 2011

Omniture Tracking Codes and SEO

Filed under: Main — Ryan Jones @ 3:40 pm

Yesterday I was eavesdropping on a Twitter conversation between @alanbleiweiss and @omniturecare where they were talking about the use of Omniture tracking IDs and how they relate to SEO. It got interesting, and it reminded me of the solution we use on Ford.com for tackling this issue.

Any good SEO will tell you how much they despise tracking parameters on URLs – you know, things like ?utm_source=, ?bannerid=, ?intcmp=, etc. Every good analytics package lets you set up tracking IDs to track various things: campaigns, referring sites, affiliates, or even which link on the page somebody clicked. In general, they’re very useful.

The problem with tracking IDs, though, is that they can really mess with a site’s SEO. Unless told otherwise (and we’ll get to that), when search engines encounter a URL with a parameter on it, they treat it as a separate URL from the version without it. Basically, by adding ?utm_source=feedburner to your URL, you’ve created two versions of the same page. In short: duplicate content. That can be bad.

Fortunately, Google Webmaster Tools lets you specify parameters that you want Google to “ignore” – but that only works for Google.

Another solution is the canonical tag – but as Alan pointed out in his tweets, it’s only a suggestion, and even though Google follows it, other search engines may not.

So Webmaster Tools and the canonical tag may cover 90% of cases without causing problems, but sometimes that’s not good enough.
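Whichever signal you rely on, the goal is the same: collapse every tracked variation of a URL back to one clean version. Here’s a rough sketch of that normalization – the helper is hypothetical, and the parameter list is just the examples mentioned in this post, not an exhaustive set:

```javascript
// Strips known tracking parameters so every variation maps back to
// the clean, canonical URL. Parameter names are the examples from
// this post – extend the list for your own site.
function canonicalUrl(href) {
  const url = new URL(href);
  const tracking = /^(utm_|bannerid$|intcmp$|fmccmp$)/;
  Array.from(url.searchParams.keys())
    .filter((key) => tracking.test(key))
    .forEach((key) => url.searchParams.delete(key));
  return url.href;
}
```

That clean version is what belongs in your canonical tag and what you’d want search engines to index.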

On Ford.com, we take an interesting approach. Take a look at this truck page and see if you can spot it. When you visit and hover over the name of a truck (like the F-150) you see a clean URL – in this case http://www.ford.com/trucks/F150/

When you click the link, though, you’ll notice that you actually visit http://www.ford.com/trucks/f150/?fmccmp=fvlp-tt-cons-f150 – a URL with a tracking code that tells us you clicked on F-150 from the left column rather than the middle or right. We then use that for other analytics and customizations that are beyond the scope of this post – but it’s a useful and required tracking code.

Why do we do that? Simple: we don’t want to count search engines in our analytics, and we don’t want them to index the URLs that have the tracking code in them – as that would also bias our data.

So how do we show you (and search engines) a clean URL without parameters, but take you (and not search engines) to the one with parameters? Easy – we use the HTML/JavaScript onClick event.

Essentially, the code looks like this:

<a href="http://www.ford.com/trucks/f150/" onclick="[function that builds the URL with the tracking parameter]; return false;">F-150</a>

That extra “return false” in there isn’t an accident either – it cancels the default navigation so the browser doesn’t also follow the clean href. This technique won’t work without it.
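For illustration, a fleshed-out version of that handler could look something like this. The function names are hypothetical – only the fmccmp parameter and the “return false” trick come from the example above:

```javascript
// Builds the tracked URL by appending the fmccmp code to the clean href.
// Hypothetical helper, not Ford's actual code.
function buildTrackedUrl(href, code) {
  const url = new URL(href);
  url.searchParams.set('fmccmp', code);
  return url.href;
}

// Click handler: send the visitor to the tracked URL and return false
// so the browser doesn't also follow the clean href in the link.
function trackAndGo(link, code) {
  window.location.href = buildTrackedUrl(link.href, code);
  return false;
}
```

The link itself would then read <a href="http://www.ford.com/trucks/f150/" onclick="return trackAndGo(this, 'fvlp-tt-cons-f150');">F-150</a> – a clean href for crawlers, a tracked URL for real clicks.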

This way search engines follow a nice clean href, but users who actually click get the tracking ID appended. This technique isn’t a guarantee by itself – it’s just one extra step we can take to keep the tracking parameters hidden. Combined with the canonical tag and Webmaster Tools parameter exclusion, it works pretty well to keep those tracking IDs from showing up in search engines.
