Ryan Jones Blog – dotCULT.com
Ryan Jones Blogs About Internet Culture, Marketing, SEO, & Social Media

April 27, 2012

Penguins Don’t Have To Be Scary – My SMX Toronto Recap

Filed under: Main — Ryan Jones @ 9:39 pm

Google penalties at work

If you missed the Big Google Kitchen Sink Panel at SMX Toronto featuring yours truly, @duraninci , @kdobell , and @aaranged you’re partially in luck – because I’m going to recap my part of the presentation here. That’s right, I’m liveblogging myself (3 days later.)

If you’ve read any SEO blogs lately you’ve suddenly become afraid of cute little animals like Pandas and (more recently) Penguins. (Ok, I didn’t say Penguin during my presentation, I said “over optimization” but this is the benefit of blogging 3 days later – I can change things.)

Penguins and Pandas don’t have to be ferocious though, they can still be all cute and cuddly – if you do your SEO right.

Anybody new to SEO who started reading the major industry blogs or attending conferences in the last few months might get the impression that Google’s algorithm works by compiling a list of relevant URLs and then applying penalty after penalty until only 10 links remain. After all, penalties are all we seem to talk about lately. The good news is (and Google will probably confirm) that their algorithm doesn’t quite work that way.

Matt Cutts Google Penalty

Google's goal isn't to run around like Eric Cartman as a hall monitor and penalize everything in sight. Really.

Google isn’t out to penalize. They’re out to reward quality content. Specifically, quality content that helps a user solve a problem. More on that later.

Algorithm updates and penalties are a fact of life. Google makes over 350 changes to their algorithm each year. That means we as SEOs need to change our methods. In the old days, SEO was all about reverse engineering an algorithm. That’s just not possible today. Instead, we need to practice what’s called Sustainable SEO.

Wayne Gretzky once said “A good hockey player plays where the puck is, a great hockey player plays where the puck is going to be.” The same thing is true for SEO. We can’t optimize for the Google of today and expect to get anywhere tomorrow. We have to anticipate what Google is trying to reward and start turning our sites into that. (Likewise, anticipate what they’re going to start penalizing and stop doing that.)

In actuality it’s not that hard to do. During his SMX keynote @outofmygord talked about how the Titanic was sunk by the 7th iceberg it was warned about. Had they not ignored the first 6 warnings, they wouldn’t have been hurt by the 7th. The same is true of Google. @mattcutts has given us hundreds (if not thousands) of iceberg warnings over the last decade. He’s been screaming about bad things not to do, and how Google wants quality content. The problem is, most of us SEOs just didn’t listen. We kept going full steam ahead with our other efforts, mainly because they kept working. Then one day we woke up, logged into our analytics platform and saw this:

Google Penalty

Boom! Iceberg. And the worst part? It completely surprised all of the SEOs who were ignoring the warnings, and some of them are realizing that there aren’t any lifeboats left.

It’s time we start listening to the warnings: Algorithm Chasing Isn’t Sustainable.

Have you ever asked “How can I rank for this term?” If so, you’re doing it wrong. Instead, put on your user hat and ask “when somebody searches for this term, what would they expect? What would they find useful?” – then go build that. You’ll get much better results.

We sometimes forget that we as SEOs are the 1% of searchers. It can be beneficial to put ourselves in their shoes and look at search how they see it. (Hint: they see it a LOT differently.)

A goal to strive for: if a user does a search and doesn’t see your site, they should think search is broken. That’s called being an authority – or in Google Quality Rater terms, a vital result.

So how do you be that vital result? By differentiating and adding value. (By the way, @rhea did a talk about this at SMX West. It was awesome so I’m not going to go into it.)

When Microsoft released Bing (Bing? What’s that doing in a Google panel?) Bill Gates said that “the future of search is verbs” and he couldn’t be more right. Users are searching with the intent of doing something. They no longer just want information. The more your site can help them accomplish that task in a useful manner, the more the search engines will reward you. At least, that’s what they’re trying to do.

Remember, we see tons of articles about sites getting penalized by Google and “ruined” – but for each site that loses rank another one gains. The trick is to be the site that gains. To do that you’ve got to play where the puck is going, not where it’s been.

Did you catch my talk and just want the slides? No problem, you can download them here – or even download the slides from past conferences I spoke at.

March 14, 2012

New Job, New Slides, and Other Updates

Filed under: Main — Ryan Jones @ 3:29 pm

I haven’t been blogging a lot lately (here or on SEJ for that matter) because I’ve been extremely busy. In the last 90 days I’ve spent a week at SES San Diego, SMX West in San Jose, and another week in Atlanta for my actual job.

Speaking of my actual job, starting this month I’m no longer working at Team Detroit and am no longer working on the Ford account. I’ve taken a position as manager of search strategy & analytics at SapientNitro where I’ll be managing a search team and working a lot on the Chrysler account.

As always, now is a great time to mention that the views on this blog (and twitter, facebook, etc) are always mine and never those of my employer or client and that I’ve not been paid to post anything.

If you’re looking for slides from those last 2 conferences I mentioned, I’ve created a section of my personal website for posting my SEO Presentations and conference speaking schedule. The SMX west deck has a great @mattcutts penalty photoshop too – feel free to use it if you like. There’s also some liveblog coverage of my SMX domain migration panel by Lisa Barone and by Barry Schwartz.

Hmm, what else? I’ve been reading a lot of great SEO articles lately too and am hoping they spark me into taking time out of my day to actually write some more blogs. Attribution models are a hot topic with me right now, so you may see me talking about them soon. I’ve also read a great article about how not to lose your shit online.

Anyway, that’s about it. Hopefully I’ll be back blogging soon. Take care.

January 6, 2012

Racism and Class Discrimination are Different

Filed under: Main — Ryan Jones @ 12:29 pm

You can always tell it’s election season by the number of racism and discrimination lawsuits that start popping up in the news. It also serves as a good reminder about classism (that’s a word right?) and racism.

The debate usually centers around the same 2 key issues with only the location of the debate changing. It’s always the date/time/location of polling places, and the attempt to ID voters.

Contrary to popular belief, requiring IDs from voters is NOT racist. Assuming that African American voters are less likely to have valid ID, however, IS racist. It seems to me that the very people claiming these bills are racist are helping further the racial stereotype by doing so.

Saying that your race won’t leave the house to vote on their own unless the polls are tied to church services isn’t doing any favors either.

I’m not advocating any of these practices, I’m just saying that counting class discrimination as racism is in itself another form of racism – one that can easily be prevented.

Quite simply, the ability to go to the polls and the ability to have a valid ID have NOTHING to do with the color of one’s skin and are NOT racial issues. These are CLASS issues – purposely designed by the powers that be to try and keep the lower class from voting – regardless of race. (again, not right either but that’s the subject of another article)

When people claim that these efforts are racist, they’re also insinuating that certain races are by default lower class – and that’s what’s really racist.

January 5, 2012

Analytics Are Everywhere

Filed under: Main — Ryan Jones @ 3:06 pm

Vending Machine Analytics

Have you ever looked at something and then asked “why?” Have you ever wondered “why did somebody make that decision?” I find myself doing it all the time; increasingly I’m asking myself “I wonder what type of metrics they looked at when making that decision.”

While walking down the Vegas strip at Pubcon with some friends (on our way to an epic dinner) who begged me not to name them in this anecdote, we couldn’t help but notice the guys handing out hooker cards. For the next 20 minutes we started asking questions like “How much are those guys paid?” “Do you think they work on commission?” “How do they track their conversion rates?” “What type of conversion rates do you think those cards have?” “Do they use unique call tracking phone numbers?” “Is this the world’s oldest affiliate network?” “Man, I bet they’d be able to do an awesome panel at an affiliate marketing conference.”

Chances are that they don’t use analytics at all, but what if they did? Imagine the possibilities.

The same thoughts were racing through my head about 20 minutes ago when I went to the vending machine to grab myself a Cherry Coke Zero. I looked at the stock of the vending machine and said “Why on earth would there be only 2 rows of diet coke, but 4 rows of grape soda? There’s no way a can of grape soda outsells a bottle of diet coke at an ad agency – is there?”

Then I spotted the candy vending machine and noticed that a bag of M&M’s cost twice as much as a bag of skittles, which was 12% cheaper than the bag of cheesy popcorn and 5% more expensive than gum. The pricing just didn’t make sense. Was it demand based? Item-cost based? Margin based? Or was it simply random?

Do you think vending machine operators stock based on previous demand? The fact that we’re always out of Diet Coke tells me that our contractor doesn’t, but what if he did?

It’s what you look at that matters

In both of these cases (and you thought I wouldn’t be able to tie hookers and vending machines together…) there are probably metrics involved, but what metrics? I know that both the hookers and vending machine operators are looking at things like cost, revenue, margin and the other basics. They wouldn’t be able to sustain their business if they didn’t.

But those metrics can’t help them make day to day business decisions. In other words, they’re not as actionable as other metrics they could be looking at. In the vending machine example, things like rate of sale, comparative demand, average price per sale, etc. could obviously help them increase sales – but are they using them? (I’ll leave the hooker metrics to you as a take home exercise.)

And that’s the point. Analytics are everywhere; but it’s not just if you’re measuring, it’s what you’re measuring that matters. The same applies to your SEO, affiliate marketing, and general website strategies. The out of the box metrics like pageviews and visits over time are great, but how actionable are they? Can they really help you make decisions to improve your business?

If you’d like to learn more about making your analytics actionable, I’ll be giving a presentation with Brent Chaters at SES Accelerator in San Diego. I’ll be sharing some tips about what we measure on Ford and what decisions those metrics guide us to.

November 28, 2011

How To 301 Redirect A Website

Filed under: Main — Ryan Jones @ 3:37 pm


How do I 301 Redirect my website? I get asked this question more often than I should, so instead of constantly answering it I’ve decided to create a blog post I can point people to. But first, let’s start off with some background.

Why Would I Want To Redirect A Page?

There are several valid reasons to implement a 301 redirect. Maybe you just want to move a page to a new domain. Maybe you’re combining two pages into one and need to redirect one of the URLs. Maybe you have a vanity URL that you’re using in TV or radio because it’s shorter. If you’re taking down last year’s product, it’s a good idea to redirect the page to a newer version instead of returning a 404 not found. In fact, anytime you take down content I’d look for a place to redirect that’s helpful to the user.

So what does 301 mean anyway?

301 is the HTTP status code for a permanent redirect. The official definition of a 301 is as follows:

“The requested resource has been assigned a new permanent URI and any future references to this resource SHOULD use one of the returned URIs. Clients with link editing capabilities ought to automatically re-link references to the Request-URI to one or more of the new references returned by the server, where possible. This response is cacheable unless indicated otherwise.”

In layman’s terms it basically tells a search engine “Hey, this page has moved to this new address. Update your records accordingly.” It also allows the search engines to properly transfer any trust or ranking signals associated with the old domain to the new one (if they so choose).

301 is the preferred method of redirection for SEO. Sadly, the default on most IIS web servers is a 302. A 302 returns a status of Found (Elsewhere). Realistically it should only be used when the web server is doing it due to some hiccup, and generally never on purpose.

Wait, I thought 302 meant temporary? You’re right, it did. It’s been replaced by the 307 redirect. A 307 redirect tells search engines “this is temporary, please re-visit the original URL the next time you come calling.” With a 307 or 302, no pagerank/trust/authority is passed to the resulting location. In other words, this is bad. There’s also a rarely used 303 redirect which says “go to this other location, but use a GET request instead of a POST.” If done right, you shouldn’t ever need to use a 303. (If you’re still interested in the differences, here’s a great status code pamphlet by Sebastian.)
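
If you want a quick way to see which status code a URL is actually returning, here’s a tiny sketch in PHP (get_headers() is a built-in; the URL is just a placeholder):

<?php
// Dump the response headers for a URL. get_headers() follows redirects by default,
// so you'll see each hop's status line (e.g. "HTTP/1.1 301 Moved Permanently").
$headers = get_headers('http://example.com/');
foreach ($headers as $header) {
    echo $header . "\n";
}
?>

If the first status line says 302 or 307 when you meant the move to be permanent, that’s the thing to fix.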

Ok, so how do I implement a 301 redirect?

There are several ways to do a 301 redirect. .htaccess is the easiest, but we’ll also cover how to do it at the page level, as well as how NOT to do it. We’ll use the basic example of redirecting the non-www version of your website to the www version. So for the examples we’ll be redirecting http://example.com to http://www.example.com.

301 redirects with .htaccess

If you’re running Apache and can edit your .htaccess file, this is the simplest way. Just put the following in your .htaccess file.
RewriteEngine On
RewriteCond %{HTTP_HOST} ^example\.com [NC]
RewriteRule ^(.*)$ http://www.example.com/$1 [L,R=301]

301 redirects in PHP

In some cases, you can’t do it with .htaccess so you’re stuck doing it at the page level. If you need to do a 301 redirect in PHP, here’s how:

<?php
header("HTTP/1.1 301 Moved Permanently");
header("Location: http://www.example.com/");
exit(); // stop the script so nothing else gets sent after the redirect
?>
Note: you’ll need to make sure you don’t echo out any HTML or text before executing these functions.

301 redirects in ASP

If you’re using ASP, it looks like this:

<%
Response.Status = "301 Moved Permanently"
Response.AddHeader "Location", "http://www.example.com/"
Response.End
%>

301’s in Python

If you’re using Python you’re already geeky enough that you don’t need me to tell you, or you work at Google. But here’s how to do it anyway (using Django):

from django import http

def view(request):
    return http.HttpResponsePermanentRedirect('http://www.example.com/')

301 Redirects in IIS

If you’re using IIS I pity you, but it’s still possible to do a 301 redirect. You just need to pay attention to detail and make sure you check that box that says permanent. Here’s a good guide to configuring IIS redirects. Here’s another step by step guide with screenshots.

How NOT to Redirect

No primer would be complete without telling you how NOT to do it so you can avoid these SEO death traps.

Don’t Use JavaScript Redirects
JavaScript redirects aren’t guaranteed to be crawled by search engines and may not work with all users. Here’s what they look like so you can spot them.

<script type="text/javascript">
window.location.href = 'http://www.example.com/';
</script>
Note: they could also use document.location or document.url in place of window.location.href, but those are deprecated or don’t work in all browsers.

Don’t Use META refresh tags either.
Meta refresh tags are even worse than JavaScript ones, and they look like this:

<meta http-equiv="refresh" content="0;url=http://www.example.com/">

And that’s all there is to it. Happy redirecting.

November 11, 2011

Over Thinking SEO: Inverse Document Frequency

Filed under: Main — Ryan Jones @ 9:26 pm

If you came to Pubcon this week you probably noticed some very odd questions being asked at some sessions. My favorite question was when somebody asked @mattcutts for a better method of doing doorway pages. Yeah, seriously!

My 2nd favorite question though was also Matt’s fault. At the end of one of Alan K’necht’s sessions somebody asked about inverse document frequency and how they can improve theirs to get better rankings. If you’re scratching your head right now and saying “dude, WTF?” then you’re reacting precisely how you SHOULD react.

So where did this question come from? Well, at the pubcon mixer Matt, myself, and somebody I can’t remember were bullshitting about mostly non-SEO related stuff. Anyway, we started talking about Amit Singhal’s background and how smart he is and somehow gravitated to computer science research. That’s when we got talking about the topic of Inverse Document Frequency (idf) and how big of an expert that Amit is in information retrieval.

I take some of the blame because as Matt was attempting to explain IDF in technical terms I jokingly said “basically it means that keyword stuffing is more effective where there aren’t many competing documents.” Sadly only Matt got the joke, because it wasn’t just a reference to inverse document frequency, it was also a reference to how Google most likely uses inverse document frequency. Everybody else seemed confused, but giddy like they had just stumbled upon some secret ranking sauce. Sorry guys, you didn’t :'(

Ok, so what does all this stuff mean? It’s quite simple but let’s start with the basics. When we say collection frequency we’re referring to how many times the given term occurs across all the documents on the web. When we say document frequency we’re referring to the number of pages on the web that contain the term. Pretty simple right? (side note: comparing your document frequency to the collection frequency is most likely one way Google detects both relevance AND keyword stuffing)

But to do that, they need the inverse document frequency. Inverse document frequency is simply a measure of the importance of a term. It’s calculated by dividing the total number of documents by the number of documents containing that term, and then taking the logarithm. An even simpler explanation is to say idf is computed such that rare terms have a higher idf than common terms.
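
In textbook notation (where N is the total number of documents and df(t) is the number of documents containing the term t):

\mathrm{idf}(t) = \log\left(\frac{N}{\mathrm{df}(t)}\right)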

So why? Basically, if only a few documents contain a term, those documents should get a bigger relevance boost for that term than they would if the term appeared in lots of documents. That’s all idf really is. It’s nothing you need to worry about – unless you’re writing algorithms or dealing with document retrieval.

But I’m pretty sure Google doesn’t just use idf alone anyway. That’s way too simple of a way to return results. So let’s get more technical. Matt was getting really technical in our talk and although I don’t think he said the term, he was actually describing tf-idf, and that’s what my keyword stuffing joke was about. So, sorry if that got people thinking along the wrong lines.

So what the hell is tf-idf? Don’t let the minus sign fool you. tf-idf is actually calculated by multiplying the term frequency of a document by the inverse document frequency. This means a document with lots of a term, where there aren’t many other documents containing that term, will have a much higher tf-idf; hence my joke about keyword stuffing.
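
If it helps to see it spelled out, here’s a rough PHP sketch of the textbook calculation – purely an illustration (the function and variable names are mine), not anything Google actually runs:

<?php
// Textbook tf-idf: term frequency in one document, weighted by how rare the
// term is across the whole collection. $documents is a hypothetical array of page texts.
function tf_idf($term, $document, $documents) {
    $term = strtolower($term);
    $words = str_word_count(strtolower($document), 1);
    $tf = count(array_keys($words, $term)); // how many times this page uses the term
    $df = 0; // how many documents in the collection contain the term
    foreach ($documents as $doc) {
        if (stripos($doc, $term) !== false) {
            $df++;
        }
    }
    if ($df == 0) {
        return 0; // the term appears nowhere, so it scores nothing
    }
    $idf = log(count($documents) / $df); // rare terms get a higher idf
    return $tf * $idf; // lots of a rare term = a high tf-idf (hence the keyword stuffing joke)
}
?>

So a term used 10 times on a page but found in only 5 of 1,000 documents scores 10 × log(200) ≈ 53, while a term that appears in every single document scores 10 × log(1) = 0.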

And that’s basically all there is to it. Sadly, inverse document frequency won’t be the new buzzword at next year’s Pubcon, but that doesn’t mean it isn’t interesting. Information retrieval was a once-boring area of computer science that suddenly became the most interesting thing in the world to geeks once search engines came about. I loved that talk, and it made me pine for my days in college studying computer science.

Hope that helped clear everything up. Pubcon was a blast and I’m sure many of us look forward to implementing the various theories we learned. Well, all of them except inverse document frequency of course.

November 8, 2011

Liveblogging Pubcon: SEO hot trends

Filed under: Main — Ryan Jones @ 3:28 pm

Going to take a stab at live blogging some sessions here. More so that I can try out coveritlive.

October 24, 2011

What Percent Of Visitors Are Logged In To Google?

Filed under: Main — Ryan Jones @ 5:31 pm


There’s been a lot of panic lately from webmasters about Google’s recent announcement that they’re going to be defaulting logged in searchers to secure search. Metrics and SEO gurus instantly realized that this change would mean a loss of referrer data from search – specifically what keyword the visitor used.

Whatever Google’s real reasons may be, this change started as a privacy feature but has quickly caused SEOs everywhere to run screaming.

But it got me asking – just how much of my traffic will this actually affect? So I went about finding out.

Since the feature hadn’t rolled out yet, and since Google SSL doesn’t pass any referrers anyway, I had to figure out a way to detect who was logged into Google. I found 3 methods for doing this. I won’t provide the code or intricate detail, but I’ll briefly describe them.

The first involved linking to an image that you had to be logged in to a Google service to see, then using some jQuery to see if the image loaded. A pain in the ass and way too long to code. The next best involved using a Google+ API, but not everybody (see: Google Apps users) has Google+ yet so that wouldn’t be fair either.

I settled on exploiting Google AdSense re-targeting. Not 100% ideal, but it should give me a pretty good picture. Basically, for AdSense to re-target you, the browser has to pass your cookie back to the website running AdSense. That’s detectable, and that’s what I did. (Again, this is NOT fully tested, so if you see a gaping hole here please let me know.) Also of note, I did this on a different site of mine, not this one.

A quick 3 lines of PHP and a _gaq.push later and I was tracking it in Google Analytics.
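
I’m not posting the detection code itself, but the tracking half looked roughly like this – a sketch where is_logged_into_google() is a hypothetical stand-in for whatever check you use (not the AdSense trick itself), and the value just gets stored as a classic ga.js custom variable:

<?php
// Sketch only: $loggedIn comes from whatever server-side check you settled on.
// is_logged_into_google() is a placeholder, not real code from this experiment.
$loggedIn = is_logged_into_google() ? 'yes' : 'no';
?>
<script type="text/javascript">
var _gaq = _gaq || [];
// Visitor-level custom variable (slot 1) so Analytics can segment
// logged-in vs. logged-out visitors in reports.
_gaq.push(['_setCustomVar', 1, 'googleLoggedIn', '<?php echo $loggedIn; ?>', 1]);
_gaq.push(['_trackPageview']);
</script>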

In a 2 day span (Saturday and Sunday – normal traffic lows for me anyway) I had 8,980 unique visits. Of those, 2,103 had a Google cookie set. That’s 23.4% of all my visitors.

But let’s not panic yet

That’s 23.4% of ALL visitors. They could have searched, typed in my URL, or followed a link.

When I look at just visitors from Google.com that were logged in, that number shoots up to 30% (2068 out of 6538 – yes most of my traffic indeed does come from Google.) That’s a pretty high number!

Is there reason to panic? Probably not just yet. While my tests show 30%, Google claims this number will most likely be about 10%. Since the change only applies to Google.com users and not toolbar searchers, I can see the disconnect – however if Google decides to roll this change out to any Google user who’s logged in, no matter how they search, then I do see it becoming an issue.

10% won’t change your data enough to make it less actionable. Neither will 30%, however what worries me is the difference in behavior I noticed between logged in Google users and regular users. It’s quite possible that this change may affect areas other than just your keyword reports.

Here’s some more interesting insights about logged in Google users.

  • Less than 1% of Yahoo and Bing visitors were logged in to Google
  • Logged in users had twice as many pageviews per session and spent twice as long on my site.
  • Logged in users had an almost 0% bounce rate, as compared to a 50% bounce rate overall
  • Logged in users were 20% more likely to be a repeat visitor.

October 13, 2011

Sustainable SEO

Filed under: Main — Ryan Jones @ 12:21 pm


Good SEOs get hit by algorithm changes. Great SEOs see their traffic increase.

If I’ve learned one thing in my years as an SEO it’s that success comes not from chasing algorithms but from chasing visitors.

Last month I asked my team what came to mind when they thought of sustainable SEO. I got a few definitions of the word sustainable, but nothing concrete that applied to SEO – so I took the question to twitter. Much to my surprise, I didn’t get any good answers there either.

I’d like to think the first 3 sentences of this post paint a good picture of what sustainable SEO is, but just in case they don’t let’s spend a few minutes talking about it.

Way back in 2004 @mattcutts let slip a comment about b tags carrying slightly more weight than strong tags due to Google’s slowness to catch up to HTML versions. Several prominent SEOs went around changing their strong tags to b tags only to change them back years later.

Fast forward to a couple years ago when @randfish preached about pagerank sculpting with nofollow and tons of SEOs spent countless hours redoing footer links on their clients’ sites only to find out that the technique never worked in the first place.

Modern day content farms like Mahalo and Demand Media are more recent examples of algorithm chasing strategies that failed to provide long term value.

These are all examples of non-sustainable SEO. Quick-fix algorithm chasing may work in the short term, but it’s just going to create more work for you in the long term and possibly expose you to unknown issues.

Sustainable SEO is all about quality

No matter what Google’s algorithm looks like 10 years from now, you can bet that searchers will still want useful sites and Google’s algorithm will be focused on returning those quality, useful sites. The individual factors that determine the rankings may change, but the goal will still be the same – and that’s what you should focus your SEO efforts on.

The trick to creating sustainable SEO strategies is not to react to algorithm changes, but to anticipate them. With every algorithm change Google makes they ask themselves “does this make the results more relevant to the query?” and you should do the same. Instead of starting with a site and asking “how can I make this rank for [term]” start with that term and ask yourself “what kind of results would I want to see?”

Sustainable SEO is all about increasing the quality of your site. While things like bounce rate, time on site, conversion rate, page speed, etc. are not (for the most part) ranking factors, they are great indicators of the quality of your page. If you’ve got issues in these areas, chances are there’s something you can do to increase the quality of your pages.

Once you learn sustainable practices, you can start working on self-sustaining practices.

What do I mean by self sustaining? The 2nd part of creating a sustainable strategy is using techniques that will not only survive the test of time, but continue to work for you as time goes on.

A good example is how we handle vehicles on Ford.com. You’ll notice that the URL for the 2012 mustang is the same as it was for the 2011 mustang, which is the same as it was for the 2010 mustang. They’re all at www.ford.com/cars/mustang

Amazon does the same thing with Kindle URLs. Apple does it with their products. The MacBook Pro always lives at http://store.apple.com/us/browse/home/shop_mac/family/macbook_pro regardless of which model is out.

By keeping product URLs the same we allow all of the previous years’ link building efforts to work for future products. If a product ever becomes discontinued, a simple 301 redirect to a similar product preserves all that link equity.

Sustainable SEO starts with your linking structure and HTML. Good strategy and clean code are an essential foundation for you to build upon. A good linking structure can improve your ranking on search engines, as well as attract more visitors who are interested in what you have to offer.

Then, it’s about looking to the future.

Don’t wait for Google to tell you what you should be doing before you implement it. People were looking at page speed way before it was a ranking factor because they knew that a faster site was more useful to their users. People like @pageoneresults were preaching about rel=author and other microformats way before Google put out a blog post. LinkedIn and Allrecipes had hCard and hRecipe implemented way before Google even started showing rich snippets. They recognized ways to make their sites better for users and did so – without Google telling them. When the algorithm eventually changed, they were in prime position to take advantage of it. That’s sustainable thinking.

There’s really no secret to finding what Google is going to use next either. All you have to do is read the HTML specifications. Rel=prev and rel=next have been in there for quite some time now. It’s only logical that Google use them. rel=search and rel=tag also exist. They’re not used yet, but will they be? Most likely.

So stop chasing the latest algorithm change and start focusing on what the next one will be.

TL;DR

  • Ask yourself “what would make this more useful to users?” and do that – regardless of whether or not it’s a ranking factor
  • Sustainable SEO is about chasing users not algorithms
  • It starts with good link structures and content strategies
  • Sustainability = Quality
  • Keeping up on HTML standards can keep you ahead of the Google announcements

October 11, 2011

When Search People Take Over

Filed under: Main — Ryan Jones @ 2:09 pm

I can’t wait until search minded people take over the business world. I’m not talking about SEOs, but people who truly understand the value of search marketing and how search is just the tip of the iceberg.

The paradigm is shifting. It started with search but will evolve to every form of business. Those who don’t adapt their business models will be left behind begging congress to bail them out.

You probably already guessed from the photo above, but I’m talking about the switch from push marketing to pull marketing. If you haven’t noticed, it’s all about pull now – and companies are noticing.

The traditional push model consists of creating a product and selling you on the need. Think of the auto industry, where workers create vehicles that sit on a lot and salesmen talk you into the “best car for you.” The pull model works the other way. It starts with a need and then helps the consumer fill it. A great example is Ford Australia – where you won’t find a pre-built car sitting on a lot. Instead, customers tell Ford exactly what they want in a car and Ford builds it for them. Guess which model has higher conversion rates and increased customer satisfaction?

Bill Gates recently said “the future of search is verbs” and he couldn’t be more right. Verbs and pull go hand in hand. They both express a need. Consumers today don’t care about your hyped up PR. They don’t want a carefully crafted marketing message. Today’s consumers have a problem at hand and they want a solution. They’re looking to do something – and the marketer who listens to their problem and provides a solution will be the one who capitalizes.

But it’s not just marketing

All of business is changing to the pull model. Nothing is safe. TV. Radio. Music Sales. Newspaper. Magazines. Even your local department store circular. All of these things have traditionally operated on a push model in which they’ve spent millions of marketing dollars to convince you what you should buy – and all of these areas are struggling to adapt to today’s pull oriented consumer.

TV viewers no longer plan their day around network schedules. TV guides are a relic. They’ll watch the shows, but on their own terms. DVR, on-demand, and subscription services like Netflix have made it possible for viewers to watch what they want, when they want, where they want, and on their choice of device – yet several networks are still trying to limit access to shows, delay them several days, and pass laws trying to break your DVR.

Movies and music no longer become available for purchase every Tuesday at your local Best Buy. In fact, music isn’t even purchased – it’s streamed. That new hit song won’t be bought on an album, it will be streamed to friends at a party on Spotify, added to a Turntable.fm queue, and shared with Facebook friends in a playlist. At least, that’s what listeners are trying to do amid constantly increasing royalties that price these services out of existence.

Radio isn’t just listened to in the car. It’s available on multiple devices and even saved for later via podcast – that is, of course, if your favorite radio station isn’t delaying those streams for 24 hours in a misguided attempt to increase real time ratings.

And therein lies the problem. Consumers haven’t stopped wanting music, movies, and TV – they just want it in a way that it fits with their schedule. This is a battle of control; one that the consumers will ultimately win.

That’s why I can’t wait for search people to get involved. Imagine if Google were running cable TV or a music label. How do you think they would change TV, or even banking? True search people think differently. Their thought processes start by trying to figure out what the user is looking for and how they can provide it. That very thought process is lacking in most of today’s business leaders.

Old media is dying. Search is the future – and I’m not just talking about Google / Yahoo / Bing. I’m talking about everything, literally. And it’s exciting. I can’t wait to see how today’s pull oriented consumer shapes the business of the future and which companies still cling to their pushy ways.

