What I learned at SearchFest 09 (SEMpdx)

This is a summary of the new SEO tricks that this particular old SEO dog learned at SEMpdx SearchFest 2009, held at the Portland Zoo conference facility in Portland, Oregon, on March 10, 2009.

These items are in no particular order:

What is EVERY visitor to your site worth to you?

If you know nothing else about your visitors, it is essential to know the dollar value of each visitor to your site: how much each visitor is worth to you. You calculate this by taking the value of a conversion (a visitor doing what you want them to do on the site, e.g., signing up for your newsletter, buying your product, or calling you on the phone) and dividing it by the number of visitors it takes to get that conversion.

Value of conversion divided by number of visitors needed to get that conversion, equals the value of each visitor.

You should already know the VALUE of a conversion if you’re doing any Google AdWords marketing. This is different from the COST of a conversion, which is what each conversion costs you when you pay Google AdWords or Yahoo Search Marketing at the end of the month.

The value of a conversion is figured this way: if it takes 100 conversions to make one sale of a product that sells for $1000, then each conversion is worth $10. Then ask how many visitors it takes to make one conversion. If it takes 100 visitors to get one conversion (each conversion being worth $10), then each visitor is worth ten cents.

We have one client getting a 17% conversion rate from visitors to his site (signing up for his newsletter). That’s a really high conversion rate; he’s been tweaking his offers and the landing pages for several years to get there.

On average, over time, 1% of all conversions (on that site) will result in a sale of about $6000 worth of services. So each conversion (newsletter signup) is worth $60 (and costs us $3.43 to produce, using Google Adwords).

So if 17% of all visitors convert, then (where’s my college algebra when I need it?) how much is one visitor worth? It takes 588 visitors to make 100 conversions and thus one sale, so those 588 visitors are worth one sale of $6000. That makes each visitor worth (tada!) $10.20 on that site. ($6000 divided by the 588 visitors needed to make one sale = $10.20.)
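The arithmetic above can be sketched in a few lines of Python, using the figures from this example (a 17% conversion rate, with 1% of conversions becoming a $6000 sale):

```python
# Worked example of the visitor-value math, using the article's figures.
conversion_rate = 0.17      # visitors who sign up for the newsletter
sale_rate = 0.01            # conversions that turn into a sale
average_sale = 6000.00      # dollars per sale

# Each conversion carries 1% of a $6000 sale.
value_per_conversion = average_sale * sale_rate          # $60.00

# Visitors needed for one sale: 100 conversions at a 17% conversion rate.
visitors_per_sale = (1 / sale_rate) / conversion_rate    # about 588 visitors

# Value of one visitor, two equivalent ways.
value_per_visitor = average_sale / visitors_per_sale     # about $10.20
assert abs(value_per_visitor - value_per_conversion * conversion_rate) < 1e-9

print(f"Each conversion is worth ${value_per_conversion:.2f}")
print(f"It takes about {visitors_per_sale:.0f} visitors to make one sale")
print(f"Each visitor is worth about ${value_per_visitor:.2f}")
```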

That’s a stat we need to track and manage for each of our clients.

What does a site look like with Javascript disabled?

One of the speakers at SEMpdx brought up something I will be using in future analyses (SEO audits) of websites: what does the site look like when you disable JavaScript? Can you still navigate it? Do sections of it disappear? Google, by design, does not read JavaScript (which is client-side technology, meaning it runs in the browser, not on the server). So if your copyright date is supplied by JavaScript that looks up the current year and inserts it into your copy, then Google will never know what the copyright date is.

If you want to hide links or cut all PageRank flowing from one page to another, then the best way to do that is by hiding the link inside JavaScript.

I’ve been told (I don’t know if I believe it yet) that using rel=nofollow on a link to a page can result in that page not being indexed at all by Google, even if there are other links to that page that do NOT use rel=nofollow.

For example, if your home page links to your services page several times (perhaps through the top navigation tabs, the side navigation menu, and the body text of the home page, making three links to the services page), then, as I understand how it works, Google is only going to count one of those links anyway. But if you put rel=nofollow on two of those links, then the third link may also be treated as rel=nofollow by Google. So we shouldn’t use rel=nofollow with complete abandon. I’d love to hear from anyone with data confirming or disputing that assertion.

But if you put two of those links into JavaScript, then Google won’t even see them or know they exist as links. (We’ve done that for many years.) Google will only find the link that uses a standard a href= anchor tag to create the hyperlink to the services page.
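To illustrate the point, here is a minimal Python sketch using the standard library’s html.parser: a crawler that does not execute JavaScript (as Google is described above) only finds links written as real a href= tags, so a link produced by document.write goes unseen. The sample page and its URLs are invented for illustration.

```python
from html.parser import HTMLParser

class LinkCollector(HTMLParser):
    """Collects href values from real <a> tags in raw HTML."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            href = dict(attrs).get("href")
            if href:
                self.links.append(href)

sample_page = """
<a href="/services.html">Services</a>
<script>
  // This link only exists after a browser runs the script, so a
  // plain HTML parse (and, per the talk, Google) never sees it.
  document.write('<a href="/services.html">Services (menu)</a>');
</script>
"""

parser = LinkCollector()
parser.feed(sample_page)
print(parser.links)   # only the plain <a href> link is found
```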

Don’t use the old Urchin Google Analytics code

If the Google Analytics code you are using contains the term “Urchin” in it, upgrade it to the new code available from Google Analytics. It will give you much more statistical analysis capability.

What happens to 404 pages on your site is important

Some sites do nothing about 404 errors. If a link breaks within the site, there is no system in place to catch and handle the broken links that result in a “404 Page Not Found” error.

We have for many years used a simple piece of PHP code that returns a 404 error code (so Google knows the link is broken) and then routes the request over to the sitemap.html or sitemap.php file, so that a visitor trying to reach a page that no longer exists lands on a sitemap and can select where to go next. We do this simply because it is helpful to visitors; a blank page that only says “404 Page Not Found” is not very helpful.
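Our handler is PHP, but the idea translates to any server stack. Here is a rough sketch of the same approach as a hypothetical Python WSGI app (the page list is invented): keep the real 404 status for the crawler while serving sitemap links to the human visitor.

```python
# Sketch: return a true 404 status, but show sitemap links in the body.
# The page list below is hypothetical.
SITEMAP_LINKS = [
    ("/", "Home"),
    ("/services.html", "Services"),
    ("/contact.html", "Contact"),
]

KNOWN_PATHS = {path for path, _ in SITEMAP_LINKS}

def app(environ, start_response):
    path = environ.get("PATH_INFO", "/")
    if path in KNOWN_PATHS:
        start_response("200 OK", [("Content-Type", "text/html")])
        return [b"<h1>A real page</h1>"]
    # Unknown page: keep the 404 status so crawlers know the link is
    # broken, but give the visitor somewhere useful to go.
    body = "<h1>Page not found</h1><p>Try one of these pages:</p><ul>"
    body += "".join(f'<li><a href="{p}">{t}</a></li>' for p, t in SITEMAP_LINKS)
    body += "</ul>"
    start_response("404 Not Found", [("Content-Type", "text/html")])
    return [body.encode("utf-8")]

# Demo: request a page that no longer exists.
status_holder = {}
def start_response(status, headers):
    status_holder["status"] = status

body = b"".join(app({"PATH_INFO": "/old-page.html"}, start_response))
print(status_holder["status"])   # 404 Not Found
```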

However, if you don’t set up something like the system above, many hosting servers will route “404 Page Not Found” errors to their own default error page, which transfers the PageRank from those broken URLs to their default 404 page, removing that PageRank from your site and giving it to them.

If you have thousands of pages and many 404 errors, this can become a severe PageRank leak.

For help setting up 404.php and 401.php pages that route correctly to your sitemap while still returning a 404 error to Google, contact us.

Link System Too Confusing?

One gal from Google, Susan Moskwa, said that if your internal linking system (navigation) is confusing to visitors, then it will be confusing to Google. She also reiterated what most SEO people know: “Don’t have more than 100 links on a page.” And she strongly advised SEO types like me to “Make usability for visitors the top priority of your navigation system and linking structure.”

Her advice ties in with “siloing,” the recommended SEO practice of “containerizing” content. Instead of having a flat site with everything at the root level, break things up into folders; link down into them from the home page, and link between pages in the same “silo” or “container,” but don’t link between the sub-pages of different silos. That flows the maximum PageRank down to each category (aka silo or container). Organize your site around your keywords, and you will have a better chance of Google figuring out what each section of your site is about, and thus of ranking better for it.
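As an illustration (all paths invented), a siloed structure might look like this, with cross-links kept inside each folder:

```
/                           <- home page links down into each silo
/widgets/                   <- silo landing page for the "widgets" keyword
/widgets/blue-widgets.html  <- links to other /widgets/ pages
/widgets/red-widgets.html
/gadgets/                   <- separate silo for the "gadgets" keyword
/gadgets/mini-gadgets.html  <- does NOT link across to /widgets/ pages
```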

Press Releases need an RSS Feed

Doug Hay of Expansion Plus brought this up, and while it is obvious, I hadn’t thought of it. (D-oh!)

Press releases sent out through PRWire or MarketWire need to be placed on your own website, with an RSS feed, about half an hour before they go out on PRWire or MarketWire. (We also use the article marketing services of SendArticles.com.)
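A minimal sketch of such a press-release feed, built with Python’s standard xml.etree library (the site name, URLs, and release content are invented for illustration):

```python
import xml.etree.ElementTree as ET

def build_press_feed(site_title, site_url, releases):
    """Build a minimal RSS 2.0 feed; releases is a list of
    (title, url, description) tuples."""
    rss = ET.Element("rss", version="2.0")
    channel = ET.SubElement(rss, "channel")
    ET.SubElement(channel, "title").text = site_title
    ET.SubElement(channel, "link").text = site_url
    ET.SubElement(channel, "description").text = "Press releases"
    for title, url, description in releases:
        item = ET.SubElement(channel, "item")
        ET.SubElement(item, "title").text = title
        ET.SubElement(item, "link").text = url
        ET.SubElement(item, "description").text = description
    return ET.tostring(rss, encoding="unicode")

feed = build_press_feed(
    "Example Co. Press Releases",
    "https://www.example.com/news/",
    [("Example Co. launches new widget",
      "https://www.example.com/news/widget-launch.html",
      "Announcement text goes here.")],
)
print(feed)
```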

We are contacting our clients and advising them that they need an RSS feed on their news/press pages on their sites.

Doug also mentioned that putting a link to a video in a press release is a good idea.

Why does Wikipedia Rank so High at Google?

Dan Boberg quoted Google CEO Eric Schmidt as saying, “Wikipedia is mankind’s greatest gift to mankind.” I can’t find that quote online, so make no guarantee of authenticity; I am quoting Dan Boberg, not Eric Schmidt.

But if true, that is certainly revelatory as to WHY a lame Wikipedia article, written by people with an axe to grind or an apple to polish, ranks so highly.

Because Eric likes Wikipedia.

MSN.com is busted

Derrick Wheeler of Microsoft gave a great short presentation on handling large problematic websites.

Microsoft.com (MSN.com or “msncom” as it is called at Microsoft) is a combination of about 200 websites, containing about 1.5 billion pages. That’s Billion with a B. Many of the pages no longer exist. Many pages are reachable through different paths, so a single page can have up to 500 different URLs pointing to it. Things are shifting all the time, and there’s no standardization of technology, site architecture, or anything else, really.

It’s just as hit-or-miss as it seems.

I don’t envy Derrick his job – he’s trying to get that gargantuan site organized and sorted out, and working with about 200 different groups to do so. Ugh. I thought I had problems.

The good news is, he recently implemented some restructuring that got rid of half a million unnecessary pages from their index. Yippee! Only 1,499,500,000 pages to go!


And there you have it, what I learned from 8 hours of seminars at SEMpdx.

Jere Matlock

