Archive for the 'SEO Resources' Category


Contextual Links for SEO

Getting backlinks to your site is probably the most difficult, time-consuming, misunderstood, pain-in-the-ass task associated with search optimization, yet it is widely considered to be, ultimately, the single most important element of a successful SEO campaign. We’ve talked before about the mechanics of acquiring backlinks and the kinds of places you try to acquire them from. Today, we’ll get into a notion that is frequently misunderstood and even more frequently misapplied.

The contextual backlink

A contextual backlink is simply a link that appears inside text. Let’s see an example.

Are you moving to or out of Eugene, Oregon? Store your extra things in a heated, secure facility. Specializing in environmentally controlled storage, RV storage, boat storage, moving supplies, and all your Eugene storage needs.

Of course, the link, Eugene storage, is contextual. Why do we care? Because Google seems to really like contextual links, as if they believe such a link is more likely to be naturally placed than a link that looks like:

Eugene Storage
West Eugene Heated Storage

Grants Pass Storage Facility
Click here

This type of link works especially well in blogs—and you might have noticed that now almost every damn blog you encounter is literally stuffed with them. Look at this blog excerpt from TechCrunch:

Contextual links example 1
Anything in green text, whether underlined or not, is a link. The red dots on the right edge are my marks indicating lines containing contextual links. I see eight in this post alone. Some of them might be considered useful, like the link for “soundcloud integration,” as somebody might conceivably be interested in or curious about that topic. But really. A link for “existing arrangement”? That link has nothing to do with anything except some small SEO boost for somebody who most probably paid for it.

We’d have to call that sort of contextual linking SPAM. The sad thing is that the scheme actually seems to work. That’s just the thing ComDex (remember ComDex?) was doing to boost JC Penney’s rankings on thousands of keywords.

Just ’cause it works don’t necessarily make it right.

Still, contextual links—when applied in a responsible manner—can be very helpful for the reader, as well as for the search performance of the site being linked to.

Here’s an example of some contextual links that are a little less spammy.

Warehouse management troubles cost American wholesalers millions of dollars per year. It’s a pity, because there are a host of technological solutions that could be employed. For instance, a complete inventory management system using RFID hardware combined with warehouse management software could reduce costs due to lost inventory.

In the above paragraph, the contextual links lead to pages that might actually be useful to someone reading the article they’re embedded in. They are also clearly links, and the link text is descriptive enough that a reader would know just what she’s clicking on.

One last note about contextual links. You can easily use CSS to make contextual links disappear into the text—no blue color, no underline. Don’t do it. For one thing, it makes them pretty useless as links. For another, more important thing, when Google finds them they will know without question that your intention was to fool them.

And it’s not nice to fool mother Google.


Another SEO Infographic

Last Monday we showed you a pretty sweet infographic representation of SEO. So why not continue? Here’s another we found at Search Engine Land, The Periodic Table of SEO.



Keyword Density: Does It Matter?

If you’ve ever looked into SEO as a possible solution to your website traffic shortage, you’ve probably come across the concept of “Keyword density.” This is usually expressed in a statement something like:

Is your keyword density optimal? Get your keyword density to the magic number xx% and riches will rain down on you!

Run a Google search on “keyword density” and you’ll get about 1.7 million results, many of them tools designed to analyze your density so you can optimize it and become like an SEO God.

Okay. Keyword density refers to the ratio of a particular keyword to the rest of the indexable words on a given page. So, in the sentence:

Try Seafood Charlie’s deluxe seafood package, just right for Mom!

The density of the keyword “seafood” would be 2 out of 10 words, or 2/10, or 20%. Supposedly, this tells Google, Bing, et al that the page is about “seafood” and convinces them to rank it higher than a similar page with only a 10% density for that particular keyword.
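Curious what those density-checker tools are actually computing? The arithmetic is trivial. Here’s a minimal sketch in Python; the function name and sentence are just illustrations, not any particular tool’s code:

```python
import re

def keyword_density(text, keyword):
    """Occurrences of `keyword` as a fraction of all words in `text`."""
    words = re.findall(r"[a-z']+", text.lower())
    if not words:
        return 0.0
    hits = sum(1 for word in words if word == keyword.lower())
    return hits / len(words)

sentence = "Try Seafood Charlie's deluxe seafood package, just right for Mom!"
print(round(keyword_density(sentence, "seafood") * 100, 1))  # prints 20.0
```

Two “seafood”s out of ten words: 20%. That’s all there is to it.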

Only the story goes that too high a keyword density means you are “over optimized” and that counts against you. The gurus tell you that there is a sweet spot of keyword density. I’ve heard anywhere from 2% to 9% pretty often.

So, is it true? Do search engines really care about the ratio of keyword to text in any given page?

Um, well, maybe not so much. At least not in the simple way usually implied. Sure, Google’s famous 200+ element algorithm probably figures in density somewhere, in some fashion, to some particular end. But let’s be clear here:

There is no “optimum” keyword density percentage.

What there probably is: a red flag that arises if the density is high enough to raise suspicions of manipulation. And the keyword density number that triggers that red flag will not be predictable at all because:

  • it will be a different value for different keywords
  • it will be a different value for different industries
  • it will be a different value for different page text word counts
  • it will be a different value for different contextual situations
  • it will depend on whether there is other evidence of manipulation on the same page

Which brings up the real reason keyword density is a bit of a straw man in a stuffed shirt chasing a red herring.

Google (and Bing too, unless they’re really behind the curve) has grown smart enough to parse context. And not by counting words. Modern algorithms understand context in much the same way you do. They scan headlines and subheads, they note emphasized text, they determine what a paragraph is actually about by reading the sentence subjects and relating them to the words surrounding them. They pick out the key concepts and relate them to synonyms appearing in the immediate vicinity. Relate them to images. Relate them to the text in links pointing to that page.

But mostly, they DON’T GIVE A DAMN if the keyword density of a page is 2% or 10% or 35%, just as long as the page content makes sense and isn’t full of deceitful practices.

Proof? The number 1 page in Google for the search term “SEO” has a keyword density of just 0.77%. That’s pretty low by any standard.

The good news? It turns out that if you write reasonably well, with your subject matter firmly in mind, with a keyword or two on a sticky note at the bottom of your monitor, you should naturally come up with a lovely keyword density. No, really: one of our own posts, with a density of 2.88%, was not tweaked in any way to enhance the number. (I know because I wrote it, and I DO NOT CARE about keyword density, not one little bit.)

Write naturally. Write for a human audience. Know what you’re talking about. And it doesn’t hurt to understand what the humans are most likely to type into Google when they want to read what you’ve written.


10 Step SEO: Sitemaps, part 2

Let’s put this 10 Step thing to bed, then, shall we? Last week we talked about HTML sitemaps, the kind that live on your site and are linked to from your pages and that actual real people might even be able to access and use. This week, we’ll delve into their more esoteric cousin: the XML sitemap.

In 2005, Google unveiled what they called the “Sitemaps Protocol.” The idea was to create a single format for building a sitemap file that all (or at least “most”) search engines could use to find and index pages that might be otherwise difficult to crawl. This protocol uses XML as a formatting medium. It’s simple enough to code by hand, but robust enough to support dynamic, database-driven systems.

At first, only Google crawled sitemap.xml files, but they encouraged webmasters to create and publish them by opening a submission service. You would build an XML sitemap, upload it to your web server, then submit the URL to Google via their webmaster interface. The Goog would crawl it, and—in theory—follow all the links and index all your pages.

It actually worked rather well. Pretty soon, all the web pros were calling the system “Google Sitemaps” and uploading and submitting like crazy. With so many sitemaps installed on so many websites, it wasn’t long before the other major engines adopted the protocol.

Are XML sitemaps a magic bullet?

No. Don’t be silly. But they are useful additions to a website’s structural navigation, especially for complex architectures that may be resistant to spider crawls. We’ve used them on many sites and find that a valid XML sitemap can lead to a faster, more accurate indexing.

So what is this thing?

It’s really a pretty simple construction. You could easily make one without any understanding of XML at all. The Sitemaps Protocol dictates a text file, with the extension “xml,” using this template:

<?xml version="1.0" encoding="utf-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>http://www.example.com/page.html</loc>
    <lastmod>2011-01-01</lastmod>
    <changefreq>monthly</changefreq>
    <priority>0.5</priority>
  </url>
</urlset>

Every page on your site that you want crawled gets an entry between <url></url> markers, with the page’s address inside <loc></loc>. You do not have to set every parameter. This would be a valid sitemap for a site with a home page and three internal pages:

<?xml version="1.0" encoding="utf-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url><loc>http://www.example.com/</loc></url>
  <url><loc>http://www.example.com/about.html</loc></url>
  <url><loc>http://www.example.com/services.html</loc></url>
  <url><loc>http://www.example.com/contact.html</loc></url>
</urlset>

The other parameters—lastmod, changefreq, and priority—are nice ideas, but ideas we’ve never seen have any effect. So use ’em or don’t. You can write an XML sitemap with any text editor. Just be sure to save it with “utf-8” encoding and with the name sitemap.xml. (To save in “utf-8” encoding in Notepad, click “save as” and you’ll find it in a pull-down menu at the very bottom of the box.)

And wait! It can be even simpler! The Sitemaps Protocol also accepts a simple list of URLs in a plain text file, like:

http://www.example.com/
http://www.example.com/about.html
http://www.example.com/services.html
http://www.example.com/contact.html

(The file would be named “sitemap.txt” instead of “sitemap.xml” and must also be “utf-8” encoded.)

And wait again! Even simpler than that! There are a host of online tools that will turn a list of URLs into an XML sitemap, or even spider your site for you and produce the sitemap file from that.
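Those generator tools aren’t doing anything magical, either. Here’s a rough sketch of the idea using Python’s standard XML library; the URLs and function name are made up for illustration:

```python
from xml.etree import ElementTree as ET

SITEMAP_NS = "http://www.sitemaps.org/schemas/sitemap/0.9"

def build_sitemap(page_urls):
    """Turn a flat list of page URLs into a minimal Sitemaps Protocol urlset."""
    urlset = ET.Element("urlset", xmlns=SITEMAP_NS)
    for page in page_urls:
        entry = ET.SubElement(urlset, "url")
        ET.SubElement(entry, "loc").text = page
    return ET.tostring(urlset, encoding="unicode")

xml = build_sitemap([
    "http://www.example.com/",
    "http://www.example.com/about.html",
])

# Write it out with utf-8 encoding, as the protocol requires.
with open("sitemap.xml", "w", encoding="utf-8") as f:
    f.write('<?xml version="1.0" encoding="utf-8"?>\n' + xml)
```

A database-driven site would feed the same function its page list from a query instead of a hand-typed list.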

There are just a couple of rules to be mindful of:

  • Sitemap files cannot be over 10 MB
  • Sitemap files can be compressed as a gzip file
  • The maximum number of URLs per file is 50,000
  • Multiple sitemaps can be linked together with a “Master Sitemap” (the protocol calls this a sitemap index file)
  • Sitemaps should not contain duplicate URLs
  • Sitemaps should be referenced in your robots.txt file using this notation:
    • Sitemap: <sitemap_location>
      (of course, “sitemap_location” would be the actual URL address of your sitemap file)

When you have the file ready, you should use one of the many XML sitemap verification services. An invalid sitemap won’t help much.
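If you want a quick local sanity check before (or in addition to) an online validator, well-formedness and the protocol’s numeric limits are easy to test yourself. A sketch in Python; the helper name is ours, not any real tool’s:

```python
import os
import tempfile
from xml.etree import ElementTree as ET

NS = "{http://www.sitemaps.org/schemas/sitemap/0.9}"

def check_sitemap(path):
    """Rough sanity checks against the Sitemaps Protocol limits."""
    report = {"size_ok": os.path.getsize(path) <= 10 * 1024 * 1024}  # 10 MB cap
    tree = ET.parse(path)  # raises ParseError if the XML is malformed
    locs = [el.text for el in tree.iter(NS + "loc")]
    report["url_count_ok"] = len(locs) <= 50_000  # 50,000 URL cap
    report["no_duplicates"] = len(locs) == len(set(locs))
    return report

# Demo with a tiny sitemap written to a temp file.
sample = ('<?xml version="1.0" encoding="utf-8"?>\n'
          '<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">\n'
          '  <url><loc>http://www.example.com/</loc></url>\n'
          '</urlset>')
with tempfile.NamedTemporaryFile("w", suffix=".xml", delete=False,
                                 encoding="utf-8") as f:
    f.write(sample)
print(check_sitemap(f.name))  # all checks True for this little file
```

This won’t catch everything a schema validator will, but it catches the embarrassing stuff.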

Should you submit the file to search engines?

You can. If your site is brand new, it might help. But if you’ve done it right—complete with an entry in the robots.txt file—you really shouldn’t have to. Google, Bing, and Yahoo all know where you live.

Other sitemap resources
Wikipedia on Sitemaps
Google’s List of Sitemap Generators
XML Sitemap Validator


5 Things Search Engine Spiders Can’t Read

Search engine spiders are really good at two things: capturing text and following links. This is, in fact, pretty much all that they do. They hit a site, read all the text they can see and then follow every link they can find to another page, where they do the same thing. Over and over. Until they run out of links to follow. Which is all well and good, unless your website makes it hard or impossible to do these things. When that happens, you may find your pages missing from the search results. And that would suck.

Here are the 5 main reasons a spider has trouble with your site:

    1. Flash. Sites built on Flash can be awesome, beautiful, seductive, dynamic and the best ones leave your visitors with a very positive experience. They also leave spiders with an empty stomach.

      Example: Waterlife. Great flash presentation, very effective messaging. No spiders allowed.

      A spider sees it like this:

      Spidered Text: (none)
      Spidered Links: No spiderable links found.
      Meta Keywords: No meta keywords found.
      Meta Description: The interactive story of the last great supply of fresh drinking water on Earth.
    2. JavaScript. Some of the coolest site navigation schemes ever are built on JavaScript. They can be elegant, striking, intuitive and effective. Unless you happen to be a spider.

      Example: EricJ. Very cool sliding landscape thingy takes you for a ride, but spiders go nowhere.

      A spider sees it like this:

      Spidered Text: Eric Johansson
      Spidered Links: (none)
      Meta Keywords: No meta keywords found.
      Meta Description: No meta description found.
    3. Text in Images. Images are great—they’re what make the internet pretty and interesting. And you get to use any font you want, including some pretty wild typography. But spiders don’t see pictures, so all your pretty pictures—and any words inside them—are wasted.

      Example: Font Fabric. Great typographic design work here. But notice that in the spider report—even though a bunch of links are read—none of the words in the images show up in the spidered text.

      A spider sees it like this:

      Spidered Text: Fonts | Free Fonts, Buy Fonts, Windows Fonts Home About Contacts Subscribe 13 Comments Gabriel Sans 38 Comments GOTA Free Font 51 Comments HERO Free Font 57 Comments NULL Free Font 20 Comments Aston font 38 Comments AGE Free Font 4 Comments Reader font 42 Comments SAF Free Font 129 Comments Dekar Free Font Categories All fonts (41) Free (21) Tutorials (1) Fontfabric™ – Content (RSS) – Comments (RSS) – The Unstandard theme
      Spidered Links: (a long list of links, omitted here)
      Meta Keywords: buy fonts, free fonts, custom fonts, high quality fonts, nice fonts
      Meta Description: Our goal is to create high-quality fonts which stand in a unique class of their own, and which will serve as a good base for any designer project whether it be web, print, t-shirt design, logo etc.
    4. Image Navigation. You want your navigation to stand out, to jump right off the page, right? Well, get yourself some fancy, colorful, maybe even animated buttons! Hell yeah! Problem is, if the button is purely graphic, the spider can’t read your link text, even if it has no trouble following the link. This happens a lot less now that CSS makes fancy, readable buttons easy enough to do, but I still see it from time to time.

      Example: ASU Student Intranet. Functional navigation, should work fine for all the little student visitors. But notice in the spider view below. Nowhere do you see the words “new employees.” Spiders see the links, but not the link text. You do want the spiders to see your link text, don’t you?

      A spider sees it like this:

      Spidered Text:
      Student Affairs Intranet JavaScript DHTML Menu Powered by Milonic Student Affairs Intranet Welcome The Student Affairs Intranet (SAI) is designed to share information quickly and easily within our departments. Reference materials and interest topics are located under “Staff” and “Tech” on the menu bar, and new items are frequently being added to these menus. This site is best viewed using Internet Explorer (IE) version 6.0 or higher (Windows), Safari 2.0 or higher (Mac), or Firefox 3.0 or higher (both Windows and Mac). For easy access to this site, you may want to Current Virus Update (Windows) 5583 (4/13/09) 7.1.x & 8.5.0i software Upcoming Events… Sorry… Nothing is scheduled at this time See detailed events listings Mac OS X Users (new 11/13/07): VirusScan for Mactel 8.6 has been released to the ASU Community for use. The software requires a PowerPC or Intel based Mac computer, Mac OS X Tiger (10.4.6 or later) or Mac OS X Leopard (10.5) Operating system. VirusScan for Mactel 8.6 can be downloaded from New Virus Scanner in Town: Windows PC Users: VirusScan 8.5i has been released to the ASU community. This version of VirusScan is compatable with Windows NT, 2000, XP and 2003. VirusScan 8.5i (ASU configured) can be downloaded from the ASU Web Site Patch 6 is also available from the same location…..
    5. Your Mind. That’s right, hard to believe but true. Search spiders cannot read your mind. You have to show them every damn thing you want them to see. You have to explain every damn thing you want them to figure out. You need to give them very clear directions to whichever damn place you want them to go.

Oh, yeah. And spiders used to be unable to read PDFs. But now they mostly do. So never mind.


10 Step SEO # 10: Sitemaps

Okay, then. We’re closing out our series on 10 Step SEO with something that a lot of folks neglect, disparage, or misunderstand: the lowly sitemap.

Just like the name says, a sitemap is a map to a web site. Simple. Really, just a list of links that point to all the pages. There are two kinds of sitemap we’ll discuss here: HTML and XML. Today, we’ll focus on HTML sitemaps; next Thursday, we’ll get into the more exotic XML variety.

HTML sitemaps are placed on a regular web page and typically linked from the home page. Conceivably, this sort of reference could prove useful for people who are looking for specific information on a large or complex site. And in fact, up to 25% of internet users once relied on sitemaps at least some of the time to find content. I say “once” because that number peaked around 2002. Since then, sitemap use has steadily declined, to around 7% in 2008 and to something somewhat less today. So if nobody’s really using your HTML sitemap, why do you need it?

The answer is, of course, search marketing. Search spiders aren’t very smart (as we’ve noted here before). They have trouble following certain kinds of links and reading some sorts of link text. Sometimes they get trapped in loops they can’t get out of. Sometimes they index vast numbers of dynamically generated pages that don’t really exist. Sometimes they skip entire sections of a site. An HTML sitemap—properly designed—provides an easy set of pathways into the site for spiders to follow.

And by properly designed, we mean that HTML sitemaps should:

  • be made of nothing but text links.
  • contain no links other than the map links (no need for normal page navigation here).
  • be built on a logical structure, for example: Home → Category → Sub-category → Product page.

  • contain no more than 50 links per page*. If you have more links than that, you should split your sitemap into multiple pages. For instance, a first sitemap page could link only to the category and sub-category pages, with additional sitemap pages linking to the sub-category/product pages. Or you can break them into multiple pages based on a simple alphabetical sort. Or whatever. Just be sure to link the multiple sitemap pages to each other.
  • be prominently linked from the home page. Sitemap links in the footer are okay as long as there isn’t much content above them on the page. We prefer linking to the sitemap from above the main header whenever possible.
  • be kept up to date. Larger sites should consider investing in scripts or other technology to automate their sitemaps. Generating them dynamically will ensure that the links are always current.
  • be linked from every indexable page on the site. If a spider comes into your site for the first time from somewhere in the deep pages, this will help it crawl back up the structure to find the rest of them.

* Reason: some spiders will only follow and index a set number of links per page, always starting from the first they encounter. This number is different for different search engines, but 50 seems pretty safe. This is also the reason to place your sitemap link at the top of the page. If your homepage has 50+ links on it before you get to the sitemap, some engines may never see it.
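The 50-link rule of thumb boils down to simple chunking. A sketch (the link list is invented for illustration):

```python
def split_sitemap(links, max_links=50):
    """Break a long list of sitemap links into pages of at most max_links each."""
    return [links[i:i + max_links] for i in range(0, len(links), max_links)]

pages = split_sitemap([f"/product-{n}.html" for n in range(120)])
print(len(pages), [len(p) for p in pages])  # prints: 3 [50, 50, 20]
```

Note that if each sitemap page must also link to the other sitemap pages, those cross-links count against the 50 too, so you might lower max_links accordingly.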

Next week we’ll end this mess once and for all with a discussion of the mysterious and elusive XML Sitemap Protocol.


Jakob Nielsen’s Alertbox (Jan. 6, 2002)
Jakob Nielsen’s Alertbox (Sept. 2, 2008)
Sitemap Usability (2008, PDF)
The Right Way to Think about Sitemaps (Aug 9, 2007)


Ethical Blog-Commenting Your Way to Links

I was going to write a post today about blog commenting as a link-building strategy. I had it all worked out. I was going to tell you that even though I utterly despise blog comment spammers, there really is an honest and ethical way to get links from other people’s blogs. No, not the all-too-common:

Hi! I really really liked your site! I’m thinking of a number from 3 to 10! If you can guess it, I’ll link to you. Can you give me any pointers on such a great blog? Thanks!

Then I saw this post over at SearchEngineWatch (which is one of my favorite places for SEO news and views). Contributing blogger Kristi Hines (not to be confused with Chrissie Hynde, who is another kettle of fish entirely) nailed it so well that I’m just going to point you to her article and call it good.

How to Use Blog Commenting to Get Valuable Backlinks and Traffic