Chapter 4: Designing and Developing a Search Engine Friendly Website

Let’s look at how search engines interpret text, images, and other web content, and why they don’t see a page the way we do when we view it in a browser.

Indexable Content

Web crawlers largely ignore content that is not text, not because it is insignificant but because they cannot reliably interpret it. This is why it is crucial to put your most critical content into text HTML format. Content such as images or Flash files will largely be overlooked, but elements such as Java applets, Flash files and images can be made a bit more prominent with a few steps:

  • All images (jpg, png, gif) will benefit from having alt text assigned to them. These alt attributes allow search engines to “see” and identify images (see the sketch after this list).
  • Every Flash and Java plug-in can have related text placed on the same page.
  • Search boxes can be supplemented with text links and crawlable navigation, so the content behind them stays reachable.
  • Audio and video content should have a text transcript added so that it can be found by search engines.
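
To make this concrete, here is a minimal sketch of such text alternatives in HTML; the file names and wording are hypothetical:

    <!-- Alt text lets crawlers “see” what the image shows -->
    <img src="red-running-shoes.jpg" alt="Red running shoes, side view">

    <!-- A text transcript makes audio/video content indexable -->
    <video src="product-demo.mp4" controls></video>
    <p><a href="product-demo-transcript.html">Read the transcript of this demo</a></p>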

Making Links Crawlable

One of the most important, and most often overlooked, aspects of website design and development is the ability of a search engine to crawl through a link structure. It is vital for search engines to be able to see navigation links so that they can find all of a website’s content and list it in their indexes. Designers often build a fantastic navigation system for human users but forget that navigation must contain text or HTML elements for crawlers to see and identify it.
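
For illustration, a navigation menu that crawlers can follow is simply a set of plain HTML text links; the page names here are hypothetical:

    <!-- Plain HTML links: crawlers can follow each href -->
    <nav>
      <a href="/products.html">Products</a>
      <a href="/about.html">About Us</a>
      <a href="/contact.html">Contact</a>
    </nav>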

Submission Forms

Sometimes users are required to complete a form, such as a login or a survey, before continuing to more content. It is important to note that crawlers will not try to submit forms, so any pages that sit behind a password, login, or survey will not be seen by search engines.

Unparseable JavaScript Links

Search engines often pay little, if any, attention to links generated by JavaScript. You should therefore include additional HTML links, or replace your JavaScript links with HTML altogether, so that crawlers can find and identify your links.
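
As a rough sketch of what this replacement looks like (the page name is hypothetical):

    <!-- Hard for crawlers: the destination is buried in JavaScript -->
    <span onclick="window.location='/catalogue.html'">Catalogue</span>

    <!-- Crawler-friendly: a plain HTML link to the same destination -->
    <a href="/catalogue.html">Catalogue</a>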

Using robots.txt or Meta Robots to Block Pages

If you wish to keep your site, or parts of it, from being crawled by search engines, you can use a file called robots.txt or the meta robots tag. Be aware that these methods block all compliant search engine crawling, not just rogue bot access, so only use them if you genuinely do not want your site or parts of it to be seen or indexed by search engines.
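
As a sketch, the two mechanisms look like this; the directory name is hypothetical, and the meta tag belongs in the <head> of the individual page:

    # robots.txt at the site root: keep compliant crawlers out of /private/
    User-agent: *
    Disallow: /private/

    <!-- Meta robots tag: keep this one page out of the index -->
    <meta name="robots" content="noindex">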

Frames and Iframes

The use of frames and iframes bears consideration. While both are technically “crawlable”, they create structural problems in how search engines follow links and organise the information they find. Working around these problems is the domain of those well versed in search engine design, indexes and how search engines follow links. Unless you are a highly skilled advanced user, it is best to avoid frames altogether.

Search Boxes

A related issue is that webmasters often make the mistake of believing that search engines will use a site’s search box to discover what visitors are searching for. Sadly, crawlers do not perform searches, so content reachable only through a search box remains inaccessible until a crawled page links to it; millions of pages are hidden this way.

Likewise, links in Flash, Java and other plug-ins have similar implications. Such links should only be used when their behaviour is well understood.

Too Many Links?

Perhaps the most important consideration is pages that contain hundreds of links. Search engines will only crawl a limited number of links on a page, a restriction their designers impose to cut down on spam and to preserve the integrity of rankings. Links beyond that limit risk never being discovered and indexed.

Keyword Usage and Targeting

Keywords are an important factor in the search process; they are the foundation of the search itself. Information retrieval is built on keyword searches: rather than keeping every page in one enormous database, search engines track websites through millions of smaller keyword-based indexes, which is what makes information retrievable within seconds. If you want to achieve a high ranking on search engines, it is important to be specific with your words.

Keyword Domination

Keywords dominate how we communicate exactly what we want to search engines. With the right keywords, the search engine can match what we are looking for, and the more specific they are, the better. Using the proper spelling, the correct word order, punctuation, and capitalisation all help the engine find the right pages and rankings.

To maximise rankings for your site, use the exact keywords you want to be ranked for in your metadata, text, and titles. Make your key phrases as detailed as possible; this will help you achieve higher rankings in the search engines.

Keyword Abuse

One tactic to avoid, because it does more harm than good, is “stuffing”: adding too many keywords to links, text, URLs and meta tags. It is an unnecessary tactic people use to manipulate and mislead search engines.

In the past, web search tools depended on keywords as a signal regardless of how those keywords were actually used. Search engines still cannot interpret language to the same standard as humans, but machine learning has pushed them forward considerably.

An important thing to remember is to use keywords strategically and in a natural way. Keep your content consistent with itself: everything on the page should be relevant to your topic throughout, so avoid throwing unrelated material together. In terms of ranking, direct your keywords at what people are actually looking for rather than trying to rank highly for every keyword.

On-Page Keyword Optimisation

Finding the formula for getting to the top of search engine rankings is simple once you know how search engines use and target keywords to rank content. This knowledge will allow you to efficiently produce online content that consistently places at or near the top of search results.

Research into search engine optimisation patterns and long-term trends in keyword usage indicates that knowing when, where, and how to use keywords throughout your article is paramount to your page placing highly on the search results page. The higher the ranking, the more people see your site. Human beings have a short attention span, and most do not venture beyond the first page of search results. If your site does not measure up because it has not been optimised through proper keyword usage, then few people, if any, will ever see it.

Driving traffic to your site and increasing your revenue do not start with someone clicking on your site; they start with getting your content ranked highly enough on search engines to be seen, so that people can find your site to click on. Optimisation is key whenever you create content for your site.

It all starts with location: placing keywords in specific places throughout your content will rocket (and keep) your results near or at the top of any search engine. The first step is knowing how to properly tag your content. Start with the title tag, positioning your keyword or keywords at the beginning of it; placing this wording as near as possible to the start of the title tag is the first step in search engine optimisation.
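
As a minimal sketch, assuming a page targeting the hypothetical phrase “handmade leather boots”:

    <!-- Keywords placed at the very start of the title tag -->
    <title>Handmade Leather Boots | Care and Buying Guide</title>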

Next, work the keyword(s) into the body of your page. Weaving the keyword(s) through the tapestry of your copy a minimum of two to three times will catch the eye of search engines. Pages with a lot of content can benefit from including the keywords more often, but don’t overuse them: doing so detracts from the meaning of your site and makes your content redundant and boring.

Do not forget that there is power in pictures. Images can be used to help drive traffic to your site and increase your visibility on search engines twofold: web search results and image search results. The alt attribute is used as an accessibility tool, but most people miss out on the perfect opportunity it provides to push their web page to the top of the rankings in both web search and image search results. This oversight can easily be corrected by including keyword(s) or keyword phrases at least once in the alt attribute of each image on your page.
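
Building on the earlier alt text sketch, here the alt attribute carries a hypothetical target keyword phrase so the image can surface in both web and image search:

    <!-- Hypothetical file name and keyword phrase in the alt attribute -->
    <img src="handmade-leather-boots.jpg"
         alt="Handmade leather boots, brown, ankle height">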

Your page is well optimised if you have completed these steps, but there is still more you can do to improve your ranking. Include your keyword(s) or keyword phrases in the URL. And even though meta description tags have no effect on how your page is ranked, it is still important to include keywords and phrases in them: the meta description is what searchers read as a description of your content on the search engine results page, making it vital to attracting clicks.

When used properly, keywords will effectively and efficiently position your page, gaining you the exposure your website deserves. You will place at or near the top of search engines and drive revenue-creating traffic towards your page while exposing your content to a greater audience.

Title Tags

Of all the components in a web page, the title is one of the most crucial elements to focus on, because the title represents everything the page is about. It should be a very brief yet accurate and condensed reflection of the page. Titles are very useful to users visiting a web page, but search engines also use them to send visitors to a page based on the title’s relevancy.

It’s important to understand just how significant title tags are to search engine optimisation. Here are four very critical things to remember when optimising title tags for search engines and for usability.

  • First, pay close attention to length: search engines display only the first 65-75 characters of a title on the results page.
  • Second, place keywords that are of greater significance closer to the beginning. This ensures the search engine renders the most relevant keywords and makes it more likely that a user will click your result.
  • Third, words at the beginning of a title tag carry more weight, which is why keywords usually come first and the brand name goes at the end. If your brand is well known, it will help users who recognise it, and ending title tags with a brand name builds awareness among those who don’t.
  • Finally, consider what impression your title tags give of your brand. Readability is critical here, and you really need to consider the whole user experience: everything should be engaging and grab the attention of users. Search engines are smart enough to gauge what kind of experience a user will have on your page, so your job is to make sure they find value there.

Meta Tags

Meta tags were first used to describe a website’s content. To get you acquainted with them, here are some primary meta tags and a description of each.

The meta robots tag is the control that tells a search engine what to crawl, and it can do so on a per-page basis. It is like an advisor that tells the search engine what to do on each page.

You may want the search engines not to index a certain page, and you can do just that: the index/noindex value tells the search engine whether to index the page. Every page is treated as indexed by default, so you usually only need to include the noindex value when you want something excluded.

Sometimes you may want to include or exclude the links within a page. You can control whether the links on a page are crawled with the follow/nofollow values. Links are followed by default, so you only need the nofollow value when you want them ignored.

Maybe you don’t want the search engine to save a cached copy of a page. You can do that too: the noarchive value tells the search engine not to hold onto a cached version of the page.

When a search result is returned it usually displays a title, URL, and a description. It may be important to you to exclude the description of a specific page on the search results page. If you use the nosnippet tag, then the search engine will not display any description on the results page and instead, it will just show the title and the URL.
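
As a sketch of how these directives look in practice, each tag goes in the <head> of the page it controls, and values can be combined with commas:

    <!-- Index the page but do not follow its links -->
    <meta name="robots" content="index, nofollow">

    <!-- Keep the page out of the index, with no cached copy and no snippet -->
    <meta name="robots" content="noindex, noarchive, nosnippet">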

The noodp/noydir values tell the search engines not to use the snippet about the page from the DMOZ Open Directory Project or the Yahoo! Directory, and to build their own snippet instead. They are specialised values with a narrow use.

The meta description tag is a concise description of the content of the page and a useful tool for getting users to click on your link. This description can have a powerful impact on your traffic: a great description that highlights keywords can really affect the click-through rate. Keep your descriptions to around 160 characters, as most search engines will cut off anything longer. It is also important to understand that if you don’t provide a meta description, the search engine will pull a snippet from wherever on the page it deems relevant. This is why some people like to write their own descriptions: it gives you more control over what is seen on the search results page.
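
A sketch of a meta description written around the same hypothetical keyword phrase and kept under roughly 160 characters:

    <!-- Well under the ~160-character cutoff; wording is hypothetical -->
    <meta name="description"
          content="Learn how to care for handmade leather boots: cleaning,
          conditioning and storage tips that keep leather supple for years.">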

URL Structures

Because search engines show URLs in their results, a URL can be significant in enticing a user to click on your result. URLs are also used in ranking documents, which means there is an advantage to using the right keywords in them. Think carefully about the URLs you choose for your website, because they matter to your search results.

A URL also shows up in the address bar of the web browser. While this in itself does not usually affect search engines, a poorly designed and structured URL can have a negative impact on user experience, which makes this choice so key to success.

How to Construct a URL

Be empathetic. Get into the head of the average user looking at your URL: can they forecast what they will find on the page? If so, you have described it well; if not, you have more work to do, because the user does not yet know what you offer. Do not worry about giving a lot of detail, just a general idea that is clear.

Stay short and sweet. It is important to describe your site, but it is easier to copy and paste a URL into a text, email, or blog when it is not too long and does not have all kinds of slashes and symbols. This also ensures that it can be completely seen in the results of the search. People do not want to work too hard to figure out the results of their search.

Be static. Avoid dynamic URLs stuffed with symbols, numbers, and parameters, since you are writing for human beings. Such complicated formulas can also lower your rank.

Use hyphens for word separation instead of underscores, plus signs, or spaces, which cannot always be accurately interpreted by every web application.
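
Putting these guidelines together, here is a hypothetical dynamic URL rewritten into a short, static, hyphenated one:

    Before:  http://www.example.com/index.php?id=374&cat=7&sess=9a2f
    After:   http://www.example.com/boots/leather-care-guide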

Problems with Canonical and Duplicate Versions of Content

Search engines are getting stricter about sites carrying the same content as other sites, cracking down by giving such sites lower rankings. Duplicate content is one of the worst problems a site can face.

Duplicate content occurs when multiple versions of very similar text appear on separate URLs. This is a big problem for search engines because it is difficult to determine which version to show the searcher. To present the best results, search engines tend to filter duplicated content out of them: the engine tries to determine which version is the original, and duplicated copies of your content can be ranked lower as a result.

If you practice canonicalisation you can organise your content so that every piece you work on has only one URL associated with it. If you have many versions of content on a website then it makes your site more likely to have to deal with the problem of duplicate content and less likely to get the traffic you are looking for. Also, if you have multiple pages that you can combine into one page there is the possibility that they will rank better because they are no longer in competition with each other.

Another method of dealing with duplicate content, aimed at search engines rather than visitors, is the canonical URL tag. It tells the engines that multiple pages should be treated as one, but does not redirect your visitors to another URL. This is different from the 301 redirect, which applies to all visitors, not just search engines. There is some fear that the canonical URL tag might be used manipulatively, and it only operates within a single root domain, whereas a 301 has cross-domain functionality. By placing the canonical tag on a page that holds duplicate content and pointing it at the URL you are hoping to rank for, you can improve your site.
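
A minimal sketch: the tag goes in the <head> of each duplicate page and points at the one URL (hypothetical here) that you want to rank:

    <!-- On every duplicate version of the page -->
    <link rel="canonical" href="http://www.example.com/boots/leather-care-guide">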

Rich Snippets

Using rich snippets is a way to get your 5-star ratings appearing in results. Information for rich snippets is created from code that is embedded on a web page. This allows webmasters to mark up content to deliver structured data to search engines to potentially improve the appearance of their results in search queries.
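
As a hedged sketch, here is a star rating marked up with schema.org microdata, one common structured data vocabulary; the product name and figures are made up:

    <div itemscope itemtype="http://schema.org/Product">
      <span itemprop="name">Handmade Leather Boots</span>
      <div itemprop="aggregateRating" itemscope
           itemtype="http://schema.org/AggregateRating">
        Rated <span itemprop="ratingValue">4.8</span>/5
        based on <span itemprop="reviewCount">127</span> reviews
      </div>
    </div>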

It’s not required to use rich snippets and structured data but it’s a growing factor that gives webmasters an advantage.

Stopping your Content from Being Copied

A lot of websites will take bits and pieces of other websites and re-use them for their own benefit. “Scraping” is automatically fetching someone else’s content and re-publishing it as your own. This is unfortunate because some scraped copies end up ranking better than the original sites they were stolen from.

One thing that helps to combat scrapers is to ping the major blogging and tracking services (Yahoo!, Google) immediately when publishing content into any type of feed format. This is made simple if you use a content management system on your website.

Another way to fight back is to include links in the content that come directly back to you because most scrapers do not edit the content they have taken. Search engines will catch wind of this. You are essentially using the scraper’s laziness against them because they don’t really do anything to the information; instead, they just copy and paste the content. In order to do this, you have to use absolute links and not relative links in your internal linking structure.
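
A sketch of the difference, using a hypothetical domain; if a scraper republishes this content unedited, the absolute link still points back to you:

    <!-- Relative link: on a scraper's site, this points to THEIR pages -->
    <a href="/boots/leather-care-guide">our leather care guide</a>

    <!-- Absolute link: points back to you wherever the content ends up -->
    <a href="http://www.example.com/boots/leather-care-guide">our leather care guide</a>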

If the scraping gets so bad that your content and rankings are taking a beating, you may want to consider legal action in the form of a DMCA takedown notice. This can help you get your traffic back.

Need Some Advice?