You have learned from the previous chapter how search engines behave: they crawl, index, and rank web content according to their algorithms. That is why improving your website is vital. In this chapter, you will learn what makes a good website, from content structure, URL optimization, and page load speed down to link building. Chapter 3, website optimization, has three parts: on-page, technical, and off-page.
Website optimization overview:
- On-page SEO – This refers to all measures taken directly within the website to improve its position in the search rankings. On-page optimization focuses on website copy and URL structure.
- Technical SEO – This refers to the process of optimizing the website for the crawling and indexing phase. The main goal of technical SEO is to optimize the infrastructure of a website.
- Off-page SEO – This is about everything that doesn’t happen directly on the website. Off-page optimization includes things like link building, social media, and local SEO.
Let’s dive into On-page SEO – web content development.
Chapter 3.1: On-page / On-site optimization
Website content should exist to answer searchers’ questions, to guide them through your website, and to help them understand your website’s purpose. Content should not be created for the purpose of ranking highly in search alone. That said, there are several ways to write web content with search engines in mind.
The dos and don’ts of website content. When writing website copy, content writers must be aware of what works and what doesn’t in content optimization. Let us first look at the things to avoid in web content.
The don’ts in content development: low-value tactics to avoid.
- Thin content
- Duplicate content
- Keyword stuffing
- Auto-generated content
Thin content: web content with no added value.
While it’s common for a website to have unique pages on different topics, an older content strategy was to create a page for every single iteration of your keywords in order to rank on page 1 for those highly specific queries.
Thin content consists of badly written articles that exist only for SEO: automatically generated content created purely to draw in clicks, or entire pages that exist only to target a different variation of a keyword.
How to spot thin content?
- The content has a low word count, usually only one or two paragraphs
- The content is repeated throughout the site
Duplicate content: mirrored pages.
Duplicate content is content that appears on the Internet at more than one web address. It can still sometimes impact search engine rankings.
Why does duplicate content matter?
For search engines, having duplicate content can present three main issues:
- They don’t know which version(s) to include/exclude from their indices.
- They don’t know whether to direct the link metrics (trust, authority, anchor text, link equity, etc.) to one page, or keep it separated between multiple versions.
- They don’t know which version(s) to rank for query results.
For site owners, duplicate content can cause ranking and traffic losses, which lead to two main problems:
- The website’s visibility in the SERP is diluted. Search engines are forced to choose which version is most likely to be the best result, which dilutes the visibility of each of the duplicates.
- Link equity can be further diluted because other sites have to choose between the duplicates as well. Instead of all inbound links pointing to one piece of content, they link to multiple pieces, spreading the link equity among the duplicates. Because inbound links are a ranking factor, this can then impact the search visibility of a piece of content.
How do duplicate content issues happen?
Below are the most common ways duplicate content is unintentionally created.
URL parameters, such as click tracking and some analytics code, can cause duplicate content issues.
For example:
www.widgets.com/blue-widgets is a duplicate of www.widgets.com/blue-widgets?cat=3&color=blue
One lesson here is that when possible, it’s often beneficial to avoid adding URL parameters or alternate versions of URLs.
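Where parameters can’t be avoided, one way to apply this lesson is to normalize URLs before generating or comparing them. A minimal sketch in Python, using only the standard library (the list of tracking parameters is an assumption; adjust it to your own analytics setup):

```python
from urllib.parse import urlparse, parse_qsl, urlencode, urlunparse

# Parameters that track clicks/analytics but don't change the page content.
# This list is illustrative, not exhaustive.
TRACKING_PARAMS = {"utm_source", "utm_medium", "utm_campaign", "sessionid", "ref"}

def canonicalize(url):
    """Return the URL with known tracking parameters stripped out."""
    parts = urlparse(url)
    kept = [(k, v) for k, v in parse_qsl(parts.query) if k not in TRACKING_PARAMS]
    return urlunparse(parts._replace(query=urlencode(kept)))

print(canonicalize("https://www.widgets.com/blue-widgets?utm_source=news&color=blue"))
# -> https://www.widgets.com/blue-widgets?color=blue
```

Content-relevant parameters (such as cat=3&color=blue above) are kept, so genuinely different pages still get distinct URLs.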
http vs. https or www vs. non-www pages
If your site has separate versions at “www.site.com” and “site.com” and the same content lives at both versions, you’ve effectively created duplicates of each of those pages. The same applies to sites that maintain versions at both http:// and https://. If both versions of a page are live and visible to search engines, you may run into a duplicate content issue.
Cloaking is a black-hat SEO strategy of presenting different content or URLs to human users and search engines. This is done by delivering content based on the IP address or the User-Agent HTTP header of the user requesting the page.
Keyword stuffing is a black-hat SEO strategy in which keywords are loaded into a web page’s meta tags, visible content, or backlink anchor text in an attempt to gain an unfair ranking advantage in search engines.
The dos in content development – improving web copy.
Since there’s no secret sauce to ranking in search results, search engines rank pages that they determine are the best answers to the searcher’s questions. In today’s search engines, it’s not enough that your page isn’t duplicated, spammy, or broken. Your page has to provide value to searchers and be better than any other page Google is currently serving as the answer to a particular query.
To fix duplicate content, use a 301 redirect to the correct URL, the rel=canonical attribute, or the parameter handling tool in Google Search Console.
A 301 redirect is used to deal with duplicate content: the “duplicate” page is redirected to the original source of the content.
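On an Apache server, for example, such a redirect can be a single line in an .htaccess file (the path and domain here are illustrative):

Redirect 301 /duplicate-page https://www.example.com/original-page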
The rel="canonical" attribute informs search engines that a specific page should be treated as though it were a copy of a given URL, and that all of the links, content metrics, and “ranking power” that search engines apply to this page should actually be credited to the specified URL.
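In practice, the tag is placed in the head of the duplicate page and might look like this (the URL is illustrative):

<link rel="canonical" href="https://www.example.com/original-page" />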
Aim for more content, longer content, and original content. Google will reward you for it, and better yet, you’ll naturally get people linking to it. Creating high-quality content is hard work, but it pays dividends in organic traffic.
Always remember, there’s no magic number when it comes to words on a page. What we should be aiming for is content that satisfies user intent.
Beyond content optimization
In this part of chapter 3, you’ll learn some important on-page elements that help search engines understand your content. For a full discussion of how to write web content for search engine spiders, see the complete guide for content development.
Header tags are an HTML element used to designate headings on your page. The main header tag, called an H1, is typically reserved for the title of the page. It looks like this:
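<h1>Beginner's guide for Search Engine Optimization</h1>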
Learn the best practices in 3.2 Headings (H1, H2, H3, H4, and H5).
Internal links help Google find, index, and understand all of the pages on your site. If you use them strategically, internal links can send page authority to important pages. In short: internal linking is key for any site that wants higher rankings in Google.
Anchor text is the clickable text in a hyperlink. SEO best practices dictate that anchor text is relevant to the page you’re linking to, rather than generic text.
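For example, descriptive anchor text beats a generic label (the URL is illustrative):

<a href="https://example.com/plumbing-repair/">plumbing repair services</a> rather than <a href="https://example.com/plumbing-repair/">click here</a>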
A title tag is the descriptive HTML element that specifies the title of a particular web page. Title tags are nested within the head tag of each page and look like this:
<title>Beginner's guide for Search Engine Optimization | remar.me</title>
To learn more about title tags, see 3.1 Title tag and meta description.
Meta descriptions are HTML elements that describe the contents of the page that they’re on. They are also nested in the head tag, and look like this:
<meta name="description" content="This SEO for beginner's guide tends to teach you all of the basic principles behind Search Engine Optimization & this will show you how search engines work." />
To learn more about meta descriptions, see 3.1 Title tag and meta description.
Best practices for URL structure
URL stands for Uniform Resource Locator. URLs are the locations or addresses for individual pieces of content on the web.
Clear page path naming
Search engines require unique URLs for each page on your website so they can display your pages in search results, but clear URL structure and naming is also helpful for people who are trying to understand what a specific URL is about. For example, which URL is clearer?
example.com/desserts/chocolate-pie or example.com/asdf/453?=recipe-23432-1123
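Descriptive paths like the first example can be generated automatically from page titles. A minimal sketch in Python (the slugify helper is hypothetical, not part of any particular CMS):

```python
import re

def slugify(title):
    """Lowercase the title, keep only letters/digits, join words with hyphens."""
    words = re.findall(r"[a-z0-9]+", title.lower())
    return "-".join(words)

print("example.com/desserts/" + slugify("Chocolate Pie"))
# -> example.com/desserts/chocolate-pie
```

The same function keeps URLs consistent site-wide, so you never end up with both "Chocolate-Pie" and "chocolate_pie" versions of the same page.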
URL length + Keywords in URL
While it is not necessary to have a completely flat URL structure, many click-through-rate studies indicate that, when given the choice between a longer URL and a shorter one, searchers often prefer shorter URLs. Like title tags and meta descriptions that are too long, overly long URLs will also be cut off with an ellipsis. Just remember, a descriptive URL is just as important, so don’t cut down on URL length if it means sacrificing the URL’s descriptiveness.
Keyword overuse in URLs can appear spammy and manipulative. If you aren’t sure whether your keyword usage is too aggressive, just read your URL through the eyes of a searcher and ask, “Does this look natural? Would I click on this?”
example.com/services/plumbing/plumbing-repair/toilets/leaks/ vs. example.com/plumbing-repair/toilets/
Turn to next page Chapter 3: Website optimization – Technical SEO