
Posted by alexis-sanders

SEO is about understanding how search bots and users react to an online experience. As search professionals, we’re required to bridge gaps between online experiences, search engine bots, and users. We need to know where to insert ourselves (or our teams) to ensure the best experience for both users and bots. In other words, we strive for experiences that resonate with humans and make sense to search engine bots.

This article seeks to answer the following questions:

  • How do we drive sustainable growth for our clients?
  • What are the building blocks of an organic search strategy?

What is the SEO cyborg?

A cyborg (or cybernetic organism) is defined as “a being with both organic and biomechatronic body parts, whose physical abilities are extended beyond normal human limitations by mechanical elements.”

With the ability to relate between humans, search bots, and our site experiences, the SEO cyborg is an SEO (or team) that works seamlessly across both technical and content initiatives (with skills extended beyond normal human limitations) to drive organic search performance. An SEO cyborg can strategically pinpoint where to place organic search efforts to maximize performance.

So, how do we do this?

The SEO model

Like so many classic triads (think: primary colors, the Three Musketeers, Destiny’s Child [the canonical version, of course]), the traditional SEO model, known as the crawl-index-rank method, packages SEO into three distinct steps. However, this model fails to capture the breadth of work that we SEOs are expected to do on a daily basis, and not having a functioning model can be limiting. We need to expand this model without reinventing the wheel.

The enhanced model involves adding in a rendering, signaling, and connection phase.

You might be wondering: why do we need these phases?

  • Rendering: JavaScript, CSS, imagery, and personalization are increasingly prevalent in modern online experiences.
  • Signaling: HTML <link> tags, status codes, and even Google Search Console (GSC) signals are powerful indicators that tell search engines how to process and understand the page, determine its intent, and ultimately rank it. In the previous model, it didn’t feel as if these powerful elements really had a place. (A few of these signals are sketched below.)
  • Connecting: People are a critical component of search. The ultimate goal of search engines is to identify and rank content that resonates with people. In the previous model, “rank” felt cold, hierarchical, and indifferent towards the end user.
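
As illustration, here are a few of those signaling elements as hypothetical HTML snippets (the URLs are placeholders, and each pair of lines is an independent example rather than one coherent <head>):

    <!-- Canonical: declares the preferred URL for this page's content -->
    <link rel="canonical" href="https://example.com/widgets" />

    <!-- Hreflang: maps a language/region variant to its URL -->
    <link rel="alternate" hreflang="en-gb" href="https://example.com/uk/widgets" />

    <!-- Meta robots: tells bots how to index this page and treat its links -->
    <meta name="robots" content="noindex, follow" />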

All of this brings us to the question: how do we find success in each stage of this model?

Note: When using this piece, I recommend skimming ahead and leveraging those sections of the enhanced model that are most applicable to your business’ current search program.

The enhanced SEO model

Crawling

Technical SEO starts with the search engine’s ability to find a site’s webpages (hopefully efficiently).

Finding pages

Search engines can initially find pages in a few ways, via:

  • Links (internal or external)
  • Redirected pages
  • Sitemaps (XML, RSS 2.0, Atom 1.0, or .txt; see the sketch after the side note below)

Side note: This information (although at first pretty straightforward) can be really useful. For example, if you’re seeing weird pages popping up in site crawls or performing in search, try checking:

  • Backlink reports
  • Internal links to the URL
  • Redirects into the URL
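
For reference, a minimal XML sitemap follows the sketch below (the URL and date are placeholders):

    <?xml version="1.0" encoding="UTF-8"?>
    <urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
      <url>
        <loc>https://example.com/voltron</loc>
        <lastmod>2018-10-01</lastmod>
      </url>
    </urlset>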

Obtaining resources

The second component of crawling relates to the ability to obtain resources (which later becomes critical for rendering a page’s experience).

This typically relates to two elements:

  1. Appropriate robots.txt declarations
  2. Proper HTTP status codes (namely, 200 status codes)
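
As a quick sketch, a robots.txt file touching both items might look like the following (the disallowed path and sitemap URL are hypothetical):

    # Hypothetical robots.txt; adjust paths to your own site
    User-agent: *
    Disallow: /checkout/

    Sitemap: https://example.com/sitemap.xml

Serving the file itself with a 200 status code (and keeping critical pages out of Disallow rules) helps ensure bots can obtain the resources they need.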

Crawl efficiency

Finally, there’s the idea of how efficiently a search engine bot can traverse your site’s most critical experiences.

Action items:

  • Is the site’s main navigation simple, clear, and useful?
  • Are there relevant on-page links?
  • Is internal linking clear and crawlable (i.e., <a href="/">)?
  • Is an HTML sitemap available?
    • Side note: Make sure to check the HTML sitemap’s next page flow (or behavior flow reports) to find where those users are going. This may help to inform the main navigation.
  • Do footer links contain tertiary content?
  • Are important pages close to root?
  • Is the site free of crawl traps?
  • Is the site free of orphan pages?
  • Are pages consolidated?
  • Do all pages have purpose?
  • Has duplicate content been resolved?
  • Have redirects been consolidated?
  • Are canonical tags on point?
  • Are parameters well defined? (See the sketch after this list.)
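
To make the canonical and parameter items concrete: a parameterized URL such as /widgets?sort=price can point search engines at the clean version of the page (the URL is a placeholder):

    <!-- Placed in the <head> of /widgets?sort=price -->
    <link rel="canonical" href="https://example.com/widgets" />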

Information architecture

The organization of information extends past the bots, requiring an in-depth understanding of how users engage with a site.

Some seed questions to begin research include:

  • What trends appear in search volume (by location, device)? What are common questions users have?
  • Which pages get the most traffic?
  • What are common user journeys?
  • What are users’ traffic behaviors and flow?
  • How do users leverage site features (e.g., internal site search)?

Rendering

Rendering a page relates to search engines’ ability to capture the page’s desired essence.

JavaScript

The big kahuna in the rendering section is JavaScript. For Google, JavaScript rendering occurs during a second wave of indexing: the content is queued and rendered as resources become available.

[Image based on the Google I/O ’18 presentation by Tom Greenway and John Mueller, “Deliver search-friendly JavaScript-powered websites”]

As an SEO, it’s critical that we be able to answer the question — are search engines rendering my content?

Action items:

  • Are direct “quotes” from content indexed?
  • Is the site using <a href="/"> links (not onclick handlers)? (See the sketch after this list.)
  • Is the same content being served to search engine bots (user-agent)?
  • Is the content present within the DOM?
  • What does Google’s Mobile-Friendly Testing Tool’s JavaScript console (click “view details”) say?
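
As a sketch of the linking item above, compare a crawlable link with a JavaScript-only one (loadPage() is a hypothetical client-side function):

    <!-- Crawlable: the destination URL exists in the markup for bots to extract -->
    <a href="/voltron">Voltron</a>

    <!-- Not reliably crawlable: no URL for a bot to follow -->
    <span onclick="loadPage('voltron')">Voltron</span>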

Infinite scroll and lazy loading

Another hot topic relating to JavaScript is infinite scroll (and lazy loading for imagery). Since search engine bots are lazy users, they won’t scroll to reach content.
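
For lazy-loaded imagery specifically, one option in modern browsers is native lazy loading via the loading attribute (the image path and dimensions are placeholders):

    <!-- The browser defers the request until the image nears the viewport -->
    <img src="/images/voltron.jpg" alt="Voltron" width="800" height="600" loading="lazy">

For JavaScript-based lazy loaders, make sure the image URL still appears in the markup (e.g., in a <noscript> fallback) so bots can discover it.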

Action items:

Ask ourselves: should all of the content really be indexed? Is it content that provides value to users?

  • Infinite scroll: a user experience (and occasionally a performance-optimizing) tactic to load content when the user hits a certain point in the UI; typically the content is exhaustive.

Solution one (updating AJAX):

1. Break out content into separate sections

  • Note: The breakout of pages can be /page-1, /page-2, etc.; however, it would be best to delineate meaningful divides (e.g., /voltron, /optimus-prime, etc.)

2. Implement the History API (pushState(), replaceState()) to update URLs as a user scrolls (i.e., push/update the URL into the URL bar); see the sketch after these steps

3. Add the <link> tag’s rel="next" and rel="prev" on the relevant pages
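
A minimal sketch of steps two and three, assuming a hypothetical loadNextSection() helper that appends the next content block and returns its URL (e.g., /page-2):

    // Watch a sentinel element near the bottom of the current section
    const sentinel = document.querySelector('#next-section-sentinel'); // hypothetical marker element

    const observer = new IntersectionObserver((entries) => {
      entries.forEach((entry) => {
        if (entry.isIntersecting) {
          const nextUrl = loadNextSection(); // hypothetical: appends content, returns e.g. "/page-2"
          history.pushState({}, '', nextUrl); // update the URL bar without a reload
        }
      });
    });

    observer.observe(sentinel);

And in the <head> of each section’s URL:

    <!-- On /page-2, signal its place in the series -->
    <link rel="prev" href="/page-1" />
    <link rel="next" href="/page-3" />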

Solution two (create a view-all page)
Note: This is not recommended for large amounts of content.

1. If it’s possible …

You can read the full article at Moz Blog
