Technical SEO · April 2026

Technical SEO: the invisible reason you're not ranking

You've written the content. You've targeted the right keywords. You've got decent backlinks. And you're still not ranking. The problem might not be in what people can see. It might be in what only Google can see.


What technical SEO actually is

SEO is usually talked about in two layers: on-page (your content, your keywords, your headings) and off-page (backlinks, authority, mentions across the web). Technical SEO is the third layer, and it's the one most founders don't know about until something goes wrong.

While on-page SEO is about the content of your pages and off-page SEO is about links and authority, technical SEO is about whether Google can efficiently find, read and understand your website. Think of it as the infrastructure layer: if the infrastructure is broken, everything built on top of it performs below its potential, no matter how good the content is.

A useful analogy: imagine you've stocked an excellent shop, positioned it well and told people where it is. But the door is locked from the outside and there's no sign indicating it's open. Technical SEO is unlocking that door.

Crawlability: can Google even find your pages?

Google discovers and indexes your pages by crawling them, following links from page to page much like a reader following references in a book. If important pages are blocked in your robots.txt file, excluded from your sitemap, or have no internal links pointing to them, Google may never index them. A page that isn't indexed doesn't rank. Period.
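
If "sitemap" sounds more technical than it is: it's just an XML file listing the URLs you want crawled, and most CMSs or SEO plugins generate it for you. A minimal one, with a placeholder domain, looks like this:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <!-- One <url> entry per page you want Google to find and index -->
  <url>
    <loc>https://www.example.com/services/</loc>
    <lastmod>2026-03-01</lastmod>
  </url>
  <url>
    <loc>https://www.example.com/about/</loc>
    <lastmod>2026-02-14</lastmod>
  </url>
</urlset>
```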

Check your robots.txt for inadvertent blocks. These creep in more often than you'd think, typically after a site migration or CMS update: a developer sets the staging environment to block search engines, and the setting carries over when the site goes live. A single line in the wrong place can de-index your entire site without any visible error in the browser.
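
Here's what that leftover staging block typically looks like. Two lines, and every crawler is told to stay away from every page on the site:

```
# robots.txt carried over from staging — blocks the whole site
User-agent: *
Disallow: /
```

On a live site, any Disallow rules should be limited to areas you genuinely want to keep out of search, such as an admin path.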

Verify your pages are indexed via Google Search Console's URL inspection tool. If a page you care about returns "URL is not on Google", that's your first thing to fix, before you touch the content, the links or anything else.

Duplicate content: are you competing with yourself?

Duplicate content occurs when the same, or very similar, content is accessible via multiple URLs. This happens more commonly than most people realise. HTTP vs HTTPS, www vs non-www, URLs with and without trailing slashes, and category or tag pages in WordPress that aggregate post content are all common culprits.

When Google finds duplicate content, it has to choose which version to rank, and it often doesn't choose the one you'd want. You end up splitting your ranking signals across multiple URLs, diluting the authority that should be concentrated on a single definitive page.

Canonical tags are the fix. A canonical tag is a small piece of code in a page's <head> that tells Google: "This is the definitive version of this content. Attribute all signals here." Without canonical tags, you're essentially entering the same page in a race under multiple names and wondering why none of them win. Most WordPress SEO plugins handle this automatically, but only if configured correctly from the start.
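
For illustration, here's what the tag looks like in a page's <head>, with a placeholder URL:

```html
<head>
  <!-- Tells Google which URL is the definitive version of this content -->
  <link rel="canonical" href="https://www.example.com/services/seo-audit/">
</head>
```

Every variant of the page (http or https, www or non-www, trailing slash or not) should carry the same tag, pointing at the single URL you want to rank.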

Core Web Vitals and page experience

Since 2021, Google has used page experience signals as ranking inputs. The three Core Web Vitals (Largest Contentful Paint, Interaction to Next Paint, and Cumulative Layout Shift) measure how fast the main content loads, how responsive the page is to interaction, and whether elements jump around while the page is loading.

A site that fails these metrics is at a measurable disadvantage in competitive search results. These are not vague signals. Google publishes the exact thresholds, measured at the 75th percentile of real visits: LCP under 2.5 seconds, INP under 200ms, CLS under 0.1. Search Console shows you exactly where your site stands against each one.

Common causes of poor Core Web Vitals on small business sites include unoptimised images served at full resolution, render-blocking JavaScript loaded in the wrong order, fonts that load late and cause layout shifts, and cheap shared hosting that can't serve pages fast enough under load. None of these are difficult to fix, but you have to know they exist before you can address them.
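
To make a couple of those fixes concrete, here's a sketch of the kind of template changes involved. File names and sizes are placeholders, and your own theme will differ:

```html
<!-- Below-the-fold images: declare dimensions (no layout shift) and lazy-load them -->
<img src="/images/team-800w.webp" width="800" height="533" loading="lazy" alt="The team">

<!-- Non-critical scripts: defer them so they don't block the page from rendering -->
<script src="/js/analytics.js" defer></script>

<!-- Web fonts: preload the file and show fallback text instead of hiding it -->
<link rel="preload" href="/fonts/brand.woff2" as="font" type="font/woff2" crossorigin>
<style>
  @font-face {
    font-family: "Brand";
    src: url("/fonts/brand.woff2") format("woff2");
    font-display: swap;
  }
</style>
```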

Structured data: the missed opportunity

Structured data is code added to your pages that helps Google understand what type of content they contain. Not just "this is a webpage," but "this is an article published on this date by this author" or "this is a service offered in this location at this price range" or "this page contains a list of frequently asked questions with their answers".

When Google understands your content type, it can display rich results in search: star ratings pulled from review schema, product prices and availability shown inline, breadcrumb trails and event listings in place of a bare URL. These rich results can meaningfully lift click-through rate, because your listing takes up more space and provides more information before the user even visits the site.

Most small business websites have zero structured data. Adding it correctly (using Schema.org markup for your business type, your articles, your FAQs) can measurably improve the click-through you get from existing rankings without changing your position at all. It's one of the few technical SEO levers that produces visible results in the search results page itself.
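
As a sketch of what that markup looks like in practice, here's JSON-LD for a local service business. Every detail below is a placeholder; you'd substitute your own information and the Schema.org type that fits your business:

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "LocalBusiness",
  "name": "Example Plumbing Ltd",
  "url": "https://www.example.com/",
  "telephone": "+44 117 000 0000",
  "priceRange": "££",
  "address": {
    "@type": "PostalAddress",
    "streetAddress": "1 High Street",
    "addressLocality": "Bristol",
    "postalCode": "BS1 1AA",
    "addressCountry": "GB"
  }
}
</script>
```

Google's Rich Results Test will tell you whether the markup on a given page is valid and which rich results it's eligible for.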

How to audit your own technical SEO

You don't need an agency to do an initial check. Here are five things to look at today:

1. Run your homepage URL through Google's URL Inspection Tool in Search Console. Check it's indexed and that there are no crawl errors or mobile usability issues flagged. If you haven't set up Search Console yet, that's step zero.

2. Check the Core Web Vitals report in Search Console. Look specifically for URL groups flagged as "Poor". These are your highest-priority pages to fix first.

3. Search Google for site:yourdomain.com and check the number of indexed pages against what you'd expect. If you have 30 pages on your site and 200 are indexed, you have a duplicate content problem. If you have 30 pages and 5 are indexed, you have a crawlability problem.

4. Install Screaming Frog (free up to 500 URLs) and run a crawl of your site. Look for pages returning 4xx errors, pages missing an H1, and pages with no internal links pointing to them. Each of these categories represents a fixable technical issue.

5. Check your robots.txt at yourdomain.com/robots.txt. Make sure nothing important is blocked. If you see Disallow: / under User-agent: *, your entire site is blocked from Google and you have an urgent problem. A healthy file looks like the example below this list.
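
For comparison, a healthy robots.txt for a small business site is usually only a few lines. The paths and domain here are placeholders; the shape is what matters:

```
User-agent: *
Disallow: /wp-admin/
Allow: /wp-admin/admin-ajax.php

Sitemap: https://www.example.com/sitemap.xml
```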

None of this requires technical expertise to check. What it requires is knowing where to look, and then knowing what to do with what you find. That second part is where most founders either get stuck or get it wrong.

Mode's technical SEO audit covers every one of these areas and comes with a prioritised fix list, ranked by impact rather than complexity, so you know exactly where to start.

GET A TECHNICAL AUDIT