Every item I check when auditing a site: crawlability, indexed pages, internal linking, broken links, server errors, duplicate content, page speed, and structured data.
A technical SEO audit is a systematic review of the technical factors that affect how search engines crawl, index, and rank a website. It covers everything from how robots can access your pages to how those pages perform on mobile devices. The purpose is to identify problems that are suppressing organic visibility and prioritise fixes by their potential impact.
Technical SEO sits at the foundation of any organic search programme. You can publish excellent content and earn quality backlinks, but if search engines cannot reliably crawl and index your pages, that investment is wasted. A thorough technical SEO audit checklist ensures you identify the issues that prevent the rest of your SEO work from reaching its potential.
I use this technical SEO checklist as the starting point for every new engagement. It is structured by category rather than by priority, because priority depends on the specific site. A site with severe crawlability problems needs those fixed before anything else. A technically clean site might have its biggest opportunity in internal linking or structured data. Work through each section and score the severity of issues as you find them.

A complete technical SEO audit requires a combination of free tools and at least one crawler. The essential free tools are Google Search Console, Google Analytics 4, and Bing Webmaster Tools. For crawling, Screaming Frog SEO Spider is the industry standard for small to medium sites, and Sitebulb is a strong alternative with a better visualisation of site architecture.
You do not need to pay for every tool in the audit stack. Many of the most critical data points come from Google Search Console, which is free and gives you real-world data from Google's crawling of your site. Screaming Frog has a free tier that crawls up to 500 URLs, which is sufficient for small sites. For larger sites, the paid version at around £200 per year is indispensable.
For fuller automated coverage, tools like Semrush Site Audit and Ahrefs Site Audit automate much of the checklist work and present issues with severity ratings. They are useful for ongoing monitoring but should not replace a manual audit of the most important pages. Automated tools miss context that only a human reviewer can evaluate: whether a noindex directive is intentional, whether a redirect chain was deliberate, whether duplicate content is actually causing a problem given the site's specific architecture.
Crawlability refers to how effectively search engine bots can access and navigate your site. Issues here are among the most consequential in SEO because they can prevent otherwise good pages from ever appearing in search results. The crawlability section of your technical SEO audit checklist covers robots.txt, crawl directives, redirect chains, and crawl depth.
The robots.txt file tells crawlers which pages they are and are not allowed to crawl. Errors here can accidentally block Google from crawling important sections of your site. Check each Disallow rule against the live site: what does it actually block, and is that intentional?
A common error I find is a site that was put into maintenance mode during a development period with a blanket Disallow rule, and the rule was never removed after launch. This can cause the entire site to disappear from search results within days of going live. Check robots.txt on any new engagement before doing anything else.
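The maintenance-mode check above takes seconds to automate. Python's standard library includes a robots.txt parser, so you can test whether the URLs you care about are crawlable without eyeballing the rules. This sketch uses a hypothetical robots.txt string and example.com URLs; in a real audit you would point it at the live file.

```python
from urllib import robotparser

# Hypothetical robots.txt left over from a maintenance period:
# the blanket Disallow blocks every crawler from the entire site.
ROBOTS_TXT = """\
User-agent: *
Disallow: /
"""

parser = robotparser.RobotFileParser()
parser.parse(ROBOTS_TXT.splitlines())

# Test the URLs you actually care about against the live rules.
for url in ("https://example.com/", "https://example.com/services/"):
    allowed = parser.can_fetch("Googlebot", url)
    print(f"{url} -> {'allowed' if allowed else 'BLOCKED'}")
```

On a new engagement, feed in your commercial priority URLs and confirm every one comes back allowed before moving on to the rest of the checklist.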
Crawl budget is the number of pages Googlebot is willing to crawl on your site within a given time period. For most sites under a few thousand pages, crawl budget is not a limiting factor. For larger sites, particularly those with faceted navigation, user-generated content, or URL parameters generating large numbers of near-duplicate pages, crawl budget waste can prevent important pages from being crawled frequently enough to stay indexed.
Each redirect in a chain adds latency and dilutes link equity. Chains of three or more redirects are common on sites that have been through multiple redesigns without cleaning up the redirect map. A Screaming Frog crawl will identify all redirects and show you which are chains versus single-hop redirects.
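Once you have exported a redirect map from a crawl, collapsing chains is a simple graph-walking exercise. This is a sketch over a hypothetical export (source URL to destination URL); the `limit` guard also catches redirect loops, which crawlers report separately.

```python
# Hypothetical redirect map exported from a crawl: each key 301s
# to its value. Chains should be collapsed so every source points
# directly at its final destination.
REDIRECTS = {
    "/old-services": "/services-2019",
    "/services-2019": "/services-2021",
    "/services-2021": "/services",
}

def resolve(url: str, redirects: dict, limit: int = 10) -> tuple:
    """Follow redirects from url, returning (final_url, hop_count)."""
    hops = 0
    while url in redirects and hops < limit:
        url = redirects[url]
        hops += 1
    return url, hops

for source in REDIRECTS:
    final, hops = resolve(source, REDIRECTS)
    if hops > 1:
        print(f"chain: {source} reaches {final} in {hops} hops; repoint it to {final}")
```

The fix is always the same: update every chained source to redirect straight to the final destination in a single hop.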
Indexing issues prevent pages from appearing in search results even when they have been crawled. The most common causes are noindex directives, canonicalisation problems, thin content flags, and manual actions in Google Search Console. Checking your indexed pages against your expected page count is one of the fastest ways to identify whether you have a systemic indexing problem.
Use the Google Search Console Coverage report (now called Page indexing) to understand the current indexing status of your site. The report breaks pages into indexed, not indexed, and various subcategories explaining why pages were not indexed. Cross-reference the total indexed page count with a Screaming Frog crawl to identify discrepancies.
XML sitemaps help Google discover and prioritise your pages. A poorly maintained sitemap can include redirected URLs, noindexed pages, or pages that no longer exist, all of which signal sloppiness and waste crawl budget.
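Validating a sitemap is mechanical: every URL it lists should resolve to an indexable 200 page. A minimal sketch, assuming you have the sitemap XML and a URL-to-status-code mapping exported from a crawler (the data below is hypothetical):

```python
import xml.etree.ElementTree as ET

# Minimal sitemap fragment; in a real audit, fetch /sitemap.xml.
SITEMAP = """<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url><loc>https://example.com/</loc></url>
  <url><loc>https://example.com/old-page</loc></url>
</urlset>
"""

# Status codes as reported by a crawler (hypothetical data).
STATUS = {"https://example.com/": 200, "https://example.com/old-page": 301}

ns = {"sm": "http://www.sitemaps.org/schemas/sitemap/0.9"}
urls = [loc.text for loc in ET.fromstring(SITEMAP).findall("sm:url/sm:loc", ns)]

# Sitemap entries that are not clean 200 responses need removing or fixing.
bad = [u for u in urls if STATUS.get(u) != 200]
print("non-200 sitemap entries:", bad)
```

The same loop can be extended to flag noindexed or canonicalised-away URLs once you have that data in the crawl export.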
Site architecture affects how efficiently crawlers can navigate your site and how well link equity distributes to important pages. Good architecture puts all important pages within three to four clicks of the homepage, uses a logical URL structure that reflects the site's topic hierarchy, and avoids orphan pages that are not reachable through normal navigation.
Use a site crawler to map your current architecture. Screaming Frog's visualisation and Sitebulb's treemap view both help you see whether your site's depth matches what you intend. Pages buried at five or more clicks from the homepage are at risk of being crawled infrequently and ranking poorly.
Internal linking is one of the most underused levers in technical SEO. It controls how link equity flows through your site, signals to Google which pages are most important, and helps crawlers discover new or infrequently visited pages. A poor internal linking structure can leave your most important pages with far less authority than they should have.
A Screaming Frog crawl shows you how many internal links each page receives. Cross-reference this with your commercial priority pages. If your most commercially important pages have very few internal links compared to less important pages, you have an internal linking imbalance that is likely suppressing rankings.
Click depth, also called page depth, measures how many clicks it takes to reach a page from the homepage. Google has indicated that pages deeper in the site structure tend to receive less PageRank. For large sites, this means structuring your internal links so that the highest-priority pages have the shortest click depth and the most internal links pointing to them.
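Click depth is just shortest-path distance from the homepage over the internal link graph, which a breadth-first search computes directly. This sketch runs over a small hypothetical link graph; a crawler export gives you the real one.

```python
from collections import deque

# Hypothetical internal link graph: page -> pages it links to.
LINKS = {
    "/": ["/services", "/blog"],
    "/services": ["/services/audits"],
    "/blog": ["/blog/post-1"],
    "/blog/post-1": ["/blog/post-2"],
    "/blog/post-2": [],
    "/services/audits": [],
}

def click_depths(start: str, links: dict) -> dict:
    """Breadth-first search from the homepage: shortest click path to each page."""
    depths = {start: 0}
    queue = deque([start])
    while queue:
        page = queue.popleft()
        for target in links.get(page, []):
            if target not in depths:
                depths[target] = depths[page] + 1
                queue.append(target)
    return depths

depths = click_depths("/", LINKS)
print(depths)
```

Two useful reads from the result: any priority page with a depth above three or four needs more internal links closer to the homepage, and any URL that appears in your crawl but not in `depths` is an orphan page.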
A common internal linking issue I encounter is a site with a strong blog section that never links to the commercial service pages. The blog might rank well for informational queries and accumulate backlinks, but all of that authority stays within the blog section rather than flowing to the pages that drive revenue. Adding contextually relevant links from popular blog posts to commercial pages is often the quickest win available in a technical SEO audit.
On-page technical elements include title tags, meta descriptions, heading structure, image optimisation, and canonical tags. These are the basic building blocks of a well-optimised page. Errors here are common even on established sites because CMSs make it easy to create duplicate or missing tags at scale without a systematic review.
Duplicate content occurs when the same or substantially similar content is accessible via multiple URLs. It confuses search engines about which version to index and rank, and can dilute the link equity and ranking potential of the canonical version. Common causes include URL parameter variations, HTTP vs HTTPS versions, www vs non-www, and CMS-generated duplicate pages for categories, tags, and author archives.
Canonical tags are often incorrectly implemented. The three most common errors are: canonical tags pointing to redirected URLs rather than the final destination; canonical tags placed outside the head element, where search engines ignore them; and paginated pages canonicalised to the first page rather than carrying self-referencing canonicals, which can hide the content on the deeper pages.
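The placement error is easy to detect programmatically. A minimal sketch using Python's standard-library HTML parser to record each canonical link and whether it was found inside the head element (the sample page and URLs are hypothetical):

```python
from html.parser import HTMLParser

class CanonicalFinder(HTMLParser):
    """Record canonical hrefs and whether each appears inside <head>."""
    def __init__(self):
        super().__init__()
        self.in_head = False
        self.canonicals = []  # list of (href, found_in_head)

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "head":
            self.in_head = True
        elif tag == "link" and a.get("rel") == "canonical":
            self.canonicals.append((a.get("href"), self.in_head))

    def handle_endtag(self, tag):
        if tag == "head":
            self.in_head = False

# Hypothetical page with the canonical misplaced in the body.
HTML = """<html><head><title>t</title></head>
<body><link rel="canonical" href="https://example.com/page"></body></html>"""

finder = CanonicalFinder()
finder.feed(HTML)
print(finder.canonicals)  # -> [('https://example.com/page', False)]
```

A `False` flag means the tag sits outside the head and will not be honoured; the other two errors (redirected targets, paginated canonicals) need the crawl's redirect and pagination data layered on top.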
Broken links, also called dead links, are outgoing or internal links that point to pages returning a 4xx or 5xx status code. They create a poor user experience and waste crawl budget by sending bots to non-existent destinations. Server errors return 5xx status codes and indicate server-side problems that prevent pages from loading reliably.
Not all 404s need to be redirected. If a page was removed because it was low-quality, thin, or no longer relevant, a 404 is appropriate. Pages that were removed but had backlinks pointing to them should be redirected to the most relevant current page, so the link equity is not permanently lost. Use Ahrefs or Semrush to identify which 404 pages have backlinks and prioritise those for redirects.
Pages that have been permanently removed with no relevant redirect destination should return a 410 Gone status rather than 404. This tells Google the page is intentionally gone and helps Google drop it from the index faster than a standard 404 does.
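The triage logic across the last two paragraphs reduces to a simple decision rule. A sketch, assuming you have exported which dead URLs still have backlinks (from Ahrefs or Semrush) and mapped replacement pages where one exists; all names here are illustrative:

```python
# Hypothetical audit exports.
DEAD_URLS = ["/old-guide", "/spam-tag-page", "/discontinued-product"]
HAS_BACKLINKS = {"/old-guide"}            # dead URLs with external links
REPLACEMENT = {"/old-guide": "/guide"}    # relevant redirect targets

def triage(url: str) -> str:
    """Decide what a removed URL should do: redirect, investigate, or 410."""
    if url in REPLACEMENT:
        return f"301 -> {REPLACEMENT[url]}"
    if url in HAS_BACKLINKS:
        return "find a redirect target (backlinks at stake)"
    return "410 Gone"  # intentionally removed, nothing worth preserving

for url in DEAD_URLS:
    print(url, "=>", triage(url))
```

Pages with backlinks always get priority: that is where link equity is silently leaking.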

Page speed and Core Web Vitals are part of Google's Page Experience ranking signals. Google uses real user data from the Chrome User Experience Report to assess whether pages deliver a good user experience. Failing Core Web Vitals thresholds does not cause dramatic ranking drops but contributes to a weaker page experience signal that can affect rankings in competitive positions.
For a full technical breakdown of how to diagnose and fix Core Web Vitals issues, see my Core Web Vitals guide.
Google uses mobile-first indexing for all sites, which means the mobile version of your site is the primary version Google crawls and indexes. If your mobile and desktop sites serve different content, the mobile version must contain all of the content, structured data, and links that you want indexed. Any mobile-specific issues will affect rankings for desktop users too.
Structured data uses schema.org vocabulary to give search engines explicit information about the content on your pages. Correctly implemented schema markup can unlock rich results in search (star ratings, FAQ panels, sitelinks search boxes, and more), which improve click-through rates. Incorrect implementation can trigger manual actions or rich result disqualification.
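One way to avoid malformed JSON-LD is to build it as a data structure and serialise it, rather than hand-editing JSON in a template. A minimal Article sketch with placeholder values; validate real markup with Google's Rich Results Test before deploying:

```python
import json

# Minimal Article markup built as a dict so the output is always
# valid JSON. All field values below are placeholders.
article = {
    "@context": "https://schema.org",
    "@type": "Article",
    "headline": "Technical SEO Audit Checklist",
    "author": {"@type": "Person", "name": "Jane Author"},
    "datePublished": "2024-01-15",
}

snippet = '<script type="application/ld+json">' + json.dumps(article) + "</script>"
print(snippet)
```

The generated script tag goes in the page head or body; Google reads JSON-LD from either location.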
For sites targeting multiple languages or regions, hreflang tags tell Google which version of a page is intended for which language and country combination. Incorrectly implemented hreflang is one of the most common technical SEO issues on multinational sites and can result in the wrong language version appearing in search results for the wrong market.
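The most common hreflang failure is a missing return link: annotations must be reciprocal, so if the English page declares a German alternate, the German page must declare the English one back. A sketch over a hypothetical export of page-to-alternates mappings:

```python
# Hypothetical hreflang export: page -> {lang_code: alternate_url}.
# /de/ is missing the return link back to /en/.
HREFLANG = {
    "/en/": {"en": "/en/", "de": "/de/"},
    "/de/": {"de": "/de/"},
}

def missing_return_links(annotations: dict) -> list:
    """Flag alternates that do not link back to the page declaring them."""
    errors = []
    for page, alternates in annotations.items():
        for lang, alt in alternates.items():
            if alt != page and page not in annotations.get(alt, {}).values():
                errors.append((alt, "does not link back to", page))
    return errors

print(missing_return_links(HREFLANG))
```

Without the return link, Google may ignore the annotation entirely, which is why partial hreflang rollouts often fail silently.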
This section only applies to sites with international deployments. If your site targets only UK English speakers, skip to the next section.
HTTPS is a confirmed Google ranking signal and a basic requirement for any modern website. Browser security warnings on non-HTTPS pages create a significant trust barrier that deters visitors and signals poor maintenance to search engines. Beyond HTTPS, security headers improve protection against common web attacks and are increasingly checked by technical SEO tools as part of a technical audit.
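Checking security headers amounts to diffing a response's headers against an expected list. This sketch runs over a hypothetical response; in practice you would capture the headers with `curl -I https://example.com` or an HTTP client:

```python
# Hypothetical server response headers captured during an audit.
RESPONSE_HEADERS = {
    "Strict-Transport-Security": "max-age=31536000",
    "X-Content-Type-Options": "nosniff",
}

# Headers worth checking on every audited site.
EXPECTED = [
    "Strict-Transport-Security",  # enforce HTTPS on repeat visits
    "X-Content-Type-Options",     # block MIME-type sniffing
    "Content-Security-Policy",    # restrict script/resource origins
    "X-Frame-Options",            # prevent clickjacking via framing
]

missing = [h for h in EXPECTED if h not in RESPONSE_HEADERS]
print("missing security headers:", missing)
```

Missing headers are rarely a direct ranking issue, but they are cheap fixes that belong in the audit report alongside the HTTPS checks.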
A thorough technical SEO audit will almost always surface more issues than you can fix at once. The right prioritisation framework weighs impact (how much will fixing this improve organic visibility or traffic) against effort (how much developer or resource time does this require).
Crawlability and indexing issues are always priority one, because they prevent everything else from working. Broken links and server errors causing 5xx responses are priority two. Duplicate content and canonicalisation issues are priority three on most sites. Internal linking improvements and structured data typically fall later in the priority list unless they are causing specific measurable problems.
Document every issue you find, record the evidence (screenshot, URL, tool output), and assign an estimated impact and effort score. Present findings with a recommended fix order rather than a flat list. This makes it easier to get developer resource allocated and track progress over time.
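The impact-versus-effort framework above can be expressed as a simple sort: highest impact first, and within equal impact, lowest effort first. The issue names and scores below are illustrative:

```python
# Hypothetical audit findings, each scored 1-5 for impact and effort.
issues = [
    {"issue": "robots.txt blocks /services/", "impact": 5, "effort": 1},
    {"issue": "missing schema on blog posts", "impact": 2, "effort": 3},
    {"issue": "redirect chains from old redesign", "impact": 3, "effort": 2},
]

# Sort by impact descending, then effort ascending.
fix_order = sorted(issues, key=lambda i: (-i["impact"], i["effort"]))

for rank, item in enumerate(fix_order, 1):
    print(f"{rank}. {item['issue']} (impact {item['impact']}, effort {item['effort']})")
```

Presenting findings in this order, with the evidence attached to each row, is what turns an audit from a list of complaints into a development backlog.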
A technical SEO audit should check crawlability (robots.txt, redirect chains, crawl depth), indexing status (noindex directives, sitemap health, Coverage report in Search Console), site architecture (URL structure, internal linking, orphan pages), on-page elements (title tags, meta descriptions, heading structure), duplicate content and canonicalisation, broken links and server errors, page speed and Core Web Vitals, mobile usability, structured data implementation, and HTTPS and security headers. The priority order depends on which issues have the most material impact on the specific site.
A technical SEO checklist is a structured list of items to review when auditing the technical health of a website. It covers all the technical factors that affect how search engines crawl, index, and rank web pages. A good technical SEO checklist includes sections for crawlability, indexation, site architecture, internal linking, on-page elements, duplicate content, broken links, page speed, mobile SEO, structured data, and security. This page contains a full technical SEO audit checklist organised by category, free to use.
A technical SEO audit is a systematic evaluation of a website's technical infrastructure to identify issues that are preventing or limiting its visibility in organic search. Unlike a content audit or link audit, a technical SEO audit focuses on factors like how crawlers access pages, how pages are indexed, how the site architecture distributes link equity, and how the pages perform for users. It is typically the first step in any new SEO engagement and the foundation for all other SEO work.
The four pillars of SEO are technical SEO (site structure, crawlability, indexation, page speed), on-page SEO (content quality, keyword optimisation, page structure, meta data), off-page SEO (backlinks, brand mentions, external authority), and user experience (Core Web Vitals, mobile usability, page design and navigation). Technical SEO is foundational because it enables the other three pillars to function. A technically broken site limits what on-page and off-page work can achieve, regardless of how good the content and link profile are.
I conduct detailed technical SEO audits that identify the specific issues holding your site back, with prioritised fix recommendations your development team can act on.
Get In Touch