SEO Terms
SEO terms, each with a short description:
• Errors – Identify broken links and faulty responses, including no responses, 4XX client errors, and 5XX server errors.
• Redirects – Analyze all types of redirects, including permanent, temporary, JavaScript redirects, and meta refreshes.
• Blocked URLs – Audit URLs restricted by the robots.txt protocol to ensure proper accessibility (a robots.txt check is sketched after this list).
• Blocked Resources – Assess blocked resources in rendering mode to optimize site functionality and performance.
• External Links – Examine all external links, their status codes, and the pages they originate from.
• Security – Uncover vulnerabilities like insecure pages, mixed content, insecure forms, and missing security headers (a mixed-content check is sketched after this list).
• URL Issues – Spot problems such as non-ASCII characters, underscores, uppercase letters, parameters, or excessively long URLs.
• Duplicate Pages – Detect exact duplicates via hashing and near-duplicates via content similarity (a toy similarity check appears after this list).
• Page Titles – Highlight missing, duplicate, long, short, or multiple title tags.
• Meta Description – Find missing, duplicate, long, short, or multiple meta descriptions for optimization.
• Meta Keywords – Collect meta keywords, mainly for reference, since major search engines largely ignore them; some regional engines may still use them.
• File Size – Evaluate the file size of pages and images to enhance site speed.
• Response Time – Monitor page response times to optimize loading speeds (measured in a sketch after this list).
• Last-Modified Header – Review the HTTP header to see when pages were last updated (also shown in that sketch).
• Crawl Depth – Determine how deeply a URL is buried within the site’s structure, i.e. how many clicks it sits from the start page (a depth calculation is sketched after this list).
• Word Count – Measure the content length of every page for SEO improvements.
• H1 – Identify missing, duplicate, long, short, or multiple H1 heading tags.
• H2 – Review H2 headings for any missing, duplicate, or poorly formatted tags.
• Meta Robots – Audit directives like index, noindex, follow, nofollow, noarchive, and nosnippet.
• Meta Refresh – Review refresh delays and target pages for enhanced usability.
• Canonicals – Check canonical link elements and HTTP headers to avoid duplicate content (see the sketch after this list).
• X-Robots-Tag – Examine directives issued in the HTTP header for proper indexing.
• Pagination – Validate attributes like rel="next" and rel="prev" to improve paginated content.
• Follow & Nofollow – Inspect meta nofollow tags and link attributes.
• Redirect Chains – Identify and address redirect chains and loops (see the sketch after this list).
• hreflang Attributes – Audit language codes, missing confirmation links, and canonical issues for multilingual SEO (an hreflang extraction is sketched after this list).
• Inlinks – View all internal links to a URL, including anchor text and follow/nofollow attributes.
• Outlinks – Review all links pointing from a URL, including linked resources.
• Anchor Text – Audit all link text, including alt text from linked images.
• Rendering – Analyze JavaScript frameworks like AngularJS and React by crawling rendered HTML.
• AJAX – Assess adherence to Google’s deprecated AJAX Crawling Scheme.
• Images – Identify images with missing alt text, oversized files, or alt text exceeding 100 characters (an alt-text audit is sketched after this list).
• User-Agent Switcher – Crawl as various bots or custom user agents, such as Googlebot or Bingbot (see the combined sketch after this list).
• Custom HTTP Headers – Configure header values for requests, like Accept-Language or cookie settings (covered in the same sketch).
• Custom Source Code Search – Locate specific elements in a website’s source code, such as tracking codes or text.
• Custom Extraction – Scrape data from HTML using XPath, CSS selectors, or regex (an XPath example appears after this list).
• Google Analytics Integration – Sync with the Google Analytics API to access user and conversion data during crawls.
• Google Search Console Integration – Pull performance and index status data via the Search Analytics and URL Inspection APIs.
• PageSpeed Insights Integration – Retrieve Lighthouse metrics, diagnostics, and Chrome User Experience Report data at scale.
• External Link Metrics – Incorporate metrics from Majestic, Ahrefs, and Moz APIs for content audits or link profiling.
• XML Sitemap Generation – Create comprehensive XML and image sitemaps for site indexing (a minimal sitemap writer is sketched after this list).
• Custom robots.txt – Download, edit, and test robots.txt files directly.
• Rendered Screenshots – Capture and analyze rendered pages during a crawl.
• Store & View HTML & Rendered HTML – Examine the DOM and compare raw vs. rendered HTML.
• AMP Crawling & Validation – Crawl AMP pages and validate them with the integrated AMP Validator.
• XML Sitemap Analysis – Audit XML sitemaps to identify non-indexable, orphaned, or missing pages.
• Visualizations – Map internal linking and URL structure using tree graphs and force-directed diagrams.
• Structured Data & Validation – Extract and validate structured data against Schema.org standards (a JSON-LD extraction is sketched after this list).
• Spelling & Grammar – Check website content for spelling and grammar issues in over 25 languages.
• Crawl Comparison – Compare crawl data to detect structural changes, track progress, and benchmark performance.
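Code Sketches (Python)
The sketches below illustrate, in plain Python, how several of the checks above can be approximated. They are simplified illustrations, not how any particular crawler implements these features; all URLs, sample HTML, and data are placeholders.

Blocked URLs – a minimal robots.txt check using the standard library's urllib.robotparser; the example.com URLs are assumptions:

```python
from urllib.robotparser import RobotFileParser

parser = RobotFileParser("https://example.com/robots.txt")  # placeholder URL
parser.read()  # fetch and parse the live robots.txt file

for url in ["https://example.com/", "https://example.com/private/page"]:
    allowed = parser.can_fetch("Googlebot", url)
    print(url, "->", "allowed" if allowed else "blocked")
```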
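Security – a toy mixed-content check that flags http:// resources referenced from a page that should be fully HTTPS; the sample HTML is an assumption:

```python
from bs4 import BeautifulSoup

html = (
    '<img src="http://example.com/logo.png">'
    '<script src="https://example.com/app.js"></script>'
)
soup = BeautifulSoup(html, "html.parser")
for tag in soup.find_all(src=True):  # any element with a src attribute
    if tag["src"].startswith("http://"):
        print("mixed content:", tag.name, tag["src"])
```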
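Duplicate Pages – a toy near-duplicate check using word shingles and Jaccard similarity; production crawlers use more scalable hashing such as MinHash or SimHash, and the sample strings are placeholders:

```python
def shingles(text, k=5):
    """Return the set of k-word shingles for a piece of text."""
    words = text.lower().split()
    return {tuple(words[i:i + k]) for i in range(max(len(words) - k + 1, 1))}

def jaccard(a, b):
    """Jaccard similarity: shared shingles over total distinct shingles."""
    return len(a & b) / len(a | b) if a | b else 0.0

page_a = "this guide explains how to audit technical seo issues on large sites"
page_b = "this guide explains how to audit technical seo problems on large sites"
print(round(jaccard(shingles(page_a), shingles(page_b)), 2))
```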
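Response Time and Last-Modified Header – a minimal sketch with the requests library; the URL is a placeholder, and resp.elapsed measures time until the response headers arrive rather than full page load:

```python
import requests

resp = requests.get("https://example.com/", timeout=10)  # placeholder URL
print("response time:", resp.elapsed.total_seconds(), "s")
print("Last-Modified:", resp.headers.get("Last-Modified"))
```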
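Crawl Depth – a breadth-first traversal that records how many clicks each page sits from the home page; the link graph here is made-up placeholder data:

```python
from collections import deque

links = {  # page -> internal links found on it (placeholder data)
    "/": ["/about", "/blog"],
    "/about": [],
    "/blog": ["/blog/post-1"],
    "/blog/post-1": [],
}
depth = {"/": 0}
queue = deque(["/"])
while queue:
    page = queue.popleft()
    for nxt in links.get(page, []):
        if nxt not in depth:  # first visit yields the shortest depth
            depth[nxt] = depth[page] + 1
            queue.append(nxt)
print(depth)  # {'/': 0, '/about': 1, '/blog': 1, '/blog/post-1': 2}
```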
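Canonicals – a sketch that reads the canonical link element from the HTML and the Link HTTP header, using requests and BeautifulSoup; the URL is a placeholder:

```python
import requests
from bs4 import BeautifulSoup

resp = requests.get("https://example.com/page", timeout=10)  # placeholder URL
soup = BeautifulSoup(resp.text, "html.parser")

link = soup.find("link", rel="canonical")
print("HTML canonical:", link["href"] if link else None)
# Canonicals may also be declared in an HTTP Link header.
print("Link header:", resp.headers.get("Link"))
```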
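Redirect Chains – requests records every intermediate hop in resp.history, which makes chains and their lengths easy to surface; the URL is a placeholder:

```python
import requests

resp = requests.get("http://example.com/old-page", allow_redirects=True, timeout=10)
for hop in resp.history:  # one entry per intermediate redirect
    print(hop.status_code, hop.url, "->", hop.headers.get("Location"))
print(resp.status_code, resp.url)  # final destination
if len(resp.history) > 1:
    print("redirect chain of", len(resp.history), "hops")
```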
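hreflang Attributes – a sketch that lists the alternate-language link elements on a page with BeautifulSoup; the sample HTML is an assumption, and a full audit would also fetch each alternate URL and confirm it links back:

```python
from bs4 import BeautifulSoup

html = """
<link rel="alternate" hreflang="en" href="https://example.com/en/">
<link rel="alternate" hreflang="de" href="https://example.com/de/">
"""
soup = BeautifulSoup(html, "html.parser")
for link in soup.find_all("link", rel="alternate", hreflang=True):
    print(link["hreflang"], link["href"])
```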
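Images – a toy alt-text audit flagging missing or empty alt attributes and alt text over 100 characters; the sample HTML is an assumption:

```python
from bs4 import BeautifulSoup

html = (
    '<img src="a.png">'
    '<img src="b.png" alt="">'
    '<img src="c.png" alt="A concise description">'
)
soup = BeautifulSoup(html, "html.parser")
for img in soup.find_all("img"):
    alt = img.get("alt")
    if alt is None or not alt.strip():
        print("missing or empty alt:", img.get("src"))
    elif len(alt) > 100:
        print("alt text over 100 characters:", img.get("src"))
```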
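User-Agent Switcher and Custom HTTP Headers – with requests, both come down to setting request headers; the user-agent string and Accept-Language value here are illustrative:

```python
import requests

headers = {
    # Googlebot's published user-agent string, used as an example identity
    "User-Agent": "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)",
    "Accept-Language": "en-GB",
}
resp = requests.get("https://example.com/", headers=headers, timeout=10)
print(resp.status_code, resp.headers.get("Content-Type"))
```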
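Custom Extraction – an XPath-based scrape with lxml; the URL and the choice of //h1 are placeholders, and CSS selectors or regex would work along the same lines:

```python
import requests
from lxml import html

resp = requests.get("https://example.com/", timeout=10)  # placeholder URL
tree = html.fromstring(resp.content)
for heading in tree.xpath("//h1/text()"):  # swap in any XPath expression
    print(heading.strip())
```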
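XML Sitemap Generation – a minimal sitemap writer using the standard library's xml.etree.ElementTree; the URL list is placeholder data:

```python
import xml.etree.ElementTree as ET

urls = ["https://example.com/", "https://example.com/about"]  # placeholder URLs
urlset = ET.Element("urlset", xmlns="http://www.sitemaps.org/schemas/sitemap/0.9")
for u in urls:
    url_el = ET.SubElement(urlset, "url")       # <url>
    ET.SubElement(url_el, "loc").text = u       #   <loc>…</loc>
ET.ElementTree(urlset).write("sitemap.xml", encoding="utf-8", xml_declaration=True)
```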
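Structured Data & Validation – a sketch that extracts JSON-LD blocks for later validation against Schema.org definitions; the sample markup is an assumption, and full validation requires a dedicated validator:

```python
import json
from bs4 import BeautifulSoup

html = (
    '<script type="application/ld+json">'
    '{"@context": "https://schema.org", "@type": "Article", "headline": "Hello"}'
    '</script>'
)
soup = BeautifulSoup(html, "html.parser")
for script in soup.find_all("script", type="application/ld+json"):
    data = json.loads(script.string)  # parse the embedded JSON-LD payload
    print(data.get("@type"), data)
```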