Foresera
    March 2026 · 8 min read

    Why Your CMS Being Accessible Doesn't Mean Your Content Is


    Matt

    Founder, Foresera

    CMS vendors have done a genuinely solid job on accessibility over the past decade. Modern platforms generate semantic HTML, handle keyboard navigation, implement ARIA roles on interactive components, and maintain contrast ratios in their default themes. If you're running a government website on a major CMS, you've probably been told — correctly — that the template is accessible. What that certification covers, though, is the container. It covers the navigation, the page structure, the search form, the footer. It does not cover the content your team publishes through it.

    This distinction becomes significant under WCAG 2.1 Level AA and the DOJ Final Rule (28 CFR Part 35), which applies to state and local governments. The rule covers web content and digital documents equally. A certified accessible CMS template plus a library of inaccessible linked PDFs is not a compliant website. It's half of one.

    What "accessible template" actually covers

    A CMS accessibility certification — whether from the vendor or a third-party audit firm — typically covers the structural HTML output of the platform itself. The focus is on:

    • The navigation bar and its keyboard operability
    • Page templates and their heading hierarchy
    • Interactive components: dropdowns, modals, accordions, search fields
    • Color contrast in the default theme
    • ARIA landmark regions and labeling
    • Focus management and skip links

    These are real accessibility concerns, and the work to address them is not trivial. But notice what's absent from that list: the text content editors write into pages, the images staff upload without alt text, and — most significantly — the documents linked from those pages.

    WCAG 2.1 AA success criteria apply to all content delivered through a website, regardless of file format. SC 1.1.1 (Non-text Content), SC 1.3.1 (Info and Relationships), SC 1.3.2 (Meaningful Sequence), and SC 2.1.1 (Keyboard Accessible) are not HTML-only requirements. They apply equally to a PDF budget report, a scanned meeting agenda, an Excel spreadsheet of infrastructure spending, and a Word document posted as a public comment template.

    What happens when documents get published

    The typical path for a government document going from a staff member's desktop to a public website involves almost no accessibility intervention. Someone creates a report in Word or exports it from a financial system, saves it as PDF, and attaches it to a page through the CMS file manager. The CMS creates a link. The document is now public.

    What didn't happen: the document's tag structure wasn't checked. The reading order wasn't verified. Alt text wasn't written for any of the charts or photographs. The document title in the file metadata is still whatever the authoring application assigned by default — often a filename, a template name, or empty. Form fields, if any, aren't labeled.

    Each of those omissions is a WCAG failure that affects real people. Someone using a screen reader navigating a PDF budget report with no heading tags (SC 1.3.1) experiences a wall of text with no way to jump between sections. Someone using a braille display reading a scanned meeting agenda that lacks OCR can't access the content at all — the "document" is a raster image. Someone using keyboard-only navigation through an unlabeled permit application form (SC 4.1.2) may not know which field they're in or what it expects.

    These are not edge cases. They're standard outcomes for documents published through standard workflows.

    The scanned document problem

    Scanned documents deserve particular attention because they fail in a way that's invisible to a sighted reviewer. A scanned PDF looks like a document. It displays pages of text and graphics. But without optical character recognition (OCR) and subsequent tagging, the PDF contains no machine-readable text at all. It's a series of images of pages. Screen readers, text-to-speech tools, braille displays, and keyword search all find nothing — because there's nothing to find.

    Government archives are full of scanned documents. Meeting minutes from a decade ago, historical ordinances, permit records — many were digitized by scanning paper originals and uploading the resulting image PDFs. These documents fail SC 1.1.1 (the entire document is an untagged image), SC 1.3.1, SC 1.3.2, and effectively every other structural criterion. They require OCR as a prerequisite to remediation, not just structural repair.

    Website accessibility scanners don't catch this. A scanner evaluating a page that links to a scanned PDF sees a valid anchor element pointing to a PDF file. It may check whether the link text is descriptive. It doesn't open the PDF and test whether the content inside is accessible. That gap is not a limitation of any particular scanner — it's a category limitation. Scanners evaluate HTML, not the binary contents of linked files.
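    The image-only failure mode described above is detectable at the document level. Below is a minimal sketch of the heuristic such a check could apply, assuming page text has already been extracted with a PDF library (pypdf, for example); the function name and threshold are illustrative, not a reference implementation.

    ```python
    def looks_scanned(page_texts, min_chars_per_page=25):
        """Heuristic: a scanned (image-only) PDF yields little or no
        extractable text. `page_texts` is a list of strings, one per
        page, as returned by a PDF text-extraction library."""
        if not page_texts:
            return True
        stripped = [t.strip() for t in page_texts]
        empty_pages = sum(1 for t in stripped if len(t) < min_chars_per_page)
        # Flag the document if most pages carry essentially no text layer.
        return empty_pages / len(stripped) > 0.8

    # A tagged, text-based report extracts real text on every page:
    print(looks_scanned(["Budget Summary\nSection 1: Revenue..." * 5] * 10))  # False
    # A raster scan extracts nothing:
    print(looks_scanned(["", " ", ""]))  # True
    ```

    A document that trips this check needs OCR before any structural remediation can begin, which is exactly the ordering described above.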

    The filename problem

    Related to scanned documents but worth naming separately: hashed or machine-generated filenames create accessibility and usability problems of their own. A link that reads "Download Document" pointing to a file named 5f3e9a2c8b14d.pdf fails SC 2.4.4 (Link Purpose in Context) and provides no useful information to someone using a screen reader navigating by links. The document title stored in the PDF metadata — often auto-populated by the authoring system — is equally unhelpful.

    This pattern is common in systems that auto-generate files: financial reporting platforms, permitting systems, agenda management software. The software produces a valid PDF, but accessibility metadata is either absent or carries the database key rather than a human-readable title.
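    Both halves of this pattern — generic link text and hash-like filenames — are cheap to flag in an inventory pass. The sketch below is illustrative only: the word list and the hex-run heuristic are assumptions, not a standard, and SC 2.4.4 ultimately requires human judgment about link purpose in context.

    ```python
    import re

    # Assumed list of non-descriptive link phrases; extend per site.
    GENERIC_LINK_TEXT = {"download", "download document", "click here",
                         "here", "pdf", "document", "link"}

    def flag_link(link_text, filename):
        """Return a list of issues for one document link."""
        issues = []
        if link_text.strip().lower() in GENERIC_LINK_TEXT:
            issues.append("generic link text")
        # Hash-like names: a long run of hex digits with no real words.
        stem = filename.rsplit(".", 1)[0]
        if re.fullmatch(r"[0-9a-f]{8,}", stem.lower()):
            issues.append("machine-generated filename")
        return issues

    print(flag_link("Download Document", "5f3e9a2c8b14d.pdf"))
    # → ['generic link text', 'machine-generated filename']
    print(flag_link("FY2026 Adopted Budget (PDF)", "fy2026-budget.pdf"))
    # → []
    ```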

    Scale: what a typical city website actually contains

    It's easy to underestimate how many documents accumulate on a government website over years of operation. A mid-sized city website — one serving 50,000 to 250,000 residents — commonly links to several hundred to several thousand PDF documents across departments: public works, planning, finance, city council, parks and recreation, utilities, human resources.

    Each department has its own publication cadence. Planning publishes environmental impact reports, variance notices, general plan amendments. Finance publishes budget documents, audit reports, financial statements. City council publishes agendas, minutes, staff reports for each meeting. Multiply the number of meetings per year by the number of years of archives that remain online, and the document inventory grows quickly.

    The website scanner that your IT department runs on a quarterly basis — the one that produces a report with your overall accessibility score — is not touching any of those documents. It's evaluating the HTML of each page it crawls. If the page renders without errors, the linked PDF is not examined. The document inventory remains, for practical purposes, completely outside the scope of the scan.
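    Building the missing document inventory starts with something a page-level scanner already does: walking the HTML and collecting anchors. The sketch below uses Python's standard-library HTML parser on a single page; a real inventory would crawl every page, resolve relative URLs, and de-duplicate. The sample markup is invented for illustration.

    ```python
    from html.parser import HTMLParser

    class PdfLinkCollector(HTMLParser):
        """Collect hrefs that point at PDF files on one HTML page."""
        def __init__(self):
            super().__init__()
            self.pdf_links = []

        def handle_starttag(self, tag, attrs):
            if tag != "a":
                return
            href = dict(attrs).get("href") or ""
            # Ignore any query string when checking the extension.
            if href.lower().split("?")[0].endswith(".pdf"):
                self.pdf_links.append(href)

    page = """<ul>
      <li><a href="/finance/fy2026-budget.pdf">FY2026 Budget</a></li>
      <li><a href="/council/agenda">Agenda page</a></li>
      <li><a href="/uploads/5f3e9a2c8b14d.pdf?v=2">Download</a></li>
    </ul>"""

    collector = PdfLinkCollector()
    collector.feed(page)
    print(collector.pdf_links)
    # → ['/finance/fy2026-budget.pdf', '/uploads/5f3e9a2c8b14d.pdf?v=2']
    ```

    The point of the exercise is the list itself: each collected URL is a document the quarterly HTML scan never opened.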

    Closing the gap: where to start

    Understanding the scope of the problem doesn't require addressing all of it at once. The most productive approach is to close the gap methodically, starting where impact is highest.

    The five disability categories commonly used in accessibility work — visual, auditory, motor, cognitive, and speech — map to different assistive technologies: screen readers and braille displays for visual, text-to-speech and captioning for auditory, keyboard navigation and switch access for motor, plain language and structure for cognitive, and voice input software for speech. Document remediation that addresses all five categories requires attention to structure (SC 1.3.1, SC 1.3.2), text alternatives (SC 1.1.1), contrast (SC 1.4.3), form labeling (SC 4.1.2), and language identification (SC 3.1.1). No single fix covers all categories — which is why a genuine remediation process is more involved than running a single automated pass.

    A practical starting point for most organizations is to identify the ten to fifteen documents that carry the highest public traffic and the most significant barriers to participation. These are often:

    • Permit and license application forms
    • Public comment forms and templates
    • Current-year budget summaries
    • Meeting agendas for upcoming or recent council meetings
    • Notices that carry legal or service implications

    Remediating these documents fully — not just marking them as reviewed, but producing audit reports and conformance documentation — creates a foundation. It demonstrates a systematic process. It also builds internal familiarity with what remediation actually involves, which makes it easier to scale the process to a broader backlog over time.

    The accessible CMS template is a real achievement. It's just not the whole job. The documents your team publishes through it carry the same legal requirements as the template itself — and in many cases, they represent the most significant accessibility barriers on the entire site.
