Why AI personalisation breaks accessibility

Published 23 February 2026

Video transcript

There’s a peculiar pattern that happens when someone becomes a big name in a field. They reach thought leader status, and suddenly everything is broken—and they have the bold new fix.

Cast your mind back to 2024. Usability pioneer Jakob Nielsen published a deliberately provocative post claiming accessibility has failed, and that generative UI is the way forward.

If you’ve been anywhere near accessibility, you’ll know why that landed badly. But with more than a year of distance, the question isn’t whether the post was provocative.

It’s whether he was right about the underlying problem, even if the framing—and the solution—were off.

Nielsen’s core claim is stark: after thirty years of accessibility work, disabled users still don’t have a usable experience.

Not merely a technically accessible one, but “usable” in the classic sense—easy to learn and productive.

He says that accessibility has failed as a method of making computers usable for disabled users, judged by the same standards applied to any user.

His argument becomes even sharper when he focuses on blind users. He describes the screen reader experience as a forced linearisation of a two-dimensional interface: sighted users can scan the page visually, pick out elements of interest, and move directly to them; blind users, by contrast, must listen to a stream of audio in sequence.

It’s this indirection, he says, that guarantees an experience that will always be worse than visual interaction.

The problem with this argument is that it treats accessibility itself as the failure, rather than the way organisations consistently fail to implement it well. Accessibility is not the thing that failed.

What fails, repeatedly, is adoption: building without accessibility in mind, bolting it on late, and treating it as a compliance scramble rather than a design and engineering discipline grounded in real usage.

This is precisely why the accessibility community keeps emphasising “shift left”—doing accessibility earlier in design and development, where structure, semantics and interaction patterns are still malleable. When teams build first and panic later, accessibility becomes an emergency patch.

Nielsen suggests that doing accessibility properly would require testing with every type of disability, which he says is effectively impossible. He again critiques screen readers as an inherently inferior mode of interaction. Coming from a UX pioneer, these claims are jarring—not because they acknowledge genuine usability problems, but because they imply those problems are inevitable and intrinsic to accessibility.

But beneath the inflammatory post there is a thread worth discussing: could interfaces be generated to better match individual requirements? That potential is what makes the conversation interesting.

Nielsen’s proposed solution is generative UI: instead of one interface for everyone, the system would generate interfaces tailored to each user and potentially regenerate them dynamically over time. For blind users in particular, he imagines an audio-first interface designed for the person, rather than a screen reader translating a visual UI.

The appeal is obvious. But the moment you say “tailored to your needs,” you run into a question that can’t be avoided: how does the system know what you need? It isn’t as simple as “detect assistive tech and adapt.” Assistive technologies are not consistently detectable.

Many disabled people do not use assistive technology, some use it only sometimes, and many use combinations. Needs also change with context: fatigue, stress, environment, device constraints, temporary injury, or the specific task at hand.

So a generative system faces a choice. It can ask users to disclose personal information about disability and access needs—something many people won’t want to share, and something they should not be forced to reveal.

Or it can guess from behavioural signals: keyboard usage, mouse movement, scrolling patterns, pauses, and other interaction traces.

That is where it gets dicey. Guessing introduces privacy risks, and it creates a practical failure mode that is easy to overlook: if the system guesses wrong, it doesn’t just miss; it actively makes the experience worse.

Consider a simple example. The system decides that heavy keyboard use indicates a preference—or need—for an audio-first interface.

But the user isn’t blind; they’re a power user, or they’re using a laptop with the trackpad disabled, or they have a temporary injury.

The system has now taken away an interface the person chose and replaced it with one they didn’t ask for. The core risk is that personalisation becomes something that happens to the user rather than something the user controls.
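To make that failure mode concrete, here is a minimal, hypothetical sketch in TypeScript of that kind of behavioural heuristic. The names and the 90% keyboard threshold are invented for illustration; the point is that the same signal fits a blind screen reader user and a sighted keyboard power user equally well.

```typescript
// Hypothetical behavioural-signal heuristic, sketched only to show the
// failure mode described above; no real product or API is implied.
interface InteractionTrace {
  keyboardEvents: number; // key presses observed in the session
  pointerEvents: number;  // mouse or trackpad events observed
}

type InferredMode = "visual" | "audio-first";

function guessMode(trace: InteractionTrace): InferredMode {
  const total = trace.keyboardEvents + trace.pointerEvents;
  if (total === 0) return "visual";
  // The naive assumption: mostly-keyboard interaction implies a screen
  // reader user who would "prefer" an audio-first interface.
  return trace.keyboardEvents / total > 0.9 ? "audio-first" : "visual";
}

// A sighted power user who rarely touches the mouse trips the same rule:
const powerUser: InteractionTrace = { keyboardEvents: 480, pointerEvents: 12 };
console.log(guessMode(powerUser)); // "audio-first", the wrong guess, applied silently
```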

There is another risk in the promise of generative UI. Much of the hype assumes AI will generate better experiences for disabled users simply because an algorithm decided it was “more appropriate.”

But AI models learn patterns from the web as it exists today—and the web is messy. It’s full of repeated accessibility failures at massive scale: missing labels, broken focus order, inaccessible custom components, and interfaces built without semantic structure. If generative UI draws from the same ecosystem of patterns, it risks automating those failures faster and distributing them more widely.

This is why user choice is the missing ingredient in most discussions about AI and accessibility. AI can help accessibility, and it will get better at helping.

But letting AI decide what experience someone receives based on weak signals is the wrong direction. Some users will want customised interfaces; others won’t.

Many will want them in some contexts and not others. A one-size-fits-all personalisation engine simply replaces one accessibility problem with another.

A safer approach treats AI as optional enhancement rather than replacement. Changes should be reversible.

Users should be able to control preferences. Systems should be transparent about what changed and why.

And guardrails should ensure the basics never regress: semantic structure, keyboard support, predictable interaction patterns, and testing with real users.
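What that could look like in practice: a minimal sketch, with invented names like PersonalisationStore, in which adaptations are suggested rather than imposed, applied only with consent, reversible with one call, and logged so the user can always see what changed and why. The baseline interface, with its semantic structure and keyboard support, is never touched; adaptations only layer on top.

```typescript
// Sketch of user-controlled personalisation: opt-in, reversible, transparent.
type Adaptation = "larger-text" | "reduced-motion" | "simplified-layout";

interface AppliedChange {
  adaptation: Adaptation;
  reason: string;   // shown to the user, never hidden
  appliedAt: Date;
}

class PersonalisationStore {
  private active = new Set<Adaptation>();
  private log: AppliedChange[] = [];

  // AI (or anything else) may only *suggest*; nothing applies without consent.
  suggest(adaptation: Adaptation, reason: string): AppliedChange {
    return { adaptation, reason, appliedAt: new Date() };
  }

  accept(change: AppliedChange): void {
    this.active.add(change.adaptation);
    this.log.push(change);
  }

  // Every change is reversible with a single call.
  revert(adaptation: Adaptation): void {
    this.active.delete(adaptation);
  }

  // Transparency: the user can always inspect what changed and why.
  history(): readonly AppliedChange[] {
    return this.log;
  }
}
```

Nothing in this sketch requires AI at all; the same store works whether the suggestion comes from a model or from a settings page, which is exactly the point.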

The biggest risk is not that people explore AI. It’s that the narrative “AI will solve accessibility” becomes a permission slip for leadership: accessibility is too hard, so we’ll wait for automation to fix it.

That thinking degrades real accessibility work, stalls momentum, and shifts the cost of experimentation onto disabled users yet again.

Accessibility is not a failed project. Accessibility is what happens when products are built with structure, predictability, testing, and real users in mind.

The uncomfortable truth is simpler: we already know what to do. We just keep not doing it.
