I am very curious and want to help make Linux more accessible.
I've been talking with some people and gathered a few insights:
- Everything-as-text, like a reader-mode-only browser or a plain terminal, works best for TTS engines.
- TTS engines involve trade-offs: some sound really good but need a lot of resources, others sound worse but are lightweight.
- TTS needs to be very responsive in some cases, to keep up with the speed at which users navigate and listen.
- Some apps are better than others, but most apps probably don't really suit blind people, as the whole GUI concept makes little sense without sight.
I am really curious: what works best for you? Braille vs. voice for output, and voice vs. braille vs. gestures for input?
Which apps do you find best? How do you browse the web and find media to listen to? How do you use document editors, and what purpose do they serve for you?
Thanks a lot!
Or perhaps it's better to rephrase as: “the first priority should be a system that's as reliable for blind people as it is for sighted people”. In practice, that means that whenever text is printed to the screen, there needs to be a way for a blind person to know about it. Text-to-speech systems like espeak can run in kilobytes of memory and storage. The primary problem is sound support.
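To make the footprint point concrete, here is a minimal sketch of routing text to espeak-ng through speech-dispatcher's Python bindings (the python3-speechd package on most distros); it assumes the speech-dispatcher daemon is running with espeak-ng available as an output module:

```python
# Minimal sketch: hand text to a lightweight TTS engine via
# speech-dispatcher. Assumes the daemon is running and the
# python3-speechd bindings are installed.
import speechd

client = speechd.SSIPClient("demo")      # register with the daemon
client.set_output_module("espeak-ng")    # the small-footprint engine
client.set_rate(40)                      # -100..100; experienced users go much faster
client.speak("Anything printed to the screen could be routed here.")
client.close()
```

Orca, the GNOME screen reader, talks to this same speech-dispatcher layer, which is why making that layer, and the sound stack beneath it, reliable matters so much.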
The second problem is maintaining this system. Right now, Linux is caught in a vicious circle: the system isn't accessible enough for a blind person to use, so why would a blind person put in a bunch of work on it? The NVDA screen reader on Windows is an open source screen reader created entirely by blind users. But that only works because Windows is accessible enough that the tools blind people need to create and maintain software are usable by us. What are the tools for creating these kinds of systems like on Linux? You mentioned CI tools. Currently, the leading providers of those tools don't offer decent screen reader access, as far as I am aware. So the tools for blind people on Linux would need to be built and maintained by sighted people. From a practical standpoint, that just isn't going to happen. Open source only works when people are scratching their own itches. Its power is that people can build solutions for themselves. In the long term, an accessible Linux built for blind people by sighted people just isn't sustainable.
This is a very good point. So the basic issues should be fixed first, creating the foundation for blind people to improve things themselves.
For example, espeak would need genuinely understandable voices, sound would need to keep working even when everything else breaks, and Wayland support would need to work somehow.
But to be honest, I have no idea how a blind person works on adding speech/braille support to something that isn't accessible yet. How do they even know it's there?
It happens in a few ways. First, by examining the source code when it's available; however, blind programmers talented enough to do this generally have paid, closed-source work keeping them busy. Second, when a platform has accessibility APIs, it's at least easy to get the outline of a system and determine what's not working. Third, of course, commercial grants for paid work. In the case of Windows, many corporations pay a lot of money to make sure Windows is accessible, so it can be used in schools, governments, and workplaces. That kind of money just hasn't been invested in the Linux desktop ecosystem. As well, in a centralized closed-source system, it's easier to force everyone to follow various coding requirements. In the case of Linux, who has the power to push through the infrastructural changes we're talking about? Oracle/GNOME, I guess? But there's a bunch of work the distros also need to do. Unlike with Apple or Microsoft, it's not just a matter of convincing the “CEO of Linux” (not a real position) that accessibility matters and she should invest in it.
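To make that second point concrete, here's a rough sketch (assuming the python3-pyatspi bindings and a running AT-SPI registry, which most GNOME sessions have) of getting that outline by walking the accessibility tree that toolkits like GTK export:

```python
# Rough sketch: walk the AT-SPI accessibility tree to see what a
# screen reader can "see". Assumes python3-pyatspi is installed
# and the AT-SPI registry daemon is running.
import pyatspi

def dump(node, depth=0):
    # The role ("push button", "text", ...) and the accessible name
    # are the raw material a screen reader turns into speech/braille.
    print("  " * depth + f"{node.getRoleName()}: {node.name!r}")
    for i in range(node.childCount):
        dump(node.getChildAtIndex(i), depth + 1)

desktop = pyatspi.Registry.getDesktop(0)  # root: one child per app
for i in range(desktop.childCount):
    dump(desktop.getChildAtIndex(i))
```

This is roughly the tree a screen reader like Orca walks. When an application doesn't export its widgets here, nothing downstream can speak or braille them, which is exactly the “how do they know it's even there?” problem.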
Very true.