Mudlet & screenreaders

Heiko
Site Admin
Posts: 1548
Joined: Wed Mar 11, 2009 6:26 pm

Re: Mudlet & screenreaders

Post by Heiko »

Don't press any keys until SAPI is ready and Mudlet starts to read the game text. Then you can use the cursor up and down keys on the left side of the num pad to scroll through the game text, skip lines, etc. Control+Return forces the cursor to the end of the buffer immediately. Control+cursor down automatically reads everything from the current line to the end of the buffer.

Important: the focus stays in the game text window at all times unless you switch windows with F1, F2 or F3 to get to the script editor console or the text editor. Note that you currently must use these keys to switch windows; if you switch windows by other means, Mudlet doesn't work properly, as this is an alpha version. Every window has two text entry fields, a text field that holds the game text and a command entry field that holds the user command, but these two fields are treated as a single entry field. Note that the game text field doesn't display any text: this is an audio-only program. You can control the audio read cursor, and you can type text and send it to the game as a command by pressing Return. There's command completion as well, but it isn't complete as far as I remember. NVDA is only needed for the menu bar and the text editor in case you want to write any scripts; it is not needed in actual game play.

SAPI is being used because NVDA doesn't have a proper API. It only offers functions to add text to a speech buffer and to clear that buffer, but no functions to get any information on the read cursor position within the speech buffer. Consequently, all you can do is jam every newly arriving line into NVDA's speech buffer, as Mushclient does. This makes it impossible to skip individual lines: either you hear everything or nothing. You can only skip text in Mushclient by jumping to the end of the buffer, which forces a read stop and a reset of NVDA's speech buffer, and then moving the cursor backwards, effectively reading from bottom to top. That may be OK for long-time hardcore players who only need to read the first line of a paragraph, but it is not enjoyable for anybody else. It also leads to a myriad of other issues, like line duplications, because Mushclient essentially doesn't interact with NVDA in any way other than posting newly arriving lines and flushing the speech buffer.
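For illustration, here is roughly everything the NVDA controller client offers (a minimal sketch, not Mudlet code; the DLL name and the dynamic-loading style are assumptions, but speak/cancel is essentially the whole interface, which is why the read cursor position cannot be tracked):

```cpp
// Minimal sketch of the NVDA controller client interface described above.
// Build assumption: the 32-bit client DLL is named nvdaControllerClient32.dll
// and sits next to the executable.
#include <windows.h>
#include <iostream>

typedef long(__stdcall *testIfRunning_t)();
typedef long(__stdcall *speakText_t)(const wchar_t *text);
typedef long(__stdcall *cancelSpeech_t)();

int main()
{
    HMODULE nvda = LoadLibraryW(L"nvdaControllerClient32.dll");
    if (!nvda) {
        std::wcerr << L"NVDA controller client DLL not found\n";
        return 1;
    }
    auto testIfRunning = reinterpret_cast<testIfRunning_t>(GetProcAddress(nvda, "nvdaController_testIfRunning"));
    auto speakText = reinterpret_cast<speakText_t>(GetProcAddress(nvda, "nvdaController_speakText"));
    auto cancelSpeech = reinterpret_cast<cancelSpeech_t>(GetProcAddress(nvda, "nvdaController_cancelSpeech"));

    if (!testIfRunning || !speakText || !cancelSpeech || testIfRunning() != 0) {
        std::wcerr << L"NVDA is not running\n";
        FreeLibrary(nvda);
        return 1;
    }

    // All a client can do: append text to NVDA's speech queue...
    speakText(L"You see a troll guarding the bridge.");
    speakText(L"The troll attacks you!");

    // ...or flush the whole queue. There is no way to ask which line is
    // currently being spoken, so "skip just this line" cannot be expressed.
    cancelSpeech();

    FreeLibrary(nvda);
    return 0;
}
```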

parham
Posts: 6
Joined: Tue Aug 28, 2012 9:29 am

Re: Mudlet & screenreaders

Post by parham »

Hi,

Thanks a lot for the explanation. It works perfectly as described.

First of all, I'm sorry for asking the question again (I have read the numerous times you've responded to the "why SAPI and not NVDA?" debate), but the most important point for me was the problem of NVDA not providing a cursor into the speech buffer.

I can think of two solutions. One is, as far as I know, cross-platform, and the other is more based on the capabilities of a particular screen reader:

1. If the only skipping you allow is using up/down (like you said in the previous message), handle the queue yourself (see the sketch after this list). Gather the text, then send it to the screen reader line by line. When the user wants to skip a line, stop speech and continue with the next iteration (i.e. the next/previous line, based on which key the user pressed).

2. I can create a ticket asking NVDA developers to provide a way of querying the cursor position in the speech queue and modifying it. However, this would eventually lead to "Mudlet works better with NVDA because it provides the functionality we need, but not with Orca/Voiceover/JFW/etc".
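
As a rough illustration of point 1: a sketch only, written in C++/Qt style since Mudlet is a Qt application. The speechFinished() notification it relies on is hypothetical; as Heiko points out below, it is exactly what the NVDA controller API does not provide.

```cpp
// Hypothetical line-by-line speech queue (point 1 above). speakCurrent()
// and stopSpeech() stand in for whatever speech backend is used, and
// speechFinished() assumes the backend can report "done with this line".
#include <deque>
#include <QString>

class LineSpeaker
{
public:
    void enqueue(const QString &line) { m_lines.push_back(line); }

    // Called by the (hypothetical) backend when the current line has been
    // fully spoken: advance to the next queued line, if any.
    void speechFinished()
    {
        if (m_current + 1 < static_cast<int>(m_lines.size())) {
            ++m_current;
            speakCurrent();
        }
    }

    // Cursor up/down: stop the current utterance and move one line.
    void skip(int direction)
    {
        if (m_lines.empty())
            return;
        stopSpeech();
        m_current = qBound(0, m_current + direction,
                           static_cast<int>(m_lines.size()) - 1);
        speakCurrent();
    }

private:
    void speakCurrent() { /* hand m_lines[m_current] to the speech backend */ }
    void stopSpeech() { /* flush the backend's speech buffer */ }

    std::deque<QString> m_lines;
    int m_current = 0;
};
```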

Any thoughts? Am I totally talking nonsense here?

Heiko
Site Admin
Posts: 1548
Joined: Wed Mar 11, 2009 6:26 pm

Re: Mudlet & screenreaders

Post by Heiko »

parham wrote: 1. If the only skipping you allow is using up/down (like you said in the previous message), handle the queue yourself. Gather the text, then send it to the screen reader line by line. When the user wants to skip a line, stop speech and continue with the next iteration (i.e. the next/previous line, based on which key the user pressed).
Impossible, because we have no way to find out when NVDA has finished reading the line.
2. I can create a ticket asking NVDA developers to provide a way of querying the cursor position in the speech queue and modifying it. However, this would eventually lead to "Mudlet works better with NVDA because it provides the functionality we need, but not with Orca/Voiceover/JFW/etc".
NVDA is a GPL project, so I have no problem with Mudlet only supporting NVDA. I don't know about Orca, but very few blind users will use Linux anyway.

parham
Posts: 6
Joined: Tue Aug 28, 2012 9:29 am

Re: Mudlet & screenreaders

Post by parham »

Heiko wrote: Impossible, because we have no way to find out when NVDA has finished reading the line.
If NVDA developers implement this one, will this be possible? Also, is this the preferred way? For me it is, because the less functionality you require from a screen reader, the easier it will be to implement the same functionality in other screen readers.
NVDA is a GPL project, so I have no problem with Mudlet only supporting NVDA. I don't know about Orca, but very few blind users will use Linux anyway.
True, not many Linux users, but there are a lot of blind Mac users. Plus, it is less trouble in the future not to rely too much on a particular screen reader to begin with.

Heiko
Site Admin
Posts: 1548
Joined: Wed Mar 11, 2009 6:26 pm

Re: Mudlet & screenreaders

Post by Heiko »

SAPI provides callback functions to let Mudlet know when a line, word or even phoneme has been read. Thus, Mudlet can feed the speech engine directly with the necessary data. I'd say that a word- or line-based callback would be all we need.
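For reference, a minimal console sketch of the SAPI 5 event mechanism being described, not Mudlet's actual code: ISpVoice can queue word-boundary and end-of-input-stream events, so the caller always knows how far the voice has got and when a line has been finished.

```cpp
// SAPI 5 sketch: speak a line asynchronously and watch the word-boundary
// events as they arrive. Link against sapi.lib and ole32.lib.
#include <windows.h>
#include <sapi.h>
#include <iostream>

int main()
{
    CoInitialize(nullptr);

    ISpVoice *voice = nullptr;
    if (FAILED(CoCreateInstance(CLSID_SpVoice, nullptr, CLSCTX_ALL,
                                IID_ISpVoice, reinterpret_cast<void **>(&voice)))) {
        CoUninitialize();
        return 1;
    }

    // Ask SAPI to queue word-boundary and end-of-stream events and to
    // signal a Win32 event whenever something is waiting for us.
    const ULONGLONG interest = SPFEI(SPEI_WORD_BOUNDARY) | SPFEI(SPEI_END_INPUT_STREAM);
    voice->SetInterest(interest, interest);
    voice->SetNotifyWin32Event();

    voice->Speak(L"You see a troll guarding the bridge.", SPF_ASYNC, nullptr);

    bool done = false;
    while (!done && voice->WaitForNotifyEvent(5000) == S_OK) {
        SPEVENT ev = {};
        ULONG fetched = 0;
        while (voice->GetEvents(1, &ev, &fetched) == S_OK && fetched == 1) {
            if (ev.eEventId == SPEI_WORD_BOUNDARY) {
                // lParam carries the character offset of the word being
                // spoken, so the read cursor can be tracked precisely.
                std::wcout << L"speaking word at offset " << ev.lParam << L"\n";
            } else if (ev.eEventId == SPEI_END_INPUT_STREAM) {
                done = true;  // the whole line has been read
            }
        }
    }

    voice->Release();
    CoUninitialize();
    return 0;
}
```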

parham
Posts: 6
Joined: Tue Aug 28, 2012 9:29 am

Re: Mudlet & screenreaders

Post by parham »

Heiko wrote: I'd say that a word- or line-based callback would be all we need.
Excellent! Two more questions and then I'll be off requesting these callbacks:

1. Isn't it possible to pass the text line by line to NVDA and then query NVDA to see if it is done speaking? That way you can pass text in whatever unit you want, such as words or lines, and then keep querying NVDA until it has finished speaking the passed word/line.
2. To give the NVDA devs a better idea of what we need, can you please provide the names of the functions you are talking about (the ones that report whether a word/line has been spoken)?

Thanks a lot for being patient with me.

LexTheSame
Posts: 5
Joined: Fri Jan 11, 2013 11:25 am

Re: Mudlet & screenreaders

Post by LexTheSame »

I'd say it is very unlikely that the NVDA devs will accept this request. It has been stated a bunch of times that the NVDA controller API is not meant to be a full-fledged speech interface and never will be. Application developers should use accessibility interfaces (such as IAccessible2 or UIAutomation); this allows different assistive technologies to work uniformly with the app and gives the user consistent behavior.

parham
Posts: 6
Joined: Tue Aug 28, 2012 9:29 am

Re: Mudlet & screenreaders

Post by parham »

LexTheSame wrote: I'd say it is very unlikely that the NVDA devs will accept this request. It has been stated a bunch of times that the NVDA controller API is not meant to be a full-fledged speech interface and never will be. Application developers should use accessibility interfaces (such as IAccessible2 or UIAutomation); this allows different assistive technologies to work uniformly with the app and gives the user consistent behavior.
True. However, I can't see a way for these features to be implemented using these interfaces. If someone can come up with a way though, perfect.

The biggest problem is the issue of skipping forward/backward in text. If we can solve that using, say, IAccessible2, then we can say that this should be done that way.

Heiko
Site Admin
Posts: 1548
Joined: Wed Mar 11, 2009 6:26 pm

Re: Mudlet & screenreaders

Post by Heiko »

NVDA's IA2 implementation doesn't help in our case because we're dealing with constantly changing page content, i.e. the displayed text is just a small part of a large, fast-growing buffer. Updating the displayed text screws up NVDA's cursor badly.

LexTheSame
Posts: 5
Joined: Fri Jan 11, 2013 11:25 am

Re: Mudlet & screenreaders

Post by LexTheSame »

It should be fixed on the NVDA end, then. If you post a link to a Mudlet build with the IA2 implementation that shows the broken behavior to the bug tracker, the devs will look at it and hopefully find the cause. That's open source collaboration!

Also, did you try testing it with a demo of a commercial screen reader like JAWS? It's also possible that we have an implementation bug on our end.

Regarding skipping sentences, I don't see a way to do that without (a) resorting to self-voicing (using platform-specific speech APIs) or (b) writing customization modules for specific screen readers. I myself doubt the usefulness of such a feature, even though I have played MUDs for 8 years, but it seems you have enough user requests to do it.
