CSUN

Date: 2023.03.24 10:00

Last week, the CSUN Assistive Technology Conference was held in Anaheim, California. The NV Access team were there to share information about NVDA. We delivered our session to a packed room of eager listeners. There were insightful questions and some fruitful discussions with people after the session. You can find our session notes and info on our NV Access CSUN page. During the conference, we caught up with RNIB’s Dave Williams for a chat about NVDA on his podcast. We also met with others in industry and attended many sessions on new and emerging technology. 

One big theme this year was tactile graphics and multi-line braille. Dot showed off their DotPad reading an Excel spreadsheet by touch, including with NVDA. APH & Humanware launched the Monarch, a premium display with 10 lines of 32-cell braille which can also display graphics. Orbit were demonstrating their Slate and Graphiti displays. And the Canute by Bristol Braille is a nine-line, 360-cell machine. Bristol Braille are also about to release the Canute Console, which turns the Canute into a full PC workstation, complete with keyboard and screen.

Just before moving on, I do compare every photo I post from CSUN against Mick driving a speedboat at CSUN 2017. They all come up short against that. But one highlight this year (for your food-obsessed correspondent at least) was the rows of food trucks outside the hotel on Friday and Saturday, when California dusted off the sun and put on the nice weather for us. I captured the scene in the photo below.


Now that we are back at our desks, the main news this week is the release of NVDA 2023.1 RC2. This release candidate includes a fix which will be very popular with Kindle users: “NVDA no longer fails to continue reading in Kindle for PC after turning the page.”

We encourage everyone to download and test NVDA 2023.1 RC2. The release announcement also contains the full “What’s New” text, with all the fixes, features and changes.

And an update on add-ons: the Add-on compatibility page was last updated on Monday 20th March. At that time, 60% of add-ons had been updated to work with NVDA 2023.1. We know of several more updated since then, as well as external projects such as Acapela’s voices. Everything is gearing up for the stable release!

If your favourite add-on hasn’t yet been updated, you can still test out the RC. Run NVDA 2023.1 RC2 as a temporary copy by choosing “Continue running” from the launcher screen. This won’t affect your installed version or add-ons and lets you test out new features and fixes.

Here’s a game for those of you who need something to occupy your mind while you wait for NVDA 2023.1 to come out. Guy Barker has made an accessible Sudoku game. What is Sudoku? Take a 3×3 block of squares, then arrange nine of those blocks in a 3×3 grid, giving 81 squares in total. The goal of Sudoku is to fill in each of those 81 squares so that every row, every column and every 3×3 block contains exactly one each of the numbers 1 to 9. I think I’ve made it sound more complicated than it is! If that hasn’t put you off, you can read more from Guy Barker. You can also try out “Grid Game” from the Microsoft Store.
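If the rules are easier to follow as code, here is a minimal sketch (our illustration, not taken from Grid Game) of the check a grid has to pass; the function and type names are just for this example:

```typescript
// Sketch of the Sudoku rule: each row, column and 3x3 block must
// contain the digits 1-9 with no repeats. 0 marks an empty square.
type Grid = number[][]; // 9 rows of 9 numbers

// A group (row, column or block) is valid if no digit 1-9 repeats.
function groupValid(cells: number[]): boolean {
  const seen = new Set<number>();
  for (const v of cells) {
    if (v === 0) continue; // empty squares are fine mid-game
    if (v < 1 || v > 9 || seen.has(v)) return false;
    seen.add(v);
  }
  return true;
}

// Check all 27 groups: 9 rows, 9 columns and 9 blocks.
function gridValid(g: Grid): boolean {
  for (let i = 0; i < 9; i++) {
    const row = g[i];
    const col = g.map(r => r[i]);
    const br = 3 * Math.floor(i / 3); // top row of block i
    const bc = 3 * (i % 3);           // left column of block i
    const block: number[] = [];
    for (let r = br; r < br + 3; r++) {
      for (let c = bc; c < bc + 3; c++) {
        block.push(g[r][c]);
      }
    }
    if (!groupValid(row) || !groupValid(col) || !groupValid(block)) {
      return false;
    }
  }
  return true;
}
```

A solved puzzle is simply a grid where gridValid returns true and no square is still 0.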


Web Accessibility Global Usage Survey

We previously mentioned the survey being run by Reddit’s r/Blind community. We were very pleased to meet Rumster at CSUN, where the initial data from the survey was presented. There were many interesting things to come out of this early data; two we noted were:

1) Of the survey responses so far, more respondents use NVDA with Chrome as their primary browser than with Firefox.

2) There appear to be more NVDA users who also use the mouse than was previously expected.

It is worth reiterating that our keyboard users and Firefox users are all still very important! There are many things which can affect results like these, but they are definitely interesting. They confirm how important it is for any organisation to learn about its users: they may not be accessing your product in the way you expect. The survey is still open, and you can take it now at webaccessibilitysurvey.com.

 

Task Aware Browsing

Another session we attended at CSUN was presented by researcher and technologist David Cane. Working with the Massachusetts Institute of Technology, David has studied how people use online shopping sites. He has documented the amount of effort it takes blind users to access commonly used features. David has created a model for a browser extension to make shopping dramatically easier and faster. He has tested this on a number of popular sites with very promising results. David is looking for collaborators who can take this to the next step and develop the product itself. Please see David’s summary below, or access the “Presentation Link” on CSUN’s Task Aware Browsing session info. If you have any questions or are interested, please contact David Cane directly.

“The project is to implement a new design approach for an Internet browser for the blind. It uses Artificial Intelligence to convert the spatial problem of locating active controls on a web page, and the results of invoking them, into a linear dialog model.

The new approach is called Task Aware Browsing, TAB for short.

 

Background

The IBM PC was released in 1981 with a command line operating system. In 1986, Jim Thatcher wrote the first screen reader for the blind. Screen readers made the PC generally accessible for blind users who could touch type.

The advent of graphical user interfaces (GUIs), while generally a boon to sighted users, was a challenge for blind users. Commands were now found in two dimensions with a pointing device, and output appeared in a variety of places on the screen. The location of the output sometimes influenced its meaning. Screen readers were developed to deal with GUIs.

Web-based applications made life for the blind user appreciably more difficult. There is no longer a consistent design approach across different applications, even those in the same domain. The display resulting from a command is often filled with clutter, making it necessary for the user to find the wheat amidst the chaff.

 

The TAB Concept

Consider for a moment how we all bought airplane tickets before the advent of the Internet. We picked up the phone and had a dialog with an agent about where and when we wanted to go. The agent read back the choices and we made a selection. The process was identical for a blind person: a linear conversation with a series of commands and responses. TAB aims to use AI to enable that process on a travel web site, and a similar process on all web sites for which its AI can succeed in mapping a command to that particular site.

We categorize web sites into a collection of vertical domains. Each domain contains web sites that provide the same kind of capabilities; e.g. shopping, travel, banking, Customer Relationship Management, research, etc. For each domain, a set of commands is designed to cover all (or most) of the tasks a user is likely to perform on a site in that domain.

It is the job of TAB to take each command, and perform that operation on the web site. If a user types “search eggs” on a shopping site, TAB must locate the search box, enter “eggs” and invoke the search. After the server responds with the new page, TAB must locate the results of the search amidst everything else on the page, and provide only that text to the speech generator.

It is not expected that TAB will initially work on any given web site within a domain that it has been coded for. Each site needs to be scanned so that a machine learning based model can learn that site.

How does TAB relate to speech assistants such as Alexa? Speech assistants are a valuable tool for the blind, as well as for the sighted community. However, they require that an application be written for each targeted web site in order for them to operate with that web site. TAB will enable a single speech application to work on all web sites that TAB supports, greatly expanding the set of sites such assistants can handle.

I have been a member of a volunteer research group of retired engineers loosely affiliated with the Perkins School for the Blind. Work has been done to provide proof of concept of the ideas described here. A prototype extension for Firefox was developed to test the usability of the concept. Test results from experiments with a blind participant were encouraging.

Volunteers with software skills are sought to move this from a proof of concept to a usable product. In addition, people with connections to enterprises that might be interested in bringing this to completion (e.g. Microsoft) are also invited to contact me. I have a couple of white papers that describe TAB and the work done to date in greater detail.”
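To make the “search eggs” example above concrete, here is a minimal sketch of what one TAB-style command might look like inside a browser extension’s content script. This is our illustration, not David’s code: the SiteModel shape and the CSS selectors are invented stand-ins for the per-site mapping that his machine learning model would learn.

```typescript
// Hypothetical handler for the linear command "search <query>".
// The selectors below are made up; in TAB they would come from a
// machine-learned model of the particular shopping site.
interface SiteModel {
  searchBox: string;   // CSS selector for the search input
  searchForm: string;  // CSS selector for the form that submits it
  resultItem: string;  // CSS selector for one search result
}

const exampleShop: SiteModel = {
  searchBox: "input#search",
  searchForm: "form#search-form",
  resultItem: "ul.results li",
};

// Step 1: locate the search box, enter the query and invoke the search.
function runSearch(model: SiteModel, query: string): void {
  const box = document.querySelector<HTMLInputElement>(model.searchBox);
  const form = document.querySelector<HTMLFormElement>(model.searchForm);
  if (!box || !form) return; // the model failed to map this page
  box.value = query;
  form.submit(); // the extension re-runs when the results page loads
}

// Step 2: on the new page, pull out only the result text, skipping the
// surrounding clutter, so just this text goes to the speech generator.
function readResults(model: SiteModel): string[] {
  return Array.from(document.querySelectorAll(model.resultItem))
    .map(el => (el.textContent ?? "").trim())
    .filter(text => text.length > 0);
}
```

The hard part, of course, is producing a reliable SiteModel for each site automatically; the sketch only shows why having one reduces a two-dimensional page to the linear dialog David describes.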

If you are interested in furthering the TAB project, please reach out to David Cane.

That’s all for this week. Do try out NVDA 2023.1 RC2 and let us know of any major issues. We’ll be back with another edition of In-Process around Easter!