App-controlled Appliances Could Solve Accessibility

Appliances controlled by an app? Really? Do we need that? I am sure we have all heard something along these lines and thought, “oh surely that’s a gimmick; how hard is it to push the button on the oven?” Don’t laugh just yet. The Consumer Electronics Show (CES) 2015 has been full of reports of smart appliances and apps. GE has announced a new line of app-controlled appliances. Not to be outdone, the Google-owned Nest Labs is connecting more everyday devices to the internet. Dacor’s Android-powered oven can be controlled with a Siri-like voice app. LG’s smart appliances now respond to your voice and are connected to the internet. And you know, there is opportunity here.

For a long time, everyday appliances such as washing machines, dryers, microwaves, ovens and even some stoves have been becoming less accessible, especially to the blind and visually impaired. Washing machines used to have knobs and dials. While the markings were not accessible out of the box, it was easy enough to place tactile markings on the important settings, line them up with the dial, and use the machine. Now they come with buttons and lights, or even worse, touchscreens with no feedback whatsoever. Microwaves and ovens are especially guilty of the touchscreen phenomenon. It has been noted that in many cases it would really not be that difficult to add some feedback to these machines, such as a voice that says “water setting not on hot” for the washing machine. Even a beep generator that produced different patterns would help. Easy or not, manufacturers haven’t done it.

If the reports from CES 2015 are any indication, we are on the brink of a new wave of “smart” appliances, even things we maybe didn’t know could or should be so smart. Instead of laughing, I would like to encourage manufacturers, designers and marketers to consider the opportunity here to add accessibility. If an appliance is going to use an app, then let’s make the app accessible. For example, an app for iOS should work with the VoiceOver screen reader. Likewise, an Android app should support TalkBack. Simple things, such as whether the oven is on, whether it is set to 350 degrees, or whether the washing machine is using cold water, are all mundane but critical pieces of information that would make life a lot easier, and perhaps less smoky. What, the oven switched itself to broil instead of bake?

I have noticed a few reports focusing on voice recognition, where the appliance will respond to a voice command. This is fine and can be convenient, but I would like to remind everyone that the first piece of accessibility should be the output. So let’s not forget that communicating the current settings of devices is just as important as commanding them via our voices, if not more so.

I would still like at least some appliances to have more accessibility built right in, without having to use an app on my phone. But if manufacturers are only focusing on apps now and they are willing to add accessibility, it would help a lot.


Multiple Braille Displays and One Screen Reader: A Proposal

It has long been suggested that the “holy grail” of Braille displays is a multi-line display, that is, one with two or more lines of refreshable braille. At this time, due to cost, design complications, and so on, there are no multi-line Braille displays on the market. The advantages of such a device are numerous. Just a few that would be helpful for me:

  • The ability to read more text in context at one time
  • Ability to “look at” two points of interest in a document at once
  • Assign different lines to different windows or applications (related to the previous item)
  • Rudimentary tactile graphics
  • Assign different lines to different devices (e.g., a PC and a smartphone)

At the time of this writing, the closest device I am aware of to achieving any of these capabilities is Optelec’s ALVA BC680 Braille controller, which allows for connecting two devices and dividing the line of braille between them. Using this feature it is possible to get the effect of accessing two applications simultaneously by using two PCs. It is still not as good as a multi-line display has the potential to be, because you are giving up cell real estate from one application for the other, and you need two PCs. However, it is better than nothing.

You may be wondering: why would you want to look at multiple points of interest at once? It is a case of efficiency. For example, one could look at a spreadsheet of transactions while simultaneously entering them into an accounting package and double-checking that values are entered correctly, without having to switch between the applications. While this is easy to do visually, by placing windows side by side and so forth, it is not currently possible using a screen reader with speech or Braille. Instead the user must first work in one application, then switch to the other. And just to double-check what something says in the first application, he must switch back to it again. This uses valuable time and effort. In some cases, switching to and from an application can have other undesirable results, such as the application moving the focus to where it thinks is a more convenient place. For instance, each time I switch away from a data entry page in QuickBooks Online and then back, it highlights the entire contents of the field I was typing in, which can easily lead to deleting all the contents when all I wanted to do was check something and append more information.

The Proposal

Many users now have access to two or more Braille displays. One is typically a larger model used with a PC, with 24, 32, 40 or even 80 cells. The second is a small portable display, with 12 to 18 cells, used with a smartphone or other portable note-taking device. Recently the cost of Braille displays, especially in the pocket-sized category, has been coming down, to as low as $995 for the Braille Pen 12 and $1,795 for the 18-cell Refreshabraille 18. With this in mind, it would seem the next logical step is to combine the use of these displays to get closer to a multi-line display.

With displays now connecting via Bluetooth® and USB, it is not an issue to connect multiple devices to a PC. Hence it is up to the screen reader to take advantage of this ability and drive these displays. As discussed, there are several opportunities for presenting more information to the user, such as monitoring one application while simultaneously working in another. The only screen reader that currently drives multiple displays is Apple’s VoiceOver screen reader for the Mac. It does not, however, convey information from multiple points of interest; it is billed as a training tool allowing multiple users to read the same information in a classroom setting. This is a start; now we need to take it a step further and empower the end user to be more efficient.

Monitoring Web Page Dynamic Updates With JAWS Step-By-Step

Author’s note: This post is being republished due to a blog crash. It was originally published in mid-2012.


Recently I wanted to have JAWS monitor a webpage in the background while I was doing other things and, on request, report the latest data in a certain area, regardless of where else I was working on the PC. If you can’t picture this, the specific scenario is monitoring a stock quote on a trading website that refreshes itself. I am interested in being able to access information from other sources regardless of where the PC’s and JAWS’s actual focus is. Possibly it is an inspiration from The Qube and the ideas that came before it.

At present this solution is not portable, meaning each website and piece of information that you want to monitor has to be specifically scripted. It would be nice to have a more portable, easier-to-use solution. I am writing this up partly so anyone else who wants to try it can, and possibly someone will have ideas on how to improve it. For example, is there a way to use XPath to access specific webpage elements from JAWS script?
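For comparison only (this is Python, not JAWS script), here is a minimal sketch of what an XPath-style element lookup looks like using Python’s standard library. The markup fragment is hypothetical; the IDs simply mirror the table cells discussed in this post.

```python
import xml.etree.ElementTree as ET

# Hypothetical fragment standing in for the quote frame's markup.
snippet = """
<table>
  <tr>
    <td id="tdLast">25.36 last</td>
    <td id="tdChange">+0.12</td>
  </tr>
</table>
"""

root = ET.fromstring(snippet)
# ElementTree supports a small XPath subset, including attribute predicates.
cell = root.find(".//td[@id='tdLast']")
print(cell.text)  # 25.36 last
```

Something similar from inside JAWS script would remove the need to walk the frame and document objects by hand.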

Background and Requirements

The trading website has a quote frame that can be set to monitor a ticker symbol and refresh approximately every 30 seconds. I wanted to access the latest quote on the page from anywhere, such as while checking email in Outlook.

Looking at the source, the quote was inside a table, which was inside a frame. It therefore seemed possible to use JavaScript to extract the value of the quote. To make the script callable from anywhere, the scripting has to reside at the Default level of JAWS scripting. The scripts may either be included directly in the default.jss script source or added through whatever other means you prefer for extending the default, such as adding a use statement to call a compiled version.

Finding the Quote Value

The quote value is in a table cell with an ID of “tdLast” and the amount changed is in another table cell with an ID of “tdChange”. The table is in a frame with an ID of “footer”. Since the frame already has an ID, it is easy to find using JavaScript. To locate the TD element:

    dom = IEGetCurrentDocument () ;JAWS function to get a pointer to the currently open and focused page

    frame = dom.frames ;retrieves the collection of frames on the page

    footer = frame("footer") ;only need this one

    doc = footer.document ;the frame's own document object

    quote = doc.getElementById("tdLast") ;the table cell element with the quote


You may notice that some of this code could seemingly be combined into one line, such as going directly to the document of the frame: frame("footer").document. However, this does not work; it is a limitation of JAWS script calling COM objects that does not allow this level of nesting. My advice is to stick to the single statements; it is a bit more typing, but it makes JAWS happy and is easier to debug.

Now that we have the table cell element, it is a simple case to access its text by using the DOM property innerText:

    quoteText = quote.innerText ;the visible text of the cell
The same getElementById() call can be used to retrieve the other needed table cell, the one that shows the change in the quote’s value.

Depending on your preferences, you can parse the data in these cells to read what you want. For example, the “tdLast” cell in this example has contents such as “25.36 last”. Since I don’t always want to hear “last”, I parse it out:

q = StringSegment (quote.innerText, " \t\r", 1) ;a return character is between the value and the string "last"
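For readers more at home outside JAWS script, the same parse can be sketched in Python; splitting on any run of whitespace covers space, tab and return separators alike. The sample string is from the example above.

```python
raw = "25.36 last"  # example contents of the tdLast cell

# split() with no arguments splits on any run of whitespace,
# covering the space/tab/return separator set used in the JAWS version.
value = raw.split()[0]
print(value)  # 25.36
```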

Setting Up Global Access

To have access to the webpage of interest regardless of where focus is, we need to store the pointer for the page in a global variable, then reference that variable instead of calling IEGetCurrentDocument() for subsequent requests for the data. This does require the user to run an initialization routine once each time she wants to start monitoring the page. We also have to find an object that will continue to exist, at least as long as the browser is running with the webpage loaded.

I first tried to cache just the pointer to the “footer” frame, as this would be the most efficient and would not need to continually navigate the DOM to find it. However, the frame does a complete refresh, and this destroys the pointer to the frame. Therefore it’s necessary to store the pointer to the top-level webpage instead, and then navigate to the frame and find the table cells with each request for the latest quote.
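The caching pattern itself is independent of JAWS. Here is a minimal Python sketch of it, with a nested dict standing in for the DOM; all names here are illustrative, not JAWS API.

```python
# Global handle to the top-level page, set once per monitoring session.
cached_page = None

def set_quote_page(page):
    """One-time initialization: remember the top-level page object."""
    global cached_page
    cached_page = page

def get_quote():
    """Re-resolve the frame and cell on every request; the frame
    refreshes itself, so any cached frame pointer would go stale."""
    footer = cached_page["frames"]["footer"]
    cell_text = footer["document"]["tdLast"]
    return cell_text.split()[0]  # drop the trailing "last"

# Usage: a fake page object in place of IEGetCurrentDocument().
set_quote_page({"frames": {"footer": {"document": {"tdLast": "25.36 last"}}}})
print(get_quote())  # 25.36
```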

I created a global variable of type object at the default script level, and then created a script to store the pointer to the webpage. This requires that the user first navigate to the page, then run the script to perform the initialization. As this is a secured website where the user must first log in, this makes the most sense. (Though ideas on how to automate this are encouraged.) The script that I used to do this:


    object webpage

    Script SetQuotePage()
    ;find and store the object in webpage
    webpage = IEGetCurrentDocument ()
    if webpage then
        say("quote ready", OT_NO_DISABLE)
    endIf
    EndScript
I used default-level scripts and a global variable to access data from a webpage regardless of where the current focus is. I wanted this to be obtainable via a keystroke, and I did not want to use the JAWS Research It tool because I did not want a window opening and closing when I just wanted the information quickly, on the fly, without affecting anything else being worked on. There may be a more efficient way to parse the webpage, and I hope to find one. It would also be nice if this approach could be made more easily scalable, such as setting points of interest on any webpage with a keystroke and having the ability to automatically determine how to reference the underlying object. That would make it more usable for those who are not developers, and for when webpages change.


Future Direction of User Interfaces, Software and Technology


Author’s note: This post was originally published in late 2012. Due to a blog crash, its original timestamp has changed. Some information is newer than the original publication; for example, Windows 8 has now been in the wild for a while. Still, some points remain relevant, so it is being republished.


Recently I was asked, “What do you think of Windows 8, and the new direction of Microsoft?”


While I am not entirely sure which direction the questioner was referring to, I thought of a few things that may be worth considering in the accessibility and user interface areas.

From an accessibility standpoint, I am glad to see Narrator expanded a bit, even though it is still too little; for example, it has no braille support. It will be nice and helpful if blind and visually impaired users who need a screen reader can install and reinstall Windows without assistance, should that come to fruition. I have also heard they are tweaking the display driver situation, which will somewhat restrict the screen readers that use this technology to build off-screen models (OSMs). I do not think this will have a large impact on most users’ day-to-day use of these screen readers.


What I am more interested in is the direction of touch. Hopefully we will see commercial screen readers for Windows take advantage of it and gain some of the features VoiceOver on iOS and the Mac has. Touch interaction has the potential to help with several things, in particular spatial understanding and information. In addition, it can speed up interaction. You may have experienced this on the iPhone: if the user knows where the Mail icon is on the home screen, she can get there very quickly, with no tabbing and no typing. Likewise, on a Mac a user can use touch to jump right to the interesting part of a web page if he is familiar with it.

JAWS’s new Flexible Web feature is a good idea and will do the same thing in some situations. I think combining the two would be outstanding.


Also along the lines of spatial information, I think there should be some way to use touch to explore and get a basic idea of charts and graphs. One thing I feel is still sorely lacking is on-demand access to graphical information such as charts. So far tactile graphics are the best option, but to be effective they generally need to be made by hand. There is a lot of graphical information out there in a real-time context, though, that we don’t have the time or resources to make accessible with this manual method. I am mainly thinking of charts and graphs. You may think of a K-12 setting and mathematics. Or consider the investor being able to analyze day/week/month charts to get a quick idea of trends, highs, lows, and so on. Or, if you prefer the enterprise application market, the “dashboard” concept is very popular, with bar charts, pie charts, and the like giving an overview of the systems running in a data center, how many intrusion threats the network has sustained in the last 24 hours, and so on. For now I have to advise my clients to provide a tabular data view of this information to provide accessibility. Even when they do, it isn’t ideal; sometimes there is so much information that it just cannot be understood quickly and easily in a table with 100 or 1,000 rows.
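As a concrete sketch of that tabular fallback, the figures a simple price chart is meant to convey can be reduced to a few speech-friendly numbers. The data and the function name here are illustrative only.

```python
def summarize(series):
    """Reduce a price series to the highlights a chart is meant to show."""
    first, last = series[0], series[-1]
    trend = "up" if last > first else "down" if last < first else "flat"
    return {"high": max(series), "low": min(series),
            "change": round(last - first, 2), "trend": trend}

week = [25.10, 25.36, 24.90, 25.75, 26.02]  # hypothetical closing prices
print(summarize(week))
```

A summary like this reads in seconds with a screen reader, where a thousand-row table of the same data would not.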


The next step will be tactile feedback to go along with the touch, but that will need a little more advancement in technology. Research is being done on haptics, where feedback can be simulated on a touchscreen: a virtual button, if you will, created with electric currents and the like. It was rumored this would be in the “new iPad”, though obviously it isn’t; it probably isn’t quite commercially ready yet. But once it is, this same technology could have a lot of uses. Imagine a virtual braille display or a tactile diagram right on your iPhone.


Haptics will go further than just assisting those with disabilities. Research is currently being done on how haptics can provide another channel of information, such as to a driver. Research has shown that a driver who receives haptic feedback about which direction to take responds more quickly than one receiving the same information via audio (as with current GPS systems that call out “turn right!”).

An article recently appeared in Science Daily, “Designing for the sense of touch: A new frontier for design,” which highlighted a researcher studying how to make touch “feel right.” It is good to see this type of research reaching the mainstream, and it should improve availability for all.