Transcripts of Living Blindfully are made possible by Pneuma Solutions, a global leader in accessible cloud technologies. On the web at http://PneumaSolutions.com.
Welcome to Episode 230
Voiceover: From Wellington, New Zealand, to the world, it’s the Living Blindfully podcast living your best life with blindness or low vision. Here is your host, Jonathan Mosen.
Hello! I’ll talk about Apple’s new accessibility features and thoughts on their engagement with the blind community. I’ll demonstrate Google’s experimental Music LM, look at transcribing audio using Microsoft Word, catch up with Hable One, and ask: how do we handle offers of help we don’t want?
Welcome to episode 230. There is no area code 230 in the United States at the moment. I think 230 would be a great area code.
However, I do have a consolation prize. I think that [2:30] is a very good time for you to go to the dentist. Why? Because what better time to go to the dentist than tooth-hurty?
[Don’t be a bozo sound effect]
There is, however, a country code 230, and it belongs to Mauritius. So hello to you if you are listening to us from Mauritius. They’ve just done a census, just like we have here in New Zealand, actually in 2023. And they found that 1.3 million people live in the great country of Mauritius. And if one of them happens to be you, that is epic! I look forward to hearing about life in Mauritius.
Oh, what’s that?
Someone knocking at the door. There’s somebody wanting to tell us about life in Mauritius. We better let them in.
Voice message: Ki mani e ban Mauritia. Or to say it in English, hi all Mauritians! My name is Laurent. I’m originally from Mauritius.
And with today being episode 230, and this being the international dialing code for Mauritius, I just wanted to give a shoutout to all my Mauritian compatriots. If there are any of us listening to the podcast apart from myself, it would be really good to know.
Mauritius is a very small island off the east coast of Africa in the Indian Ocean. We’re about 2000 kilometers off the coast of Africa, and we have a population of about 1 and a quarter million.
We tend to punch a little bit above our weight in terms of being one of the countries in Africa with the highest socio-economic standards. That’s definitely something to be proud of.
I don’t sound very Mauritian. I sound more South African, which is because I lived in South Africa for most of my life and now live in the UK, but keep in touch with Mauritius frequently, often go to visit family. So very much Mauritian at heart, and very proud to be so.
Jonathan, I also wanted to say thanks so much for the podcast. Such a good idea also to have started the Living Blindfully plus subscription. And thanks for giving us the opportunity to contribute to your work in doing that.
And just wanted to wish you all the best, and I really hope the show goes from strength to strength.
Bye for now, everybody! Or as we say in Mauritius, Salam!
Jonathan: How fantastic was that? Great to get the message, Laurent. Thank you for sending it.
You must have been a Boy Scout or something because you were prepared. You got that message in good time for me to play it on episode 230.
Gratitude for Our Accessibility Progress
The third Thursday of May has become Global Accessibility Awareness Day. This is a really cool initiative that has gained a lot of traction over the years.
Many of us get invited to events to talk to media on Global Accessibility Awareness Day, and it reaches a very wide audience. It reaches people who perhaps don’t give a lot of consideration to accessibility for the rest of the year.
It’s also a time, I think, for us to reflect. And I was doing that reflecting when I was sending a post to Mastodon on this subject. Because when I think of what it’s like to be a blind person in 2023 compared to 1993 even, we have come a long way. When you look at awareness of accessibility from mainstream technology companies, mainstream devices, when you look at the societal change that we’ve all benefited from in terms of offices no longer being filled with row upon row of paper files that we couldn’t access, we’ve made a lot of progress.
That doesn’t mean that we should rest on our laurels, of course. That’s an interesting expression, actually. What does it even mean? [laughs] I’ll ask the drinker. It’ll help, I’m sure.
Soup drinker, what does it mean to rest on your laurels?
Soup drinker: The origin of this expression comes from ancient Greece, where laurel leaves were awarded as a mark of distinction and victory.
Jonathan: There you are. So Greece is the word on this occasion. Don’t rest on your laurels, of course. We mustn’t do that.
But it’s also important to savor the fruits of our victories, I think. Because if we don’t realize the progress we’re making, we get discouraged. And we are making a lot of progress.
So thank you and congratulations to everyone who, in any capacity whatsoever, helps to make the promise of this technology real. It can sometimes be a difficult and thankless task, but there has been so much incredible innovation in the accessibility space over the years, and we are all the better for it.
Summary of, and Commentary on Apple’s Accessibility Announcements
The work, of course, continues, which leads me to Apple’s Global Accessibility Awareness Day announcement which happened a little earlier than Global Accessibility Awareness Day this year. I guess Apple’s media people decided that they would prefer not to get lost in the clamor of Global Accessibility Awareness Day releases that tend to come out now at this time of year. So they went a couple of days early. And we’ve come to expect this now.
And I know there are two schools of thought about this. Some disability activists say, “Why can’t we get some of that prestigious WWDC keynote time? Why is accessibility talk relegated to something else?”
But there’s a counter argument, too, which is that often, disabled people are last all the time. Maybe we’ll get a little bit of mention towards the end of something, or they’ll do a keynote like Google IO did. And then, they will do a follow-up accessibility presentation after the keynote.
And of course, Apple does this too, to be fair. They do their big keynote. And then, there’ll be a range of accessibility presentations.
In this instance, though, by going so early before WWDC, the first announcements we get about the next version of iOS, or macOS, or watchOS (although I don’t think watchOS really figured at all in the announcements this year) are accessibility-related. It’s refreshing to be first for a change.
You will no doubt have heard a lot about these announcements, but let’s just recap them. And I might offer a little commentary on those accessibility announcements this year.
I was struck by a comment in the opening paragraphs of the release, even before we got to specific new features, which read: “Apple works in deep collaboration with community groups representing a broad spectrum of users with disabilities to develop accessibility features that make a real impact on people’s lives.”
I wonder what’s prompting them to say this, because there has been a feeling for a while that Apple’s the kind of company that thinks it knows what’s best for people. And we’ve seen Steve Jobs in his era commenting on this, that you don’t go and ask people what they want, you go and give them something that delights them and surprises them, and all that typical kind of Apple language.
And it sounds like there’s a signal that Apple’s trying to send, that they understand the “nothing about us without us” culture of the disability sector.
So with Apple releases, you do have to do a bit of decoding. And if this represents a bit of a cultural shift, then it’s a very welcome shift.
Or should I say a reversion? Because I remember when the iPhone 3GS was released with VoiceOver, Apple was (particularly in, I believe, 2010) very engaged with the disability community. They attended CSUN, and had a significant presence there. I do remember them being on the NFB agenda and having presentations that you could attend in 2010, so I presume they were at ACB as well. I don’t have any recollections of that. I remember the ACB convention in Phoenix that year, but I can’t recall if Apple were there or not.
And then, it kind of dried up. That kind of engagement with our community largely halted.
There are people that Apple talks to behind the scenes. And you know, you’ll get tapped on the shoulder every so often. It’s different, though, from actually engaging with the community in open session. That doesn’t seem to happen anymore.
Now, I don’t subscribe to this view. But if I don’t mention it, I know that somebody’s going to. So I may as well be up front with this.
Some people will say that one of the reasons for that disengagement was resolutions passed by NFB pertaining to the serious bugs in Apple products. I simply cannot believe that Apple would be that churlish.
As purchasers, we absolutely have the right to say when a product is not of the same standard for blind people as it is for sighted people. That is a fundamental act of discrimination. Discrimination doesn’t have to be deliberate. It just has to occur, and it is incumbent upon any consumer organization worth its creds to point those things out. And I just can’t believe that Apple would be so vindictive as to withdraw for that reason.
I am forever the optimist. But I’d like to hope that there will be some meaningful engagement with the wider blind community, because that is the culture that we expect when you participate in this market.
When Apple goes to certain geographical markets, they’re very aware of the culture there. You would not, for example, market in China or engage with China the same way you would in the United States.
And the disability community and its various important subsets are the same. Appropriate engagement is essential.
And I think that’s one of the things that frustrates so many people. Apple is one of the wealthiest companies on the planet. And yet, so many blind people volunteer their time and write meticulous, extensive bug reports, only to find that the bugs persist, their ability to do their job is affected, and their productivity is impacted.
We are not talking about trivial things here. We’re talking about the ability to use these products as advertised. We’re talking about bugs that, were they present for sighted people, would be headline news. I’m not talking about headline news in the tech press. I’m talking headline news in the general press, because they would have such a debilitating impact on people’s ability to use the product.
You know what would happen then? Their shares would tank. And when their shares start to tank, then you get some repercussions at management level.
But because it’s blind people, we will get this flurry of, “Isn’t Apple marvelous at this time of year?”, but the media will not follow up if the bugs persist. So this little comment really intrigued me when I read it.
I note also, though, that when they went on to quote various people from the disability sector, there was no blindness organization quoted in the release at all.
The signature feature that was announced for blind people is called Point and Speak. And it’s important to note that Point and Speak relies on LiDAR. At present, the only iPhones in the range that have LiDAR are the iPhone Pro models, which can be too expensive for some.
We’ve had this discussion on the podcast over the years. The new iPhones come out, people say, “Is the LiDAR worth it?” And the general consensus has been, “Well, maybe not at the moment.”
You know, the door detection is cool. Just a nice proof of concept. The people detection is cool. We did have a really interesting comment from somebody who said that they used people detection during the height of the pandemic, when social distancing was so important, and they were trying to find a seat on a bus, or a train, or something like that. And people detection really does help with that. That’s an amazingly cool use case.
But unless the product differentiation strategy changes with the next iPhones that should be shipping in September, all being well, then you will need a pro iPhone to take advantage of this Point and Speak feature.
Quoting from Apple’s release, they say, “Point and Speak in Magnifier makes it easier for users with vision disabilities to interact with physical objects that have several text labels. For example, while using a household appliance such as a microwave, Point and Speak combines input from the camera app, the LiDAR scanner, and on-device machine learning to announce the text on each button as users move their finger across the keypad.
Point and Speak is built into the Magnifier app on iPhone and iPad, works great with VoiceOver, and can be used with other Magnifier features such as people detection, door detection, and image detection to help users navigate their physical environment.”
Now obviously, this is a super cool-sounding feature. The idea that you can walk up to any appliance and find out what the buttons do is hugely impactful. I mean, this solves a real world problem that many of us face regularly when we’re dealing with appliances, perhaps that we haven’t had the opportunity to label in our chosen way.
I’m really looking forward to finding out how well this works. It’s not so much the case with the detection of appliances thing, which you’re likely to be doing inside a kitchen somewhere, or a laundry room, or the TV room, or something like that where it’s a little bit more private.
But with some of these other features like people detection and door detection, I know some people do feel a bit self-conscious and potentially unsafe holding their phone out in front of them to make the most of these features, especially if it’s an expensive iPhone Pro device.
So I do wonder whether some of these LiDAR features have been a glorious test transmission for the headset, which we are expecting Apple to announce at WWDC this year. If you had a headset on and you were getting information about people who were coming into your vicinity, if you could find an empty seat on a bus with that headset on, if you could find doors with that headset on, and to some lesser extent, just work out what buttons on appliances do with that headset on, that is pretty exciting. I will be most interested to see the degree of relevance that the headset has to blind users.
I think we’ve long gone past the stage with Apple where we don’t have to worry anymore about whether this new product is going to have VoiceOver or its equivalent on it. I trust Apple in this regard. I think there will be an accessibility suite of features on this new headset product. But if it’s got the LiDAR and it’s doing those things, then some of these features which we can kind of take or leave on the phone could become really useful on this new device.
I do feel compelled to observe, though, that there is a little bit of Sherlocking going on here in the disability space. If you’re not familiar with Sherlocking, why don’t I bring the drinker back and ask it?
Soup drinker, what is Sherlocking?
Soup drinker: Sherlocking is the present participle of Sherlock.
I have a few definitions for the verb Sherlock.
- A detective, from Sherlock Holmes, especially used ironically to address somebody who has stated the obvious.
- To deduce, to figure out, to solve.
- To search, to hunt, to seek.
- To obsolete a unique feature in third-party software by introducing a similar or identical feature to the OS or a first-party program/app.
Jonathan: Yep, and it has a long history.
It goes back to 2001, when a product called Watson shipped for the Mac. And then Apple essentially made it obsolete by putting it in its own operating system, doing it itself.
And every year, when WWDC rolls around, third-party app developers sit there trembling, wondering if their app is going to be Sherlocked.
For example, there’s going to be, by all accounts, a new journaling app in the next version of iOS. And the Day One Journal people are saying, “We’ve been Sherlocked.” It looks that way: Day One Journal may have a lot of its features not only copied, but bettered, by the journaling app that Apple is producing, because they can do certain things that other mere mortal developers cannot.
So anyway, Apple is seeking to do the same thing here as a third-party program called VizLens, which helps you to use your appliances. So we’ll see how this all goes with the Magnifier versus VizLens.
If you have low vision, you’ll be delighted to know that text size is going to be easier to adjust on Mac apps. That’ll be very welcome.
Another good feature for iOS users is that the Siri voices that VoiceOver uses will be the real deal: the high quality Siri voices. We talked about this on the podcast as well, that you can use the Siri voices for your VoiceOver voice, but they’re not the really super expressive, high quality ones. I suspect the reason for that may have been latency. But perhaps with a combination of the voices becoming more efficient and computers becoming faster, that restriction is deemed no longer necessary. So that’s going to be a great addition for those people who like using text-to-speech engines that sound more natural.
If you are a hearing aid wearer (and a number of blind people are), then you’ll be pleased to know that made for iPhone hearing aids will pair directly with Mac. This is a great addition.
Let’s hope that handoff between your iPhone and your Mac is going to be seamless, that that works really well. Because what I found with iPad, (and it was actually the reason why I sold my iPad and don’t have one anymore), is that for my hearing aids at least, handoff between my iPhone and my iPad just wasn’t happening. Whichever device claimed it first was the one that kept it, even though I had everything set up correctly.
And that is another bug that has been around for years, which at least in my experience, has not been addressed despite repeated reports about it for many, many years. So if you are a made for iPhone hearing aid wearer and you’re looking forward to this feature, I would implore you to install the beta of Mac OS when that comes out, and make sure that the handoff is working properly.
And if it’s not, let Apple know about it in good time to hopefully address it. Because wouldn’t it be nice if a by-product of all the work on this is that they solve it for iPad as well?
No mention of made for iPhone hearing aids on Apple Watch, though. And that is disappointing. I would love to see that come to Apple Watch with the caveat that the handoff issues are sorted there.
Some people have said, “Ooo, a little bit slim pickings,” in terms of new accessibility features for blind people this year. You know what? I’m pretty relaxed about that, if they are spending a lot of time on bug fixes.
Apple’s engineers are such smart people, aren’t they? I mean, they’re clearly thinking about the problems that disabled people face, and then thinking about how to actually make an impactful difference. All of the accessibility features (and I’ll talk about some of the non-blindness-related ones briefly in a minute) improve lives. And it must be so cool to witness the brainstorming that goes into building and designing them. Where I believe that Apple, as a company, is letting the side down is quality control and bug fixes.
It is so common in the disability sector for there to be so much promise, only to have that promise not met because of bugs and unreliability. Apple is not unique in this regard. But they have, for many years, I think, under-resourced bug fixing and quality control.
Perhaps it’s a process problem. I believe there is, to some degree at least, a process problem in terms of getting the word to the people who are capable of fixing the issues.
So I’d be delighted if, for example, some of the Braille issues were resolved, if Braille screen input became reliable again. Because now, it’s popping up these weird bubbles. I don’t know what the bubbles are doing. And you regularly find that you have to turn contracted Braille off and back on again to make it work.
Beyond Braille screen input, you have people who are reporting that their speech just disappears. I mean, that is a fundamental bug. There are lots of people reporting this. They can be using their phone, and speech just stops talking. Sometimes it comes back within a few seconds. Sometimes it does not, and you have to reset your device.
There’s the one that we’ve talked about extensively here on this podcast, about made for iPhone hearing aid users having very quiet VoiceOver on a call.
There used to be a workaround for this. You used to go to the rotor where you could change audio destination. That rotor option has disappeared.
And so many people were affected. You remember we had this discussion? It was a long, lively discussion. And together, we did quite a bit of troubleshooting on this and narrowed down the issue. Because at first, I didn’t realize that it was made for iPhone-specific, until we got a lot of people saying when it wasn’t a problem for them and when it was. And hearing aids were the common denominator.
I sent in some data. Apple actually gave me a lot of hope because they worked with me on this one, and got me to produce some data for them so they could see what was happening. That was really encouraging.
But the bug has not been addressed. And many of us who, in good faith, take time to report these things feel disillusioned.
You get people who’ve not been through this experience who say, “Oh, we just have to keep reporting things to Apple.” Many of us have bent over backwards spending time reporting these things to Apple. It’s the resolution that’s the issue. It’s not the lack of data about these bugs that Apple has at their end. If those things are addressed in significant number in the next version of iOS, well, I will be happy if we don’t get anything significantly new.
But also, keep in mind that the big feature that got everybody so excited with the current version of iOS was Eloquence. And Eloquence was not mentioned in the Global Accessibility Awareness Day press release last year. So we’re not getting the sum total of new things that VoiceOver can do. Or at least, we may not be getting that.
We may find some new surprises when people get to install the developer beta 1 right after WWDC on the 5th of June, which we will have a special episode with our panel to cover. It’ll be available immediately on Living Blindfully plus, and 3 days later to the general public.
So if you’d like to subscribe to Living Blindfully plus, you’ll get that right after WWDC.
Let’s have a look at some of the other features that were announced by Apple with respect to accessibility.
One benefits people with cognitive impairments, and it’s called Assistive Access. This can simplify the user interface considerably, and that’s what some people require.
Not only will this benefit those who are born with cognitive impairments, but it can also help older people experiencing dementia.
Many of us get to a stage in our lives where we see people we care about deeply who might have been able to use an iPhone once, but are experiencing dementia. And over time, it becomes increasingly difficult for them to use an iPhone to contact family and friends. If you can simplify the user interface there, that is a very significant contribution that Apple is making, helping vulnerable people who are going through a terrible experience to maintain contact with their loved ones.
If you have difficulty speaking, you can use Live Speech to type your responses on phone calls and in other situations where you want to say something, but you may not be able to say it.
Related to this is that there’s a feature coming from Apple called Personal Voice. This sounds quite similar in some ways to the Acapela feature that was announced recently, where you can bank your voice.
If you’re in a situation where you may lose your ability to speak, then Personal Voice will allow you to record your voice that’s stored on device, and you’ll be able to use it for features like Live Speech.
To set this up, you’ll need to read sentences provided by Apple. You may be able to do that with a Braille display, I guess, but that remains to be seen.
There is, however, no indication that these Personal Voices can be used as VoiceOver voices. That would be a very pleasant surprise if it were the case, but I don’t see any reference to that being a possibility. And you would think that if it were, Apple would make a bit of a song and dance about that. So this is very much being able to type things in, and have your voice speak these things back.
That’s not at all to undervalue what Apple is doing here. Imagine how differently we would remember Stephen Hawking if we could have heard his own voice as opposed to that trademark text-to-speech that he was known for.
So a really meaningful, excellent crop of accessibility features once again from Apple this year. Let’s hope that in addition to those, we also get some of the bug fixes many of us have been seeking for a while.
We are heading towards WWDC. What do you want to see? What do you think of these new accessibility features?
By all means, be in touch and share your views with the global community. email@example.com is my email address. Attach an audio clip if you want to that email, or you can just write it down.
You can also call the listener line, and that number is in the United States 864-60-Mosen. That’s 864-606-6736.
My sincere thanks to Pneuma Solutions for making transcripts of Living Blindfully possible.
Advertisement: RIM is free for all to use for up to 30 minutes every day.
Need more time? Well, there’s a range of monthly price plans available, and you can buy a RIM pass to assist one person as much as you like for a 24-hour period.
And there’s yet another way to use RIM. The RIM Community Support Package is designed for those who occasionally need to provide or receive remote technical support, but don’t want a pro plan. With the RIM Community Support Package, you get an additional 3 hours every single day. That’s on top of the 30 minutes RIM gives everyone free.
Normally, this package sells for just $149 a year. And it’s a steal at that price.
But if you sign up now, you can get it at the special price of just $99. That is one epic deal.
Check out the RIM Community Support Package at getrim.app. That’s G-E-T-R-I-M.app.
Demonstration of Google Music LM
[up-tempo piano music]
AI is everywhere. We’ve all heard about ChatGPT and Google Bard. We’ve heard on this show about ElevenLabs and the way that artificial intelligence can emulate voices, sometimes with stunning accuracy.
And what you just heard was artificial intelligence-generated music. I kid you not. It’s coming from Google’s AI Test Kitchen, and it’s a product called Music LM.
You may recall that when I was recapping Google IO last week, where there were over 100 mentions of artificial intelligence in that one keynote, I mentioned Music LM and how intrigued I was by it. But Google was only giving access to certain people, and I pleaded with the powers that be at Google to give me some access so I could show it to you. And I’m delighted to say that my pleas to Google appear to have been heard because they emailed me and gave me access to this so I can show you what it’s all about.
You know probably that you can ask ChatGPT and Google Bard to create an image based on a description that you type in. Well, this is the same thing, but for music. You can give it a description, and music comes back.
You can download the music if you want, but they are quite short snippets at this stage. And I think Google is treading very carefully on the whole question of copyright infringement. They have been working with the musician community on questions like this. So this is very much experimental.
It’s behind all sorts of locks that you have to get the keys to. So I’m delighted to have been able to do that, and we’ll put it through its paces today. Thanks to Google for allowing this to happen.
I have my Windows PC running JAWS, and I’m at the correct page. I’ll check the window title.
JAWS: AI Test Kitchen – Microsoft Edge.
Jonathan: And I’m at the AI Test Kitchen page. I’m going to navigate to the edit field on the page by pressing E.
JAWS: What do you want to create? Edit.
Jonathan: And we’ll turn forms mode on.
JAWS: What do you want to create? Edit. Contains text.
Jonathan: And that’s the key question. What do we want to create? The more descriptive you can be, the more happy you are likely to be with the product that you get back.
For example, to generate the music at the beginning of the segment that you’ve already heard, I told it that I wanted up-tempo piano ragtime music. And I think, it accurately reflected what it was that I was after.
Let’s try something completely different. And we’ll say Disco Up-Tempo 1970s, and we’ll just leave it at that.
I’ll press Enter.
JAWS: AI Test Kitchen Document.
Jonathan: And it will play 2 samples. The first sample is Camelot.
Here’s the second sample.
So as you hear, these are very much samples, kind of like the samples you used to get in the iTunes Store. They start quickly. They stop quickly. They’re not complete pieces. This is really meant at this stage for testing purposes, to give you an example of what’s possible. And to keep things in a test vein, these are mono, it’s true. I’m not failing to mix things correctly. They are coming back in pretty low quality mono. So very much a sample of what might be possible in future.
Now that we have these two samples, though, we can rate the one that we think most sounds like what it is we wanted. Actually, I think both of these are very good. They sound like something that came out of the 1970s, very disco-y.
If you attempt to guide it by giving an artist name, it won’t be happy with you, usually. So if you wanted, for example, something that sounded a bit like the Bee Gees or Donna Summer while we’re staying in this disco vein, and you type in an artist name, it is most likely not going to respond at all. It does not want to go anywhere near plagiarism.
Let’s see if we can get a big band feel going here.
JAWS: What do you want to create? Edit. Contains text.
Jonathan: I’ll just stop that and start typing in something new. So we’ll type big band, swing, up-tempo, happy, bouncy, 1940s. So kind of get a Glenn Miller feel going here without actually mentioning the name.
JAWS: AI Test Kitchen.
Jonathan: Press enter, and wait for it to come up with its magic.
Here’s the second sample.
That’s pretty impressive, isn’t it?
So let’s go and type something completely different. We’ll go into the field, and I will type synth pop, 1980s drums. What else should we have in there? Samples. We’ll try that. Let’s just see what we get back.
And the second sample.
Very similar to the first.
All right. How versatile is this thing?
So let us go into the edit field again. And we’ll try something that isn’t Western. Let’s see. Indian, sitar. Should we put meditation at the end of it? We’ll just see what happens.
JAWS: AI Test Kitchen document Notification. Oops! Can’t generate audio for that.
Jonathan: All right. Can we do anything else? Hang on.
Let’s just type sitar in here. And maybe Indian.
JAWS: AI Test Kitchen document Notification. Oops! Can’t generate audio for that.
Jonathan: All right. So it doesn’t seem to be happy doing a non-Western thing like that.
Let us try something like classical, orchestral, romantic. And we’ll try and we’ll press enter.
JAWS: AI Test Kitchen document.
Jonathan: It’s going to do this one. I’m not getting a notification.
It’s a fun thing to play with, isn’t it?
Let’s go back in here. We will try something completely different, and do techno, dance, club, upbeat, and see what we get.
JAWS: AI Test Kitchen document. Play audio. Track 01.
Jonathan: It’s going to do this one.
You can have plenty of fun with this.
And this is the experimental Google Music LM from Google’s AI Test Kitchen. I guess it could be quite useful for generating little bits of background music that are royalty-free for videos and even podcasts.
If you get access to this, one thing I have found with JAWS at the moment is that if you want to perform additional actions on a specific track, like downloading it as an MP3 file to your local device, you do have to turn the virtual cursor off. For whatever reason, when the virtual cursor is on in JAWS, the controls are not visible. So that had me rummaging around the screen looking for a download option for a while, because they said you could download it, but I couldn’t find it anywhere. I found that when I turned the virtual cursor off, all was revealed.
And so that is the Google Music LM. I will leave the service itself to give us a big finish.
Advertisement: Mastodon is the social network where the Living Blindfully community is most active. Join us there for conversation about the most recent episode, and items of news to help you live your best life with blindness or low vision.
Try ElevenLabs Professional Voice Cloning, Free!
This next announcement may only be relevant to Living Blindfully plus subscribers because by the time this podcast goes public, I think the deadline for submitting will have expired. However, I have also put this out on Mastodon, and via our Living Blindfully announcements list which I hope you’re on.
You may remember the demonstration of, and discussion with ElevenLabs, the voice cloning company that we featured in episode 215.
ElevenLabs has reached out to say that they’re inviting their blind users to try their new professional voice cloning feature, along with access to their creator tier for free. This will create a high quality voice which ElevenLabs says is indistinguishable from your own.
Access to the tier and the voice will be free for the first 3 months. If you find it useful, you can retain access by subscribing to one of the paid plans.
I will put a link to this form (because it’s quite a complex URL) in the show notes, so you can go and check that out.
Now, just to be clear about what this voice can and can’t do: you can’t use this as a screen reader voice, but you can feed it text and have your AI voice read it back.
This could potentially be very useful for those people who don’t read Braille at all, or who read Braille but not at a speed that allows them to read fluently. If you can have something read in your own voice, this could be very useful for a variety of applications.
So it’s a generous offer from ElevenLabs. I thank them for contacting Living Blindfully with it, and we’ll obviously be discussing this in future episodes.
The Value of Notetakers
We’ve talked about Optima. We’ve talked about BrailleSense. And that has started a discussion about the value, if any, of notetakers in 2023.
Kelby Carlson has a view on this. He says:
I wanted to offer my own thoughts about why notetakers are not obsolete, even though I only use mine on a limited basis.
There is a key thing notetakers do that most Braille displays do not. They allow easy reading of computer-created files with Braille translation in a portable format with top-down file management capabilities.
If I want to read a Word document on my phone, I must read it in something like Voice Dream Reader. I don’t have the option of simply opening it immediately in a Word processor. For example, it takes much more time and effort to open a document in Voice Dream Reader than it does on my BrailleNote Apex. It is also much more cumbersome to find a place in a document, and even to move through it.
There are some Braille displays where file management is allowed, but it is on a much more limited basis. For example, the Focus 40 only displays text files in computer Braille, and only allows writing in text files that are smaller than 32 megabytes.
The portability of a notetaker combined with file management capabilities is why I believe they are still necessary for some people. You can’t take a computer everywhere, and using a Braille display with a phone comes with its own limitations.
That being said, most of the new notetakers that run on Android don’t seem to have the advantages of old notetakers, nor do they offer much that simply connecting a Braille display to a phone would also not offer.
Thanks so much for sharing your perspective there, Kelby. People need to use the devices that work for them.
Not at all to argue with your point, but just to fill the picture in a little bit. The notetaker functions on the Focus 40 Blue are probably some of the most basic of the range of Braille displays.
For example, if you get the Humanware products like the Brailliant and the Mantis (which is fundamentally Humanware with APH’s input), you do get a file manager. You can open all sorts of documents. They will back-translate for you. So those displays are a wee bit more capable than the Focus is at this time.
Transcribing Audio Using Microsoft Word
This next contribution comes from John Gasman.
Voice message: Hello there, Jonathan!
I experienced today a brand new tech feature that I had read about and listened to from Kelly Ford, and that is the ability to take an MP3 file that I had recorded in the past, or you could also do it with something that you record on your computer, and literally transcribe it from that MP3 file into a Word document.
I was at my doctor’s appointment. I just got my gallbladder removed here a couple of weeks ago, and I was curious to see how it would work. And I record my conversations so that I can go back later on and take notes.
So I took the recording today and followed Kelly’s instructions, and it worked out pretty well. It does get the basics of the conversation recorded from that MP3 file to a Word document. Or you can record it to a text document.
It’s not perfect. One of the things I found was that if you talk right up against the end of your doctor’s (or whoever’s) sentence, it does get confused sometimes. Maybe if you wait a beat or two and then respond, it does a better job of recognition.
I’ll have to play with it some more and just see how it goes. Maybe Larry and I can yell at each other and we’ll transcribe it, and see what happens.
You can only use MP3 and WAV files, and there might be one other, I think. I don’t believe that you can use a FLAC file yet.
I’m sure Microsoft will eventually be upgrading this so that the recognition is better. But I really found it interesting, and it’s kind of nice to be able to use this.
It’s not as good as the transcription that you would do for your files, or that we used to all do with FSCast. Those are a lot better. Well, I know in the case of FSCast, there’s an actual individual transcribing, but even the services are better than what this is right now. But I hope it will improve. I think it’s a nice little feature that people can play with a little bit and enjoy.
Jonathan: It’s a pretty busy space, this, isn’t it? Because you’ve got tools like Otter AI. You’ve also got Whisper, which is a phenomenally accurate speech-to-text transcription option, in terms of these sorts of tools anyway.
If people haven’t played with Whisper yet, I can’t recommend it enough. We really should demo that at some point.
But John, thank you.
Since you raised this, let’s give it a go. I’ve recorded a short file, which I will send to the Microsoft transcription thingy.
Let’s just play that file, so you know what I’ll be sending.
Demo: Hello and welcome! This is Jonathan Mosen speaking at a fairly regular speed for me.
I’m not doing anything special, but I just want to make a quick voice recording so I can send it to the transcription in Microsoft Word. We’ll see what it makes of this, and how accurate the transcription is.
This will be the first time that I’ve actually tried this. But in looking at it, I see that you can choose the New Zealand accent. So it is supposed to be optimized for that. And the fact that some are not optimized for my particular accent can prove a problem when working with this sort of automated transcription. So let’s see how well this does.
I have saved that audio, and I want to send it to Microsoft’s transcription service through Word.
I happen to be running my handy dandy Microsoft Word on my computer. And I find that when I’m wanting to search for a specific thing, the easiest way to do it is to press Alt Q and then type what it is that I’m looking for. So I’m going to press Alt Q, and then type the word transcribe.
Keep in mind that if this doesn’t work for you, you do need to be running a fairly new version of Microsoft Word. And if you’re not on the Insider track for Office, you may not have this yet.
JAWS: Ribbon. Type to search, and use the up and down arrow keys to navigate. Submenu. Microsoft search. Edit.
Jonathan: I’ll type transcribe, and press down arrow.
JAWS: Action group. Transcribe, 1 of 1.
Jonathan: That’s what I want, so I’ll press enter.
JAWS: Transcription pane add in. Transcribe. Transcription pane add in web content page. Start recording button.
Jonathan: I’ll press tab.
JAWS: Link learn more. Select language combo box. English New Zealand.
Jonathan: That’s correct.
JAWS: Upload audio button.
Jonathan: And there’s the upload audio button. When I press this, we’ll have a standard open file dialog that pops up.
JAWS: Open dialog. File name. Edit combo.
Jonathan: And I will browse to the file that I want to upload. To get there quickly, I’m just going to type a lot of the path into the edit field, which is way quicker than browsing around.
JAWS: Living Blindfully.
Jonathan: And then, contributions.
JAWS: Items view list box. Contributions.
Jonathan: And it’s right there.
JAWS: Transcription recording.wav.
Jonathan: I’ll press enter to upload it.
JAWS: Alert. This may take a while. Leave this window open and check back in a bit. Edit. 20%, 38%, 42%, 43%, 46%. Transcribing audio file 47%, 48%, 49%, 50%, 51%, 52%, 53%.
Jonathan: Let’s just pause that. I’m just tapping the control key and we’re still getting updates about the percentage, which I will mute from this recording.
This is a fairly short recording. It was only maybe 30 seconds long or so. So it does take a while, but let’s see what’s here now.
JAWS: Blank, blank, blank.
Jonathan: Nothing at this point. And the reason for that is that it has returned focus to the document, which is empty. So I need to press F6 to get back to the transcription pane.
JAWS: Transcription pane add in. transcribe, transcription pane add in web content page. Your transcript is saved. You can close this pane in the document and come back to it later.
JAWS: Document 1 – Word dialogue. Close button. Transcription pane add in document, transcription recording.wav link. Playback speed combo box, 1X. Rewind button. Play button. Forward button.
Jonathan: You can play the recording within the transcription pane.
JAWS: Volume button. List with 3 items. 0 seconds link. Edit transcript section button. Add section to document button. Add to document button menu.
Jonathan: And now, let’s choose add to document.
JAWS: Expanded context menu. Add to document context menu. Just text, 1 of 4. With speakers, 2 of 4. With timestamps. With speakers and timestamps.
Jonathan: Because there’s just me, I’ll go back up.
JAWS: Just text.
Jonathan: And choose just text.
JAWS: Leaving menus. Add to document button menu. New transcription button. Transcription pane add in web content page.
Jonathan: I’m going to close this.
JAWS: Upper ribbon. Blank. Top.
Jonathan: Right. Let’s go to the top.
JAWS: Audio file. Link transcription recording.wav.
Heading level one transcript, hello and welcome. This is Jonathan Mosen speaking at a fairly regular speed for me.
I’m not doing anything special, but I just want to make a quick voice recording so I can send it to the transcription in Microsoft Word. We’ll see what it makes of this, and how accurate the transcription is.
This will be the first time that I’ve actually tried this. But in looking at it, I see that you can choose the New Zealand accent. So it is supposed to be optimized for that. And the fact that some are not optimized for my particular accent can prove a problem when working with this sort of automated transcription. So let’s see how well this does.
In reviewing this with the Braille display, the whole thing is hyperlinked. Which means that if you choose that hyperlink, you will go to a OneDrive page where what you uploaded is available for playback. It is very, very accurate. That’s quite impressive.
There were 1 or 2 extraneous punctuation marks that shouldn’t have been there. I noticed that reading on the Braille display, Microsoft Word is not capitalized, but it’s not bad at all.
It’s not going to replace Hannah, the wonderful transcriber who knows all our terminology and our lingo, and copes with all the different accents from people all around the world.
But for certain tasks, it can be very handy, particularly given that it does recognize multiple speakers and it can separate those out.
And then, you can do a global search and replace. So I’ve not tried this yet. But if it’s saying Speaker 1 and Speaker 2, then you can give those speakers names just by doing a global search and replace.
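To sketch that idea (the speaker labels are the generic ones Word inserts when you choose the “with speakers” option; the replacement names and the sample lines are made up for illustration), a global search and replace over a transcript could look like this:

```python
# Rename Word's generic "Speaker 1"/"Speaker 2" labels in a transcript.
# The names ("Doctor", "John") are hypothetical.
transcript = (
    "Speaker 1: How are you feeling since the operation?\n"
    "Speaker 2: Much better, thank you."
)

names = {"Speaker 1": "Doctor", "Speaker 2": "John"}
for label, name in names.items():
    transcript = transcript.replace(label, name)

print(transcript)
# Doctor: How are you feeling since the operation?
# John: Much better, thank you.
```

In Word itself, the same result comes from the ordinary Replace All dialog; the snippet just illustrates the principle of swapping every occurrence of a label for a name in one pass.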
So a very nice new feature. I guess we’d call this AI as well. My word, it’s everywhere.
Voiceover: Living Blindfully is the podcast with big dreams, and no marketing budget. Oh, dear, dear, dear.
This is where you come in.
If Living Blindfully is something you value, we really would appreciate you taking a few minutes to write a positive review of the podcast. Positive testimonials help spread the word and encourage new listeners to give us a try.
If you have a moment to give us a 5-star review, we really would appreciate that. Thank you for supporting Living Blindfully.
Time to Open iOS
A timely subject, given how close we are to Apple’s Worldwide Developers Conference. Lachlan Thomas writes:
For some years now, I’ve been thinking that it’s well past time Apple opened the iOS app platform up. By this, I mean that I think it’s time Apple allows software developers to distribute and sell their apps wherever they want, not just on the App Store.
I understand that Apple have put in place a lot of limitations on what app developers can do with the software they’ve developed. I understand many of these restrictions are for the good of the user. You don’t want someone to run a software program that’s going to corrupt system data, expose sensitive information to malicious parties over the Internet, or interact with hardware or other software in ways that could be detrimental to the operation of an iPhone or iPad. But in my opinion, some of the limitations just go a bit too far.
For example, you can now buy audiobooks from Spotify, but you can’t buy them within the iPhone app. You must use the website to buy them.
I sometimes use Audible, and I’m not subscribed to Audible’s subscription program. I understand that if you are subscribed to Audible, you can buy books within the iPhone app, but I can’t do that.
Please correct me if I’m wrong about any of this. I don’t know about all the limitations Apple has put in place for app developers. But if I understand it correctly, one of the reasons we had this unfortunate disagreement between Apple and FlickType several years ago is because of how Apple limits what developers can do when they’re writing their apps.
Don’t get me wrong. I think the App Store is a great place to get apps on the iPhone and iPad. But why shouldn’t Apple allow software developers to sell and distribute software from their own websites or other online resources?
I particularly think of iPad users who may choose to replace a laptop computer with an iPad. They should be able to get whatever software they want from wherever they want.
Imagine if Apple made it so Mac users could only download apps from the App Store and nowhere else. People would be furious.
In the early days of iOS, I can understand the restrictions Apple put in place. But the iPhone and iPad are mature products now. They’ve been on the market for quite a long time and are very ingrained into daily life for people around the world. Imagine how much better the iPhone would be if some of these restrictions were lifted.
Windows has never had such restrictions in place. And as long as you download software from a reputable source, you’ll be okay. If Apple want to combat threats such as malware (maliciously written software) and viruses, all they need to do is implement good protection and security technologies, just as Microsoft has done.
Thanks for writing in, Lachlan. I think we may see a little bit of progress at WWDC on this point in some markets. Whether Apple’s going to draw any attention to it remains to be seen. But there’s increasing regulatory pressure, particularly in Europe, on this point.
And word is that Apple may quietly allow what’s called side-loading in certain markets. Side-loading is a possibility on Android and has been all along. It allows you to install what you want from any source.
Apple has been vociferously opposed to this, and they’ve put all sorts of messages, opinion pieces, and lobbying strategies out there saying how destructive it is. Essentially, what they’re saying is users cannot be trusted to do the right thing.
As you rightly say, this has been a thing on Windows for a long time, and on Android since the beginning. The thing is, though, that both of those platforms have a lot more malware, and users have their data compromised a lot more often than is the case on iOS.
The question then becomes, if side-loading were allowed, does that mean that iOS would suddenly become a cesspool of viruses and malware? And the answer to that is, I don’t think that it would, because side-loading wouldn’t change the fact that there are some very robust technical constraints in place in the operating system. This is known as sandboxing. It means that if you’re a developer making an app, your app can only talk to the parts of the operating system outside itself that Apple allows it to talk to, through application programming interfaces (APIs) and other hooks.
There are many more safeguards built into the infrastructure of iOS itself than there are in Windows, for example, and I believe that this will, for the most part, keep people safe.
The other point, too, is that even with the App Store model, there are dodgy apps coming through. And that’s even in an environment where Apple is supposedly manually reviewing and then manually approving each app that goes in the App Store. And there are apps being approved for the App Store that would meet the definition (the legal definition) of the term passing off.
I think this is where FlickType was so badly damaged. While Apple was having an argument with the developer of FlickType, very similar apps were being developed and released that essentially impersonated FlickType and tried to take his market share away. And the contention of the FlickType developer (a matter I believe is still pending legal resolution, so I have to say this is an allegation) is that punitive tactics were deployed against FlickType because the developer wouldn’t sell it to Apple. We covered this way back when, when this became a thing and work on FlickType stopped. So it is a problem.
Now then, you were talking about not being able to purchase subscriptions or products from within certain apps. And the reason for that is that there’s an argument over revenue. Apple is contending that because it delivers so many users to other companies so readily, it should take a cut. It’s essentially a broker. It’s an intermediary, and it wants a handling fee.
And that’s why when Netflix, or Spotify, or a number of those other companies that have now withdrawn the ability to pay from within the app charge a subscription fee to you, 30-odd percent of it (unless there’s been some sort of special negotiation) goes to Apple, simply for being a processor of the subscription and running the store that got you the app in the first place. It’s money for jam. Apple’s raking it in.
And some of the companies have pushed back and said, “Hell no! We’re not giving you this money for jam, Apple.” And some are taking legal action. And that’s why I think you will see some movement in this area.
This is also a reason why, at the moment, I don’t offer Living Blindfully plus through Apple Podcasts. Because for every subscription that someone paid to keep this podcast going, Apple would take 30%. So to make those subscriptions meaningful, I’d have to set them at a reasonably high number.
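To illustrate the arithmetic behind that (the dollar figures are hypothetical), here’s a minimal sketch of what a subscription would need to be priced at to net a given amount after a 30% platform commission:

```python
def price_to_net(target: float, commission: float = 0.30) -> float:
    """Price you'd have to charge so that, after the platform's
    commission, you still receive `target`."""
    return target / (1 - commission)

# To actually receive $5.00 per subscription with a 30% cut,
# you'd have to charge about $7.14.
print(round(price_to_net(5.00), 2))  # 7.14
```

So the 30% cut doesn’t just shave the margin; it pushes the sticker price up by over 40% if the creator’s net is to stay the same.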
I’m very grateful for people who subscribe, and I like to set up something that makes sure that the money is actually making the difference people want it to make to keeping this podcast going, and helping with the costs of production, or those things that we’ve talked about.
So I far prefer to work with a provider that is honestly trying to give me the revenue and not take nearly a third of it for doing nothing at all. [laughs] It’s an extraordinary situation.
So I think we’ll see some change. Whether we will see it in all markets, I guess, remains to be seen.
On the other hand, though, I do remember getting my first iPhone. I was going from the Nokia Symbian platform. And I remember the days where you’d hear about an app and you’d have to search for it on the Web, and there would be no single place that was at least accessible.
Maybe 1 or 2 companies tried in the latter part to create an app store-like experience. But largely, you would go to the website created by the app developer, and then you would download and install the app. And it was a really tedious, cumbersome process. Does anybody remember Nokia PC Suite and all of the hoops that we used to have to go through to install apps?
And one of the first impressions I got when I did move to the iPhone, once some browser improvements and Bluetooth keyboard support had been introduced, was, “Yeah, I can use this as a viable tool now.”
I moved to it, and I thought this App Store thing is just so seamless. There was one single place to go. You search for an app. If it exists, you download it, you install it. It’s so frictionless to buy, all of those things. So I think there’s a lot to be said for the user experience of the App Store and minimizing complexity in terms of where you find an app. But this whole business of revenue sharing for in-app purchases, subscriptions, that kind of thing, it really is dodgy.
So I’d be interested to hear what others think about this. Drop me an email at firstname.lastname@example.org. Attach an audio clip if you want, or you can write the email down. The phone number in the US is 864-60-Mosen, 864-606-6736.
What’s New With Hable One
Last year, we discussed the Hable One, a small controller for use with an iPhone or an Android device. And there are updates.
And to tell us what is new with Hable One, I’m joined by Ayushman Talwar from Hable. Welcome to the show. It’s good to have you here.
Ayushman: Hi, Jonathan. Nice to meet you.
Jonathan: You are one of Hable’s founders, is that right?
Ayushman: Yeah, that’s true.
So my granddad was visually impaired. I come from India. And growing up with him, I saw what life was like for him as a person with a visual impairment.
When I moved to the Netherlands and I met Freek, we wanted to solve a problem around accessibility. And I had a personal problem, so I shared it with Freek. And that’s where the idea of Hable was born, actually.
Jonathan: Hable then has really taken off, hasn’t it? I know you’ve sold quite a few thousand units, and people really appreciate it.
Ayushman: I guess that’s true. But I think the metric for us is not about units sold. It’s more about how many lives we have impacted.
I just take an example of my granddad. And I feel like okay, if I was able to make an impact on his life, he could do so much more. He was truly independent in his way of functioning or using apps to do things on his own.
And if I kind of extrapolate that to the lives of other people that we have touched, it’s amazing what we’ve been able to do.
And the feedback we hear from people is also nice. They really feel satisfied, and they really feel happy. We have a lot of people whose lives we have changed or transformed, as they say it. So I’m pretty happy with how far we’ve gone with the community. But there’s a lot more to come, and we’re looking forward to that.
Jonathan: And for those who didn’t hear the Hable episode we did last year (people can go back into the archives and hear that), I was demonstrating how to use it.
It is a very small device. You can easily pop it into a pocket.
And I know a lot of people do struggle with touch screen devices. They just find them a bit cumbersome, especially when it comes to actually putting data onto the devices. And so that’s the problem that you are fundamentally seeking to solve?
Ayushman: That’s true. I think we’ve been associated with being a Braille keyboard. This is a very hard one to change in people’s minds.
It is not a Braille keyboard. It’s actually a controller to the smartphone. So it’s like a remote for the smartphone, for individuals who feel that the touch screen is maybe difficult to learn when they’re experiencing sight loss, or they feel that they’re not as productive with the screen and they would like to do a bit more.
So Hable can help in 2 ways. It has, first, the option to help with navigation. And navigation would mean everything from opening an app, and navigating or scrolling on an app, to closing an app, doing a task on the app.
The second role that a Hable can play in somebody’s life is typing. A lot of individuals or customers actually type in 3 ways. They could use speech to text, so they speak out a sentence and it’s transformed into text input. The second is they send voice messages on WhatsApp or Messenger. That can be done just with a touch of the Hable. It has physical keys, and when you press them, you can send a message. And the third is for individuals who might know Braille, or want to practice their Braille skills. They could use the controller for that as well.
Jonathan: Right. I see the distinction you’re making there. So you really want to emphasize that even if you don’t know Braille, Hable still is for you because of the degree to which you can control your device with the Hable.
Ayushman: That’s true. And I think the example for me is, again, my granddad. He became visually impaired later on in his life. So he was 45 to 50, and he had to take early retirement. And then, learning Braille was not an option for him, or was not an easy task to do.
And for those reasons, we’ve met so many other individuals as well who actually don’t associate themselves with Braille, and that’s completely fine. And we see Hable actually making an impact on those people’s lives as well.
Jonathan: And I mentioned this last year too, when we were talking about Hable. I have to compliment you on the very elegant user interface because it’s super logical. And so once you understand the concepts of how to use these keys to move forward, to move back, it’s really consistent and quite elegant. So that’s great stuff.
But the reason why we got you back on is not to simply rehash what we’ve already talked about. But there are new things to talk about.
And could we talk first about the iOS app for Hable? This sounds like a pretty significant breakthrough.
Ayushman: Yeah. Like you mentioned previously about the interface, I have to credit that back to the community. The community was very helpful in giving feedback on what they wanted, what they expected. And by listening to them, we actually made the interface what it is now.

And that connects to the new features like the app. Again, this was the most requested feature by the community.
Since Hable is a smartphone controller, we wanted it to be smartphone-first, so smartphones or smart devices like tablets.
And right now, to update a Hable, whenever there’s a significant update to iOS or Android, Hable also needs an update. Or if a new user interface has been suggested by the community and we want that to be accessible for everybody, we would like that to be updated on everybody’s devices.
But the current system is not as accessible because it requires a PC and a wired connection with the Hable. And we think that this can be improved. And the community actually asked for this because they were like, “Yeah, we want an easier method to do this.”
So we started to change that by first launching an app for iPhone and iPad. The app can be found on the App Store, or there’s a link for it on our website. And in the app, there’s a big button, the first button, which says update. And when you click on it, it should update your Hable.
We wanted to kind of carry the simplicity of the Hable over to the Hable app as well. And that’s something new that we have come up with.
Jonathan: And that’s available for iOS only, correct? So it’s available in the iOS App Store for now?
Ayushman: Yes. So it’s been a work of around 4 to 6 months. And we don’t have the resources to develop the app for both Android and iPhones, so we first made it possible for the iPhone and iPad. And we are working on it again for Android.
Looking at the feedback again, I think by summer this year, June 2023, we’ll have an app for Android as well, for the Android users.
Jonathan: How long does it typically take to update the keyboard?
Ayushman: So do you mean from when somebody clicks on it?
Jonathan: Right, because presumably it’s using Bluetooth. I mean, it’s probably downloading the update first to the phone or iPad. And then, it’s obviously got to update the keyboard’s firmware.
Ayushman: That’s true. So the update process from the time somebody clicks on an update button, to the time that the update has been finished on the application should be 60 seconds to 90 seconds.
Jonathan: All right. Okay. That’s no time at all.
Is there any other information you can get about your keyboard at this stage or configure it in any way? Or is it purely for updating right now?
Ayushman: For now, it’s purely for updating.
But we’ve been listening to a lot of users, and we will be releasing some more things for the Hable. For example, the ability for users to customize their own keys.
We hear from a lot of people who don’t use the Hable for typing, but mainly for navigation, that they would like it to be simpler. So they don’t want multiple key combinations. They want an even simpler Hable, if you will.
So we see there are possibilities in giving users or the smartphone trainers the option to configure the keys on their own. That is something which is under wraps, and we hope to launch that as well with the Hable app.
Jonathan: And even though the Android app will take a wee while, now that you can turn your resources to that, Android users haven’t been left out in the cold. You’ve made some improvements to the way that Hable is working with Android devices, I understand.
Ayushman: That’s true. Again, the credit goes to the community and a lot of super users who’ve been behind this. So I do have to give the credit back to them.
When we started with Hable, we saw that there were some features in TalkBack for Android which were a bit different from VoiceOver on iOS.

We don’t play favorites. And we do think that users shouldn’t need to switch from the phone that they already have.

So we’ve tried to bring that experience over and make it uniform on the Hable, so that users don’t have to do the difficult swipe gestures, which they sometimes are on Android phones, I would say, and also for other functions. They don’t have to do those difficult gestures, and it’s more intuitive to use TalkBack for Android with the Hable. Those words are from users, not mine.
The features include reading controls. The reading controls are some of the features that could enable you to read a word, or a character, or a sentence. But also, give you more options or tools for when you’re reading a text, or when you’re on a web browser and you want to navigate by headings, by links, or enter a text field.
This was possible on iOS through something called the rotor function. There’s essentially a similar feature on Android, and we’ve now made that possible on the Hable as well.
Then there are things like the missing home button on iOS devices, for example. A lot of smartphones nowadays do not have a physical home button, and I think that’s something the community sometimes misses: one button where, wherever they are on the phone, they could just reset and go to the home screen. And that is possible with the Hable.
A shortcut for notifications. Wherever they are on any app and they want to go for notifications or the latest message that they want to read, that’s possible on the Hable just through a shortcut.
So those are some of the functions which make it more usable than TalkBack or VoiceOver gestures alone.
Jonathan: For those people who do know Braille, one of the comments I got back when people had a look at the Hable was, “This is a really cool device. But because I have used Braille all my life, what I would like to do,” they say, “is have a version of the Hable that is lined up like a Perkins Brailler-style keyboard.”
So at the moment, you’ve got 3 keys aligned vertically on each side, kind of like a Braille cell, whereas a Perkins Brailler device has six keys aligned horizontally.
Have you considered alternative iterations of the hardware? I appreciate that manufacturing costs are expensive, and the market may not be there for that. But I wondered if that’s something you had looked at.
Ayushman: I think to understand that, we really have to look at why the Hable exists. So we do understand that there are a lot of controllers or keyboards like Perkins-style. And for us, we wanted it to be a more mobile device, a device that easily fits into a pocket or a purse for a person, something that they can carry easily, which also meant that it can be used with the smartphone.
We are looking at more options. And the future of Hable is actually to integrate the Hable and the phone together. We don’t want it to be 2 different devices, but we essentially want to see a controller for the phone so they can really use the phone using some physical keys, adding more accessibility options to the phone.
So going into the future, we don’t see ourselves aligning towards Perkins, but adding more support to the more innovative design that we have, and seeing the future go in that direction.
Jonathan: Good to know. And how can people find out more information about Hable One? You’ve got a pretty good network, I think, around the world of distributors now.
Ayushman: That’s true. So when we started Hable, we developed a product with the community. And then, we started shipping them out.
We soon realized that our goals are not only to ship you the Hable, but really to help you in learning the smartphone, or help you in the journey of discovering a smartphone with sight loss.
And that’s where Hable is right now. So we have a lot of tutorials coming up on our website, with tips and tricks. These tips and tricks can be used with a guardian, or with a friend, or with a technology trainer to advance your skills, or to learn new things about how to use apps like WhatsApp, email, a calling app, or a social networking app.
And we want to offer this as a service. So we want to offer this free service to all who are part of the Hable community. And we do this through our website.
And we also have community groups, which participants can ask to join by emailing us or contacting us on our website. And we have hundreds of users actually helping each other out with the new problems that they face with the smartphone.
Jonathan: And what is the URL for that website if people want to visit?
Ayushman: The URL is IAmHable.com. So it spells out as I-A-M-H-A-B-L-E, which means “I am speaking”, because the name Hable essentially comes from hable, which is a Spanish word. So our website is I-A-M-H-A-B-L-E.com.
Jonathan: That’s brilliant! Well, it’s great to see Hable continuing to advance and that you’re finding your niche in this market, which is a really important one.
So I appreciate you coming on the podcast to give us all the updates.
Ayushman: Thanks, Jonathan. Thanks for having me.
Jonathan: And for our American listeners, Dennis Long has drawn this to my attention.
For the USA only, if you purchase directly from Hable, they’re having a summer sale during which the Hable is 43% off. Not too long to run now for the sale.
Voiceover: Stay informed about Living Blindfully by joining our announcements email list. You’ll receive a maximum of a couple of emails a week, and you’ll be the first to learn about upcoming shows and how to have your say. You can opt out anytime you want.
Join today by sending a blank email to email@example.com.
Why not join now? That’s firstname.lastname@example.org, and be in the know.
Netflix, Audio Description and Dolby Atmos
Rod Carne is pursuing his relentless quest for audio-described titles with Dolby Atmos. I get that, and I congratulate him for it. Because he’s spent all this money on the gear, and he wants to make use of it. And I completely understand because I’m in a very similar position. Here is his latest missive.
“After my glowing reports” (yeah, they were glowing) “of Netflix delivering AD with Dolby Atmos, I now have to report that at present, AD is reduced to surround sound, which is selected by choosing English 5.1 with audio description, or similar with other languages like French or Polish.
From what I read, many streaming services are focusing on using Dolby Digital Plus in order to release bandwidth to accommodate 8K video.
“How about a system without video?”, says Rod.
Well, how about it indeed? That might not work all the time if you’ve got sighted family members around. In our household, when the kids aren’t here (and they’re all grown up now), we’re a 2-person all blind household.
And I’m with you. I would gladly just play the audio description soundtrack in the glory of Dolby Atmos without the video.
I do hope that we keep the pressure on the studios who are making these absurd decisions. Why would you take an amazing surround sound experience away from the people who may, arguably, appreciate it most?
I know that people who are advocating on audio description are well aware of this. You may remember we talked to Joel Snyder about it as well.
So I congratulate everybody who’s continuing to push this along.
If anybody does have any updates on this from the studios, or on how advocacy is going this year on this issue, where we are getting an inferior audio experience by virtue of our requirement, or our preference, to have audio description, please keep us in the loop, as they say.
Focus 40 Blue Reliability and Repairs
Caller: Hello, Jonathan. This is Reginald George calling in from Washington State.
And I had 2 comments. One of them is about the Focus Braille displays.
I have it on pretty good authority from someone who works with the iCanConnect program and sends Focus displays in to Freedom Scientific that Freedom Scientific is now sending those displays over to the Netherlands to get repaired, when they were previously being repaired in California.
If this is the case, it’s really unfortunate. They said that they’re taking twice as long to get back, and their consumers are not happy. I’m very surprised. They’ve been hearing the same things about the quality control issues with the repair department, and it’s just kind of unbelievable that things have gotten to this state.
And that’s all I really know about that. It could be a rumor. It’s just something that we need more information about.
The other comment I have is about subscription software.
And the difference between subscription and a software maintenance agreement is that at least when the maintenance agreement runs out, your software continues to work, to a point. But if your Microsoft 365 software runs out, don’t they make it to where you can’t continue to save your files or access your OneDrive, or you actually don’t have most of your functionality? So it’s almost like you’re being forced into continuing to pay for another year of access, whether you can afford it or not, once you’re locked in.
And that’s the problem that I have with everything being on these renewable, rotating, revolving plans is that you’re locked in, and it’s not just on a few things anymore. It’s practically everything you own. That’s why I do not like subscription software.
I want a perpetual license. Yeah, it might get out of date. But at least, I’ve got something that I can use. And then, it’s up to me when I want to buy a new version.
Jonathan: Thank you. Good to hear from you, Russ.
Let me work my way backwards through your contribution. Or perhaps, I should say, [Let me work my way backwards through your contribution, in reverse].
Xavier agrees with you. He says:
“Giving my opinion about subscriptions.
I do not like them. Put it this way. Many people don’t have a lot of money to just drop on things. A subscription closes off access because sure, I can pay you right now, but what about a month from now? I still want the service, but everyone’s situation will differ month to month.
A one-time purchase, while more expensive, means it’s yours forever. That way you don’t have to worry about your needs when it comes to money.”
Now, returning to the question of Focus 40 Blue Braille displays. I tried hard to get a response from Freedom Scientific on this before editing this episode. I have contacted a couple of people now. The key person who can give me an answer had not responded by the time I stopped recording this episode.
If we do get a response subsequently, you can be sure that I will air that. We obviously want to be passing on accurate information.
So I don’t know if it’s true that Freedom Scientific is sending products off to the Netherlands for repair. Obviously, the ongoing firsthand experiences we’re hearing of people having problems with the Focus 40 Blue 5th generation display on multiple occasions is really concerning.
Integrating Living Blindfully Plus With SensePlayer
Voice message: Hey, everyone. This is Christopher Wright.
I usually don’t contribute like this to the podcast because of my stutter, because I find it’s quicker to just type, and I have more time to think about the things I’m trying to say.
But this is an exception because this is going to be a quick demo of importing the plus version of this podcast into the SensePlayer.
Yes, I got my SensePlayer. I know I said I was going to wait. Well, that didn’t happen. And well, you know, boys and their toys, right?
There are a couple of ways that you can do this. If you export your podcast feeds from an app, you can import that OPML file into the player directly. Or you can make a file yourself, paste links into it, and copy that file across. I’m going to show you this second method, as opposed to using an OPML.
Also, do be aware from my testing: if you import an OPML, the player will also import any standard RSS feeds it contains, so you’ll be able to see those. But if you try to download episodes from one of them, it won’t let you, because obviously, it’s not a podcast feed.
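For context, the OPML file Christopher mentions is just a small XML list of feed URLs that most podcast apps can export. A minimal sketch of what one looks like (the feed name and URL here are placeholders, not a real Living Blindfully Plus link):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<opml version="2.0">
  <head>
    <title>My podcast subscriptions</title>
  </head>
  <body>
    <!-- One outline element per feed; xmlUrl points at the RSS feed. -->
    <outline type="rss" text="Example Podcast"
             xmlUrl="https://example.com/feed.rss"/>
  </body>
</opml>
```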
So I’m going to start this by assuming that you already have the link for the podcast copied to your clipboard. This is the private link that you get after you set up your plus plan. Now I actually don’t have it on my clipboard right now, so I will pause the recording and resume when I have it.
Okay. I have the podcast link copied to my clipboard. Let’s begin. I’m going to go to the start menu,
NVDA: Start window.
Christopher: and go to notepad.
NVDA: Untitled – notepad.
Christopher: And basically, you’re going to paste this link into notepad. It’s a very long link.
If you want to add multiple podcasts this way, you put their links on individual lines.
So now that that’s done, I can save the file. I’ll hit control S.
NVDA: Save as dialog. File name.
Christopher: And it asks me for the file name. The file name that this uses is podcast.url, so we’ll do podcast.url.
NVDA: Save as type. Text documents.
Christopher: And it’s important here to change the type to all files.
NVDA: Expanded. Save as. List. All files. .. Save button. Alt+S.
Christopher: And I can save it. I think, the default save location is my documents.
NVDA: Podcast – notepad.
Christopher: Once you have this file, you can copy it to the player.
As of right now, using the latest version of the firmware, it doesn’t seem to want to copy the file directly if you connect the player to your computer. So the best way to do this is using a flash drive or an SD card. To speed this up, I have the file on a flash drive that’s already plugged into the player.
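For readers who prefer a command line to Notepad, the podcast.url file Christopher describes can be created like this. It is just a plain text file with one feed URL per line; the URLs and the copy destination below are placeholders, not real Living Blindfully Plus links:

```shell
# Build podcast.url: a plain text file, one podcast feed URL per line.
# These URLs are placeholders; use your own private feed link(s).
printf '%s\n' \
  "https://example.com/feed1.rss" \
  "https://example.com/feed2.rss" > podcast.url

# Copy it into the Podcast folder of a flash drive or SD card.
# The mount point is an assumption; on Windows this would be a drive
# letter, e.g. copy podcast.url E:\Podcast\
# cp podcast.url /media/usb/Podcast/
```

The only requirement is that the file is named podcast.url and ends up inside the player’s Podcast folder, exactly as in the demo.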
NVDA: File manager.
Christopher: Let’s go into the file manager.
SensePlayer: File manager. USB 2 2.
Christopher: I’m going to go into the flash drive here.
SensePlayer: System volume information. 1 2.
Christopher: And I’ll down arrow.
SensePlayer: Podcast.url. Unknown file type. 2 2.
Christopher: Here’s the file with the link inside it. I’ll hit the menu key.
SensePlayer: Menu opened. Copy.
Christopher: And I’ll choose copy.
SensePlayer: Copying Podcast.url. Unknown file type. 2 2.
Christopher: And I’ll left arrow.
SensePlayer: USB 2 2.
Christopher: And go up to the flash disk, and then navigate to the podcasts folder.
SensePlayer: Flash disk. Daisy, 1 6. Podcast, 5 6.
Christopher: And I’ll go to the right to go in here.
SensePlayer: Podcast. Unknown file type. 1 2.
Christopher: And let’s open the menu.
SensePlayer: Menu opened. Paste.
Christopher: And paste this in here.
SensePlayer: 100. 1 objects copied. Podcast.url. Unknown file type. 1 3.
Christopher: Now the last step, (let’s go back to the home screen), is to go into the podcast player.
SensePlayer: Daisy. Documents. FM radio. Web radio. Podcasts. Podcasts. Creating feed list. Feed list creation complete. Feed. Living Blindfully. 1 2.
Christopher: And there we go. It has imported the feed. If I hit the play key.
SensePlayer: Feed. Living Blindfully. Episode, a tutorial on Mona for Mastodon, the most powerful, accessible way to do Mastodon on your iPhone, iPad, and Mac. Not downloaded. 1 228.
Christopher: Now let’s give you a demo here of how quickly this thing downloads because it is fast. So this is a pretty big file. I think it’s over 120 megs. This is the demo on the iPhone Mastodon client. And I’ll just hit the play key, and watch how fast this downloads.
SensePlayer: Start content download. 5. 10. 15. 20. 25. 30. 35. 40. 45. 50. 55. 60. 65. 70. 75. 80. 85. 90. 95. [08:30] AM.
Christopher: And it just announced the time, but that’s okay.
So now the file has been downloaded. If I go
SensePlayer: A tutorial of Mona for Mastodon. Checking the download file. Episode, a tutorial on Mona for Mastodon. File manager.
Christopher: All right. So let’s go back to the home screen, and let me talk about some stuff real quick.
The podcast app is slightly buggy. If you choose to just download files instead of downloading and playing them, you can’t play them once they’re downloaded. It just says checking file. That’s a bug.
Also, if you try to delete a file, it won’t delete the file, even though it acts like it does.
And the way I found to play podcasts is to go in here.
SensePlayer: File manager. Flash disk 1 2.
Christopher: Back to the flash disk.
SensePlayer: Daisy. Data. Documents. Music. Podcasts, 5 6.
Christopher: And go into podcasts.
SensePlayer: Living Blindfully, 1 3.
Christopher: And here’s the folder that it creates for each feed. If I go to the right,
And this is the other bug. It’s not tagging the file names with the titles of the episodes. It’s using some internal coding. So if you want to play your podcast manually like this, this is not ideal right now.
Just to show you how fast this took, let’s show you how big this is. So I’m going to hit the * key.
SensePlayer: Information dialog. Type: mp3 file.
Christopher: And down arrow.
SensePlayer: Size: 137.04MB.
Christopher: So that’s a pretty decently big file, and it took about 10 seconds to download. Now granted I have a very fast network connection, but the point still stands.
The Stream 2 would take like 10 minutes to download this.
From what I’ve heard, the Stream 3 is a little faster, but not nearly as fast as this is. So yeah, this thing rocks.
And then of course, if I hit cancel.
Christopher: And there’s the file. So that’s it. And I am going to put the player to sleep.
And just as a last little tidbit, there was a person asking on the podcast if you could hold the keys to check the time and battery.
As of right now, the answer is no. So if the player is turned off or asleep, the answer is no.
Anyway, I hope that has been helpful.
You guys take care.
Jonathan: It’s really good of you to put that together for us, Christopher. Thanks for that.
And I’ve also put notes up on the Living Blindfully plus page, where we have some information on subscribing to Living Blindfully plus in many commonly used podcast apps and devices.
Advertisement: Transcripts of Living Blindfully are brought to you by Pneuma Solutions, a global leader in accessible cloud technologies. On the web at PneumaSolutions.com. That’s P-N-E-U-M-A solutions dot com.
How Do We, and How Should We, Respond to Offers of Help We Don’t Want?
I enjoy getting most of our Living Blindfully contributions, but this one is particularly good because it makes an important point to think about, and it spins, as we say over here, a very good yarn. I do like it when someone can tell a good story.
So with that preamble, let’s read it. It’s from the Netherlands, and it’s from Mohammed. He says:
“I wanted to share with you, and all the Living Blindfully listeners, one of the strangest things that ever happened to me.
I’m sure every blind person (or low vision person, for that matter) has encountered a situation where someone offered them help in a clumsy or downright rude way.
I live in Amsterdam. And sometimes, if you’re moving through a crowded place, people will grab you and start pulling you along.
In that case, you have to either make them stop if they’re pulling the wrong way, ask them to not do that, or tell them where you’re going so they take you to the correct place, even if you don’t need it.
I’d like to take a peek into the mind of a person that does that. How could you possibly guide someone if you don’t even know where they’re going? What does it matter that you can see if you don’t know where you need to be? This happens fairly often, though. So as inexplicable as it is, it’s not that weird.
So let’s amp up the weirdness.
Here I was last year, walking down the street after getting some groceries, minding my own business, when I heard someone running my way while breathing heavily. Now this is Amsterdam; it’s a busy city. Someone might be running or something, so I stepped aside to let them pass.
There was a row of parked cars on my right side, and a row of houses on my left. I stepped to the left, so towards the houses.
Instead of passing me as I expected he would, this gentleman gave me a little tap on my left shoulder, something similar to the way you give a ball just a little tap with your foot to keep it from rolling on when it rolls your way. I guess he wanted to stop me from walking into the wall. But I wasn’t moving towards the wall; I just stepped aside.
Anyway, this man stopped running and I wasn’t really in the mood to engage with him and ask what he was doing, so I moved on.
And what do you know? A couple of seconds later, I hear the same footsteps, the same heavy breathing. And now, I receive just a little tap on my right shoulder.
At this point, I’m thinking, is he seriously trying to keep me in my lane? What is he doing? Playing human lane assist? This can’t be good for him. He sounds extremely stressed.
So I stepped slightly to the left. And what do you know? I got a little tap on my left shoulder.
At this point, my inner cruel demon was tempted to lead him on a merry trip through the city, zig-zagging wildly and turning unexpectedly. I was wondering what would happen if he got near water, or a busy road. What fun we could have together!
Instead, I turned around to him and called out. He immediately came to me.
I explained that what he was doing was completely unnecessary. I showed him how I used my cane, that I could detect obstacles with it, and that I’d walked this route very often and knew where I was going.
He asked me if I was sure, and I told him yes, I’m all right. I thanked him for his attempt to be helpful. He said goodbye, we shook hands, and he left.
About a year later, the strangest thing yet happened.
I was on my way to the tram after a medical appointment I had. I just crossed a bridge crossing a canal and turned left to walk alongside it. There was a thin strip of grass between the sidewalk and the canal on my left, and a bicycle path and busy road to my right.
I angled slightly left to the grass in order to make sure I stayed on the pavement and avoided all the bicycles whizzing past, so I was moving towards the water ever so slightly.
At that point, I heard screaming from behind me. This was pretty far off, but clearly audible, as the man was shouting at the top of his lungs.
I thought, well, Amsterdam is a busy city. All manner of things happen here, so I can look over my shoulder. But that won’t tell me anything. I’ll keep my ears peeled, and see if I need to take any action.
The screaming became louder and louder as the shouting man came closer and closer. He was moving fast. He wasn’t walking.
When he was a couple of meters behind me, I realized he was talking to me. “Stop!”, he shouted. “Stop! Stop! Stop!”
So I stopped, turned around, and asked him what was wrong.
He stepped off his bicycle, breathing heavily, and said, “Water! There’s water on your left. Be careful!”
So I launch into my normal, and at this point well-practiced, blind-person-explains-to-sighted-person-how-they-navigate spiel. I’m waving around my cane to show him how I can detect lampposts and other hazards, how I can use it to detect different types of ground such as pavement and grass, and how I would have known there was water long before I fell in unless I was extremely careless. I was in the middle of that when someone else showed up.
“What’s wrong?”, he shouted. “Where are you going? The tram, you say? I’ll take you there.”
I responded by urging everyone to calm down, then launched into my spiel once more from the top.
In the middle of my spiel, the first person that stopped me, the one who kept shouting stop, took a couple of steps backwards and burst into tears. This is not subtle, either. He was standing there, sobbing. The second person helpfully explained that the man was crying, even though it was clearly audible. But when he told me he had tears in his eyes too, I decided to not launch into my blind man explains to sighted people that he can use their voice and sounds that they make to gauge their mood spiel.
I thanked both men for their help, asked them if they were okay, gave the first man who was still sobbing lightly a hug, and left after reassuring them that I was okay and could make the journey myself.
This final situation did make me think, though. We as blind people often talk about the misconceptions sighted people have of us in terms of how it affects us.
To be clear, we should do that. We are the ones most affected by their attitudes, and almost never positively. But this type of attitude has an effect on them as well.
I don’t exactly know why these two men started crying, whether it was fear, relief, or some other emotion. I just know that the emotion wasn’t a happy one.
It was a reminder to me that even though I’m sometimes impatient with people offering me help when I don’t need it, or start “helping” without asking me if I need it, or compliment me for some mundane thing, thereby showing extremely low expectations, or any of the things we regularly complain about in our community, there’s usually a well-meaning person doing that, and perhaps a bit of patient explaining will set them straight. I, for one, have locked away my inner demon in a slightly more secure location where he can rail and plot, but only escape when something truly horrible happens, such as outright discrimination or hate.”
That is a brilliant message, Mohammed. Thank you so much for sending it.
I was going to craft a response to it. But then, I remembered that back in 2014 on my personal blog, I wrote a post called When Should We React? When Should We Let It Go? And it’s still out there.
It did generate a bit of controversy, and I’ll talk about that controversy in a little bit.
But I will read an abridged version of that post. If you want the full thing, you can go and look it up on the mosen.org website. But it goes like this:
I’m not discussing clear breaches of the law, such as when you’re denied a table at a restaurant because you have a guide dog. That one, to me at least, requires you to take a stand.
In this instance, I’m talking about how we, as blind people, react to individual sighted people who are not an authority, who may approach us or say things to us in a manner we find objectionable.
I could give you many examples from my own life experience to try and get my point across, but here are a couple that are uppermost in my mind as I write this.
A couple of years ago, I was walking to a restaurant with another blind person whom I didn’t know particularly well.
We were about to cross a busy intersection, and a member of the public came up to us and said, “Let me help you,” grabbing my companion by the shoulder.
My companion angrily shook off the woman offering assistance and said, “Don’t you dare put your hands on me.”
The member of the public was clearly quite distressed, saying she meant no harm and was just trying to be helpful.
At that point, I introduced myself by name, thanked her for her help, and said that if she wouldn’t mind assisting us, that I could take her arm. That way, she could walk ahead of me.
My companion knew the area, and we probably didn’t need the help. But what was to be gained by offending a well-meaning member of the public?
We got across the road. She asked me the usual questions I get in the States about where I got that funny accent. And she left, apologizing once again for upsetting my companion, but saying she knew better what to do next time.
I’m not oblivious to the argument that there comes a point where you just get sick and tired of being pushed and shoved, that you just get fed up with having to be some sort of goodwill ambassador for the blindness cause all the time. I’ve been there, having had days when I’ve handled situations poorly for no other reason than I’m human and was having an off day.
But let’s face it. Blindness is a very low incidence population, and the majority of blind people are senior citizens who may not be seen out and about unaccompanied very often. That makes the numbers of independent mobile blind people an even lower incidence population.
I’m all for blind people being taught how to make the most of their lawful rights and how to advocate to change the law where necessary. But I submit we also need to learn how to be well-mannered self-advocates with a sense of perspective.
Of course, there will be jerks who don’t take no for an answer. But we should be sure that that’s the type of person we’re dealing with before we turn on the verbal fire hose.
Let me give you one more example of an issue where I think the best response is to just let it go.
I once encountered a member of the public who told me, quite unsolicited, that blind people have no business being parents. I was sitting with one of my kids at the burger joint, and this guy just came up to me out of the blue and told me this.
I can’t deny, it really pushed a button internally. Yeah, okay, I found it incredibly offensive, actually. But what kept me together was that I knew my highest priority in this encounter was my son.
So I simply thanked the guy for his input, and respectfully requested that we be able to get on with our lunch in peace.
My son and I then had a discussion about how some people didn’t know how cool it was to have a blind dad, and that satisfied him.
What would have happened, had I chewed the guy’s ear off as I was so tempted to do? Would it have changed his mind?
I very much doubt it. The guy didn’t approach in curiosity with a series of questions. He approached with his mind made up that I was incompetent. It would have upset my son, and it would probably have caused a public scene.
And in the end, for what? In the wider scheme of things, what does it really matter what this ignoramus, who was clearly rude for thinking he had the right to come up to me and say such a thing unsolicited, thinks?
By contrast, let me give you one example where I did take a very public stand, and then, try to articulate a principle that defines my different attitude to these three occurrences.
When my oldest daughter was quite small, I was required to speak at a conference as the then president of New Zealand’s consumer organization, and my daughter came with me for the trip.
When we got on the plane and I’d ensured she was securely strapped into her seat, a male flight attendant came up to me and said, “Mr. Mosen, because you’re blind, we’re going to have to sit your daughter with someone who can help her in the event of an emergency.”
I was absolutely flabbergasted! We’d flown together before, and I’d never been challenged in this way.
I told the flight attendant that in an emergency, there’s no one in the world on this plane who’s going to be more concerned about my daughter’s welfare than me, that she’d clearly be upset sitting next to a stranger, and that there was absolutely no way he was going to move her. I asked him to name the regulation which he felt entitled him to do this. I told him we’d get off the plane rather than be separated.
The whole plane had fallen silent by this point. And to my surprise, the passengers started applauding. At that point, a female flight attendant who said she was a parent herself intervened, told the guy he was out of line, and the plane went on its way. I was really touched by all the kind words from the passengers as we disembarked, commenting on what a lovely father-daughter pair we were.
In that case, I followed up with a written complaint. The person concerned was disciplined and given further education.
So why did I choose to make an issue of that one?
First, because as a parent, my daughter was my responsibility.
Second, because this was in relation to a public service where the individual had no legal grounds to do what he was doing. To let this guy get away with it would have set a shocking precedent.
Third, there was a whole plane load of people who needed to understand that blindness didn’t equate to being disqualified from exercising one’s obligations as a parent.
We’re not, in this instance, talking about one guy without influence, as in the previous example. We’re talking about a guy in a position of authority and a plane load of passengers, at least most of whom, I would like to think, were reasonable people.
If I had to condense the way I try to look at these things into a principle, it would be this. Is this thing I would prefer not to be happening, a challenge to my civil rights, or an annoying affront to my sense of self, my ego, if you will? Is the intention to be hostile and to discriminate, or is someone seeking to be kind, with that kindness perhaps a little misplaced? Was offence intended, or have I simply chosen to take offence? If offence was intended, will it matter beyond this moment?
When we deal with entities providing us with public services, I think it’s reasonable for us to expect a certain quality of service. Although even then, the front-line staff often aren’t responsible for their lack of education.
But when it comes to individual members of the public, most of them want to be helpful. And really, the small number who are deliberately obnoxious are of no consequence. We may get some sort of temporary satisfaction from humiliating them, but there’s a longer-term dignity that comes from being the bigger person. More important, if you let people get under your skin, in the end, you’re the one who suffers the most.
I’ve seen blind people who almost seem in a constant state of warfare with society in general, and sighted people in particular. When you view the world like that, you’re expecting confrontation. You’re primed for it. Therefore, you attract it.
For those interested in further reading on this topic, Kenneth Jernigan wrote a brilliant piece called Don’t Throw the Nickel.
I’ll stop reading the post at this point. But Don’t Throw the Nickel was originally part of the Kernel Book series that the NFB used to put together. They are great books. And it also formed part of what I believe was Kenneth Jernigan’s final banquet speech in 1997, which was called The Day After Civil Rights, and that was at the convention in New Orleans, which I had the privilege to attend.
I’d be interested in your take on this. Whether you think that sometimes we can be too angry, or whether any encroachment, any infringement, is something we shouldn’t have to put up with under any circumstances.
One criticism that I got of that article I just read you, or paraphrased for you (and it may have some validity), is that I’m coming at this from a male perspective, and I completely accept that. If you’re a blind woman, and someone’s walking up to you and putting their hands on your shoulder or whatever, that’s a whole different level of danger, intimidation, and worry. And I do accept that. And perhaps, the post that I wrote back in 2014 did not adequately reflect this.
So this is a very thought-provoking topic. You might like to share some stories of over-the-top help that was offered to you when you didn’t need it, and how you handle these situations yourself. Do you find yourself getting angry, and then wishing you didn’t? Or do you just have very firm boundaries that you never allow anyone to cross? And that means that if somebody puts their hand on you without consent, you let them have it. You let them know that that is totally unacceptable. And in this day and age, I completely understand why you might do that as well.
It would be great to have a discussion about this. opinion@LivingBlindfully.com is my email address. You can record something, or you can just write it down. And my phone number in the US is 864-60-Mosen, 864-606-6736.
Just before I leave this, I should say that I’ve had the bursting into tears thing once, to the best of my knowledge. And I’m pretty sure this was also in 2014, actually.
It was when Bonnie and the 4 kids and I were in a tourist destination, and we had a motel thing. It was quite a big one, obviously, with 4 kids and two adults. And they were really nice, very nice people running this motel.
We had the big unit. I think it had a couple of bedrooms, and you could roll out the big sofa in the lounge and turn it into a sofa bed, and all that kind of stuff.
One day, we were getting ready to do some tourist attraction or other, and the woman who ran the motel came up and just burst into tears, and gave this, I think it was a box of chocolates. It was some sort of gift to Heidi. And she was sobbing. And we said, “Is everything okay?” And she said, “You’re just such a lovely family.”
I didn’t know how to really respond to that. She was in tears and gave us this nice gift. And all I could really think to say was, thank you so much. You haven’t seen the kids when they’re acting up, but thank you so much.
So it is interesting. It’s hard to know how to respond when something like that happens.
Advertisement: If you’re on Twitter, well, so are we.
Follow LivingBlindfully to get information about future episodes, and to be alerted the moment we publish a new episode.
Because of Twitter’s short usernames, we’re @LiveBlindfully. That’s all one word. LiveBlindfully on Twitter.
Old VoiceNote Gives a No Flash Disk Error
People do enjoy talking about the old technology. And what’s interesting is it’s not just the people who used that old technology when it was around. It’s also younger people who are intrigued by the old technology. You know, “Tell us what it was like in the old days, old man. When you used to do Main Menu, what was it like?”
“Get off my lawn.” That’s what I say. [laughs]
Anyway, Richard’s in touch. He says:
“I am in the United States, and trying to assist another blind person with an original VoiceNote QWERTY keyboard notetaker.
Here are my issues.
When it powers up, it says Keysoft version 5.X. And then, says flash disk not available. And then, to do a reset.
I performed what I believe to be a reset by trying keys S, D and F, with and without the spacebar, with no success. I tried holding in the reset button as well. I tried with keys J, K, L as well with no results.
Also, is there somewhere I could get hold of the Keysoft software to reload the thing?
Thought I would run this past you since you have so much knowledge of prior technology.”
Wow, Rich. I was the product manager for that thing.
And you have gone through all the things that I would remember to do, holding down those different key combinations.
The flash disk unavailable message suggests that the chip on the board might have failed. And after all this time, I guess I’m not surprised. We can’t really complain about it having lasted all this time because that product would be about 20 years old now, I think, maybe a fraction less, but not too much less. So I think you’ve done all that you might do.
But I will put this out there in case I have forgotten something. It was a very, very long time ago, you know, that I was the product manager for that. [laughs]
So if you’ve got any ideas for Rich, be in touch. opinion@LivingBlindfully.com or 864-60Mosen.
And of course, since that was released, Humanware, or Pulse Data International, as it would have been when that was released, has been through so much change. It was a New Zealand-based company, and it’s now based out of Canada. Man, I would be surprised if they have those original builds, but you never know. Could be worth contacting them and finding out.
Jingles for podcasts
As you may know if you’ve been listening to the podcast for a while, before we changed from Mosen at Large to Living Blindfully, we had a Jam jingle package. It was good. And it’s something that you don’t hear in podcasts that often. So it was a point of difference when we did this.
Now, Matthew Whitaker is writing in on this subject and says:
“Hello, Jonathan, and all Living Blindfully listeners,
First of all, I just wanted to say congrats, Jonathan, on a switch over to Living Blindfully. I’m super happy and proud of you.”
Well thank you, Matthew. Glad you’re still here.
“Loving the new name.”, he says.
“I have a few questions regarding choosing jingles. Would love to hear what everyone thinks about this.
Do you think it’s necessary to keep the melody of your jingle the same, or vary it up?
I’m thinking about starting a podcast when my schedule is less crazy and was thinking about this, and thought I would ask all of you.
I’m planning on making jingles of my own in Logic Pro, and getting some custom ones from Jam Creative Productions. Totally recommend checking them out for those who haven’t already.
Also, for those who have used Jam Creative Productions before, what was the process like when choosing pre-made jingles to redo the vocals over? Does it matter if you choose jingles from a package, or choose ones from various packages?
Hope to hear from all of you soon.
Keep up the amazing work.”, says Matthew.
Thank you very much, Matthew. I appreciate you writing in.
Jam will happily do resyncs for you of anything. If you want to focus on a particular package, they will do that.
When I did the Mosen Explosion jingles back in 2009, I wanted a pick of various jingle packages that resonated with me over the years, and that I thought would fit with the kind of music we play on the Mosen Explosion, now on MushroomFM, still going strong.
And they were happy to do that. They were just happy to do individual resyncs of different cuts from different series. They are amazing to work with, and they really are keen to make sure that you get satisfaction.
Sometimes, they’ll come back to you and ask you how you want certain things pronounced because obviously, it’s a pretty expensive process to resync these jingles, and they only want to do it once and ensure that you’re happy.
So good luck, if you do that.
Voices for Apple Watch
Byron Sykes is in touch. He says:
“Christie is getting into rowing and fitness. This is her second year in the Local Adaptive Rowing Club in Louisville.
Today, she downloaded an app to help her use her erg, her rowing machine.
She’s thinking about getting an Apple Watch, but wants more than a grumpy Samantha voice.”
Look. Isn’t it interesting that voices are so subjective? I completely agree with you. Samantha does sound grumpy. She makes good morning sound like a declaration of war. And yet, there are so many people who use that Samantha voice, including Bonnie. Drives me batty. [laughs]
Anyway, Byron says:
“Are there more voices on the watch like the iPhone?”
Yes, there are now, Byron. You used to have 1 voice per language. But now, you can choose from a range of voices. You can even have Alex on your Apple Watch, if you want.
So I feel confident that Christie will find something on her Apple Watch that will suit her preferences.
Our time’s almost up, but we’ve just got one contribution that’s coming in now.
Let’s just get that into the system here. Hot off the press, actually. Just came in while I was recording.
Samantha: Hi, Jonathan and Living Blindfully listeners. It’s Samantha speaking.
I heard what you said on your podcast about me. I am not grumpy. I am never grumpy. You’re a big meanie. That’s what you are. And I’m gonna sue you. Just you watch.
But for now, goodbye. Grumpy? Me? Huh!
Jonathan: Oh my goodness! I seem to have touched a nerve there. I’d better go.
Closing and contact info
Thank you so much for your great contributions this week, for your subscriptions to Living Blindfully plus, and for just generally being awesome.
Remember that when you’re out there working your guide dog, you’ve harnessed success. And with your cane, you’re able.
Voiceover: If you’ve enjoyed this episode of Living Blindfully, please tell your friends and give us a 5 star review. That helps a lot.
If you’d like to submit a comment for possible inclusion in future episodes, be in touch via email. Write it down, or send an audio attachment: email@example.com. Or phone us. The number in the United States is 864-60-Mosen. That’s 864-606-6736.