Transcripts of Living Blindfully are made possible by Pneuma Solutions, a global leader in accessible cloud technologies. On the web at http://PneumaSolutions.com.
Voiceover: From Wellington, New Zealand, to the world, it’s the Living Blindfully podcast – living your best life with blindness or low vision. Here is your host, Jonathan Mosen.
This week: more on the Victor Reader Stream 3rd generation, a call to boycott Apple’s feedback assistant until Apple lifts its game, and with tech bargain season nearly here, I bring you a look at the powerful, incredibly accessible Evo 16 audio interface from Audient.
Welcome! Episode 257 is where we’re up to.
And so let’s have a look at area code 257, which seems to me a bit ambiguous.
When I consult some repositories of this information, it tells me that 257 isn’t assigned, and that may well be true.
But I am also led to believe that 257 is ready for assignment in BC. No, we’re not talking about assigning it to something that happened a very long time ago. In this case, we are talking about British Columbia in Canada. And from what research I’ve been able to conduct, it seems that area code 257 might spin up for bits of British Columbia starting in 2025.
So if you know more about this, you are welcome to let me know.
It seems that these area codes can be a bit controversial sometimes. Who’d have thought? Yeah, it’s nothing straightforward anymore.
Country code 257 belongs to Burundi. And again, there is some ambiguity here because according to some census data that was taken recently, there are 13 million people in Burundi. But the United Nations thinks there are 12. That’s million, by the way, not 12 people in the whole of Burundi. So I don’t know who’s right.
But the most important thing is if you happen to be listening from there, a warm welcome to you, and enjoy your moment of Living Blindfully glory.
Let’s catch up with a couple of things happening on Mastodon.
And there’s been a bit of discussion regarding Lachlan’s comments about the Victor Reader Stream.
Brian Hartgen. You might remember that Brian bought a SensePlayer. And then, he wasn’t particularly happy with that. So he sent it back, and he’s got the Stream 3 now.
And he said on Mastodon regarding the Stream V3:
“I’m looking forward to the update, whenever that comes along.
I really don’t have a problem with my Stream. I’m not finding I have a lot of the difficulties some people are reporting. But of course, it does depend on what you use it for.
I’m certainly very glad to have it. And I understand to some extent, the problems will only become apparent when you use it for certain functions.
The one problem I do have is in relation to the order of books in the various bookshelves. They are in no order that I can determine whatsoever. I can copy a book to the SD card and I will find it in the middle of the list somewhere. It’s completely random. It’s not alphabetically arranged, and it’s not even in the date order in terms of when you add the book. It’s just wherever it feels like placing it at the time.
This will not stop me from using it, but it does take some effort to find the book you want.
Other than that, I use it every day and I’m happy. The battery is long-lasting, and the performance is good, especially at faster reading rates.”
Christina agrees. She replied to Brian on Mastodon saying:
“The file sorting on the Stream 3 is the thing that is driving me absolutely bonkers!
They want to add a feature where we can choose how we want the files sorted. However, I would love it if in the meantime, they would fix it so that the files are sorted by name, just like on previous versions of the Streams. That would be awesome, and I would be absolutely over the moon.”
Tom Grant has emailed about this direct from the UK. He says:
“I haven’t had half the bugs mentioned in episode 256, but I have had an internet radio restarting the book reader error.
Then, it says internet radio stopped, and that bookshelf is unavailable. This only happens on the new Lily voice used with Will.
I then have to go to diagnostics. And I lose everything by doing a factory reset, as clearing user data does nothing.”
Gosh! That does sound quite severe.
“Otherwise, I am experiencing some sluggishness at boot up.
I’m also glad that the Stream 3 can play BBC stations, as this is a great need for me – listening to their content, especially 5 Live and 5 Live Sports extra. As you can guess, I’m into sport.”
If you have any comments on how you’re finding the Stream 3, if you’ve invested in it, you are of course very welcome to chime in with your own thoughts. opinion@LivingBlindfully.com. Or you can call the listener line. That number is in the US – 864-60-Mosen. 864-606-6736.
Advertisement: We can make transcripts of the show available, thanks to the support of Pneuma Solutions.
Remote Incident Manager is a way to connect to another computer, or have another computer connect to yours.
If one computer is a Windows machine and the other is a Mac, that’s absolutely no problem. RIM is available on both platforms.
Some of us, like me, use it daily. But for others, it’s one of those technologies where you don’t need it until you really need it. And when that happens, it’s often an urgent, stressful situation.
Buying a RIM pass today means you’ll have access to RIM the moment you need it, with no ongoing commitments. Passes start at just $10 for 24 hours of access to a single machine. And once you’ve purchased it, it’ll be there until you need it. It’s a great way to get RIM in your toolbox without a monthly commitment.
So don’t wait for the next crisis. Pick up a RIM day pass today by going to getRIM.app. That’s G-E-T-R-I-M.app, and make sure it’s installed on your PC or Mac for when the time comes.
Yes. Be prepared like a good scout, that’s what we say.
Now, I want to come back to the Beatles because it’s been a stellar couple of weeks for Beatles fans, first, with the release of Now and Then. And now, of course, we have all these remixes. I counted them, and I think there are 38 new stereo mixes, thanks to Peter Jackson’s technology, mainly of material from 1962 to 1966. But there are one or two new mixes from 1967 to 1970. We’ve got a lot of that stuff already, though, because of the mixing that’s been done in the last 6 years or so, so that is wonderful.
And it seems that overall, you know, … You can’t please everybody, of course. But Now and Then, the Beatles’ new single (gosh, it gives me just chills being able to say that – the Beatles’ new single), is being pretty well received as well.
That was accompanied, the day after the audio was released, with a video directed by Sir Peter Jackson. And there was so much talk about this.
I’m such a Beatles buff, that I really wanted a detailed description of that video. And you can have it too, if you like.
The amazing Heidi Taylor (now, I’m a bit biased, because she’s my daughter, you know) has an amazing way of explaining stuff, and I do believe she thinks like a blind person thinks. She has described this video in considerable detail.
I asked her if she would do this, and I was so proud and touched by what she did. Because not only does she describe the video, but she did considerable research into the images that Peter Jackson was using, and it really is amazing.
Thanks to the amazing Heidi for doing this. I hope you like it.
Caller: Good morning, Jonathan! At least, morning for me.
This is Dave in Oregon.
I wanted to bring up something I just discovered in iOS 17.1. I’m running a 14 Pro.
In the Weather app, I went to edit my list of cities. And in the edit mode, you can drag a city up or down to change the order.
It used to be that when you drag, it would announce where you are by announcing what city above or below that you’re moving.
It doesn’t do that anymore. It’s completely silent.
The only way you know is to drag for a while, lift your finger, and then search around.
So I wonder if this is something that’s been observed by other people. And obviously, if it has, well, then I consider that to be a bug.
Jonathan: I’m not a user of the iOS Weather app, Dave. But obviously, that will be frustrating for those who are.
So let’s see whether others can reproduce.
Let’s also have a talk about bugs in general from Apple, because there was a fascinating blog post that was published earlier in the week by a guy called Jeff Johnson.
Now, he is an app developer. He’s pretty well known in the iOS app development community.
And he has articulated mounting frustration within the developer community about Apple’s failure to resolve bugs that are reported.
And I’m sure that many in the blind community are nodding and saying, “Amen! We’ve been frustrated with this for a long time.”
But finally, it seems like there’s some cut-through, in that other market segments are getting annoyed by the fact that Apple has this feedback app. You go through the process of reporting things, you generate logs (sometimes, you can write to Apple and they’ll tell you to generate logs). You do this, and you are essentially doing Apple’s work for them for free. And in fact, if you have a developer account, then you’re paying for that developer account.
So you send them all this data because you want iOS to be the best it can be. It may be a bug that is affecting your livelihood and nothing happens for a very long time, if at all.
This has got to the point where this guy, Jeff Johnson, is calling for a boycott of Apple’s feedback app.
I was really interested in this.
He’s suggesting that people do the following:
First, file a new feedback report about Feedback Assistant itself (under Developer Tools and Resources) that lists the issues below, and states that you’re boycotting Feedback Assistant until the issues are addressed.
The second step, don’t file any other new feedback reports until Apple addresses the issues.
Then the third step, if Apple requests a response to a previously filed feedback report, you reply only by saying that you’re boycotting Feedback Assistant, and refer to your feedback number from step 1.
In his blog post, Jeff says:
“Ideally, I think you should make your feedback from step 1 as unique as possible. The point is to flood Apple with new feedbacks about the boycott and force Apple to do some work to handle them, to take notice of the boycott and to recognize that we’re serious about it.”
“Boycotting Feedback Assistant does not preclude talking about your bugs on social media, on your blog, and on your podcasts.”
He continues that the Feedback Assistant is the worst bug reporting experience he’s ever seen, and he says why.
“First, Apple neglects or refuses to say whether or not they can reproduce reported bugs, even when we give them precise steps to reproduce, and sample Xcode projects. This is crucial for us to determine whether Apple is taking our feedbacks seriously or just lazily, bureaucratically stringing us along.”
Oh, boy! Can we relate to this. And this is precisely why I said in my NFB speech this year it is really important that Apple produce a transparent bug reporting system for accessibility that talks about whether these things are reproducible, and also, where they rank.
“2.”, he continues, “Apple closes feedbacks with the status ‘Investigation Complete - Unable to diagnose with current information’, without asking us for more information, or even notifying us that the feedback has been closed.

3. Apple closes feedbacks without the agreement of the person who filed the feedback. And apparently, it’s now a ‘feature’ of their bug reporting system that closed feedbacks cannot be reopened, even by Apple employees.”
He says: “It wasn’t always this way, I believe.
4. When Apple mistakenly closes a feedback for a bug that isn’t fixed, Apple demands that we open a new feedback for the same bug, instead of just opening a new one themselves and giving us the new feedback number.

5. Apple demands that developers ‘verify’ feedbacks with the latest betas, despite the fact that Apple has not fixed the bugs, attempted to fix the bugs, or even attempted to reproduce the bugs with the steps given by us. This,” he says, “is a giant waste of our time, and Apple closes the feedbacks if we don’t ‘verify’ them.

6. Apple doesn’t always notify us of changes to the status of the original feedback when our feedbacks are closed as duplicates.

7. Apple consistently demands invasive sysdiagnoses (often unnecessarily), and refuses to look at feedbacks without them. Many developers work on their own personal devices, and sysdiagnoses are gross violations of our privacy, which Apple claims is a fundamental human right.

Apple has avoided or abandoned creating smaller, more targeted, and less intrusive methods of collecting information and diagnosing bugs.

8. Feedbacks can no longer be filed from the web. Apple now requires that all feedback be filed from the native Feedback Assistant app on macOS or iOS.

This is a very recent setback. I’ve been filing feedbacks via the web app for years, the last one on October 26.”
And again, something that reconciles with what I said to NFB earlier in the year.
“9. We can’t search Feedback Assistant for bugs. Apple employees can search the database, but I can see only the feedbacks that I have personally filed.
Of course, we can acknowledge that some feedback needs to remain secret, especially for products that haven’t yet been announced by Apple. But countless feedbacks require no such protection, and an opt-in searchable bug database would help external developers immensely, improving the overall quality of the software on Apple’s platforms to the benefit of Apple, developers, and users alike.”
And of course, the accessibility ramifications would be enormous.
So that is what Jeff Johnson is suggesting. And I am so delighted to see this go beyond accessibility. The frustration has been mounting about Apple’s ignoring of accessibility defects and the reporting thereof for a very long time. Maybe now that one developer is trying to make a noise about this, we might get some attention.
Joe Danowsky is writing in. He says:
“It is great how technology has enabled those of us who are visually impaired to access and use information.
Unfortunately, when it comes to appliances we need for our activities of daily living like cooking meals and washing our clothes, it appears to me that accessibility has gotten worse, not better.
My wife (who was sighted) and I just bought a new condominium, and decided to upgrade the appliances – dishwasher, ovens, and washing machine.
I could not find any of these appliances of the calibre, features, and sizes we wanted that were accessible for me in any way. The appliances of today have no physical buttons. And worse than that, the control panels are completely smooth and flat, and have no raised bumps to count or feel, or to stick a locator dot to. If you just touch a panel, you will invoke a control without knowing it.
I really think something needs to be done about this problem. I would have thought that the awareness of digital and physical accessibility would have made the appliance manufacturers aware of their need to do the same.
I am curious to know if you or your audience is aware of any group working to rectify this industry-wide problem.”
Shopping for accessible appliances is a bit of a chore, and there are so many potential pitfalls that you can walk into.
The good news is that particularly in your part of the world – in the United States, there are actually a range of appliances that can be controlled from a smartphone. And that would apply to ovens, dishwashers, and washing machines. And I think that’s the way to go if you are a smartphone user because you have a lot of control there, not only of what you’re inputting, but also what the thing is outputting.
In addition to smartphone access, there are quite a few appliances that can be controlled via an Amazon Echo.
We don’t have as many choices in that regard here in New Zealand, unfortunately. A lot of these appliances that are accessible are not made for the New Zealand market.
And I think it still does raise a very important philosophical point. If sighted people can use these things relatively well without any kind of additional device, surely, that’s what we should be striving for as well. Why can’t, in this day and age, these appliances talk back? And should there be a requirement for something as essential as a household appliance for there to be physical controls on these devices? It’s a really good point, and let’s hopefully start a conversation on this.
If you want to contribute, opinion@LivingBlindfully.com, or 864-60-Mosen is the phone number in the United States. That’s 864-606-6736. When you email, by the way, you’re welcome to attach an audio clip, or you can just write the email down.
And I would also be interested to know if there’s any group that’s working on this stuff.
Voice message: Hello Jonathan, and hello listeners of the Living Blindfully podcast! This is Lachlan Thomas from Melbourne, Australia, and this is my personal voice on my iPhone. I thought I’d give my thoughts and views on the personal voice feature.
I did not know about this feature until I upgraded my iPhone to iOS 17 around the time it was released to the public. I discovered the personal voice feature when just browsing through the accessibility settings. And when I learned about what it could do, I became very excited.
I’m not a person with a speech impediment, and hopefully I will never develop one. But I thought I’d give the feature a try.
I think it’s a great feature. There’s very little documentation about it provided by Apple.
I was initially hoping I would be able to use my personal voice as a text-to-speech voice in VoiceOver, and I was very disappointed when I discovered I could not do it. But nevertheless, I decided to try it out, and I think the results are pretty good.
With respect, Jonathan, when I heard your demonstration of personal voice, I really didn’t think it sounded that great. And I’m wondering if it’s because you were trying it out on a beta release of iOS, rather than the final version. I wonder if that could be the reason why it just didn’t seem to work out so well for you.
I was also very impressed by the demonstration that a contributor sent to you of their voice recreated using the ElevenLabs product Revoicer, I think they call it. Is that right? Revoicer? I believe that’s correct.
I am sure that in the future, personal voice will improve. It will sound more and more accurate in the future, and I would like to hope one day, it will be possible to use personal voice in VoiceOver and as an iOS system voice, and also in apps like Voice Dream Reader and EasyReader, if those apps haven’t already been updated to support it. I know Voice Dream Reader does not support personal voice.
Despite the fact that it’s obviously got a long way to go, I do think personal voice is actually quite robust because this was not the voice I was using.
But a couple of weeks ago, I created a second personal voice, and I deliberately recorded it in quite a noisy environment. I tried recording it in a few places in my home. I tried recording it upstairs in the bedroom, adjacent to my bathroom. And at that point in the bathroom, the clothes dryer was operating, and it makes quite a lot of noise. I also tried recording it in my living room, which is adjacent to my kitchen, and I had the microwave turned on, and the stereo system on in the background.
And yet, when I used that particular personal voice I created that day, I hear no evidence of unwanted or extraneous noise. So I’m quite impressed.
I think at some point, I will be game enough to try recording a personal voice in a very noisy environment, like in my bathroom with the washing machine or the dryer going. I’d be curious to see what that will do, and I’d be curious to know if anyone else has tried that.
I think as artificial intelligence continues to evolve, we’ll probably see more technology like this.
On a somewhat unrelated note, if you’re a Spotify user, you may know that Spotify has recently introduced a feature called DJ. When you invoke the DJ feature, it basically plays music based on what you’ve recently listened to, with a virtual DJ interjecting in between sets of songs to tell you about the songs coming up, much like a radio DJ would.
I believe the DJ is called DJ X, and it’s based on the voice of a Spotify employee whose name is Xavier Jernigan. I heard him speak in a YouTube video I watched the other night going behind the scenes of this new DJ feature, and the replication of his voice in the app is very accurate.
Jonathan: Thanks very much, Lachlan. Glad you’re enjoying the personal voice.
There’s been a lot of talk about this on Mastodon, and the general consensus is that people are rather underwhelmed by this. It really doesn’t sound particularly natural.
And when you compare it with the work that ElevenLabs are doing, it just pales into insignificance in terms of the accuracy, the natural sound of it. Even that little personal voice snippet that you played, the way it mispronounces Australia, for example. It’s not rocket science to have a name of a country pronounced properly, and it really is quite bad.
And analysts are starting to get a little bit concerned about Apple’s responsiveness to the AI bandwagon.
There is talk that iOS 18 is going to focus heavily on this area, because a perception is starting to emerge that Apple is just being left behind. Even before the latest AI craze that was really ushered in with ChatGPT, Siri was miles behind competitors like the Soup Drinker and Google Assistant, and it’s only become even more stark with some of these advances.
When you have a chat, for example, with ChatGPT, it is a very different experience from having a conversation with Siri.
You never know. I realize that there’s some pretty powerful computing going on on the ElevenLabs side.
I’ve recently been using their professional voice cloning feature (this is at a different level from the general one that’s available). I played a sample of that a few weeks ago on Living Blindfully, and it’s really not too bad. Quite a few people have commented on how good that sounded.
And I realize that what they can do at ElevenLabs with some very powerful computing resources means that you can’t really, I guess, compare what they’re doing with personal voice, which is stored securely on your device. But in terms of accuracy, it’s pretty mediocre, really.
That said, it doesn’t detract from the use case which is that if you are starting to lose your voice, and this is the only thing left to you, you can have it on your device and you can quickly type something in and have it speak back. That really is fantastic.
But it sounds like a computer, doesn’t it? It sounds robotic.
So it’s progress, but not the progress I guess some of these survivors of these diseases that take your voice away would ideally like. It’s a 1.0. It’s, I guess, the first attempt. We’ll have to see where it goes in the future, and hope that it improves.
Voiceover: Stay informed about Living Blindfully by joining our announcements email list. You’ll receive a maximum of a couple of emails a week, and you’ll be the first to learn about upcoming shows and how to have your say. You can opt out anytime you want.
Join today by sending a blank email to announcements-subscribe@LivingBlindfully.com.
Why not join now? That’s announcements-subscribe@LivingBlindfully.com, and be in the know.
Caller: Hello, Jonathan! This is Debra from Houston, Texas.
I enjoy your show, and listen every week.
Just wanted to let you know, in case you haven’t heard. Netflix is starting a new series on November the 2nd that sounds wonderful. It’s called All the Light We Cannot See, and it’s based on the book by the same name.
It’s a story about a young blind woman and her experience during the war (I’m assuming World War II) in one of the countries that Germany was occupying or starting to occupy: how she and other people navigate living through that situation, and also falling in love and staying alive during that time.
What’s really exciting as well about the series is that the young woman who plays Marie, the main character in the series, is actually a blind woman who has never worked as an actress before. So it sounds tremendous.
It starts November the 4th. It’s a multi-part series on Netflix, and it appears to be audio-described. There’s a promotional preview on Netflix right now, so you can listen to it.
So hopefully, you can get the word out and other people can enjoy it.
I’m really excited about being able to enjoy the series that’s coming up.
Jonathan: Lovely to hear from you, and from Houston, Texas no less. We had a great time at the NFB convention in Houston just a few months ago.
And thank you for telling me about this. I didn’t know about it.
By the time this goes out, the series will be available on Netflix. Do let me know what you think.
I’ll be watching it. I’m sure many of us will.
It does sound very positive that they’ve got a blind person to play the role. I know that NFB were really focusing on this a few years ago – “Let us play us,” and it’s absolutely true. There’s no reason why blind people can’t play other blind people on the screen.
And given that they’ve made sure that that is the case, that bodes well for a positive depiction of blind people in this series. Because it always makes me a bit nervous when there’s a blind character on these things. How is it going to be portrayed? But it does sound positive.
And after all of that effort, if they didn’t audio describe it, there’d probably be a riot. [laughs] It sounds like they’re trying to do everything right.
It’s a book I haven’t read. I’ve heard the title and I’ve heard it’s good, but I’ve not read it, and I really should do that.
I’ll certainly check out the Netflix series, and I appreciate you letting us know.
This is from Elijah Massey. He says:
“On the most recent episode, I heard that someone was trying to get voice feedback during workouts on Apple Watch and couldn’t get it to work.

I’m not sure if this is a bug or intended to be a feature, but I have found that the Apple Watch will not give voice feedback during workouts when VoiceOver is on, even though VoiceOver doesn’t automatically read workout notifications by itself.
When I’m on the treadmill, I start a workout on the watch with my AirPods Pro connected to it, activate the lock button to make sure I don’t accidentally tap something on the screen, and then turn off VoiceOver by pressing the digital crown 3 times.
If you have created a custom workout with different work and recovery intervals, it will announce when each period is over. Or if you’re doing just one time or distance goal, it will announce when you’re halfway done and when the workout ends. You can set heart rate zones, too, that should be announced as well.
After I’m finished, I first turn VoiceOver back on by pressing the digital crown 3 times, and then hold down the digital crown for a few seconds to unlock the watch.
If you’re playing music or other media from the watch, the volume will be lowered while an alert is being spoken. And then, it will return to normal.
Also, he was correct that this only works when a Bluetooth audio device is connected, not over the built-in speaker. I’m not sure which models are supported, though.”
Thanks, Elijah! That is useful info. I didn’t know that.
Sadly, the Apple Watch still does not directly support MFI hearing aids, which is a great shame because some of these features that people are using with AirPods are just not available to me yet.
I live in hope.
And at least finally, the carrier that I chose to move to for reasons that were well documented on this podcast at the time has finally got the Apple Watch cellular support. [laughs] So at least, I’ve got that back now.
Now, Elijah’s comments are backed up by Darren from Devon. Darren from Devon. What a great name. And he says:
“I wanted to drop a quick response to Steve’s bug relating to Siri reading aloud whilst running. I have also logged this as a bug with Apple.
However, I have found a workaround, regardless of whether you wear headphones or not.
To turn the feature on, go to the Watch app on your iPhone and under Workout, scroll down and toggle on Voice Feedback. You should be all set.
The workaround I have found is to start the run or walk on your Apple Watch, then you can quickly toggle VoiceOver off using the digital crown on your watch.
After each kilometre or mile split, depending on how you have your preferences set, you will hear Siri announce the split number and pace. Siri will also announce if the activity has been paused or resumed.
In addition, whilst completing a running workout, if you set custom workouts with alerts, you will also receive Siri feedback relating to the alert type set, as you transition from one segment into the next.
Remember, you will need to turn VoiceOver back on.
Frustratingly, as you have VoiceOver off, you’re solely dependent on your Siri alerts, which I really like. But if you get desperate, you can quickly toggle VoiceOver back on to read the screen before toggling back off.
Happy running!”, says Darren.
Thank you, Darren, and also to Elijah.
It’s great when the community can get together and comprehensively answer these questions.
Voice message: Hi, Jonathan and listeners! It’s Ramona Mandy from Melbourne, Australia.
I was interested just to raise a couple of things here.
Firstly, I’ve just finished reading a book. The book is called And There Was Light, and it’s by a person called Jacques Lusseyran. I think you spell that: Jacques is J-A-C-Q-U-E-S, and Lusseyran is L-U-S-S-E-Y-R-A-N.
He was a French man who was blinded at the age of 8, and went on to play a significant leadership role in the French resistance movement during World War II.
He writes very well and in quite a metaphorical style, but I found it really interesting.
He actually founded a group called the Volunteers of Liberty, and then merged with another resistance movement called Defense de la France which is Defense of France, and he played a significant role in both of those groups during the French resistance.
He actually was put into a concentration camp, and was one of only 30 people who survived, amongst thousands who didn’t.
The book can be quite grim at times because it talks about the terrible things done by the Nazis, but it was interesting to hear how he talked about the role that his blindness played in terms of the actions he did.
And he was obviously a very smart person. He went on to move to the US and become a professor there, and unfortunately was killed in a car crash in his 40s.
I listened to the book through Vision Australia Library. But if you can get it, (and I don’t know where it’s available), I would recommend it. Once again, it’s called And There Was Light by Jacques Lusseyran.
The other thing I wanted to bring up was about 20 years ago in 2003, I went to CSUN and I saw a multi-line Braille display there being shown by a Japanese company. It’s possible that that company was KGS, but I’m not sure.
But they were showing this multi-line Braille display, which was probably about 10 inches by 10 inches or so, and it was showing the hands of a clock, an analog clock going around in a circle, which was connected to a computer obviously showing that image.
I never found out whatever happened to that. I don’t know whatever came of it.
But I just thought it was interesting that 20 years ago, I did actually see a multi-line or multi-pixel type of Braille display, even though we’re just starting to see them in the last couple of years emerge.
So I was just wondering: if you or anyone else ever knew of or saw that same device, do you know what came of it?
Thanks for listening, and bye for now.
Jonathan: Ramona, epic to hear from you.
I don’t remember that device, but somebody else may. And it did make me think, as I was listening to you talk about it, that there is lots of blindness technology that’s come and gone over the years that we may have forgotten about.
For some reason, what popped into my head was the miner computer. Does anybody remember the miner?
The thing that I remember about it was, it was at my first NFB convention, I’m pretty sure, which was in Chicago in 1995, and it was singing “New York, New York”, and it was a DecTalk singing it.
So it was this tiny laptop computer. I remember being quite surprised by the size. And it had DecTalk built in.
I think it may have been a miner corporation that made that thing, but I don’t know.
So if anybody knows about the graphics array that Ramona’s talking about, and maybe you remember blindness technology that we don’t much hear of anymore, by all means, get in touch. That’s a fun one.
And thank you also for the book recommendation, Ramona. That really does sound good.
Henk Abmer’s writing in, and says:
“Thank you for your reply in episode 255 of the podcast.
It appears that viewfinder description is one of those features that are only available in some languages. In my native language, which is Dutch, I only hear our local word for image when I tap the viewfinder.
When I switch to an English voice, I indeed get a basic description of what’s in view.
I think that some iOS versions ago, we would get the English description of pictures if no local version was available.
However, Apple has changed this behavior so you have to change to an English voice to get a description at all.
In the second part of your reply in episode 255 to my message, you mentioned that the P4 has no loopback functionality. I don’t think this is true.
In settings, USB, mix minus, you can choose to have the USB signal returned to the computer or not. So if mix minus is on, the input from the USB is excluded from what is sent to the computer.
However, if you turn this setting off, the output from the computer is echoed right back to it. So I have no problem recording screen reader demos in Reaper with the P4 as the sound interface.”
Thanks very much, Henk.
I certainly don’t like to give misinformation on the podcast; there’s enough of that around as it is.
I did check this out after you sent this email, and it does seem you were right. Not that I doubted you for a moment, but I’ve had even further confirmation that you’re right.
In most audio interfaces, this feature is called loopback. It’s under mix minus, as Henk says, and it does amount to the same thing.
So that’s a great catch, and I appreciate you correcting the misinformation there.
Howard Goldstein’s writing in. He says:
“I’m looking for a way to determine the orientation of a printed page with an iPhone.
If I use the convenient OCR feature of JAWS, it will tell me if the page is right side up or upside down before it starts to read. But all the iPhone scanning/reading apps I’ve tried just read the page without telling me which way the page is facing.
I would want to have this type of information if, for example, I am placing pages into a folder and want to be sure they are all facing the same way.
I’d appreciate any ideas.”
Good question, Howard.
I wonder whether Be My AI might be the answer here, because I’m pretty sure that I’ve heard Be My AI telling me about the orientation of a page. And if it doesn’t tell you, you may be able to ask it.
That might be a bit of a long way around it because sometimes Be My AI can take a while to get the answer to you, but that might be worth a shot.
Does anyone else have any ideas on this question? Do be in touch. opinion@LivingBlindfully.com is the address. You can attach an audio clip to the email if you like, or just write it all down.
You can also call our listener line that is in the United States: 864-60-Mosen. 864-606-6736.
Voiceover: Like the show?
Then why not like us on Facebook?
Learn about upcoming episodes, and see when a new episode is published.
Voice message: Hi, Jonathan!
I have come across some accessibility ramifications with the Sharesies app. It’s been this way since version 15.
I’m on one of the plans that cover the transaction fees. But when I go to check my monthly plan status to see how much I’ve used and how much is remaining, all VoiceOver says is progress view.
Similarly, another issue that has been long outstanding is in the activity screen with pending investments: the VoiceOver scrub gesture to move from pending investment to pending investment, the move you make with the 4 fingers when you go from screen to screen, doesn’t work. I’ve recorded some demos of the issue on my Mastodon profile.
And there’s another issue that I’ve also noted is that the back button is inconsistent with the scrub gesture. Sometimes, there are some parts of the app where the back scrub gesture does work, and there are some parts where it doesn’t. I’ve got that demo up on Mastodon as well.
If you go to my Mastodon, both demos are pinned to my profile if you want to listen to them.
I’ve emailed them to Rob from Sharesies. But he told me about a month ago that they’re still investigating it, and I’m getting a bit frustrated at this point.
I think all of us here in New Zealand and Australia who have the Sharesies app should email in and let them know of the VoiceOver ramifications.
Jonathan: Thanks very much, David.
For those not familiar with it, Sharesies, as David says, operates in Australia and New Zealand, and it’s an online share brokerage app. You can buy shares, you can trade in all sorts of different indexed funds, and, oh, it’s really great when the stock market’s behaving itself.
By and large, I think it’s a pretty well-behaved app, but there are certainly a few idiosyncrasies, and David’s outlined some of those.
So it’s a timely reminder to just let them know that we’re out there, and we are using these apps.
Voice message: Hi, Jonathan! Mike May from Reno, Nevada.
Thank you for your in-depth interview and discussion with Lionel from Userway.
Coincidentally, I was at the M Enabling conference recently in DC and happened to sit across the table from him, and had a bit of a conversation.
I had had very little interaction with their products before, but the next product I’m going to talk about happened to be one that I discovered the Userway interface on.
It was MyBirdBuddy.com. I got the secondary voice that they have for accessibility enabled, and it was driving me nuts until I got it turned off.
I didn’t really see any value to any of the options they had, but I also may not know what I’m missing. So I am going to play around with it a bit more to try to understand a little bit better. Maybe there is some value here and there.
So the BirdBuddy product is something I think is pretty cool and innovative. I’ve been paying attention to it for a couple of years at the Consumer Electronics Show as they built the prototypes, and I volunteered to be a beta tester and never got taken up on that. But I did purchase the product when it first came out, and got it in May.
It’s a birdhouse, a smart birdhouse, if you will. It has a motion camera that detects the bird that lands on the feeder and takes a picture, which they call a postcard, and sends that postcard and a video clip to your phone. It identifies what kind of bird it is, and then you can listen to the video and hear the bird pecking at the seed and the camera, and tweeting, or doing whatever birds do. So it’s pretty cool. You can find out what kind of bird it is, and you can hear it.
It doesn’t matter if you’re in the room or out of the country, you can follow the bird activity in your backyard. I think it’s a pretty cool thing for any bird enthusiast, but certainly for blind people, since we don’t know a bird is around unless it’s making noise or we hear it fluttering. But certainly, if it’s outside the window of your house, you’re not even going to know it’s there.
I know that I hear about birds all the time on our back fountain. I’d like to know that they’re there just for fun, and to know what they are.
That’s the good news. It’s about $300 when you’re all in with taxes, and so forth.
It has a solar roof. And if you buy that model, then you never really have to take the camera out, although it pops out quite easily if you need to remove it and charge it in the house.
It is, of course, connected by wifi, and needs to be within range of your house wifi to be very useful.
Gina has one in Davis. I have one in Reno. I have some friends that have them now, and it’s really fun to share the postcards and the experiences about your birds.
That’s the good news. The app is what I would call 50% accessible. There are some buttons that aren’t labeled at all. Some of them have really wonky labels, but you can relabel them if you want; get some sighted help and figure that out. I’ve done that with Aira, and relabeled a number of the buttons.
It’ll be really great one of these days, when we’re able to share our labeling schemes with other users so that we don’t have to reinvent the wheel, so to speak, in relabeling when it’s 20, 30 different buttons.
The basic functionality is pretty accessible, and you can figure it out.
Of course, it is a very visual app. And so the way that you organize photos, and save them, and put them in a gallery, and delete them, and share them, that’s all a little bit difficult to manage.
There’s a settings section that is just a big graphic, and you can’t change that at all independently. You don’t really have to deal with that, other than when you set it up.
But I’d really like to see some improvement, so I went on a mission to contact the company. It took me several months to get through.
I finally had a really wonderful conversation with one of their head developers in Slovenia. He indicated that they would put this on the roadmap for October. And then a month later, he left the company.
He did reconnect me with their COO who called and spoke last week. He assures me that they’re very committed to accessibility, although they have limited resources.
I’m going to stick with it. They have set me up with a meeting with some other developers this coming week. And I’m going to reiterate the low hanging fruit, which is labeling the buttons properly. And then, the more challenging thing will be to deal with the visual aspects of how you select photos and deal with them. But it is figure-outable, and you can certainly label things and make it work.
And it’s a lot of fun. Check it out and stay tuned.
Love to hear from more people using it, so we can gang up on these guys and make sure that BirdBuddy is fully accessible.
This gives you a whole other way of sending out tweets.
Have a good one.
Well, our cup runneth over with goodness because we’ve got another email from David Goldfield. He says:
“I wanted to make a correction regarding my last comment about using hand gestures on the Apple watch lock screen when entering your pin.
In my last comment, I indicated that I wasn’t able to use hand gestures to perform a double tap on the screen. I wondered if it might be operator error, and I now realize this is the case.
After further experimentation, I can confirm that hand gestures do in fact work on the Apple Watch’s lock screen when entering your pin. I’m glad to know this is the case, and I apologize for my previous mistake.”
We’re all human, David.
Thank you very much for letting us know.
These discussions are always interesting. We’ve talked Mac versus PC, iOS versus Android, Victor Reader Stream versus SensePlayer, and here’s Christian opening up the subject of Bose versus Sonos. He says:
“I’m going to be buying a condo in Austin, Texas soon.”
That’s a nice city.
“And I’m wondering what you and the rest of the Living Blindfully community thinks about a home audio system – if I should go with Bose or Sonos.
Sonos and Google are currently having a disagreement, as I’m sure you know. And I want my home to be fully controlled by Google because I’m a Google loyalist. I even went as far as switching from my iPhone to a Google Pixel 7A.
Bose does still work with the Google assistant. But when you reviewed the Bose portable home speaker, the app had some accessibility issues. Those may have been fixed by now. I’m not sure.
What are your and other listeners’ thoughts?
Another thing I’m considering is that all of the Sonos speakers, except for the Home, have touch controls. When I was at BestBuy checking out some Sonos speakers, I discovered that the touch controls aren’t very tactually distinctive.
How do you get around this? Do you ever use the touch controls at all?
Also, one other thing I like about Sonos is the sound swap feature, if you have a Sonos Roam. As far as I know, Bose doesn’t currently offer any similar feature.”
Well, Christian, I’m very much a Sonos loyalist. I really love their stuff. We’ve got Sonos everywhere.
The only thing that this patent dispute is affecting is the ability to have Google Assistant built into Sonos products. This will come down to personal preference. Again, I don’t need Google Assistant on my Sonos devices, and I actually prefer the Amazon Assistant to Google by quite some margin. That works just fine on Sonos, so we’re good there.
But even on your Google products, you will still be able to use the Sonos app. I understand it’s pretty accessible with Android, although I’ve not had any experience of that, nor have I had any recent experience of the Bose app.
I can tell you though that the Sonos speakers sound great. They come in all sorts of shapes, sizes, and variety.
If you set up a surround sound system in the lounge, it’s super impressive. And the multi-room features are great. They work reliably.
You can create all sorts of groups. Sonos has its own voice assistant now for grouping and speaker functions, and that works very well, so I’m satisfied with Sonos.
I don’t find myself using the touch controls too often because you can use voice commands to adjust the volume, and to skip and all those usual functions. You can also use the app.
But I do use them from time to time. And because there are only a few touch controls, you get pretty familiar pretty quickly with where they are.
I would absolutely prefer buttons, but it’s not so bad that it’s a show-stopper for me, given everything else that Sonos does well.
If, however, having the Google Assistant on your smart speakers is an absolute must, then Sonos probably isn’t the option for you.
If you are embedded in the Bose ecosystem, how is the app these days? I guess for Android, in Christian’s case, since he’s got rid of his iPhone.
Let us know what you think if you’ve had any experience in recent times of the Bose app on Android. opinion@LivingBlindfully.com, or give us a call in the US – 864-60-Mosen is that number. 864-606-6736.
“I am hoping the community can help with a problem I am having, or at a minimum, confirm if mine is an isolated case.
For some time, I have been trying to read Kindle books with my Braille display.
I have a Brailliant BI40X and an Orbit Reader 20, both display the same behaviour. I am using the Kindle app on my iPhone 12 Pro running iOS 17.1.
This behaviour has been evident since iOS 16.
All devices are fully up to date.
When I try to read after panning, the display flicks back to the start of the screen. This happens after a random number of pans. Occasionally, I can move through a number of screens before the behaviour shows itself. At other times, it’s evidenced immediately.
I have tried using a USB connection, and using an older iPhone, which is marginally better but can’t connect by USB and has a tendency to drop the Bluetooth connection. That phone is a 6 Plus running iOS 15, the last released version for it.
I also tried using a cabled connection to a Chromebook. But while the Braille comes up using the Android Kindle app, I can’t pan.
I do also have a Kindle Fire HD 8, which is up to date. The Brailliant does not connect to this, and my Orbit now refuses to connect, although the Fire tablet can see it. I have been able to use this in the past.
I have notified Apple who have asked a number of questions, and I have sent them the logs on my iPhone.
Amazon Accessibility, however, have not responded.
I really want to be able to read books this way to improve my Braille reading, but also because I gain access to a much wider selection of books.
My wife reads a great deal on Kindle and I can read titles she has previously read, rather than paying for new titles or being limited to the Braille content in libraries like RNIB.
Kindle for Windows does work, but needs me to increase my screen-on time, which I’d rather not do. It’s also not great if I want to read after my wife has gone to sleep, as the glow from the laptop keyboard, and from the screen even with screen curtain on, disturbs her. I also prefer more portable devices.
If anyone has experienced this or knows how to stop it, I’d love to know, as I am pulling my hair out trying to fix it.
Thanks for any help you or the community can offer.”
Well, thank you for writing in, Pete, and good on you for persisting with your Braille (with an uppercase B) reading, despite all of these problems.
I’ve just done a very quick check after getting your email, and it does seem to be working okay for me at the moment.
I know there have been some problems with a range of apps pertaining to the auto scrolling feature. Automatic Braille Display Panning is what it’s called.
It does seem to be behaving for me at the moment with iOS 17.2, but let’s see if anyone else has any hints.
Perhaps, it’s to do with font size. I don’t know if you’ve tried adjusting the font size to be, say, smaller to see if that helps.
It could just be broken on certain devices.
I have a Mantis, and I’m using that. It’s pretty similar to the Brailliant BI40X in terms of how it presents itself to the world.
So let’s see if anyone else can shed some light on what might be causing this, and how you might fix it.
This email is from Danny Norris, and he says:
“Jonathan, I want to thank you so much for showing us the HeardThat app.
All I can say is wow!
I have a severe/profound hearing loss. I am a very long time hearing aid user.
The hardest place for me to understand speech is at church after the service, where we all stand around and visit. Even those with good hearing have a hard time.
This morning, I turned the volume completely down on my hearing aids. I used the directional mode on the HeardThat app and pointed it at a friend, and we started talking.
I am just so amazed! The only person I heard was my friend. I couldn’t believe it. All of the crowd noise was completely silent, except for my friend.
He said that my whole demeanour changed. I was sitting back, relaxed, and had a normal conversation with him, and he didn’t even need to raise his voice for me to hear him.
Last week, I was having dinner with my niece and her family, and put the HeardThat app into all voices mode, and put the phone on my side of the table, right next to me. And yet, I could hear her, her 2 daughters, and her husband quite clearly.
I am currently test-driving the new ReSound Nexia aids, and am very happy with them. The Nexia aids use the Bluetooth 5.3 LE (low energy) protocol. There is still a slight latency going on, but not bad. The latency is much shorter than it was on my 5-year-old ReSound Quattros. I think one can get used to the slight delay with a little practice. At the very least, I am now able to follow the conversation that is going on at the restaurant dinner table with family.
I would like to be able to press my iPhone 15 Pro’s action button, and it would turn my hearing aid volume all the way down to 0, start the HeardThat app, and put it into directional mode. As it is now, it takes me a few seconds to get everything set up.
Listen to me complaining. I can now hear a noise, and I am still complaining.
So thank you again for finding this app.”
Well, we can thank Robin Christopherson for that, Danny. It was he who turned us on to this. And it is indeed quite a find, isn’t it?
It just makes me smile to hear those stories. I’m so glad you feel able to participate better now.
Advertisement: Living Blindfully is brought to you in part by Aira. That’s the service that offers professionally trained agents to give you sighted assistance anytime, anywhere.
It’s good to see Aira now being offered as an accommodation in an increasing number of workplaces. There are so many ways that this can be useful, from getting past an inaccessible website, to navigating an unfamiliar building, to having someone read you a piece of paper and even take notes on that document, sending it to you later. When your workplace funds Aira as an accommodation, you can say goodbye to relying on sighted co-workers or apps that may not provide the accuracy or speed that you need on the job.
Aira can improve your productivity, efficiency and independence in your workplace. And when you advocate for Aira to be available in your workplace, you’re not only improving your own productivity. You could be helping to make a dent in our high unemployment statistics.
We’re already seeing that workplaces that embrace the need for visual interpretation tend to hire more blind people. Win-win!
Aira will work with you to advocate for your workplace to offer sight on demand. You can email them to make an appointment so an expert at Aira can help make it happen. The address to contact is access@Aira.io. That’s access@A-I-R-A.IO.
Well over 20 episodes ago now, I think, I mentioned that I had replaced my audio interface with an Audient Evo 16. And I did mention that I had produced something for the Blind Podmaker podcast on this.
Audient got hold of it and they amplified it, and I appreciate that.
And I did mention that at some point, I would play it on Living Blindfully. I think now is a good time to do that, because it’s not just a US phenomenon. It’s become a global phenomenon that at this time of the year, you do start to see some pretty sweet deals on technology. So if you’re looking at one of these Audient audio interfaces (maybe you’ve heard of them on social media), you may want to hear more about them.
I’m going to play this now, just in case some amazing special comes up in the next few weeks, and you take the plunge.
It may not be the 16 that you want. It may be a smaller model. But they’re all pretty accessible and very impressive.
So here is that demo of the Evo 16, recorded right after I bought it.
Today, I’m going to take a look at the Audient Evo 16 audio interface.
I’ve just got one of these amazing devices in my studio, and other blind people have paved the way to make this possible.
I can honestly say that the Audient range is the most accessible audio interface I have come across. And it’s a good news story because it speaks to what can happen when there’s goodwill on both sides of the partnership.
Sometimes, it can only take one dedicated developer to make a huge difference. In this case, Frank Chavez, a developer at Audient, took an interest in accessibility for blind people and worked with the blind community. In particular, Scott Chesworth, Tony Bart, and Nick Janak, among others, helped to make this astounding level of accessibility and functionality a reality.
So let’s start with first principles, and talk about what an audio interface is. I guess in earlier, simpler times, we would have called these a sound card, and they tended to be inside your computer.
These days, with the amount of digital production being done, audio interfaces can often be sophisticated external devices that connect via a USB port. They have physical connectors so that you can connect things like microphones, instruments, and line-in sources to record from, or broadcast from those sources. And they also have outputs, so you can connect them to things, be it speakers, a mixer, or simply headphones, so that you can play your audio back.
At the time that I’m recording this, there are 3 Audient Evo interfaces. They do make another range as well, but it’s the Evo that we’re talking about today.
The Audient Evo 4 has 2 analog inputs and 2 analog outputs. This means, for example, that you could use it to plug a couple of microphones into it, and to connect to a pair of stereo speakers for playback.
There’s also the Evo 8, which has 4 analog inputs and 4 analog outputs. It also offers some digital inputs and outputs.
And the Evo 16, which is a very special device, has 8 analog inputs and 8 analog outputs. Two of those inputs are on the front of the device. Everything else is on the back.
The ones on the front are special because not only can they be used as microphone or line inputs with their combi jacks, but they can also be used as instrument inputs. And we’ll talk about how we toggle all these things in a little bit, when we take a look at the interface of the hardware itself and the way you can control it in software.
The outputs are standard 6.35mm or quarter inch balanced jacks.
There are 2 headphone sockets on the Evo 16, and one of the really cool things that we’ll spend some time talking about is that you can send different mixes to different places. And with those headphone jacks, you can send a different mix to each jack.
Now, I’m thinking more as a podcaster. Obviously, these devices can also be used to make music as well.
But one podcast scenario I can immediately think of where this is useful is, let’s say that you are recording an interview with your Audient Evo 16, and you have these 2 headphone jacks. You’ve got your guest connected to one set of headphones plugged into one jack, and then there’s you monitoring the whole thing with your headphones.
In your headphones, you may want to hear your screen reader so that you can query how long you’ve been recording for. (Just make sure that in fact, you still are recording because there’s nothing worse than doing the great interview, only to find that it didn’t get recorded.) You don’t want your guest to be distracted when they’re talking by hearing your screen reader in their headphones. So you can send a separate mix to your guest through that set of headphones that does not include where your screen reader is routed to.
There’s ample digital input and output on the Evo 16 as well, and plenty of room for expansion in that regard. That’s not something that I have had direct experience with. But if it interests you, you can of course check the specs.
When I’ve heard people talk about some of the Audient Evo interfaces, I’ve heard people criticise the build quality. The Evo 16 is very solid. It’s a big kahuna, too. It’s 354 by 285 millimetres.
The Evo range has a reputation for being easy to use, and I don’t think the Evo 16 is hard to use at all once you know what you’re doing. But because there are so many inputs and outputs, it’s obviously going to be a bit more complex than the smaller Evo models.
There’s no doubt that this Evo 16 is kind of on the extreme end of the spectrum. And if you don’t know whether you need a device with all of these inputs and outputs, then you probably don’t, and you’ll probably find the Evo 4 and the Evo 8 much simpler to use.
Nevertheless, what I have to show you today will apply to those other interfaces, for the most part.
Let’s take a look at the layout of the front of the device.
If you go past the combi jacks which may well have something plugged into them if you’ve got the interface set up, you will find 2 buttons aligned vertically. The top one is input 1, and the bottom one is input 2. Next, you’ve got another pair of buttons aligned vertically for inputs 3 and 4. Similarly, inputs 5 and 6. And finally, inputs 7 and 8.
I’m going to keep going past input 7 on that top row of buttons. And next to input 7, we have a button for phantom power. Some microphones, usually condenser microphones, require phantom power. And so you’ll need to press this button to get phantom power to the channel in question. We’ll talk about how to do that in a moment.
The next button on the top row, right above the LCD display, is the instrument button. And if you press this, then you’re able to assign either or both of inputs 1 and 2 to be an instrument input.
Going down to the bottom row past the input 8 button, you’ve got a large rectangular button. And this is the Smart Gain button. This is one of the coolest accessibility features of this device, so we’re going to spend some time talking about Smart Gain.
Then, you have the LCD.
On the right of the LCD is a physical dial, and it’s the only physical dial on the device. The way that this works is that you press the button for what it is you want to control, and then you turn the dial.
Let’s say, for example, that I wanted to adjust the microphone level for the microphone I’m using at the moment, which happens to be plugged into input 3. I would press the input 3 button. And then when I turn the dial, it will adjust the volume of input 3.
This works with outputs as well. So if I press a button for one of the headphone jacks, then I can turn the dial and adjust the volume. It’s a very straightforward, simple system to use.
This concept also applies to certain buttons on the device. If you were to press, for example, input 1, and then the phantom power button, then phantom power would be applied to input 1.
Similarly, if you press input 1 and then the instrument button, then input 1 becomes an instrument input.
On the top row next to the rotary dial is a speaker button. So you can press that, and then you can use the rotary dial to control your speaker volume if you use the interface in this way. And as is the case with a lot of these buttons, you can press it down to mute it.
Below that you have a software programmable button. You can use this for various things such as toggling to alternative speakers, or activating the TalkBack feature. And no, there is no Android screen reader in here. In this case, TalkBack refers to being able to speak with somebody who you’re working with in audio production.
Finally, you’ve got standard quarter inch headphone outputs here. And above each jack, you’ve got a button. If you want to adjust the headphone volume, press the one in question, and then turn the rotary dial.
Returning to inputs for a second. If you want to make any of them a stereo pair, you can press them both together. They obviously have to be logically aligned. So you can press 1 and 2 together, you can press 3 and 4 together, etc. And when you do that, they’ll become a stereo pair, as opposed to 2 separate mono channels.
It’s not impossible for a blind person to set good recording levels. You can use your ears if they’re working well enough. There are also tools in various digital audio workstation applications that will give you feedback about your levels.
But obviously, it’s good when these things can be as simple and efficient as possible. And the Smart Gain feature in the Evo range is just one of the coolest things.
I’ll demonstrate how this works.
I have my iPhone connected to my Evo 16, and I’ve deliberately set it at a very low level.
Now, it may not sound low to you because of the compression that we’re using for the demo. But nevertheless, I can promise you it is quite low.
I’ve got a book here that I’m reading at the moment about the Boris Johnson premiership. And if I just do a bit of a scroll here, …
[VoiceOver reads softly]
I can hear that, but it’s actually quite soft. It’s not optimal. What we’re going to do is see if we can get Smart Gain to fix that.
To make this go, we first press the Smart Gain button – the big rectangular one on the bottom of the device. And then, we press the input that we want to adjust. In this case, I can press either input 5 or input 6 because I’ve got them linked as a stereo pair. And when I do that, I will need to have the iPhone say something so the Audient Evo 16 knows where to set the level.
So let’s try this. I will lean over and press the big button.
So I’ve now pressed that. I’m going to now press 5. Then, I’m going to push the Smart Gain button again.
And I’ve got the iPhone reading now. But you’re not hearing that because Smart Gain has gone away to do its thing.
In a second, it should have got the level, and it will come back. And it should be a bit louder than it was, probably quite a bit louder than it was.
VoiceOver: With Johnson, and strategizing the campaign.
Jonathan: And it’s a lot louder.
It is as simple as that to set beautiful levels, and you can do that as many times as you need. And you don’t have to go through one input or one pair of inputs at a time, either. If things are going on in multiple inputs, just press the inputs you need. If things are going on in all the inputs, you can actually press and hold the Smart Gain button, and they’ll all be selected.
This is just one of those features where once you’ve used it for a while, you’d never want to go back. It is similar to the feature in the VoCaster, and I think it does a similar job.
My experience with the VoCaster is that it sets things a little low when it does its Smart Gain-type feature. I haven’t seen this with the Evo. It seems to be more on track in terms of what the level should be.
Now, we’re going to take a look at the Evo software which is very accessible. It’s impressive. It is usable with JAWS.
But it is not as intuitive to use with JAWS once you get into the Evo mixer, as it is with NVDA. Hopefully, this might be addressed by either party – Vispero or Audient.
I think you could get used to using it with JAWS if you knew the layout first. So if you use it with NVDA for a while and you’re primarily a JAWS user, and you just don’t want to change screen readers to make an adjustment, you can get used to it.
There’s nothing you can’t do. It’s just not speaking quite as much in the same way as it is with NVDA.
So I’m recording in Reaper, and I’m actually using the Evo 16’s loopback feature, which we’ll talk about a little bit more later because this is a great feature for screen reader demos.
I’m going to go to the system tray. In NVDA, we use the Windows method, and we’ll press the Windows key with B.
NVDA: Show hidden icons, button.
Jonathan: I’ll go right.
NVDA: Evo button, 1 of 13.
Jonathan: And I’ve made sure that the Evo application is not hidden (because I’m using Windows 11). So you’ll want to do that, too. Go into the taskbar properties, and just make sure that this is always visible in the system tray because you’re going to use it a lot.
Now, we’ll bring up the context menu.
NVDA: Evo window.
Jonathan: And I’ll go down.
NVDA: Show mixer.
Jonathan: We’re going to spend quite a lot of time in the mixer soon. But we’ll explore what else is in this menu first.
NVDA: About Evo.
Visit Evo knowledge base.
Check for updates.
Set ASIO buffer size, collapsed.
Set sample rate, collapsed.
Set clock source, collapsed.
Input routing, collapsed.
Jonathan: That input routing might be a bit confusing for some. This refers to the loopback source. If you want to configure where loopback is recording from, you go in here.
If you’re not familiar with the concept of loopback, this is a way to take audio that’s going to an output and route it back in as an input on the audio interface.
So let’s say that you have your screen reader routed to outputs 1 and 2 of your Evo 16, and you want to record a screen reader demo. You would set the loopback input to record from outputs 1 and 2 to pick up your screen reader. If you’re using a multi-track editor like Reaper, then you could have your microphone in one track, and this loopback feature in the other.
But there’s a lot more that you can do with loopback in the Evo 16. Loopback’s not an uncommon feature these days. But with the Evo 16, you can do so much more than just assign a single stereo output to an input.
Look in here. I’ll go right.
NVDA: Evo window. 8 at 15 plus 16. DAW 1 plus 2, checked.
Jonathan: At the moment, I’ve got DAW 1 and 2 checked. That means that anything I send to those virtual outputs will get recorded when I choose loopback as my source.
NVDA: DAW 3 plus 4.
DAW 5 plus 6.
DAW 7 plus 8.
DAW 9 plus 10.
Jonathan: DAW 9 plus 10 is not a physical output, because there are only 8 analog outputs on the device. But you can still have a DAW 9 and 10 virtual output.
NVDA: Master Mix.
Jonathan: Now, this is where it gets really interesting.
You can have several mixes with the Evo 16. So you can create one mix that’s your master mix, and you can create several other mixes that are available to you at any time.
So if, for some reason, you want to create a mix which includes your microphone, your screen reader, something else all on the one source, then you can do that by setting up a particular mix that way with just the things you want to record, and then assigning that mix to be where your loopback records from. It’s very powerful.
But I do wonder how often you would use this feature. Because if you’re using something like Reaper or any multi-track digital audio workstation, it is preferable to record everything on separate tracks. It’s still really cool that you’ve got this option, though.
Jonathan: So here are the different mixes. And really, all this means is that you can have different levels, different things selected, different things deselected and muted, and you can choose from all of these mixes.
NVDA: A 15 plus 16.
Jonathan: And now, we’re back around.
NVDA: DAW 1 plus 2, checked.
Jonathan: And DAW 1 and 2 is checked.
So I’m really impressed that you can actually choose different mixes as your loopback source. That really is very powerful.
I’m going to press the Alt key to get out of this menu.
And now, we need to re-invoke the Evo software.
NVDA: Show hidden icons, button.
Evo button, 1 of 13.
Jonathan: And now, we’re going to go into Show Mixer.
Eloquence: Mic channels are visible.
Jonathan: What you heard there was some SAPI speech that is coming from the software.
When you have a screen reader running, the Evo software recognizes this, and it will speak things through SAPI.
I understand that with NVDA, there may also be a way to have what would normally be SAPI speech spoken through NVDA itself. At the moment, I’m just using SAPI speech. It happens to be the British eloquence speaking.
And you will get some feedback. We’ll show you more about that a bit later.
I’m going to Alt Tab into this because it doesn’t seem to get focus.
Jonathan: Now, I’m going to tab around here.
NVDA: Mic 1 plus 2, edit. Mic 1 plus 2, read-only. Toggle this object to change labeling.
Jonathan: The first thing is the label of the channel.
And we’re now in a list of inputs. By default, this channel is called Mic 1 and 2.
I’ll press the Tab key.
NVDA: Mic 1 polarity off, toggle button. Off, not pressed.
Jonathan: You can reverse the polarity, if you want, if you’ve got some phasing issues there. Normally, you would not need to play with this.
NVDA: Mic 2 polarity off, toggle button. Off, not pressed.
Jonathan: Now as you’ll hear, Mic 1 and 2 are linked. And we’ll get that confirmation as we continue to tab through this dialog. Because actually, although by default, this channel is called Mic 1 and 2, I’ve got it as a line input source going to my main mixer bus because I’m using a mixer in my studio in addition to the audio interface.
I’ll talk more about whether you need a mixer or not a little bit later because there’s so much in here. Spoiler alert: you probably don’t, in most cases, need a mixer at all with an audio interface like this.
NVDA: Slider 0: mic 1 pan, slider.
Jonathan: Because this is a stereo pair, we have 2 sliders. The first is set at 0 for mic 1. In other words, the left channel.
NVDA: Slider 1: mic 2 pan, slider.
Jonathan: And the second is set to 1, all the way to the right for the right channel.
You can use the arrow keys to play with this.
You probably don’t want to, if you’ve got a stereo pair like I have.
NVDA: Solo toggle button, off, not pressed. Mic 1 plus 2 solo button, off.
Jonathan: If you solo this for the mix that you have selected (currently, the master mix is selected; we’ll show you the master section a bit later), then this will be the only channel you hear.
Where this could be useful is if you’re trying to make fine adjustments, you just want to make sure that it’s working correctly, and you just need everything else to be quiet.
We’ll press tab, …
NVDA: Mute toggle button, off, not pressed. Mic 1 plus 2 mute button, off.
Jonathan: I guess this is the antithesis of the other one.
You can also mute this channel if you need to.
NVDA: Checkbox not checked. Read-only. Mic 1 clipping indicator, off.
Jonathan: NVDA (and for that matter, any screen reader) will identify this as a checkbox. But you can’t toggle it. You can’t do anything with it. It’s just telling you that there’s no clipping going on at the moment.
For those not familiar with that, clipping is a bad thing. It means that you’re overloading your audio interface. You’re sending it too much signal. There’s a possibility that you could get distortion. We don’t want clipping.
Now, you heard the SAPI speech talking a bit earlier. What is really cool is that if you leave this software in the background, Alt Tab away, and do what you’re doing with Reaper or your broadcasting software (whatever you choose to use), then if any of your inputs clip, you’ll be told. You’ll hear which input is clipping, spoken by the SAPI voice by default.
NVDA: Checkbox not checked. Read-only. Mic 2 clipping indicator, off.
Jonathan: There’s the other one for mic 2 because remember, we’ve got a stereo pair at this point.
NVDA: Slider minus 128, mic 2 channel fader.
Jonathan: That is the fader for the channel. If you bring this fader up in the mix, then you will hear the output of this channel on whatever mix is currently selected. Right now, for me, that is the master mix.
Because I am using a mixer, I don’t need to hear my microphone because I’ve got 0 latency monitoring, thanks to my mixer. But chances are, if you’re using a microphone, you will want to hear it. And so you can set this volume to a comfortable level for you.
NVDA: Mic 1 phantom power off, button.
Jonathan: I’m using mic 1 and 2 actually as a line-in source anyway, so we certainly don’t want phantom power going there.
NVDA: Mic 1 gain slider: minus 1. Mic 1 gain slider.
Jonathan: This is the gain slider for the microphone. In other words, how much signal it’s getting at the input stage.
I’ve had this set automatically by Smart Gain, so I’m not going to change it. But I can use my arrow keys to give it a bit of a boost if I need to, or to turn the microphone down.
Now keep in mind, you can also do this on the device itself just by tapping the mic 1 button, and then turning the dial. And if you do that, then the change will be reflected here, and you will hear that change spoken by the software.
NVDA: Inst, button.
Jonathan: Now if we press this button, then it will turn mic 1 into an instrument input. We definitely don’t want to do that.
But you see, all these things that we looked at in hardware before are controllable in software. Very cool.
NVDA: Mic 2 phantom power off, button.
Mic 2 gain slider: minus 1. Mic 2 gain slider.
Jonathan: And you’ll notice that both are set to the identical level, minus 1, because it’s a stereo pair. This is very good because if you’ve got analog controls, sometimes, it can be really difficult to get the absolute perfect balance when you’re trying to record from a stereo source.
But with Smart Gain and the ability to link these channels, it is so simple to get exactly the level that you want.
NVDA: Inst, button.
Mic 1 and mic 2 are joined in stereo mode. Toggle this button to split them in mono mode.
Jonathan: That’s a key feature here that inputs 1 and 2, which it calls Mic 1 and 2 (And you can apparently change the label for those, if you want to.) are currently linked as a stereo pair. That’s what I intend, so that’s really good.
If I press Tab now, …
NVDA: Mic 3, edit. Mic 3, read-only. Toggle this object to change labeling.
Jonathan: These are all very similar, except Mic 3 is actually a microphone. It’s going through an insert jack of my mixer so that I can bypass my mixer and record directly from this microphone.
Where this is useful, for those who are interested, is that when I’m doing live broadcasting, I can go and record conversations and do other things while music is playing. It’s really cool. So Mic 3 is just an ordinary mono mic source.
Do I have to tab through all of these things just to get to the next one?
No, I don’t. I can press Control Tab, and it will jump me to the next channel.
NVDA: Mic 5 plus 6 container.
Jonathan: I’ve jumped to Mic 5 and 6.
So does that mean that I’m wrong and that Mic 3 and 4 were linked as a stereo pair?
Well, I can just shift Tab now and verify.
NVDA: Mono, button. Mic 3 and Mic 4 are in mono mode. Toggle this button to join them in stereo mode.
Jonathan: Good on you, Daniel. It’s in mono mode, according to him. So they are separate channels and I can link them if I want, just by pressing this button, pressing Enter on it.
Remember too that you can do the same thing on the device itself by pressing buttons 3 and 4 for the inputs together.
NVDA: Mic 5 plus 6 container.
Jonathan: There’s the inputs 5 and 6, …
NVDA: Mic 7 mic 8 container.
Jonathan: And the analogs 7 and 8.
NVDA: PC 1 plus 2 container. PC 1 plus 2, edit. PC 1 plus 2, read only. Toggle this object to change labelling.
Jonathan: We have a set of 5 stereo outputs here, going from PC 1 all the way up to PC 10. And there’s something important to understand about this, if you have used older or perhaps simpler audio interfaces.
Often, you’ll find that the only thing that you can do with some older audio interfaces is to use them in what I would describe as a pretty linear fashion.
So if you send something to outputs 1 and 2 in Windows, it will come through analog outputs 1 and 2 on the back of the device.
Now, you can do that. And in fact, that’s what I’ve done because I’m using this Evo 16 in conjunction with a mixer because of my broadcasting work, and I just want to have faders in front of me when I’m broadcasting live. It’s really important. It’s a great use case.
But you don’t have to use it that way, and most people don’t. You do not really need a mixer in most circumstances with this Evo 16 because it has such good routing and mixing functionality built right in.
So the workflow is that you would create multiple mixes, and then you decide which outputs should receive each mix. So you might, for example, have the master mix sent to outputs 1 and 2. You might have mix QA, for example, sent to outputs 3 and 4.
What you might want to do is in your master mix, you might not want your screen reader. So you can have all your music, all the sources that you might play from (maybe you listen to internet radio, or Spotify, or whatever it might be), and you have those on a set of really awesome speakers.
You might create a second mix that just includes an output that your screen reader is routed to. So that will be a virtual output. For example, you might even have your screen reader set to DAW 9 and 10.
Then you can go and create an alternative mix, any of the mixes that are available. You can pick one and decide in that mix, I just want my screen reader. And I’m going to route that mix just containing my screen reader to a set of the outputs on the back of the device, and I’ll just plug a really tinny speaker into those outputs. Because sometimes, if you let your screen reader play through your really big high-quality music speakers, it can sound a bit boomy and bassy, and you just get far too much screen reader.
This way, you can keep your good speakers for the good stuff, and send a separate mix to another set of outputs just containing your screen reader. It really is very powerful.
And that’s just the beginning, really. Your imagination is the only limit.
So I’ll continue to tab through here.
NVDA: Slider 0: PC one pan slider.
Jonathan: What we’ve got here is a pan slider.
NVDA: Slider 1: PC 2 pan slider.
Jonathan: And we’ve seen this before. It’s a stereo pair. So the left one is panned hard left, the right one is panned hard right.
NVDA: Solo toggle button, off, not pressed. PC 1 + 2 solo button, off.
Jonathan: We can solo it again.
NVDA: Mute toggle button, off, not pressed. PC 1 + 2 mute button, off.
Jonathan: And mute it.
NVDA: Checkbox not checked. Read-only. PC 1 clipping indicator, off.
Checkbox not checked. Read-only. PC 2 clipping indicator, off.
Slider minus 128. PC 2 channel fader.
Jonathan: At the moment, this volume is set all the way down, which means that on the master mix, you will not hear anything from this channel.
The reason for that is that I’m using a mode called DAW Through mode, and I’ll show you that soon.
If you want to hear anything that’s sent from Windows to this channel, then you want to make sure that fader is up.
You can, of course, have different mixes, and we’ll cover that in a minute.
NVDA: Stereo, button. PC 1 and PC 2 are joined in stereo mode. Toggle this button to split them in mono mode.
PC 3 + 4, edit. PC 3.
Jonathan: Now we’re on to PC 3 and 4.
If I press Control Tab, …
NVDA: PC 5 + 6 container.
Jonathan: We’re on to the next one.
And Control Tab again, …
NVDA: PC 7 + 8 container. PC 7 + 8, edit. PC 7 + 8, read-only. Toggle this object…
Jonathan: And we’ll Control Tab again.
NVDA: PC 9 + 10 container. PC 9 + 10, edit. PC 9 + 10, read only toggle.
Jonathan: This confused me when I got the Evo 16 because there are 8 analog outputs. But here, we have a PC 9 and 10 option. This is because you’re not constrained by how many analog outputs there are. You can send this anywhere you want to.
NVDA: Master section, button. Toggle this button to switch to master mix.
Jonathan: Now we are on the master section, and this is where we determine what mix we’re controlling. At the moment we’ve been controlling the master mix.
And if I were to change to another mix, then all the changes that I make such as the ones we’ve just been through with the mic inputs and all the other things we’ve looked at will pertain to another mix.
So I’ll press tab.
NVDA: Master Mix, edit. Master Mix, read-only. Toggle this object to change Master Mix labeling.
Checkbox not checked, read-only. Master Mix left clipping indicator, off.
Jonathan: We’ve got clipping indication for the Master Mix as well, so you will know when the whole thing is clipping.
NVDA: Checkbox not checked. Read-only. Master Mix right clipping indicator, off.
Button. Toggle this button to switch to QA.
Jonathan: This is our second mix which is called QA. You can use this for anything you want.
So if I were to press enter, then my view will change, and I’ll be controlling the QA mix.
All the changes that I make won’t affect the Master Mix. It’ll affect the QA mix.
How do we then work with that QA mix?
I’ll show you that in a little bit.
NVDA: QA, edit. QA, read-only. Toggle this object to change… Toggle button, off, not pressed. QA un-soloed.
Button. Toggle this button to switch to QB.
Jonathan: Now, we can switch to QB.
NVDA: QB, edit. QB, read-only. Toggle button off, not pressed. QB un-soloed.
Button. Toggle this button to switch to QC.
Jonathan: And so on.
NVDA: QC toggle button. Toggle this button to switch to QD.
QD, edit. Toggle button.
Mix channels not visible toggle button, on, pressed.
Jonathan: Alright. So we’ve got 4 mixes there – QA, QB, QC, and QD. They’re not for anything specific. You can assign them to where you want them to go. I promise we’ll get to that.
Now, we can specify which controls are visible.
NVDA: Mic channels not visible toggle button, on, pressed. Mic channels visible.
Jonathan: It’s a little bit verbose. But the key is the end there, where it says that mic channels are visible.
If you don’t want to play with these, then you can toggle them off, and they won’t appear in this application. You can, of course, come back to this master section and toggle them on at any time.
NVDA: Up toggle button, off, not pressed. Digital channels not visible.
Jonathan: I don’t have any digital inputs and outputs to worry about, so I’ve toggled those off to make the interface less cluttered.
NVDA: DAW channels not visible toggle button, not pressed. DAW channels visible.
Jonathan: You can also make the DAW channels invisible. I don’t want to do that.
NVDA: TB, button.
Jonathan: This is the button for the talkback feature, where you can speak to somebody that you might be working with.
NVDA: Mon, button.
Jonathan: This will put everything into mono, quickly.
NVDA: Alt, button.
Jonathan: This toggles the Alt speaker mode.
NVDA: Dim, button.
Jonathan: This will dim the volume.
NVDA: Cut, button.
Jonathan: This cuts the output altogether. And now, we’re at the settings button.
Before I go there (because there’s another way to get there), I should mention that we do have a menu bar. If I tap the Alt key, not a sausage. Nothing happens. I think this is a characteristic of applications written with the JUCE framework.
So you do have to use first letter navigation to get the menu. I’ll press Alt F.
NVDA: Evo window.
Jonathan: And now, I can down arrow.
Jonathan: What are we opening and saving?
You can save presets.
It is a busy interface. And if you want to, when you’ve got everything configured just the way you like, you can save your settings. You can also save multiple presets if you want to quickly load things when you need to.
There’s also a stand-alone mode for when you’re using the interface without a computer, and you can store your settings for working in stand-alone mode.
Jonathan: Now if I right arrow, …
NVDA: Evo window.
Show analog inputs, checked.
Jonathan: We’re on the view menu.
NVDA: Show digital inputs.
Show DAW returns, checked.
Show mic pre-controls, checked.
Jonathan: That’s not checked by default. But I like all the mic options visible, so I’ve toggled that on.
NVDA: Show system panel.
Jonathan: And we’re going to go into the system panel in a moment.
NVDA: Show analog inputs, checked.
Jonathan: Alright. Now if I right arrow, …
NVDA: Evo window.
Set ASIO buffer size, collapsed.
Jonathan: We’ve seen this before. You can set the ASIO buffer size here.
NVDA: Set sample rate, collapsed.
Jonathan: And the sample rate.
NVDA: Set clock source, collapsed.
Input routing, collapsed.
Jonathan: That’s the loopback we talked about before.
NVDA: Store stand-alone state.
Jonathan: And this is where you can store the stand-alone state, which I’ve just talked about when you’re not using this with a computer, when you might be on the road, for example.
NVDA: Set ASIO buffer size, collapsed.
Jonathan: And now we’ve wrapped around.
There’s also another menu here where you can check the Evo knowledge base. Check for updates as well, which is important. And when I installed the software, there was a firmware update for the audio interface.
And you can get help as well.
Let’s go back into the view menu.
NVDA: Evo window.
Jonathan: And I’ll up arrow, …
NVDA: Show system panel.
Jonathan: Because we’re going to go to show system panel.
My understanding is that this system panel is not available in the smaller Evo products, which come with a different mixer application. Scott Chesworth and Tony Bart have done a very good YouTube walkthrough of some of this, and I highly recommend it if you have one of the smaller Evo products or you’re interested in them.
NVDA: System panel window. Checkbox checked, read-only. Digital input 1. System panel window.
Jonathan: This is a very busy screen, but I’m just going to tab through it and explain some of it.
NVDA: Checkbox not checked, read only digital input 1. SPDIF.
Jonathan: These settings pertain to the digital I/O so I’m just going to skip them for now.
NVDA: Checkbox checked, read-only. Digital input.
Checkbox checked, read-only. Digital output 2.
Checkbox checked, read-only. Preferred clock source.
Checkbox not checked, read-only. Preferred clock source. Radio button not checked, mono mode left.
Jonathan: We found the mono mode button in the main interface, and this controls what happens when you press it. If you choose this radio button, then the mono mode will just play the left channel of everything.
NVDA: Radio button checked, mono mode center.
Jonathan: This is checked at the moment, and it’s probably what you want because it essentially turns both channels of a stereo source into mono. You get both left and right made mono.
NVDA: Radio button not checked, mono mode right.
Jonathan: And then we’ve got mono mode right, where it just gives you the right channel.
NVDA: Slider minus 15 dimmed level slider.
Jonathan: This determines the dB reduction that the audio interface will apply when you press the dim button. There is a way of programming the software button so that the dim function is readily available to you.
NVDA: Slider 0, alt trim slider. Analog outputs selected toggle button, on, pressed.
Jonathan: This is whether the analog or the digital outputs are selected. Analog is selected at the moment.
NVDA: Digital outputs 1-8 toggle button, off.
Jonathan: Digital are not.
NVDA: Digital outputs 9-16 toggle button, off, not pressed.
Jonathan: And the digital outputs 9 to 16 are not pressed.
NVDA: TalkBack toggle button, off, not pressed.
Jonathan: And we’re not going to worry about the TalkBack feature at the moment.
NVDA: 1+2 stereo, button.
Jonathan: Let’s have a look at what happens to outputs 1 and 2, and it’s mentioned that they are stereo. If I press Enter, I can unlink them from this screen.
NVDA: Checkbox not checked, read-only. Outputs 1+2 routing to main mix.
Jonathan: This determines where analog outputs 1 and 2 pick up their audio from. We can choose main mix. You’ll notice that is not selected at the moment.
NVDA: Checkbox not checked, read-only. Outputs 1+2 route into alt SPK.
Jonathan: Now, there’s also an alt speaker option.
NVDA: Checkbox not checked, read-only. Outputs 1+2 route into QA.
Jonathan: And then, we look at these Q mixes.
NVDA: Checkbox not checked, read-only. Outputs 1+2 route into QB.
Checkbox not checked, read-only. Outputs 1+2 route into QC.
Checkbox not checked, read-only. Outputs 1+2 route into QD.
Jonathan: So you can have the alternative speakers or QA, B, C, and D. And this just means that if we were to choose any of these options, then the mix that we’ve chosen will come through that first pair of analog outputs.
I think in an ideal world, it would have been better if these were radio buttons because typically, with checkboxes, you can select multiple checkboxes. That’s not possible here. Only one checkbox can be selected at a time.
None of these is checked. Why is that?
NVDA: Checkbox checked, read-only. Outputs 1+2 route into DAW Through.
Jonathan: Because I’ve got the DAW Through mode selected.
You probably don’t want to use this unless you are sending audio from the Evo 16 to an external device, and I am. I’m sending it to my mixer.
So in my case, what I want is for each pair of analog outputs to simply carry what’s coming in from the PC, so that I can have them on separate faders on my mixer. If you don’t have a mixer (and I think the use cases where you’d require a mixer with something as powerful as this are very few), you may never need to worry about routing to DAW Through.
There’s no processing here. And when you choose this option, you get a warning that this could be very loud. And it is. There’s plenty of output gain on this thing, which is wonderful.
So now, I’ll tab.
NVDA: 3+4 stereo, button.
Jonathan: You don’t seem to be able to Ctrl Tab in this screen, so it is a bit laborious to tab through.
NVDA: Checkbox not checked, read-only. Output 3+4 routing to main mix.
Jonathan: And similarly, we can tab through and make sure that those analog outputs, those physical analog outputs are routed to the mix that we select.
Once again, I’ve got the DAW Through selected, so that it works well with my mixer.
The final thing I will show you about this is the clipping indicator in action.
The way I’m going to demonstrate this without breaking your ears is that earlier, I deliberately set my iPhone to a very quiet level, and we used the Smart Gain feature to get sufficient gain. That means that there’s plenty of gain if I turn my iPhone up.
I’m going to do that. I’m going to have the iPhone read something.
I’m going to turn it up, and you will hear what happens. The warning that we get.
I will not include this in the recording because it’s going to be loud, horrible, and distorted, but we’ll see if we can trip the clipping notification.
Eloquence: Mic 5 is clipping. Mic 6 is clipping.
Jonathan: And because they’re a stereo pair, we got warnings about Mic 5 and Mic 6 (which is where I have my iPhone connected) clipping. It is such a useful feature.
I cannot tell you how impressed I am with this. It really has taken my studio to a new accessible level.
I want to thank Audient for being so cool about all of this, and committed to accessibility.
And also, thanks to all the blind people who made this happen, who put in the work, who sat down with Audient, who were willing to listen. Well done to all involved.
That’s the Audient Evo 16.
And even if you don’t need this big brother of a thing, do check out the smaller ones in the range because they’re even easier and simpler to use, with fewer options, and they just work, especially with features like Smart Gain, the accessible clipping indicator, and software that’s such a joy to use.
Advertisement: Transcripts of Living Blindfully are brought to you by Pneuma Solutions, a global leader in accessible cloud technologies. On the web at PneumaSolutions.com. That’s P-N-E-U-M-A solutions dot com.
Here’s a little story from the famous Michael Bullis. He says:
“I was recently asked to give a short speech and send it in as a video.
Well, I thought, since I live on a very nice piece of property with acreage and trees, that would be a nice background for the video.
But I would need to be away from the iPhone or PC tablet, so why not get a Lavalier mic to really produce some nice sound quality for my talk?”
I’m glad to hear this, Mike. Because so many people care about the video and give scant regard to the audio.
“I bought a Hollyland Lark M1 Lavalier microphone.”
I’ve not heard of that brand. Interesting.
“When I plug in the transmitter, I lose all VoiceOver audio.
Apple seems to view this as a feature, not a bug, in their responses to the problem.
So I’m wondering if you’ve found a workaround to this. It would be my preferred solution, since I have the iPhone with me most of the time.
Apple says they are avoiding feedback issues. Apparently, it never occurs to them that killing VoiceOver audio is a real problem for those of us who use it.
I’m not sure whether this is a blindness-related issue or a tech issue.
I did call Apple tech support without success, so I guess this is the question for your listeners. How can I hook up a remote Lavalier mic to my iPhone and still use VoiceOver?
Thanks for your assistance, and thanks for the show.”
Ooh! I wonder if we can get any answers to this one, Mike.
I’ve only used wired Lavalier mics, and Rode put out a little dongle that has 2 sockets where little Lavalier mics plug in. Actually, it was a Lightning accessory, so I’d either need a dongle to keep using it, or something new. But I don’t use it very often these days.
So those are wired, and I don’t know whether that would work for you or not just having some wires. I think they were fairly lengthy wires that were running down there, and they might not be visible to the camera.
The whole package was called the Rode Interview Kit.
The significant thing in the context of this discussion is that it had a third socket. So it had the 2 sockets for the mics, and you could put them in stereo or mono, and it had a headphone jack.
So if this accessory is presenting itself to your iPhone as a microphone type audio accessory, it might be thinking, oh, if you want to monitor from headphones, then you would plug headphones into the accessory itself. And that could be where the problem lies.
If there’s no headphone jack in this thing, then VoiceOver is going to be muted. But I have not had any experience at all with wireless type receivers that plug into the iPhone of the kind you are mentioning.
But we have a very educated audience on various subjects. So perhaps somebody can offer some advice on this one.
And a quick question from Angus MacKinnon. He says:
“How do you email the developer of Mona?”
This is the app for Mastodon that we did an extensive review and tutorial on. It’s a great app.
“I am having a problem with Mona.”
Well, that’s no good to hear, Angus.
I don’t know. It might be that there’s email information in the about part of the app.
But when I want to contact him (and he normally responds very quickly), I just send him a mention on Mastodon.
It’s good to know he’s been plugging away with various new features. And I really do appreciate that feature he added where you can now read in reverse order from the norm because it means you can do a VoiceOver say all and have your toots reading continuously while you’re doing other things. Absolutely brilliant feature. It’s a little thing, but it makes a big difference.
Voiceover: If you’re a member of Living Blindfully plus, thanks for helping to keep the podcast viable.
If you haven’t yet subscribed, why not do it today?
Get access to episodes 3 full days ahead of their release to the public, you’ll get advanced notice of some of our interviews so you can have a say in what we ask, and you’ll help keep the podcast viable by helping to fund the team to do their work.
Our guarantee to you is that everyone who works on the podcast is blind or low vision, so we’re keeping it in the community.
Find out more. Visit LivingBlindfully.com/plus. That’s LivingBlindfully.com/P-L-U-S.
Pay what you can. It all helps. Thanks for your support of Living Blindfully Plus.
Jonathan: Well, that theme music was never more appropriate because sadly soon, Bonnie is about to be over the ocean in the United States of America.
Jonathan: But first of all, we say welcome to Citizen Bonnie, New Zealander Bonnie.
Bonnie: Yeah. Hi! Hi, everybody.
Jonathan: Tell me about it all, and how it went, and how you feel. What a major milestone moment, man.
Bonnie: It was good. It was a really good ceremony. It was quite moving at times.
I’m not sure how many people were there. About 30 different countries were represented.
Jonathan: Yeah. It was quite an intimate setting.
Bonnie: Yeah. You could dress in your native costume, but most people didn’t. Most people just dressed nicely, smart casual.
Jonathan: What would be the native costume for America anyway?
Bonnie: Don’t know.
Bonnie: Jeans and cowboy boots?
But there were a couple of people. There was someone in a kilt.
Bonnie: So we know what country that was.
But yeah, it was really nice. You didn’t have to do the oath of allegiance one at a time. You did it in groups.
Jonathan: You know the old joke about, “I asked a Scotsman, ‘What’s worn under the kilt?’
And he said, ‘Nothing. It’s all in perfect working order.’”
Bonnie: Yeah. [laughs]
Jonathan: Yes. Anyway, yes, go on. Yeah.
Bonnie: You went up in a row, and everybody got their citizenship diploma or their citizenship paper.
Jonathan: Diploma. It did feel like a graduation at times.
Bonnie: It did, yeah.
I have to say that my folder for my diploma was a little more fancy than my citizenship one. [laughs]
Jonathan: What did they actually give you?
Bonnie: Just a piece of paper.
Jonathan: What does it say on it?
Bonnie: I don’t even know.
Jonathan: We should ask Be My AI or something.
Bonnie: Yeah, we should ask them, yeah.
And the mayor was there, the mayor of Wellington, and we had a chat with her after the ceremony.
Bonnie: Yeah, it was very nice.
Jonathan: She’s a dog fan. I knew she’d be a fan of Eclipse, the dog to eclipse all dogs, and so it turned out to be.
And you had a video message beamed in from the governor general, who was the king’s representative.
Bonnie: Governor general, yeah.
Jonathan: But a different governor general from the one that we met when I got my honour.
Bonnie: Yeah, it was a different one.
Jonathan: It must have been a slow year that year.
Bonnie: And a different monarch. [laughs]
Jonathan: Different monarch as well. Yes, that is true.
They had a traditional Maori welcome, and they had the kaumatua there.
And that was really pretty much it, wasn’t it?
Bonnie: Pretty much. And at the end, they sang the national anthem.
Jonathan: Yes, that’s right.
Bonnie: So it went pretty quick, which was good because sometimes at graduation ceremonies, you can sit there for hours. But this went really quickly.
Bonnie: Very quickly.
Jonathan: Just like a graduation ceremony, they say, “Hold your applause till the end”.
Bonnie: Which no one does.
Jonathan: That sort of thing. Yeah, yeah. [laughs]
But it was great to hear your name called up, and going across the stage and that sort of thing.
So we’re going to go through the passport process now.
Jonathan: There has been discussion in relation to this matter of yours. Many people sending in their congratulations on the Mastodon. Quite a lot of people, actually.
But we have a very interesting challenge for you.
Jonathan: From Damo McMorrow, you cheeky little kookaburra you, Damo.
He says now that you’re a Kiwi, we need to hear you at least trying to talk like one.
Bonnie: That’s not…
Jonathan: So can you talk with a New Zealand accent?
Bonnie: I can’t, no. I don’t even know what it…
Jonathan: You’re not even going to try?
Jonathan: Aww! That’s terrible. Damo’s going to be so disappointed.
Bonnie: I’m sorry.
Bonnie: I can sound like a posh British person.
Jonathan: No Kiwi accent? Oh well, there you go, Damo. She was not up for the challenge.
Bonnie: But I’m not sure what a New Zealander sounds like, really.
Jonathan: That’s a bit scary, given that you’re married to one.
Bonnie: I know. But I mean, I don’t really think about it.
Bonnie: Because I don’t actually hear the accent anymore.
Someone else asked, was I going to start talking like a Kiwi?
Jonathan: Because you moved here in, you know, well, I don’t want to say midlife because I’m sure you’ve got many hundreds of years ahead of you.
Bonnie: Hope so.
Jonathan: But you moved here after all that time in the US, so you probably wouldn’t really lose your accent at that point.
Bonnie: No, no.
Jonathan: Some people absorb accents better than others.
Like we had a prime minister, Jim Bolger, who’s our oldest surviving prime minister at the moment. He’s in his 90s.
And everybody used to comment on how when he went on overseas trips, that sort of thing, meeting with foreign leaders, he’d start talking like the local accent.
Bonnie: That’s hilarious!
Jonathan: It was quite interesting.
Bonnie: So if he went to like Turkey or somewhere, he’d have a Turkish accent?
Jonathan: Yeah. Some people just pick it up that way.
Jonathan: Now, what else is happening with the whole Eclipse situation and everything like that? Because by the time this podcast is published for non-paying subscribers, hopefully, you’ll have had a successful trip and you’ll be in the US by then.
Bonnie: Hopefully. Going well, I think.
Just a lot of paperwork finally getting done. I just emailed the last of it off this morning. Haven’t heard if they got it. They wanted it by noon, so hopefully they have.
Jonathan: And did they get it by noon?
Bonnie: They haven’t said whether they did or didn’t.
Jonathan: I mean, you sent it by noon?
Bonnie: I did send it by noon, yeah.
Bonnie: So a lot of duplicate paperwork because you send them one form, and there was, oh, well, we need this form. I already sent it to you.
So yeah. Hopefully, we have a vet coming Monday just to check her out before we leave.
Richard’s coming over Saturday to print out a bunch of forms because I want them in paper copy as well.
So, yeah. So hopefully, that’ll be done.
Jonathan: It has been such a gargantuan exercise.
Bonnie: It’s been horrible! It’s really been just crazy.
Jonathan: But you’re ready to go, you’ve got lots of things lined up for the US.
Bonnie: Oh, yeah, yeah. And just got to finish packing and stuff.
Jonathan: I mean, what’s going to become of me? That’s what I want to know.
Bonnie: It’ll be fine.
Jonathan: I’ll be bereft. Bereft!
Jonathan: I won’t know what to do with myself. It’ll be so quiet around here.
Bonnie: I know. You’ll just have to read a book, or write a book or something.
Jonathan: Oh, yeah. [laughs] You never know. I might come back and tell you that I’ve written another book.
Jonathan: It’s been a while since I’ve done that.
Bonnie: A novel, or something. Some mystery, or something.
Jonathan: Now, that does remind me. I would like to say that Robert J. Sawyer is one of my favorite authors.
And one of the great highlights for me of doing Freedom Scientific’s podcast, FSCast, was that when Robert J. Sawyer did the WWW trilogy, the three books called Wake, Watch, and Wonder where the protagonist was a blind character, he had a lot of information about JAWS in there.
It was just amazing!
And I thought, oh well, we’ll get him on for FSCast.
And it was like a major fanboy moment for me, because I’ve been reading Robert J. Sawyer’s books, actually since Paul Edwards introduced me to them. And they are very good.
To be honest, I didn’t enjoy the last couple as much. But I really like this new one.
And I’m also not a big audio book listener, but I did listen to this one because it’s the only way I could get it. And it’s pretty much a dramatization of his book, and it’s called The Downloaded.
And how do I tease this without giving too much information?
Basically, people get uploaded to this cloud-based quantum computer. And there are 2 different parts of the computer. Some of them are prisoners, and some of them are astronauts trying to get to another star.
And it all goes a bit wrong.
And I will stop at that point, but it’s a brilliant book.
So if you like Robert J. Sawyer, and if you like audio book drama type things, this is such a good listen. I can’t wait to find out what you think of it.
Bonnie: Cool! I’ll have to try it.
Jonathan: Listen on the plane.
You’ll be excited about the fact that by the time this goes out, episode one of the next season of For All Mankind will be out.
Bonnie: Yeah. We’ll have already seen it.
Bonnie: Yeah. It comes out tomorrow.
Jonathan: Yes, that’s right. We’re recording this on the Thursday evening.
So what do you think will happen in this one?
Bonnie: I have no idea.
Apparently, there’s a lot of new characters in it.
Jonathan: Yeah, because we’re moving on. I mean, we’re like 30-odd years on now since it started, right?
Bonnie: Yeah. A lot of the main characters have died. They’ve been killed off. So it’ll be interesting to see what happens next.
Jonathan: Yeah. It’s a great series. It’s been consistently good quality. So hopefully, they’ve been able to maintain that quality.
Bonnie: Keep that up, yeah.
Jonathan: Have you heard much feedback about All the Light We Cannot See?
Bonnie: Not really. Just the critics said it’s terrible acting, but that’s all I’ve really heard. [laughs]
Jonathan: I heard a review of it on the radio the other day, and the critic there wasn’t critical of the acting. But she did say that compared to the book, it felt a bit sort of condensed and had lost some of its magic.
Bonnie: I had heard that. But I’ve also heard that because of the type of novel it was, that it’s very difficult to put that in cinema.
Jonathan: Right. OK. Well, perhaps people can comment if they’ve seen this. I haven’t seen it yet.
Bonnie: I haven’t seen it. I haven’t read the book either, really.
What do you think of the new Beatles song?
Bonnie: I really enjoyed it. I especially enjoyed the video.
Jonathan: Yes. And Heidi’s amazing description.
Bonnie: Yeah, very good.
Richard hasn’t seen it yet.
Jonathan: No, he hasn’t. He’s got to find the right moment, apparently.
Bonnie: It’s on Disney Plus, apparently.
Jonathan: I think the documentary about the making of the song is. I’m not sure if the actual music video is.
Bonnie: I think it is, actually.
Bonnie: I think it is, yeah.
Jonathan: Well, I guess it makes sense that if one’s on, then the other’s also on. But you can just watch it on YouTube and that kind of thing as well.
But I’m not sure how popular Sir Peter Jackson is at the moment because he may have, shall we say, exceeded his brief, because he’s now going around telling people, well, there’s heaps of stuff lurking about on those Let It Be tapes that we could use as the basis for more new Beatles music.
Bonnie: Oh, great. [laughs]
Jonathan: I think Paul and Ringo were quite clear that this was the last Beatles song.
Bonnie: This is the end, yeah.
Jonathan: Maybe it isn’t.
Bonnie: Who knows?
Jonathan: I mean, maybe there’s some new stuff. Maybe we have a whole new Beatles album.
Jonathan: Gosh! Be still, my heart.
Bonnie: Like ABBA.
Jonathan: Well, we wish you a safe and uneventful journey to the United States.
Bonnie: And back.
Jonathan: And back. Yes, I will be counting the days until you’re back.
Jonathan: Yeah, and congratulations on your citizenry.
Bonnie: Thank you. Thank you so much.
Jonathan: Alright. Goodbye!
Man, it really is going to be exceptionally quiet around here, I tell you. So you’ll have to email me and get in touch with all your contributions to keep me company, or something like that.
In the meantime, thank you very much for all your contributions. It’s been fun.
Once again, I look forward to being back with you next week.
Remember that when you’re out there with your guide dog, you’ve harnessed success. And with your cane, you’re able.
Voiceover: If you’ve enjoyed this episode of Living Blindfully, please tell your friends and give us a 5 star review. That helps a lot.
If you’d like to submit a comment for possible inclusion in future episodes, be in touch via email. Write it down, or send an audio attachment, to opinion@LivingBlindfully.com. Or phone us. The number in the United States is 864-60-Mosen. That’s 864-606-6736.