Transcripts are made possible by Pneuma Solutions, a global leader in accessible cloud technologies. On the web at http://PneumaSolutions.com.


[music]

Jonathan Mosen: I’m Jonathan Mosen. This is Mosen At Large, the show that’s got the blind community talking. The world’s talking about ChatGPT and so are we. I’ll bring you a demonstration of the Be My Eyes virtual volunteer, powered by ChatGPT. Then ChatGPT pontificates about the pros and cons of being blind.

Automated Voice: Mosen At Large podcast.

[music]

No area code 222

Jonathan: Welcome to Episode 222 of the podcast. We like to look at the equivalent area code in the United States that matches the episode number. I’m here to tell you that once again, there is no area code 222 in the United States, just like there was no area code 221, but there is a difference. When I consult my little table of United States area codes, I see that 221 is unassigned but available for geographic assignment, but area code 222 is unassigned and available for non-geographic assignment. Maybe they will eventually get down to toll-free numbers going all the way down to 222 or something like that. Who knows?

I was thinking the other day, I remember as a child watching satellite TV from the United States, and they had word numbers: call 1-800 and then there’d be these word numbers. We didn’t have those in New Zealand. I was just totally mystified. I thought, they must have really big, complicated telephones in the United States if you can actually dial up a word; they must have these QWERTY keyboards on their phones. It was many years later that I found out how the system actually worked. I can’t welcome anybody listening from area code 222, but I can welcome you. Thank you, once again, for listening to the podcast.

Final reminder about a webinar with an important announcement about the future of this podcast

I do want to remind you, in case you are listening to this early after it’s published, before Saturday, the 8th of April at 4:00 PM US Eastern Time, 9:00 PM in the UK, that equates to Sunday, the 9th of April in New Zealand at 8:00 AM, that we do have a webinar with an announcement about the future of this podcast. If you can’t make it, we will publish highlights of that presentation in this podcast feed soon after the webinar finishes, but if you can make it, it would be great to have you there in person. There will be an opportunity for questions after my presentation of this announcement, and you do need to register in order to obtain the webinar link.

Now, when I start presenting, obviously, I can’t check registrations. I’d encourage you, if you want to attend, and I really hope that you do, that you head on over now to mosen.org/future, where you will find an article about the announcement and a link to register for the webinar. You do have to have an active Zoom account registered to the email address that you register with. That’s just a security thing because we all know about the Zoom bombing, don’t we? We don’t want that. mosen.org/future, very much looking forward to you being there. We’ve had an incredible response, and I really do appreciate that.

Automated Voice: Jonathan Mosen. Mosen At Large podcast.

Demonstration of the Be My Eyes Virtual Volunteer powered by ChatGPT

Jonathan: In Episode 221, I spoke with Mike Buckley and Hans Wiberg from Be My Eyes about, among other things, their new virtual volunteer feature. It’s in early testing, and it harnesses the abilities of the GPT-4 engine to answer questions about images. I want to thank Hans and Mike for giving me very early access to the virtual volunteer closed beta. I didn’t feel like I had sufficient time to put this through its paces so that I could accompany the interview that we did with the Be My Eyes team with a demo of this technology. I wanted to truly understand some of its pros and cons so that the demonstration I provided you was meaningful.

I decided to delay by a week, take my time, and put this demonstration together. What I will say from the outset is that it is easy to get carried away with the hype when announcements like this are made and words are bandied about like game changer and revolution and all those usual buzzwords, but in this case, they are absolutely justified. I cannot tell you what a difference this technology is going to make. If it was taken away from me now, I feel like I would be deprived of something important. Deprived of something that’s giving me access to visual information I didn’t have before.

Hopefully, accessibility is a journey where we’re going in the right direction. Sometimes it does feel like we’re taking two steps forward and then we take one back because of some sort of regression somewhere. If we take a look at what’s happened to access to visual information over the years, it has gotten a lot better for blind people. We’ve been using the cameras on smartphones really since the Nokia and Windows Mobile days, and increasingly, we’ve been able to make video calls to sighted friends or family members, and then services that provide visual assistance, be it by paid agents or by volunteers.

In parallel with that, we have seen significant improvements in terms of image recognition technology, like the tech that we’ve got from Seeing AI and Envision and several other very good apps in the space. They’re leveraging commercially available models that are pretty good, and they do provide some useful information. I find, for example, that if there’s an image that is not described on social media, I can send that image to one of these services. Sometimes if it’s a famous person, it will tell me who’s in the image, and it will give me a very broad overview of what’s going on in the picture, but you can’t interrogate the service after that.

What you hear is what you get. I won’t do this all the time, but in this demonstration, I will give you a couple of pretty stark contrasts between the technology that’s available today and what we will all have access to when the Be My Eyes virtual volunteer comes out, powered by ChatGPT. In making these contrasts, I’m in no way seeking to criticize or belittle those services. I am sure that there will be other models that will compete with ChatGPT, and that’s got to be a good thing. For now, I don’t think there will be anybody who hears this review who is in any doubt that this is indeed a game changer. I’m sorry to use an overused phrase, but there’s no other way to describe this.

Let me get on with it. Thanks to the permission I’ve obtained from Mike and Hans, I am running a beta test flight build of Be My Eyes, and that contains the virtual volunteer button. I am not going to spend too much time critiquing or commenting on the user interface because this is still in early beta and I don’t think that’s really in the spirit of why I’ve been given access to this. I’ve been given access to this to show you what this technology is capable of. I do have some views about the user interface and I’m passing those on and obviously, when this comes out of beta or goes into more public beta, I will make some more comments on that.

There are two ways that you can get the virtual volunteer to look at an image. One is by taking a picture with your camera and it is the back-facing camera right now. The other is to give it an image using the share sheet. I’ll do a little bit with the camera and I’m going to do quite a bit of feeding it images from the share sheet and I’ll explain why I think that’s such a significant feature as we go through this demo. To get started, I’m going to open Be My Eyes and I’ll ask Siri to do that. Open Be My Eyes.

VoiceOver: Be My Eyes. Get live video support.

Jonathan: I’m going to flick right.

VoiceOver: Call a volunteer. Button. Browse specialized help. Button. Virtual volunteer. Beta feature. Button.

Jonathan: That’s the virtual volunteer function. I’m going to double-tap. That’s the one we’re interested in.

VoiceOver: Cancel. Button.

Jonathan: Flick right.

VoiceOver: Provide a picture and question for the virtual volunteer. Add picture. Button.

Jonathan: The first thing we need to do is give it a picture to work with. I’m going to face the camera into the centre of the studio and double-tap.

VoiceOver: Take picture. Button.

Jonathan: Double-tap again.

VoiceOver: Take picture. Cancel. Button.

Jonathan: Now, the picture has been uploaded and I can ask questions about it. The uploading process is pretty quick but I have a fast connection. I’ll flick right.

VoiceOver: Provide a picture and question for the virtual volunteer. Image.

Jonathan: I’ll flick right again.

VoiceOver: Delete picture. Button.

Jonathan: You can delete the picture and start over if you need to.

VoiceOver: Add picture. Button.

Jonathan: Add another one.

VoiceOver: Write question here. Text field. Double-tap to edit.

Jonathan: You want to think of this as a conversation as if you were talking to a sighted person asking questions about a picture or about a scene. I’ll double-tap.

VoiceOver: Text field is editing.

Jonathan: Now you can enter your text in the standard ways, Braille screen input, the virtual keyboard, or a physical keyboard among other things. You can also dictate and I’m going to do that now. Describe this picture.

VoiceOver: Inserted describe this picture.

Jonathan: I’ll flick right.

VoiceOver: Clear text. Button. Ask question. Button.

Jonathan: Double tap the ask question button. Now, I am not going to edit any of the pauses so that you can see, at least for now, how long this takes. If you have a podcast app like Overcast or Castro or several others, they do compress silences. You might like to turn that feature off for this podcast episode if you truly want to understand how long it’s taking. The time things take is quite variable, and when we spoke with Hans and Mike, they did mention it can depend on a number of factors, including the quality of the image and how much detail is in the image, I suspect, as well. Let’s double-tap this button and just see what it saw when I put my camera out into the studio.

VoiceOver: New message. Describe, please wait, Virtual Volunteer is writing a message.

[pause 00:10:12]

VoiceOver: Virtual Volunteer, “The picture shows a room with a grey carpet. On the left side, there is a bookshelf with books and other items. On the right side, there is a shelf with various items and a door. In the centre of the room, there is a microphone stand with a microphone attached to it. There is also a folding chair on the right side of the room.”

Jonathan: That is a very accurate description, and that microphone is the second microphone that we have in the studio for guests, or when Bonnie’s here. Now we can interrogate this. If I go to the bottom of the screen–

VoiceOver: End chat. Button.

Jonathan: There is an end chat button, but if I flick left.

VoiceOver: Call a sighted volunteer. Button.

Jonathan: You can call a sighted volunteer if you need to.

VoiceOver: Reply. Button.

Jonathan: Then there’s a reply button. Now that we’re in conversation mode, what you find is that you can review your conversation much like you were going back and forth through a series of messages in the Apple Messages App, WhatsApp, or similar on your smartphone. I’ll double-tap the reply button.

VoiceOver: Cancel. Button.

Jonathan: Now I need to find the edit field.

VoiceOver: Provide a follow-up question for the virtual volunteer. You can also add more pictures if needed. Add picture. Button. Write question here. Text field. Double-tap to edit.

Jonathan: I’ll double-tap.

VoiceOver: Insertion point at end.

Jonathan: Let’s see what it can tell me about the microphone. Tell me about the microphone in the picture.

VoiceOver: Inserted tell me about the microphone in the picture. Clear text. Ask question. Button. Please wait, Virtual Volunteer is writing a message.

[pause 00:11:57]

VoiceOver: Virtual volunteer, “The microphone in the picture is attached to a black microphone stand. It appears to be a condenser microphone with a silver body and a black windscreen. The microphone is positioned in the centre of the room.”

Jonathan: It wasn’t sure about whether it’s a condenser microphone, so it said it appears to be one, and it is actually not. It’s a dynamic microphone. Otherwise, it did describe it correctly, but it didn’t give the manufacturer. Let’s see if we can get that.

VoiceOver: Reply. Button. Cancel. Button. Ask question. Dimmed. Button. Clear text. Write question here. Text field. Insertion point at end.

Jonathan: What is the manufacturer and model of the microphone?

VoiceOver: Inserted what is the manufacturer and model of the microphone? Clear text. Ask question. Button. Please wait, Virtual Volunteer is writing a message.

[pause 00:12:59]

VoiceOver: Virtual Volunteer, “I am not able to determine the manufacturer and model of the microphone from the picture. I suggest pressing the call a sighted volunteer button for further guidance.”

Jonathan: Visually, I doubt that you could tell either what the manufacturer of the microphone is unless you were familiar with microphones because this is a picture that has a lot of different things in it. It is not a closeup of the microphone. If you wanted to find out more details about the microphone, you could take a picture of it. It was a very rich description that we got of the room, and it talked about on the right there was a shelf with various items on it. I’m curious to find out if it can tell me any more about that, so I’ll reply.

VoiceOver: Reply. Button. Cancel button. Provide a follow-up. Add a picture. Write question here. Clear text. Write question here. Text field. Insertion point at end.

Jonathan: What are the items on the right-hand shelf?

VoiceOver: Clear text. Ask question. Button. Virtual volunteer, I am not able to determine the manufacturer and model of the microphone from the picture. I suggest pressing the call a sighted volunteer button for further guidance.

Jonathan: Virtual volunteer is still thinking. I accidentally double-tapped a response, and it seems to be repeating it.

VoiceOver: Virtual volunteer, the items on the right-hand shelf include a red pot, a black laptop, a red folder, a stuffed animal, a black shirt, and several storage bins. There are also a few books and a picture frame on the shelf.

Jonathan: I have absolutely no idea what the stuffed animal is there for. I think Bonnie might have put that there. [chuckles] Let’s see if we can find out more about that stuffed animal.

VoiceOver: Reply. Button. Cancel. Provide a follow-up. Add picture. Write question here. Text field. Insertion point at end.

Jonathan: Tell me about the stuffed animal in the picture.

VoiceOver: Clear text. Ask question. Button.

VoiceOver: Virtual volunteer, I am not able to determine the manufacturer and model of the microphone from the picture. I suggest pressing the call a sighted volunteer button for further guidance.

Jonathan: We’ll just ignore that, and it will come back with a response in a moment.

VoiceOver: Virtual volunteer, the stuffed animal in the picture is a green frog with a white belly. It is sitting on the second shelf from the top on the right side of the room.

Jonathan: Not only do I know now what the stuffed animal is, I know exactly where to find it. If you haven’t been wowed until now, that is a pretty impressive thing. We could continue to interrogate this picture. We could find out about the left shelf and other things, but for now, I’m going to go to the bottom of the screen.

VoiceOver: End chat. Button.

Jonathan: Ending the chat.

VoiceOver: Alert. Are you sure you want to end this? No, button. Yes, button.

Jonathan: I’ll double-tap.

VoiceOver: Tab bar. Learn. Tab three of four.

Jonathan: Now we’re back in the main Be My Eyes screen. I’d like to take a detour at this point and hold my camera in about the same position and take a picture with the scene description preview feature in Seeing AI, and you’ll be able to hear the contrast. I will just point the camera into the centre of the room as I did before.

VoiceOver: Quick help. Take picture. Button.

Jonathan: Double tap the take picture button.

VoiceOver: Processing. Cancel. Button. Probably a chair in front of a door.

Jonathan: That’s what Seeing AI told me when I took the same picture in pretty much the same place. I can’t be absolutely certain it was 100% the same place, but I’m very confident the camera had the same view. Next, let’s turn our attention to a task that is very common, and that is identifying food and packages, that sort of thing. I find that technologies like Seeing AI and Envision do a pretty good job of this. What value, if any, could the Be My Eyes Virtual Volunteer add, particularly given that, at the moment, and I’m sure this will improve, ChatGPT does take some time to process things?

You want a value add to make that lag worthwhile to you. Right now, I have one of these fresh meals that we get. It’s not frozen and it’s all Keto, and all very good, and it’s great when we are busy during the week, but I don’t know what it is. I’m back in the Be My Eyes Virtual Volunteer screen, and I’m going to try and take a picture of this.

VoiceOver: Add picture. Button. Take picture. Button. Take picture. Cancel. Button.

Jonathan: Now, let’s see if we can find out what it is.

VoiceOver: Provide a picture and question for the virtual volunteer. Image. Delete picture. Button. Add picture. Button. Write question here. Text field. Insertion point at end.

Jonathan: What meal is this?

VoiceOver: Clear text. Ask question. Button. Your message. Please wait, Virtual Volunteer is writing a message.

[pause 00:17:53]

VoiceOver: Virtual Volunteer, “This meal is a roast beef with pumpkin and broccoli from Fit Food. It is a large size and the best-before date is 4/4/23. It should be kept refrigerated.”

Jonathan: What I like about this is it’s not just reading me what’s written there. It’s actually processing it in a way that makes sense to understand. Let’s see what else we can find out about this.

VoiceOver: Reply. Button. Cancel. Button. Provide a question for the virtual volunteer. Add picture. Button. Write question here. Text field. Insertion point at end.

Jonathan: Check the Fit Food website for ingredients for this meal.

VoiceOver: Inserted check the food website for ingredients for this meal. Clear text. Ask question. Button. Please wait, virtual volunteer is writing a message.

Jonathan: Dictation heard me wrong there, but hopefully the information is sufficient for ChatGPT to know what it is that I’m after.

VoiceOver: Virtual volunteer, I have checked the Fit Food website and the ingredients for the roast beef with pumpkin and broccoli meal are roast beef, pumpkin, broccoli, onion, garlic, thyme, rosemary, salt, pepper, olive oil, and beef stock.

Jonathan: That is something that other technologies cannot do right now. Once it’s got the picture, and it knows what we are talking about, it can go out and obtain further information. Let’s see what else we can do.

VoiceOver: Reply. Button. Cancel. Button. Add picture. Write question here. Text field. Insertion point at end.

Jonathan: Provide full nutritional information for those ingredients.

VoiceOver: Ask question. Button. Please wait, virtual volunteer is writing a message.

Jonathan: While we wait for the word to come down, I should say that Fit Food is a really good website, and you can subscribe to all kinds of delicious meals that they’ll deliver fresh to your door.

VoiceOver: Sorry. I am not able to provide full nutritional information for those ingredients as the Fit Food website does not provide this information. Would you like me to press the call a sighted volunteer button for guidance?

Jonathan: I think that somewhere on the website it may provide that information in fact, but that’s okay. Let’s keep flicking around here.

VoiceOver: Reply. Button. Cancel. Button. Write question here. Insertion point at end.

Jonathan: Can you provide me with contact information for the company?

VoiceOver: Clear. Ask question. Button. Please wait. Virtual Volunteer is writing a message.

[pause 00:20:45]

VoiceOver: Virtual Volunteer, “Sure. The contact information for Fit Food is email info@fitfood.nz, phone, +649-889-9300. Address, 47 Station Road, Penrose, Auckland, 1061, New Zealand.”

Jonathan: I don’t have confidence in that contact information other than the email address, which I know to be true, but the phone number is not the one listed in the main contact section of the website. The address for the kitchen is in the South Island, not too far from Christchurch. That is a long way from Auckland. It could be that the company office is in Auckland I guess, but it’s not the contact information that I have normally seen. Sometimes as we’ve heard in the news and on social media ChatGPT can get it wrong.

Nevertheless, a lot more information there from ChatGPT because of its ability to take a picture and then have a contextual conversation about that picture. As the cliché goes, this really comes down to having many tools in our toolbox. If I was just wanting to know what’s for dinner tonight and I took that meal out of the fridge, I would get Seeing AI or Envision or Voice Dream Scanner or one of those apps to read it, because I know what I’ve bought and I just quickly want to identify, am I having fish or am I having beef or pork or whatever it is. So for that, it’s quicker to use one of the older technologies.

In episode 221, Hans mentioned that he had given Virtual Volunteer a picture of one of his remote controls and that it did a good job of describing it. This would be useful in several situations. The most obvious one is when you are in an unfamiliar place, say a hotel or someone else’s house and you need to use the remote and you don’t know what the buttons do. To be honest, with so many remote controls around these days, if you don’t use one regularly enough, you can come back to it after a few months and you think, “Oh man, I really can’t remember the order of these buttons, what they all do.” I exited my Be My Eyes Virtual Volunteer chat pertaining to the meal. I have put the meal back in the fridge, by the way. [chuckles] The marvels of editing. Now I have this remote control and we’re going to go here and take a picture.

VoiceOver: Add picture. Button. Take picture. Button. Take picture. Cancel. Button.

Jonathan: First of all, let’s see if it can tell us what the remote control is.

VoiceOver: Provide a picture and question for the Virtual Volunteer. Image. Delete picture. Add picture. Write question here. Text field. Insertion point at end.

Jonathan: What is this remote control?

VoiceOver: Ask question. Please wait. Virtual Volunteer is writing a message.

Jonathan: If you’re interested in a review of this product, by the way, you can go back to episode 43 of the Mosen At Large Podcast for a comprehensive demo of the Samsung TV.

VoiceOver: I am unable to answer your question accurately as the picture provided is not clear. Could you please provide a new picture from a different angle or press the call a sighted volunteer button for guidance?

Jonathan: Fair enough. Let’s have another go.

VoiceOver: Reply. Button. Cancel. Button. Provide a follow-up question for the Virtual Volunteer. You can also add more pictures if needed. Add picture. Button. Take picture. Button.

Jonathan: I will have another go at lining this up better.

VoiceOver: Take picture. Ask question. Cancel. Button.

Jonathan: Now let’s have another go.

VoiceOver: Provide image. Delete. Add picture. Write question here. Text field. Insertion point at end.

Jonathan: What is this remote control?

VoiceOver: Clear text. Ask question. Button. Please wait. Virtual Volunteer is writing a message.

[pause 00:24:45]

VoiceOver: Virtual Volunteer, “This appears to be a Samsung Smart TV remote control. It has buttons for power, numbers, volume, channel, and other functions such as Netflix and internet browsing.”

Jonathan: Very good. Let’s talk about this some more.

VoiceOver: Reply. Button. Cancel. Add picture. Write question here. Text field. Insertion point at end.

Jonathan: Describe in detail the controls left to right, top to bottom, and the functions they perform.

VoiceOver: Clear. Ask question. Button. Please wait. Virtual Volunteer is writing a message.

[pause 00:25:48]

VoiceOver: Virtual Volunteer, “Sure. Here are the details of the controls. Top left power button used to turn a TV on and off. Top right, numbers one, two, and three, used to enter channel numbers or other numerical inputs. Middle left, circular button with a dot in the centre, used for navigating menus and selecting options. Middle right, button with a house icon, used to return to the TV’s home screen. Bottom left, button with a plus sign used to increase the volume. Bottom centre button with a minus sign used to decrease the volume. Bottom right buttons with Netflix and internet browser icons, used to access those specific functions on a TV.”

Jonathan: Now, unfortunately, that is not exactly the remote control that I have. Those are not how the controls are laid out. For example, on this device, we’ve got a button for a voice assistant, which could be the Amazon one or it could be Bixby. There was no mention of that. There is also no dot on the little circular panel that we have here for left, right, up and down, and I don’t think it really described that. It’s detected that it’s a Samsung remote control, but it is not describing the correct remote control.

I have had success with this in other tests, though. Upstairs, we have two Mitsubishi heat pumps with different remote controls, and it did describe those correctly. I suppose the message here is it’s going to have a go. It might not always be right. If what’s being described doesn’t sound like what you have, that’s the time to seek sighted human assistance. I’m going to be persistent here and I’m going to try and take another picture just to see if I can do a better job.

VoiceOver: Reply. Button. Cancel. Provide a follow-up. Add picture. Button. Take picture button.

Jonathan: I’m going to take a picture at a different angle and just see if we can get a better outcome.

VoiceOver: Take picture. Cancel button.

Jonathan: Let’s try again.

VoiceOver: Ask clear text. Write question here. Text field. Insertion point at end.

Jonathan: Describe the functions on this remote control left to right, top to bottom and give detailed explanations of their functions.

VoiceOver: Clear. Ask question. Button. Please wait. Virtual Volunteer is writing a message.

[pause 00:28:35]

VoiceOver: Virtual Volunteer, “Sure. Here are the details of the controls. Top left, power button, used to turn the TV on and off. Top centre, numbers one, two, and three, used to enter channel numbers or other numerical inputs. Top right, button with a magnifying glass icon, used to search for content on the TV. Middle left, button with a curved arrow icon, used to go back to the previous screen or menu. Middle centre, circular button with a dot in the centre, used for navigating menus and selecting options. Middle right, button with a house icon, used to return to the TV’s home screen. Bottom left, button with a plus sign, used to increase the volume. Bottom centre, button with a minus sign, used to decrease the volume. Bottom right, buttons with Netflix, Amazon Prime Video, and Internet browser icons, used to access those specific functions on the TV.”

Jonathan: That’s a different remote control from the first one, but it’s not quite the remote control that I have. It is quite similar, though, but there’s no dot on any of the buttons on this remote control, and there was no mention there of the Bixby button or the Amazon button, so I know that this is not the remote control that I have. What also seems to be throwing it off is that it’s talking about one, two, and three buttons, plural. In fact, there is one button on this remote control called 1, 2, 3.

When you press that button, an on-screen channel navigator pops up on the screen. There is actually not a keypad on this remote, unfortunately, where you can simply enter the channels in by digits. I suspect that if I gave it the model number of the TV that I have, it would nudge it in the right direction. I can try that.

VoiceOver: Reply. Button. Cancel, provide add picture. Write question here. Text field insertion point at end.

Jonathan: This television is the Samsung TU8500. Describe its remote control.

VoiceOver: Clear text. Ask question. Button. Please wait, Virtual Volunteer is writing a message.

[pause 00:31:23]

VoiceOver: Virtual Volunteer, “The remote control for the Samsung TU8500 is a sleek black remote with a variety of buttons for different functions. It has the power button, numerical buttons for entering channel numbers, a search button, a back button, the home button, volume buttons, and buttons for accessing specific functions such as Netflix and Amazon Prime Video. It is designed to allow the user to easily navigate and control their TV.”

Jonathan: It’s giving the same description there. It’s talking about numerical buttons; it just does not have those. It’s also not talking about some of the features that it does have. For example, I haven’t discussed yet that it has an ambient button, which hasn’t come up in this description at all. Not much use to those of us who are totally blind, but it does add some visual excitement to the screen for those who have some sight. On this one, it is not doing so well.

Sometimes you are going to get results like this, and ChatGPT does get things wrong. This is no fault of Be My Eyes, who are leveraging this technology, and I’m sure the AI will continue to improve over time.

Sometimes ChatGPT is just not going to deliver the goods. We know this, and that’s one of the reasons why Be My Eyes are being so cautious about this testing phase. I’m going to spend the rest of this review looking at images that have come from either the camera roll or online sources. The first way that Be My Eyes Virtual Volunteer has changed my usage patterns is that I have now added an item to my iOS rotor.

I try and keep my rotor pretty minimalist so I only have to navigate through things that I’m actually using regularly. What I am doing regularly now is inquiring about images. Images on the web, images in social media, because sometimes, even when alt text has been added, Be My Eyes Virtual Volunteer is giving a much more detailed description of the image than the alt text does. Let’s start with something that we’ve been a bit excluded from, particularly those of us who are totally blind, and that is holiday photos.

The obligatory holiday photos that you are expected to take when you’re away. Last year Bonnie, my daughter Nicola and I went to Europe. We spent quite a bit of time in the UK during the week of the Queen’s death and we took a lot of photos. The Be My Eyes Virtual Volunteer has undoubtedly brought these to life for me in a way that no other technology has been able to do. I’m in my photos app now and I’m scrolling through the photos and at the time they were taken I made sure that I added a caption to them so I knew what I was dealing with. This helps.

VoiceOver: Live photo, The London Eye, date, the 13th of September.

Jonathan: This is the London Eye. We’ll keep scrolling.

VoiceOver: Live photo. Big Ben and Westminster, date, the 13th of September. Live photo, London from the London Eye featuring Buckingham Palace.

Jonathan: There’s going to be a lot of detail in that photo. Let’s see how it handles this. I will triple-tap.

VoiceOver: Preview. Copy button, Share button.

Jonathan: Choose share.

VoiceOver: One photo is selected.

Jonathan: At the bottom of this list we have.

VoiceOver: Describe with Be My Eyes button.

Jonathan: I double tap.

VoiceOver: Cancel. Button. Please wait. Virtual Volunteer is writing a message.

[pause 00:35:40]

VoiceOver: Virtual volunteer, “The picture is an aerial view of a city with many buildings and trees. The buildings are mostly white and grey with some having green roofs. There is a river running through the middle of the picture with a bridge crossing over it. In the background, there are taller buildings and a hazy skyline.”

Jonathan: Let’s educate it.

VoiceOver: Reply button. Cancel button. Reply. Write question here. Text Field. Insertion point at end.

Jonathan: This photo was taken in London, England. Describe the buildings and the river.

VoiceOver: Meant click text. Ask Question. Button. Please wait, Virtual Volunteer is writing a message.

[pause 00:36:33]

VoiceOver: Virtual Volunteer, “The buildings in the photo are mostly older classical-style buildings with ornate details and columns. They are mostly white and grey with some having green roofs. The river is the River Thames which runs through the middle of the picture. There is a bridge crossing over the river and the river is lined with trees on both sides.” Reply button. Ask, clear text. Write question here. Text insertion point at end.

Jonathan: What are the names of some of the buildings?

VoiceOver: Ask question button. Please wait, Virtual Volunteer is writing a message.

[pause 00:37:19]

VoiceOver: Virtual Volunteer, “Some of the buildings in the photo include the Foreign and Commonwealth Office, the Treasury Building and the Horse Guards Parade. There is also the Admiralty Arch and Buckingham Palace in the background.” Reply button. [unintelligible [00:37:45] reply heading. provide, write question here. Text Field Insertion point.

Jonathan: What’s the name of the Bridge over the River Thames?

VoiceOver: Click ask question button. Please wait. Virtual volunteer is writing a message.

[pause 00:38:01]

VoiceOver: Virtual Volunteer, “The bridge over the River Thames in the photo is the Westminster Bridge.” Reply button. Cancel, Reply. Provide a follow-up. Write question here. Text Field insertion point.

Jonathan: When was it built?

VoiceOver: Click Ask Question button. Please wait. Virtual Volunteer is writing a message.

[pause 00:38:32]

VoiceOver: Virtual Volunteer, “The Westminster Bridge was built in 1862. It was designed by the architect Charles Barry and is a grade-two listed structure.”

Jonathan: One of the many ways in which ChatGPT excels is that it’s very contextually aware so it knows that the last thing I talked about was the bridge, the Westminster Bridge. When I talked about when was it built it knew that I was referring to the Westminster Bridge.

VoiceOver: Reply button. Cancel, provide a follow-up. Write question here. Text Field. Insertion point.

Jonathan: What about Buckingham Palace?

VoiceOver: Ask question button. Please wait, Virtual Volunteer is writing a message.

[pause 00:39:21]

VoiceOver: Virtual Volunteer, “Buckingham Palace was originally built in 1703 as Buckingham House. It was later expanded and became the official residence of the British monarchy in 1837. It is a Grade I listed building.” Reply button. Cancel. Reply. Heading, provide a follow-up question to the Virtual Volunteer. Write question here. Text Field. Insertion point at end.

Jonathan: What was the weather like on the day this picture was taken?

VoiceOver: Click ask question button. Please wait, Virtual Volunteer is writing a message.

[pause 00:40:04]

VoiceOver: Virtual Volunteer, “Sorry, I am not able to answer that question as I do not have information about the weather on the day the picture was taken.”

Jonathan: All right, I thought it might have offered some comments based on whether it looked gloomy or something like that, but I wonder if it will provide the weather if I give it the information it doesn’t have.

VoiceOver: Reply button. Provide a follow-up question for the Virtual Volunteer. Write question here. Text insertion point.

Jonathan: This picture was taken on 13 September 2022. What was the weather like in London, England on that day?

VoiceOver: Click ask question button. Please wait, Virtual Volunteer is writing a message.

[pause 00:40:57]

VoiceOver: Virtual Volunteer, “Sorry, I am not able to answer that question as I do not have information about the weather in London, England on the 13th of September, 2022. I am only able to describe the picture that I have received.”

Jonathan: Okay, you’ve got to push the boundaries. You see, you got to push the boundaries. Next, let’s go to the Tower of London. Bonnie was particularly fascinated with the ravens there, so we took a picture.

VoiceOver: Live photo, Raven at the Tower of London. Date, the 14th of September. Share button. One photo selected, describe with Be My Eyes button. Cancel button. Please wait, Virtual Volunteer is writing a message.

Jonathan: You’ll note that when you share an image to it this way, it immediately comes up with a description, which I really do like. You don’t have to ask a question about it.

VoiceOver: Virtual Volunteer, “The picture is of a cobblestone street with a black bird in the foreground. In the background, there are two people walking on the sidewalk and a brick building with stairs and a blue lamp post. The bird appears to be a crow or a raven and is standing on the cobblestones. The street is lined with a stone curb and there is moss growing between the cobblestones.” Reply button. Cancel reply. Provide a follow-up question for the Virtual Volunteer. Write question here. Text-field insertion point at end.

Jonathan: The bird is a raven. Describe the bird.

VoiceOver: Ask question button. Please wait, Virtual Volunteer is writing a message.

[pause 00:42:43]

VoiceOver: Virtual Volunteer, “The bird is a raven, which is a large black bird with a curved beak and glossy feathers. It is standing on the cobblestones with its head tilted to the side as if it is looking at something. Its wings are folded at its sides and its feet are visible. The bird appears to be healthy and well-fed.”

Jonathan: That is just such a mind-blowing description. It really is quite remarkable.

VoiceOver: Reply button. Cancel reply. Provide a follow-up. Write question here. Text-field insertion point at end.

Jonathan: Can you tell me any more about the brick building?

VoiceOver: Click ask question button. Please wait, Virtual Volunteer is writing a message.

Jonathan: Maybe I can stand up while Virtual Volunteer is writing its message and achieve my stand goal, which you may have just heard there.

VoiceOver: Virtual Volunteer, “The brick building in the background appears to be an older structure with a red brick facade and a black wrought iron railing on the stairs. There are potted plants on the steps and a blue lamppost next to the building. The building has a large window and a door visible in the picture. It is likely a residential or commercial building.”

Jonathan: Let’s just do one more of these and we’ll go to Paris for it.

VoiceOver: Live photo, view from Eiffel Tower. Date the 16th of September, 2022. More content available. Preview, copy, share button. Describe with Be My Eyes button. Cancel button. Please wait, Virtual Volunteer is writing a message.

[pause 00:44:24]

VoiceOver: Virtual Volunteer, “The picture is an aerial view of a city with a river running through it. There are many buildings and roads visible as well as a few parks and green spaces. The sky is partly cloudy. In the distance, there are several tall skyscrapers. On the river, there are several boats and a bridge. Near the bottom of the picture, there is a carousel and a few people walking on the streets.” Reply button. Cancel. Reply. Provide a follow-up question for the Virtual Volunteer. Write question here. Text-field insertion point at end.

Jonathan: This photo was taken at the Eiffel Tower, Paris, France. Tell me about the buildings I am seeing.

VoiceOver: Clear ask question button. Please wait, Virtual Volunteer is writing a message.

[pause 00:45:28]

VoiceOver: Virtual Volunteer, “The buildings in the picture are mostly low rise with a few taller skyscrapers in the distance. The buildings near the river appear to be older and more uniform in height, while the buildings further away are more modern. There are also several government buildings and monuments visible, including the Palais de Chaillot and the Trocadéro Gardens.” Reply button. Cancel. Reply. Heading, provide a follow-up. Write question here. Text-field insertion point.

Jonathan: What are the names of some of the other buildings?

VoiceOver: Clear ask question button. Please wait, Virtual Volunteer is writing a message.

[pause 00:46:16]

VoiceOver: Virtual Volunteer, “Some of the other buildings visible in the picture include the Musée de l’Homme, the Musée National de la Marine, and the Musée du Quai Branly. There is also the Pont d’Iena bridge and the Place du Trocadéro.”

Jonathan: My apologies to anybody listening in France or anyone who’s a French speaker for that matter, for the mangling there, but you get the idea of what this is capable of. I want to look at another way in which I believe that the Be My Eyes Virtual Volunteer is going to change our lives and this is in the area of shopping. So often I want to purchase something online and I want to better understand what something looks like. In that situation, I might contact a sighted person and give them a URL and say, “Take a look at this and describe this to me.”

Well, now you have the Be My Eyes Virtual Volunteer and that may be sufficient. What I’ve done is enabled the image option on my rotor. Recently I’ve been doing some Sonos purchasing, as you know if you’ve been listening to this podcast, and I purchased a Sonos Era 300 for the master bedroom and two Sonos Era 300s as rear surrounds for the living room. That meant that for the living room, I either needed to get wall mounts or floor stands, and I really didn’t have a picture in my head of what the floor stands were like.

When I went to the Amazon website to have a look at this, Apple described them as lamps. I thought that was interesting, but I wanted the Be My Eyes Virtual Volunteer to tell me about them, so I did a triple tap on the image. The share sheet came up and I’m now on Describe with Be My Eyes. Let’s see what we get when we ask about these floor stands that I’m very interested in buying.

VoiceOver: Please wait, Virtual Volunteer is writing a message.

[pause 00:48:14]

VoiceOver: Virtual Volunteer, “The image is of a pair of Sonos height adjustable speaker stands for Sonos Era 300 trademark in black colour. The speaker stands are tall and slim with a round base and a round platform on top to hold the speakers. The stands are adjustable in height and have a sleek and modern design.”

Jonathan: I wonder if we can find out how tall they can go.

VoiceOver: Reply button. Provide a follow-up question. Write question here. Text-field insertion point.

Jonathan: What is their maximum height?

VoiceOver: Clear ask question button. Please wait, Virtual Volunteer is writing a message.

[pause 00:48:58]

VoiceOver: Virtual Volunteer, “Sorry, I am not able to answer that question as the maximum height is not mentioned in the image or the title.” Reply button. Cancel reply. Provide a follow-up. Write question here. Text-field insertion point end.

Jonathan: Search the Sonos website for this information.

VoiceOver: Clear ask question button. Please wait, Virtual Volunteer is writing a message.

[pause 00:49:29]

VoiceOver: Virtual Volunteer, “Sorry, I am not able to do that as I am only programmed to describe images and do not have access to the internet.”

Jonathan: Well, that’s interesting because sometimes it does, so I’m not sure what the variable is there, but certainly, I got a much better understanding of what these stands are like from that description than anything I’ve read on any website. I could go on for some time demonstrating ways that you can use this, but I want to finish with one grand finale. When I stumbled across this, I was temporarily awestruck and silenced. What I want to do is show you this first with the technology we have access to right now. I’ll use Seeing AI for that and then we’ll see what Be My Eyes can do with this, with its Virtual Volunteer feature, powered by ChatGPT.

For the demonstration, I’m in my favourite Reddit client for iOS, which is still in beta after all these years. It’s called Dystopia. It’s available on TestFlight and it’s an excellent app. It’s designed specifically for blind users. When I came across what Be My Eyes could do with this picture, I saved it in my saved items on Reddit. I’m going to get there.

VoiceOver: Saved. Era 300 positioning fits into the Dolby 7.1 speaker layout.

Jonathan: Now this is all about the Sonos Era 300 and where you should position the speakers. There’s a lot of talk on the Sonos subreddit right now about the Era 300s because at the time that I’m recording this, they’ve just been released. First, let’s take a look at what Seeing AI makes of this diagram. I’ll double-tap.

VoiceOver: Image loaded. Close image viewer button.

Jonathan: I’ll flick right.

VoiceOver: Image static.

Jonathan: That’s the image in question. I’m going to flick down.

VoiceOver: Share.

Jonathan: Double tap.

VoiceOver: Close image viewer button.

Jonathan: First of all, we will.

VoiceOver: Recognize with Seeing AI.

Jonathan: Now double tap.

VoiceOver: Cancel, processing. Text, Dolby recommended positioning 22 degrees, zero degrees, 22 degrees, 300 Era 300 facing forward, 0.2700 degrees, 30 degrees, 30 degrees, 90 degrees, 90 degrees, 22 degrees, 30 degrees 90 degrees, 90 degrees, 90 degrees, 110 degrees, 110 degrees, 1,700, 110 degrees. Era 300 facing couch 0.270 300 0 degrees, 22 degrees, —, 430 degrees, 90 degrees, 110 degrees, 110 degrees, 135, 135 degrees, —, 150 degrees, 150 degrees, 1000, 300, 5100 and 35 degrees, —, 150 degrees, 150 degrees, 135 degrees, 135 degrees, 150 degrees, 150 degrees.

Jonathan: It’s reading a lot, but I’m not able to make a lot of sense of it. If I flick right.

VoiceOver: Scene, probably a diagram of a system.

Jonathan: In the interest of time, I will do some quick editing. We’ll get to Be My Eyes.

VoiceOver: Describe with Be My Eyes. Button, describe with Be– cancel button. Please wait. Virtual Volunteer is writing a message.

Jonathan: I’m asking the Virtual Volunteer to look at that same picture that we just saw.

VoiceOver: Virtual Volunteer, “The picture is a diagram of three different speaker setups. The first setup is labeled Dolby, recommended positioning, and shows a large speaker in the centre with smaller speakers at 22, 30, 90, 110, 135, and 150-degree angles. The second setup is labeled Era 300 facing forward, and shows the same speaker setup but with an additional speaker at the 90-degree angle.

The third setup is labelled Era 300-facing couch, and shows the same speaker setup as the second, but with two additional speakers at the 135 and 150-degree angles. All setups have dotted lines connecting the speakers to show the recommended positioning.”

Jonathan: Seriously, that is just so well done that it takes your breath away. That is a great description. It’s a description that you can understand of what this diagram is for, and that’s a preliminary look at a very early beta of the Be My Eyes, Virtual Volunteer. Once again, thanks to the Be My Eyes team for facilitating this. There may be some changes to the user interface by the time this goes out to wider beta.

If what you see when you get access to this is not quite the same, that’s what a beta process is for. No doubt if you’re on social media, you will have seen many examples of ChatGPT getting facts wrong, and we’ve seen this in this demonstration. We’ve also seen though that it is a very significant development. These AI models are under constant development, and I’d like to hope that they will get more accurate over time.

I’ll reiterate what I said at the beginning of this demonstration. I would feel a real sense of loss if this was taken away from me now, and I want to congratulate the Be My Eyes team for the very sensible way that they’re approaching this. There is huge interest in this technology and it would’ve been easy for them to roll this out as is without any contemplation of the implications of doing so.

They have been incredibly socially responsible in doing this cautiously and they deserve congratulations for that. I think we do need to set expectations around this. I like to hope that the technology will improve over time from ChatGPT, but there are many examples where this is just adding significant value to our lives. I’d like to hope it gets a little bit faster as well, and I look forward to seeing where this goes.

If you are not on the waitlist at this point for the Be My Eyes Virtual Volunteer feature, all you have to do is download the app, register an account with Be My Eyes, and why not anyway, because it’s a useful tool. Then you can go in and register for Virtual Volunteer and hopefully wait for your number to come up. It’s important to me that Mosen At Large is fully accessible and that’s why every episode is transcribed.

Accessibility is in the very DNA of Pneuma Solutions and it’s thanks to their sponsorship that transcriptions are possible. Since you’re a Mosen At Larger, chances are good that accessibility is important to you too, and you’ll want to lead by example. If you want to ensure that all your slideshow presentations are fully accessible to everyone, you’ll want to check out Scribe For Meetings. You can upload your presentation in advance to the Scribe For Meetings server.

If you’re a last-minute kind of person, it can even be up to five minutes before the presentation is due to be used. Scribe For Meetings works its magic and turns your presentation into fully accessible HTML. All you have to do is paste the URL that Scribe For Meetings provides for you into the chat, and assistive technology users can choose that link when you run your presentation. It’ll stay in sync all the way through. I’ve used this and I’ve also been a beneficiary of it, following along on my Braille display when somebody’s running a presentation. It’s absolute magic. Check out Scribe For Meetings by heading over to pneumasolutions.com. That’s P-N-E-U-M-A solutions.com.

Automated Voice: Jonathan Mosen, Mosen At Large podcast.

Sonos surround sound adventures with the Era 300

Jonathan: I want to go back to my Sonos adventures and update you on those, because in episode 221 I mentioned that I had the Sonos Era 300 set up for the master bedroom, but that I had two in a box ready to set up as rear surrounds in our living room with our Sonos Arc and Sub. We’ve been using Sonos Play:1s, which are quite old now, as rear surrounds, and they did the job quite well, but of course, you do get extra channels of Atmos with the Sonos Era 300s as rear surrounds.

Over the weekend we did this and I intended to buy wall mounts because that’s what we had for the play ones and it seems to work well but actually I bought the wrong thing and I got stands, the floor stands. They’re quite ergonomic though. If you run into one they ping back, it’s a very cool ergonomic design and you can adjust the height. Before we set them up, I did think if this isn’t going to work out for me, I’ll return them and get the wall mounts that I originally intended to do. I got my son, Richard on the case with me. Richard is an audio engineer. He actually graduated as an audio engineer, a chip off the old block is Richard. Very proud of all of them. Very proud of the career that Richard has chosen to pursue.

We geek out on audio things and I’m sure that if it hasn’t happened already, it won’t be too long before he knows a lot more about audio than I ever will. That’s the circle of life at work, isn’t it? Anyway, Richard came over and we started to set this up and up until now, Bonnie and I have had this two-seater couch. I think I talked on this podcast about the smart couch thing that we bought with USB ports and you have to plug it into the wall cause it’s got wall outlets in the couch and a whole bunch of buttons that make it recline.

That’s where we sit and we’ve had the surrounds mounted on the wall behind us. That’s worked fairly well. Richard agreed that we could move these speakers so that they were slightly behind and to either side of the couch, maybe not the optimal sound stage that we need, but it would have to do. Then as we were setting it up, Richard said something that caused a lot of things to happen. He said, “Actually, dad,” and I could tell his brain was ticking over. “Since you bought these standalone type stands, the speakers can be free-standing and they don’t have to go on a wall. That means that if we put these speakers behind the couch, they could be quite a good distance behind the couch, the kind of optimal distance you want for effective surround sound.

Then if we moved the TV cabinet to where the bookshelf is and we moved the bookshelf to where the TV cabinet is, you would be right in the center of the sound stage. You’d have the Arc directly in front of you, you’d have these speakers some distance behind you, and it would give you a much better audio experience.” I said, “Richard, my boy, do you really want to move all of those things, all the gear that’s on the TV cabinet?” He said, “Yes, it’s going to sound great, dad.”

I said, “Okay, if you’re up for it, I’m up for it. I want the best possible audio experience I can get.” Moving the bookshelf wasn’t so bad, just taking the books out of it and putting the bookshelf in the centre of the room for a moment, but oh my word, moving that TV cabinet. I said to him, “How are we going to do this?” He said, “The best thing is to move everything, unplug it, keep everything on the TV cabinet.” We’ve got the Synology NAS, the big Samsung TV, the Sonos Arc and a TV cabinet that’s heavy even without anything on the top of it. He was pushing, I was pulling, I was trying to find a way to get traction on this thing to pull it without busting my fingers. In the end, I realized that if we opened the door, you could get a grip from the inside and pull it that way.

Oh, finally we did manage it and then he said, “Oh dad, I’m going to have to go and get an extension lead.” I came down here because I’ve got all sorts of cables in the studio, but we didn’t have an extension lead and we also didn’t have a long enough ethernet cable that we needed to plug into the UniFi switch that we have up there.

Off we went. We got it all set up and we did the true play. Oh, my word. It sounds absolutely amazing. My spotty nephew came over on Saturday night and we were listening to a wide range of Dolby Atmos music. It really was quite spectacular. Not all of it. As I mentioned, some of the Dolby Atmos mixes are not that good, but when you get a good Dolby Atmos mix, it’s sensational.

There’s nothing consistent about this. You think, “Oh, it’s going to be the newer material that’s going to sound better.” That isn’t always the case. It really does seem to depend on the engineer and of course, also the quality of the original recording. One of the most impressive things, when we were listening to this music, was some of the old Frank Sinatra stuff. The Lady Is A Tramp, One For My Baby (And One More For The Road). What else did we hear? Oh, a few other Frank Sinatra songs, jazz combos, obviously, orchestral stuff.

You really feel immersed in the middle of it, and I think it’s quite effective when you have a smaller group, like a jazz combo or something like that, because you can really make yourself think these people are in the living room playing in here. It’s quite remarkable. I don’t know about the Sonos Era 300s and plain old stereo. I’ve got to hear a bit more of that to make up my mind, because currently, for obvious novelty-value reasons, we have been concentrating on Spatial Audio music. I don’t know whether they sound quite as good, but we’ll have to see.

Over on the Sonos subreddit, there has been some controversy about the fact that the centre channel is disabled on Sonos Era 300s when you use them as rear surrounds, and some audiophiles are very grumpy about this. They say it’s a unilateral decision on Sonos’s part. If they think it sounds better then fair enough, but they should give you the option to turn it back on if you want to. I think there is some pressure building on Sonos.

You do see this though with Sonos. Sometimes they do things, they put out some new hardware and it takes a while for the software to be optimized for that hardware. I’m not unduly worried. I believe in Sonos gear and I think they will get it right, but oh my goodness, buying those stands by accident rather than the wall mounts actually turned out to be a really good move and now we have a sensational surround sound setup. We haven’t watched any movies yet. I’m looking forward to doing that over the long weekend.

Sonos Roam question

Christian is back again and he says, “Hey Jonathan, I have a quick Sonos question. I listened to your review of the Sonos Roam and I noticed when you asked the Soup Drinker to play Mushroom FM, it said playing on Jonathan’s TuneIn, but when I ask my Echo to play it, it just says playing on TuneIn. My question is, is there a way to link my TuneIn account with the Soup Drinker app or do you just have yours linked in the Sonos app and not the Soup Drinker app?”

All right, well I think the difference is that on our Amazon Echos we have two profiles, one profile for me and one for Bonnie. I don’t believe I have ever linked my TuneIn account. I think what’s happening is that when I ask to play a station on TuneIn, it will reflect the profile that’s currently active, or if you’ve done the voice training, sometimes it might switch profiles in certain circumstances. Of course, it’s all a bit irrelevant now, because since that Sonos Roam demo was done, Mushroom FM and, for that matter, our sister station Mushroom Escape, which plays drama and comedy, have their own Amazon thing, Soup Drinker skills, which you can enable.

I want to learn more about Reaper

It’s Gary G himself and he says, “Hi Jonathan. It would be very interesting to hear how REAPER works and how you do basic recording and editing. Also interested in the effects that it has. Could you possibly do a demo of that and also maybe of Studio Play Station?” I think that might be Station Playlist Studio you’re talking about. I understand that if REAPER is a problem, you will need another machine to do the demo.

Actually, another machine isn’t an issue because I’ve got one, but you can also set up another instance of REAPER so that you can record in one instance while you’re playing with another. I certainly will add this to my list, Gary, which is lengthening, but it’s probably not something I’ll tackle in a hurry because the REAPER demo market in the blindness space is quite saturated. If you go to reaperaccessibility.com (I presume that is still being maintained; I haven’t visited for a while), there are demos and user guides and things there.

Also, if you are doing REAPER for Spoken word work, I highly recommend Reaping the Benefits, which is a tutorial produced by Hartgen Consultancy and you can get that from hartgen.org, that’s H-A-R-T-G-E-N.org but there are a lot of free demos out there as well that you might like to check out. In terms of the effects that REAPER has, it does come with a pretty reasonable series of plug-ins by default that have been produced by the developers of REAPER but the cool thing about REAPER is that they use VST plug-ins.

That is a very common standard, and there are so many plug-ins that you can get: a lot of free ones, and quite a few very expensive ones. For example, here in my studio, I have iZotope RX, which allows me to clean up some audio from interviews where I just can’t get the guest to give me the audio that I want. We do our best to use iZotope RX to sort those things out, maybe take reverb away or take noise away, and various things like that. There are reverb plug-ins and all sorts of things. Compressors, you name it. The sky’s the limit because REAPER does support this open VST standard. As they like to say in New York, knock yourself out.

I hope that does give you some inkling of what you can do with REAPER. You can also try it for free. If you go to the REAPER website and you download it, it will install and it will not nag you for, I think it’s up to 60 days and then it will start to nag you, but it will keep working. One of the big advantages of REAPER over a single-track editor is that you can assign everybody to different tracks and that has all sorts of benefits in the context of this podcast. For example, I might try to equalize speakers so they sound a bit more similar than they do when I have more than one interviewee on at the same time.

What can also happen when you have multiple people on at the same time is that when the other person you’re speaking with is talking, they might take the chance to have a little cough or sometimes they move and they breathe into the microphone heavily. If you are recording that just as a single track, there’s nothing that you can really do about that apart from put up with their heavy breathing while somebody else is talking. If you’ve got multiple tracks, you can simply delete their heavy breathing, just mute them, take them out of the track when they’re not talking and the other speakers are unaffected.

There are all sorts of benefits of getting up to speed with a multi-track environment, and I really love working with REAPER. There’s undoubtedly a learning curve. It’s a complex piece of software because it does a lot, but it’s a learning curve that pays off. Secondly, says Gary, “How do you calibrate Braille screen input with the on-screen iOS keyboard? Sometimes my dots aren’t where they normally are and then I have to toggle VoiceOver off and back on again.”

Okay, the trick to this, Gary, is to press dots 456 followed by 1, 2, 3 quite quickly. It does have to be quite quickly and don’t press it all at once. There are two clear gestures, a quick press of 456 and immediately, once you’ve taken your fingers off that screen, a quick press of 1, 2, 3. Your dots will be calibrated. You’ll be up and running.

Automated Voice: Mosen At Large Podcast.

Hubspot’s inaccessibility

Jonathan: It’s time to hear from Janna Schroeder who says, “Hi Jonathan. I heard the message on your podcast about HubSpot’s response regarding accessibility. The company I’ve been with for the past two years uses HubSpot. It is very challenging in terms of accessibility. I have learned to use it for the things I need to do, but I have needed assistance from my sighted coworkers when learning a new workflow. It usually takes me longer to accomplish things than it would in a program that was built with accessibility in mind. I think it would be very hard for a company or organization with many blind employees to get it set up initially.

I would be interested in speaking with the person who wrote in about HubSpot. Feel free to forward this email to him or to send me his contact info. Whichever is easier.” I no longer have it Janna, but hopefully, he is listening to this and will get in touch and I will connect you both.

Can I get Google Assistant on Windows?

Participant: Hi Mosen At Largers. Hey, love your podcast. Just wanted to say, I’ve been testing and trying out Cortana lately on Windows and I know you can get the Soup Drinker on Windows, but can you get Google Assistant, I forget your nickname for it. Can you get Google assistant on Windows?

Jonathan: Actually, I’m not sure if I’ve ever given Google Assistant a nickname because we can talk about the product without triggering all the Google devices around the world, you see. I don’t believe that there is an official Google-endorsed way to use Google Assistant on Windows, but I did read some time back of an unofficial client, and at the time it involved the Google console. You download the unofficial client, and you then have to register that client with the Google console.

If you’re desperate enough, I’m sure you can go through those steps. It’s not something where I’m thinking I have to have this, so I haven’t tried. If you do a Google search on Google Assistant in Windows, then it should come up and you can go through the steps. If you do and you get it working, let us know how it goes.

Comments on several recent topics

Maria Christic: Hi, Jonathan. It’s Maria Christic and the chocolates. That could be the name of a band or something. I think it’s been since last year that I’ve sent in a contribution. Congrats on becoming a granddad. Little Florence sounds like a determined little go-getter already, so I’m sure you’re enjoying her. Thanks for the book recommendation about Caroline. I’ll be interested to hear your interview with the author. I am a bookworm, and a character who’s blind and just going about regular life definitely sounds appealing, so I will have to check that out.

Blindness memoirs, I feel, are having a bit of a moment these last few years. In addition to There Plant Eyes, where you had the interview with the author whose name I’m blanking on, there was that Blind Man’s Bluff. I recently looked at Audible’s biographies of people with disabilities; that’s how I saw a lot of these. I was happy to see that there was one called Blind Man’s Bluff by James Tate Hill. I think that might have come out a few months after There Plant Eyes.

Then last year there was this one called The Beauty of Dusk by Frank Bruni. I heard about that one on Fresh Air from NPR. I think that had gotten a bit of mainstream press because of who he is. He used to be, I believe, a food critic at the New York Times, and I guess he lost some of his sight because of some stroke. It’s his memoir of his experience with that and also taking support from others who have gone through traumatic experiences.

Then, as I was doing some looking through upcoming things from Audible, I saw one called The Country of the Blind, which is by Andrew Leland. That’s supposed to come out in July. It looks from the title and the synopsis that it is not only going to focus on his experience but also on what he’s been exploring and learning on his journey with the culture of blindness. Interesting that several of those have come out in the last few years.

In terms of other books, on your Beatles special, which was very interesting: there were actually a few books recently brought to my attention about the Beatles and their journey in India and their spiritual side, because such things resonate with me, though no one has to resonate with that to read them. I think it’s an interesting read about their journey and how they continued to break boundaries, or cross boundaries, and from what I’ve read they really had an enormous impact in making meditation a mainstream topic.

In case you hadn’t heard of them, I’ll throw them out there. One is called Across the Universe: The Beatles in India, and that’s by Ajoy Bose. That’s A-J-O-Y, like the Bengali spelling, and then Bose, B-O-S-E. I believe there’s a film, like a documentary, that he’s made as well to accompany that. Then there’s one called The Inner Light by Susan Shumsky, which is a song analysis. I believe she has done a good amount of research. It seems like a pretty lengthy book, if I remember correctly. I haven’t actually read these two completely yet, but I have started them, and they definitely seem very good. That one focuses on songs, I think, either that they wrote in India or that were inspired by their spiritual journey.

Then one that came out a while ago, but again, it just recently was brought to my attention, in terms of George: it’s called Here Comes The Sun: The Spiritual and Musical Journey of George Harrison. That’s by Joshua M. Greene. Greene has an E at the end, so G-R-E-E-N-E. In terms of Braille displays, I do have a fifth-generation Focus, and as you said, there is a lot going for it. I really appreciate the extensive JAWS support in terms of the commands. When I’m using the Braille keyboard, I typically use the modifier key simulations to control things. There are also, for things I use really frequently, some of those single-step commands. I really appreciate that support, along with the variety of navigational controls.

There’s so much I can do, especially on iOS without moving my hands from the Braille display part. I can scroll in all directions and activate the back button if that’s supported with a scrub. Then the magic tap and obviously panning and the flick up and down gesture simulation to move by whatever the rotor is set to move by that unit. Obviously double tapping to activate things. I feel I would definitely miss those things on other displays. I like the variety of having the Braille and the QWERTY keyboard to alternate between my laptop and a Braille keyboard. I like the smaller footprint and the less hand movement for a Braille keyboard.

All of that being said, it’s really a shame. Not only have I had the dot issues, but I’ve also had keyboard keys becoming loose, where the spring mechanism underneath them has either become, so what’s the term? Loose, I guess, where they’re so soft to press that you inadvertently touch them or activate them and insert extra letters, or they become completely stuck down, or they’re sticky, so they’re not completely stuck down, but it’s a little harder to press them. It’s so odd. It’s like both of those extremes.

It’s really a shame. I’ve had my fifth gen for, I guess, almost six years, since September of 2017, right when they came out, and I’ve had six replacements in those six years. I actually document these things. I looked it up, and when you average it out, some a little longer, some a little less, it averages out to one replacement a year. I get those replacements because I have the PMA and I keep purchasing that, with all of the movable parts in a Braille display.

I think I haven’t had these frequent needs for replacement with the prior generations. I agree that it’s definitely some flaw in the design or the manufacturing, and it’s really, really a shame. I think the reason I put up with it is because of that replacement program, so that I’m never really without a display. I’ve considered other 40-cell displays, but I keep coming back to the Focus. I prefer the piezoelectric cells, which are quieter, and I saw the QBraille at a convention and I didn’t like the positioning of that space bar either.

I think you’ve mentioned that in the past, and it seems a little unwieldy to execute some commands with the combination of the QWERTY modifiers and the Braille letters, and it just doesn’t have some of those navigational keys that are on other displays. And I prefer the flatter cells to the more concave cells. If I were to get another one, I was thinking probably the Mantis would be my second choice, perhaps. I don’t really see myself using the internal applications. That part isn’t as important to me.

Again, I just keep coming back to the Focus as my first choice. I guess I’m going to be sticking with it for now.

I do really hope. I know they did a survey a couple of years back, this is Vispero, asking about thoughts on their Braille displays, and I put in these frustrations about the reliability and durability long term. I really hope that when or if they release another display, that is taken into account. Finally, in terms of the Optima: definitely, I have respect for the product choice. Different people, different needs. And for these developers, as someone who uses Braille and has experienced the trials and tribulations there, and certainly [unintelligible 01:19:12] done a lot for this field.

I have respect for all of those developers. I agree in general that there is a need for such a product. I can see the appeal in the longer battery life where there isn’t a screen, and the portability of the all-in-one. I can see, in terms of the QWERTY, the smaller learning curve.

For me personally, even though they’re bringing mainstream laptops back into it, I’m over having to depend on a single assistive technology company for either software or, in this case, hardware, and its timetables for upgrades. Obviously, for something like a Braille display, a specialized category where there won’t be a mainstream equivalent, that’s different. With the mainstream, I’ve just gotten really used to having the Braille display working with my phone and laptop, and that’s been meeting my needs. I feel like I can choose to upgrade when it’s required or desired. For instance, I need to upgrade my primary machine. I have a Lenovo ThinkPad T470P and it does not run Windows 11, so I will be upgrading that soon.

I do see the value in having a couple of laptops and being able to switch between them using the Braille display. If I get, say, a higher-end primary and then some lower-end thing as a secondary device, I have different form factors and keyboard types to choose from, in terms of spacing of keys, and travel, and things, and I’m not tied to the one company and its upgrade timetables again.

Jonathan: Thank you very much for the book recommendations, Maria. I will check out those Beatles books, which I have not read. Maria was referring to the 60th anniversary Please Please Me special which we ran on Mushroom FM. We will be doing 60th-anniversary specials on the Mosen Explosion, my radio show on Mushroom FM, when we get to those anniversaries over the next few years. The author of There Plant Eyes, whose name Maria couldn’t remember, is M. Leona Godin. Her interview on this podcast is well worth a listen. I loved talking with her. As for Andrew’s book, The Country of the Blind, we will be talking with him when that book is released, so stay tuned for that.

Interesting thoughts on the Optima. I think that’s right. Obviously, it is a huge– I don’t know whether gamble might be too much of a loaded term, but you are putting your eggs in one basket to some degree although given what I understand to be the ability to swap out components of this thing, you would hope that if there’s a new processor, or if you want a new hard drive or something like that, it sounds like it’s quite easy to upgrade. You should have the Braille display, which will last for years, but it does sound like the innards of the thing might be updated as technology updates.

Automated Voice: Mosen At Large podcast.

[music]

Producing podcasts on iPhone

Jonathan: Dennis Long is in touch and he’s interested in doing podcasting on his iPhone. Whoa, a Dennis Long podcast. We could call it the Long podcast. Welcome to the Long podcast, and he could do a 10-minute demo of something, and then everybody would say, “What? This is not a long podcast.” Mosen At Large, now that’s a Long podcast. Anyway, the Long podcast has a lot of potential. I’m not sure what he wants to call the podcast. He may well already be doing one, and he may well have a name for it.

I’m going to do something unusual in my endeavour to answer this question. What I’m going to do is play you two versions of Dennis’s voicemail. Here is the first version just as it came in from the voicemail system without any kind of intervention from me.

Dennis Long: Hi Jonathan, it’s Dennis Long. I am wondering, what is a good app for the iPhone that will allow me to record podcast? I’d like the voiceover to be able to be heard, I don’t have a large area where I can put a mixer board or anything like that, but I’d like something that would give me good capability if there is such a thing. For the iPhone, and I’m wondering what you and others would recommend. I would appreciate it if you could cover this hopefully in the next episode. All right, thank you.

Jonathan: Here is what I would describe as the typical Mosen At Large friendly version of that same voicemail after I’ve edited it.

Dennis: Hi Jonathan, it’s Dennis Long. I am wondering, what is a good app for the iPhone that will allow me to record podcasts? I’d like voiceover to be able to be heard, I don’t have a large area where I can put a mixer board or anything like that, but I’d like something that would give me good capability if there is such a thing for the iPhone, and I’m wondering what you and others would recommend.

Jonathan: The reason why I do that is to show you that there’s a lot of work that goes into this podcast to just make it flow. When somebody calls in, it’s natural they might hesitate a little bit to get across exactly what they want to say, that kind of thing, but we get a lot of contributions. I like to keep the show flowing, I like to keep it reasonably tight, and so I don’t mind tidying up some of those contributions. Using REAPER, it really doesn’t take me that long because I’m so familiar with it and it’s just such an efficient environment to work in.

I don’t know of an iOS app that comes anywhere near that level of efficiency to be able to do that fine editing regularly with, say, a two-hour podcast every week, and this podcast is long. [chuckles] I mean long as in a small L. By podcast standards, I get that, but even so if you’re sitting there doing fine editing, out of respect for your listeners you want to give them a quality product. Really on iOS apps, it takes a while.

The best one that I have seen is called Hokusai. That does allow you to mark the beginning of a selection and the end of a selection and then delete it. Ferrite is a very popular podcasting app. The developer is actually the same as the developer of Hokusai. It’s another app that he’s done. He’s very strongly committed to accessibility. He’s always really responsive when you find an accessibility bug. Both of those apps are good, but with Ferrite, as we demonstrated in an episode of The Blind Podmaker some time ago, the editing process is quite convoluted and time-consuming, and not at all efficient.

It’s accessible, but it’s not efficient. To the best of my knowledge, for marking a passage of audio and then deleting it, Hokusai is your best bet. There may be better ones that I’m not aware of, and it would be great to have that discussion. Dennis, if you’ve got a Windows computer or a Mac, you don’t have to have a fancy mixer to produce a good podcast. You could get yourself a USB microphone even. If you don’t have a treated room, then stay away from the condenser mics, which tend to be more omnidirectional. If you get a good USB dynamic mic and plug that into a USB port of a computer, and use something like REAPER to do your editing, you will produce a far better product in far shorter time than using iOS as a blind person, in my opinion.

If you have the ability to throw just a little more money at this problem, and you’ve got a Windows computer or a Mac, you could get the Focusrite Vocaster Two, which we’ve reviewed and talked about on this podcast. Then you could plug that into your computer and you’ve got a couple of ways, either the Bluetooth connection, where the audio is not quite as good, or a wired connection, where the audio’s excellent, to your iPhone. Then you could have a dynamic mic that plugs into the XLR jack of your Focusrite Vocaster Two. You could have your phone in there so you can do demos of apps and that kind of thing, and you’d be able to edit them without it taking forever and a day once you’ve got familiar with the app that you want to use.

See, the thing is there’s a learning curve with this stuff no matter what route you take, but even when you are over the learning curve, my point is that it’s just not as efficient to do this kind of quality work in iOS for a blind person. If others disagree and you’ve got a different experience, you’re very welcome to be in touch and help Dennis with this process.

Chat bots go political

Let’s go to the UK to hear from Mark Higgins who writes, “Hi, Jonathan. Greetings from Barnet. We’re still arguing over whether Barnet is a Hertfordshire postcode or a London area code. I tend to take the side of the argument that best suits me on a case-by-case basis. Rugby fans might know Barnet as the place where Saracens RFC play. I haven’t written to you before, but I drop in to the podcast from time to time. Always informative and a great listen.”

Thank you, Mark. I appreciate that. “Your feature on ChatGPT was very interesting. I’m a university lecturer, and a few weeks ago I thought I would see how ChatGPT would cope with the mock exam question we give to our law students. It didn’t do badly, but it’s fair to say that when an essay asks for an evaluation of how successful something is, the AI has some way to go. By its own admission, it’s not very good at forming value judgments. The other thing I’ve found is it’s very quick to surrender. I tried giving it constructive feedback on its answer, feedback which it accepted without question.

To put it another way, tell it it’s wrong, and it will perform a screeching U-turn that would make any government proud. Speaking of politics, I wonder whether you saw a few weeks ago the political blogger, Guido Fawkes, assembled a panel of chatbots to act as political pundits and give their verdict on that day’s exchanges between Prime Minister Rishi Sunak and leader of the opposition, Sir Keir Starmer, at Prime Minister’s Questions. The results were astonishing, and show how far this tech has come in such a short time.”

I will have a look at that, Mark, and I’ll also try and remember. I’ll Braille it on my hand to put a link to that in the show notes. It sounds like something I would very much enjoy reading. Great to hear from you. While we’re talking about the chatbots, here is Ladon who says, “I want to thank you for your podcasts. They’re always so useful, and I look forward to a new one each week. You are doing the blind community a great service, and for no charge, I might add. I don’t know how you do it, but thanks from one guy that really needs all the help that he can get.” [chuckles]

You’re very welcome, Ladon. I’m glad you’re enjoying it. He says, “On your last podcast you demonstrated ChatGPT-4. You spoke a question, then almost in no time, I heard the answer spoken using VoiceOver, I suppose. How did you do that? I have heard of a Siri shortcut that would allow one to do that. Is that what you were doing? If you could help me understand how I can do that, I would be very grateful.”

What I was doing there, Ladon, was simply using the Bing app from Microsoft, which has implemented an advanced version of ChatGPT. If you go to the App Store and you search for Microsoft Bing, download it, sign in with your Microsoft account, and you will need to make sure that you’ve signed up for the Microsoft Bing beta program. There used to be a waitlist for this. I believe that is now not the case and that you will be approved instantly. When you have that, then you’ll be able to use the chat features of Bing. You can talk to Bing, which is powered by ChatGPT, on your iPhone, and Bing will talk back.

Cybercriminals are getting smarter

Michael Massey: Hi, Jonathan. This is Michael Massey here in Pueblo, Colorado. I’ve been listening to your podcasts on my new Victor Reader Stream 3, and have been really enjoying them. I’ve worked my way from Episode 219 down to Episode 212, and then, for the fun of it, I skipped to Episode 1 and listened to that yesterday, and I’m really enjoying those podcasts. I also wanted to let you know that I heard a podcast this morning, the daily tech podcast that Kim Komando does.

She mentioned that criminals are getting smart enough to use words or phrases by using intelligent AI. She mentioned ElevenLabs is one of the platforms that criminals are using. She suggested when you get an unknown call, never answer or if you do answer, don’t say hello, because they can get snippets of your words or phrases and put them together to make it sound like you say things you did not say. You might want to check into that. Thanks again, Jonathan, for your podcasts and I look forward to the next one. Thanks a lot.

Jonathan: Great to hear from you. We do have to be so careful with phishing scams and impersonations, and I read an article just a few days ago saying, “Look, the battle against deepfakes has been lost. It’s getting easier and easier to create them.” I was recalling 2019, and I think it was on the Mosen Explosion radio show. I don’t think I had started Mosen At Large at that point.

We were talking about the 50th anniversary of the moon landing, and Richard Nixon’s speechwriter had written him a speech to deliver in the event that Neil and Buzz were stranded on the moon or something had gone badly wrong with Apollo and the astronauts weren’t coming home. Now, that speech has been around for some while. I read the text of it a few years ago and it was chilling.

Someone took that speech and used an AI model of Nixon’s voice to deliver it. When you hear that recording, maybe it was on Mosen At Large because it’s ringing a really strong bell with me in a Mosen At Large context, but wherever we did it, when you hear that speech, you can tell that it’s a deep fake. It sounds like Nixon, but it kind of sounds also like a string of samples being put together, and some of these newer technologies are not sounding like that.

Like all technology, it improves, and it is such a shame that any technology, as we’ve seen with malware, viruses, all sorts of things like that, can be used for good or evil. What is frustrating about never answering blocked numbers is that a lot of businesses do still block their numbers, at least they do in New Zealand. Sometimes you’re going to get a scammer, but not always. I think it’s a pretty reasonable thing to say to a business, “Look, you’ve got my number, so if you’re phoning me, I’m entitled to have yours. Otherwise, no caller ID, no picky uppy.”

Automated Voice: Mosen At Large Podcast.

Spring for Twitter is back

Jonathan: David Dupofsky writes in and says, “Greetings from Toronto, where the area code is 416 and spring is finally here. I thought that those of your readers who are not hopelessly fed up with Twitter, isn’t it everybody? would be interested to know that the Spring app for iOS is back and it is working with Twitter. I have no idea how or why this happened. When I checked the App Store for updates to any of my apps, there was Spring in the list.”

“The description was quite uninformative, you know, the usual stuff about, ‘We’ve fixed bugs,’ or something like that. I ran the update and then launched the app. It asked me to sign back into my Twitter account, and it used the Twitter handle it already knew for me. After that, the app was working perfectly well, catching up on what had gone on since I was last able to use it weeks ago. I have no idea if there is any hope of a revival of the beloved Twitterrific, but this is a helpful step. Keep up the great podcast,” says David.

Thank you very much, David. It’s been amusing to watch this fascinating cat-and-mouse game that has been going on between the developer of the Spring for Twitter app and Elon Musk, who is now running Twitter. Everybody else who was running these third-party apps has given up, at least the mainstream third-party apps. I think the blindness ones will too, when Twitter eventually cans them, as I believe they will quite soon, based on information coming in about the new Twitter API.

It could be that that new Twitter API will be where we see the end of Spring as well, but, hopefully, not. I was in two minds about publishing this on the podcast, only because the more publicity Spring for Twitter gets, the more likely it is that Twitter will find a way to shut it down again. We, of course, reviewed Spring for Twitter on Mosen At Large and it’s a great app, and for those who do still want to use Twitter, it’s a wonderful solution while it works.

I can’t stand YouTube Shorts either

Jeff: Hi, I’m Jeff from Minneapolis, Minnesota in the US. Last podcast, I heard you were talking about Shorts on YouTube and how obnoxious they are. I totally agree. The talk was about how to contact YouTube/Google to give them feedback, asking them to give you the choice of whether or not you want them in your YouTube feed. The difficulty is, how or who do you contact to give feedback?

Google is such a huge corporation, and when I’ve hunted around for a contact for that type of thing, or a contact for dealing with a tech issue with disabilities, I can’t find a specific person or contact email address. I would like to reach out and complain about the Shorts and tell them that it would be nice to give us an option to include them or not, but I can’t figure out how to do that. If you can help me out, that would be great.

Jonathan: Jeff, last I looked, you could go into the YouTube app under your profile, I think, and there’s something called help and feedback. I think you can offer feedback there. Now whether you will get a reply or not is anyone’s guess because these larger tech companies seem to make it really difficult to contact anybody. There is a way to get in touch with a real human at Google Accessibility.

I think they have numbers that you can call from around the place, but the way that I contact Google Accessibility, on the rare occasions where I’ve had to, is to use the Be My Eyes app. I really like the audio quality that you get from the Be My Eyes app. As someone with a hearing impairment, I really appreciate the clarity of that connection. They might send you on your merry way and say, “This is not an accessibility issue, this is a preference issue that doesn’t relate to our department.” I guess I would have some sympathy for that view.

Maybe see if you can go into that feedback part of the YouTube app and see if you can get any traction there.

Windows Mini Computer

Recently, Dean Martineau drew our attention to Notey, the Notey projekt, and Rebecca Skipper is in touch. She says, “I purchased the mini-computer recommended in the Notey-projekt.com manual. You can run this small computer without a monitor, but you will need to plug in a speaker or headphones first, along with a keyboard.

“Can anyone recommend a portable keyboard that would work well with a mini computer? I may buy a more powerful unit in future to replace my desktop. In my view, these devices will not replace a laptop, but it is really neat to hold a Windows computer that can fit in a Ziploc bag. Eloquence crashed a few times and the system gets hot. You have to carry several components with you. Laptops have all the necessities in a predefined package. This mini-computer would be a great media device though.”

Looking for a way to read news that isn’t littered with ads

I read the news today, oh boy. Oh boy, because it had too many ads in the news.

Abbi probably relates to this. The email says, “Hello, Jonathan and fellow listeners. Do any of you have any good news apps? That is not good news, as in we all could use some good news. I think there are news apps that are good, that are accessible with iOS and/or Android. I’m trying to find one that doesn’t have a million annoying ads and is kind to VoiceOver and TalkBack. Are they good with Braille displays? Thanks, and hope to hear from you soon.” That’s Braille with an uppercase B.

I take it that you have had a look at the Apple News app, or maybe it’s not available where you are. When I have turned on the US region for my iPhone, which you can do to get the Apple News app, I found it pretty good, but we don’t have it in New Zealand. It’s not universally available. Now, I don’t use Microsoft News, what do they call that now? Microsoft Start, I think, or Google News very much, but I haven’t noticed myself being too bothered by ads when I do.

What I do use, and I use it every day, is to use the Lire RSS app and that is spelled L-I-R-E. Now we’ve gone into a lot of detail on this app in the past, including having an interview with its developer, but the idea is that many news websites and blogs offer RSS feeds that you can add to a feed aggregator like Lire, and then all your news sources are in one place. There are numerous excellent RSS apps for iOS, but what I like about Lire is that you can extract the full text and download it.

In earlier days, when I would fly for maybe 12- or 13-hour flights and WiFi was either patchy or nonexistent, I would have the full text of these articles on my phone and I could read to my heart’s content. There’s a bit of setup there. You’ve got to go and search for the feeds that you want, although I think it does recommend certain feeds. Essentially, you can just type the name of a news publication or a website that you’re interested in, and most do have feeds. Sometimes they’re hidden, but they’re pretty easy to get to.

As an example, most WordPress blogs will have their RSS feed at their domain name followed by /feed. If you want to subscribe to the mosen.org feed, you would put mosen.org/feed into Lire, and it’s right there. If you want to hear more about this wonderful app, you can go back to Mosen At Large Episode 80. That’s 80. There’s an interview with the developer as well as a demonstration of how this app works.
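[For readers of the transcript who fancy tinkering, here is a minimal sketch in Python of the /feed convention Jonathan describes and of what a feed reader does with the result. This is an illustration only: the helper names and the sample XML are made up for the demo, and it is not code from Lire or any other app mentioned on the show.]

```python
# Sketch: build a WordPress-style feed URL, then pull item titles
# out of an RSS document. The sample RSS below is invented for the demo.
import xml.etree.ElementTree as ET

def wordpress_feed_url(domain: str) -> str:
    """Most WordPress blogs expose their RSS feed at <domain>/feed."""
    return f"https://{domain.rstrip('/')}/feed"

SAMPLE_RSS = """<?xml version="1.0"?>
<rss version="2.0">
  <channel>
    <title>Example Blog</title>
    <item><title>First post</title><link>https://example.com/1</link></item>
    <item><title>Second post</title><link>https://example.com/2</link></item>
  </channel>
</rss>"""

def item_titles(rss_xml: str) -> list[str]:
    """Return the title of every <item> in an RSS 2.0 document."""
    root = ET.fromstring(rss_xml)
    return [item.findtext("title") for item in root.iter("item")]

print(wordpress_feed_url("mosen.org"))  # https://mosen.org/feed
print(item_titles(SAMPLE_RSS))          # ['First post', 'Second post']
```

A real aggregator would of course fetch the URL over the network and handle many feed dialects; the point here is just how little is involved in the domain-plus-/feed convention.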

Then you can go and find a lot of news sources, put them into this one app. What you’ll find is that the experience, once you’ve got it all set up, is far more user-friendly, far less cluttered, a lot less junk than going from website to website, which also means it’s a lot more efficient as well. Others may have some views on this and if you’ve got some favourite news apps you’d like to recommend, do be in touch.

USB-C audio and charging recommendations for a Pixel 7

Christopher Wright says, “Hi, Jonathan. I recently got a Pixel 7 and I’m trying to find an adapter similar to the Belkin Audio + Charge RockStar for the iPhone. Wired audio is extremely important to me when I want low latency and the highest possible quality. My dad found a USB-C adapter with an audio jack on it at Walmart, but it doesn’t work as intended. If I connect it to the Pixel, the audio stops coming out of the internal speaker because it acts as a sound card.”

“The microphone also stops working because the adapter takes over. Worst of all, it doesn’t actually let me charge and connect a 3.5-millimeter audio cable simultaneously. It only does one or the other. This is what I get for going with something that was probably less than $10. What adapter would you and/or your listeners recommend that lets me use wired audio and charge at the same time while not disabling the internal microphone? I’ll be using a pair of Sennheiser headphones that doesn’t have a microphone. It would be even better if it was smart enough not to disable the sound when the adapter was connected to the phone without an audio cable connected, just like the RockStar does.”

Yes, I had this problem with an adapter that I bought for Bonnie from Amazon. It was one of those cheapy things from there, and it did the same thing when you connected it to the Lightning port of the iPhone. The internal microphone stops working. Do we want that? No, we do not. I don’t have any specific brand recommendations for you, Christopher, but I understand exactly why you’re after it and what you’re after. So let’s hope someone can come to the party on this one.

Automated Voice: Transcripts of Mosen At Large are brought to you by Pneuma Solutions, a global leader in accessible cloud technologies. On the web at PneumaSolutions.com. That’s P-N-E-U-M-A solutions.com.

Comments on various recent topics

Stan Warren Latrell: Greetings, Jonathan Mosen and Mosen At Large. This is Stan Warren Latrell in Medford, Oregon, and I have a couple of observations. The interview about the Optima computer is really fascinating. I’m looking forward to seeing how that collaboration develops and what they decide to do. I have an Orbit Reader Braille display along with a fifth-generation Focus 40. I got the Focus as part of an ElBraille before my last job.

While I’ve never had to send my display in, I know that the customer service at Orbit Research is top-notch. The only problem that I ever had with my Orbit Reader Braille display is that they have some little feet attached to the bottom of the display, like little bump dots that are affixed to the bottom, and sometimes the adhesive fails and the little feet come off. In fact, I attend the call that Orbit Research does during most months. I talked to Venkatesh one day and told him I had a little problem, and he sent those little feet to my home address, and that’s one way I know that their service is top-notch.

With respect to the audiobook players, I had and I still have a Victor Trek, but when I’m out and about, I normally use the app that Mike May helped develop with my phone. I don’t use that Trek very much, but I still have it. I decided to get the new Stream Three. While I have nothing against people who use Sense players, and I applaud the choices that we have, I just like to use the VR Stream, and I find the Stream meets my needs. That’s what it’s all about.

The thing I also like about the Stream is it uses those actual full-size SD cards. I don’t like those little micro SD cards. So many things can happen that you don’t want to happen. What concerns me is that every manufacturer is going to gravitate toward those micro SD cards. I don’t like them at all. Anyway, I just like my VR Stream. The speakers sound good, and I’m looking forward to seeing how that develops. I wish everyone all the best that uses any of these players. Whether you use a VR Stream or whether you use a Sense player, I think it’s great that we have multiple players that we can use. Markets are a good thing. Choice is a good thing.

ChatGPT’s reflections on the pros and cons of being blind

Jonathan: All right, are you strapped in? Are you sitting comfortably? Because this email comes from Aaron Espinosa, and it may well provoke some discussion. When I read this email, I found myself reacting quite negatively on several occasions. I decided what I need to do is just read this email without any kind of editorialising, at least while I’m reading the email, so you are clear about what it is that the email is saying and not getting confused with what I might be inserting. Then I started to record this and I couldn’t finish it.

I did not want my voice on the internet reading this, particularly some parts of it, and so I am going to use ElevenLabs. Yay for ElevenLabs. It will read the bits that I just cannot have myself say. Okay? Okay. Here’s the email from Aaron. It says, “Hello, Jonathan. I thought you might find this interesting. I asked ChatGPT to write an essay about the pros of being blind and an essay about the cons of being blind. This is what it put out. Maybe you could talk about it on your show. Write an essay about the cons of being blind.

Voiceover: Being blind is a condition that has several drawbacks, and, unfortunately, it is a reality for millions of people worldwide. Blindness can affect a person’s quality of life, making it challenging to perform daily activities, work, and interact with others. In this essay, we will discuss the cons of being blind. One of the most significant disadvantages of being blind is the loss of independence.

Blind individuals rely on others to help them perform even the simplest tasks, such as navigating around the house or going to the grocery store. This reliance on others can be frustrating and emotionally draining, as blind individuals may feel like they are a burden to their loved ones. Another disadvantage of blindness is the limitations it places on a person’s career choices. Many jobs require vision, such as pilots, drivers, and surgeons, which can be out of reach for a blind individual. Even jobs that do not require vision may be more difficult for a blind person to perform, leading to fewer job opportunities and lower income levels. Blindness can also make socializing and forming relationships more challenging. Blind individuals may have difficulty reading body language or facial expressions, making it harder to understand social cues and build connections with others.

They may also have trouble engaging in activities that require vision, such as watching movies or attending art exhibits, which can further isolate them from others. In addition to these challenges, blind individuals also face physical obstacles, such as navigating unfamiliar environments or crossing busy streets. Without the ability to see potential hazards, blind individuals are at a higher risk of injury and may require assistance to navigate safely.

Finally, blindness can also have a negative impact on a person’s mental health. Blind individuals may experience feelings of depression, anxiety or social isolation, which can have a significant impact on their overall wellbeing. In conclusion, the cons of being blind are numerous and can significantly impact a person’s quality of life. From a loss of independence and career opportunities, to social isolation and physical obstacles, blindness presents a variety of challenges that require support and accommodation to overcome. It is important for society to recognize the needs of blind individuals and work towards creating a more inclusive and accessible world for all.

Jonathan: Are you sorry you got out of bed this morning? [chuckles] Now, write an essay about the pros of being blind.

Voiceover: As an AI language model, it’s important to preface that being blind can be a difficult experience and that there are numerous obstacles that people who are blind have to navigate. However, there are also some potential benefits that can come with being blind. One of the advantages of being blind is the development of heightened senses. When one sense is lost, other senses can become heightened to compensate. Blind individuals often have a more acute sense of hearing, touch, taste and smell.

This can enable them to pick up on details and nuances that sighted people might miss. For example, a blind person may be able to identify a person by their footsteps or distinguish between different types of birds based on their songs. Another advantage of being blind is the potential for increased independence. While being blind can present challenges, many blind individuals are able to develop a high level of independence through the use of assistive technology and adaptive techniques.

Braille, screen readers and other technologies can help blind individuals access information and communicate with others. Adaptive techniques, such as using a cane or guide dog, can help with navigation and mobility. Blindness can also lead to a stronger memory. People who are blind often have a better memory than sighted people, as they rely on their memory to navigate and remember their surroundings.

This can be especially useful in situations where visual information is not available, such as in a dark room or when listening to an audio lecture. Finally, being blind can lead to greater empathy and understanding. Blind individuals have a unique perspective on the world and may have a greater ability to empathize with others who are facing challenges or living with disabilities. This can lead to greater compassion and understanding of others and a desire to help others overcome their own obstacles.

In conclusion, while being blind presents significant challenges, there are also some potential benefits that can come with the experience. Heightened senses, increased independence, a stronger memory, and greater empathy and understanding are all potential advantages of being blind. However, it’s important to recognize that every individual experience is different, and that being blind is not something that people choose or necessarily desire. It’s important to support and empower blind individuals to overcome the challenges they face and to live fulfilling and independent lives.

Jonathan: Thanks for sending that in, Aaron. I’d like to know whether that was generated with GPT-3 or the newer GPT-4. I think what this illustrates is the dangers of AI, because prejudices and misconceptions can be amplified when this thing is just gathering data and, apparently, not making any judgment about the veracity of the sources that it’s using. A lot of the things that it says may actually be true for some people, particularly in, say, the essay on the cons of being blind.

If you become blind, then there’s a massive adjustment. Sometimes people who’ve been born blind or blind from very early in life forget that. We’ve talked on the show about the link between blindness and depression. It can be a significant tragedy. It is a huge upheaval for people who go through that, and I don’t think that we should underestimate that.

The problem is, of course, that nowhere does it say that even though you might experience some significant upheaval if blindness comes, it’s possible to lead a fulfilling life, it’s possible to work, to have a family, to do all the things that other people do. One of the biggest barriers we face is other people’s attitudes. You see these misconceptions, these ill-informed attitudes coming through again and again. I mean, for goodness’ sake, the idea that most blind people, and they talk about blind people in general, may have difficulty navigating around the house. It’s an absolute nonsense. We know it’s an absolute nonsense.

I’ve had situations where tradespeople have come to do work for me, and I’m just trying to go about my business while they do their work. They get all cautious, and it’s “mind the steps” and all sorts of things like that. I say to them, “How do you think I get on when you’re not here? I’ve lived in this house for 14 years,” or whatever it is, and these misconceptions persist.

Then when you get on to the pros of being blind, all those stereotypes, “Oh, you must have a much better memory. You must have greater hearing,” even though there are many blind people who have congenital conditions that have associated hearing loss linked with them. And of course, the clincher, the real kicker: that somehow being blind makes you a better, more compassionate person.

Well, I pass judgment on you on this particular one, ChatGPT, and I sentence you to 10 days of blindness Twitter drama. Mind you, the Twitter drama has gone way down, actually, now that there are fewer blind people on Twitter, and people largely seem to be behaving themselves on Mastodon at the moment. Again, I’m knocking on wood on that one. Maybe 10 days of pick-your-random-blindness-email-list drama will disabuse them of that notion.

Anyway, if you’ve got any thoughts on this and ChatGPT’s little essays, you’re very welcome to be in touch.

Google Docs tips and tricks

Imke is in touch on the subject of Google Docs and says, “In the last Bonnie bulletin that I heard, Episode 215, Bonnie was seeking tips and tricks for using Google Docs. I have been using Google Docs and Sheets for quite a few years now, since my employer introduced them, and they have been extremely useful for collaborating on documents with colleagues in different locations.”

“I find both apps to be very responsive with NVDA in Google Chrome, Firefox, and Microsoft Edge. For a while, things seemed to work better in Firefox than in Chrome, but I recently realized that Chrome seems to work slightly better again for me now. I’m also starting to get more familiar with Google Slides. Here are some tips and tricks I have gathered. One, I found the audio tutorial by Mystic Access very helpful in getting started with the right accessibility settings. It may be a bit out of date now, but I believe most of the information remains valid.”

“Two, within Docs, pressing ALT Shift T opens the Tools menu. Then pressing C opens accessibility settings. You want to make sure that both Screen Reader support and Braille support are checked, regardless of whether you are using a Braille display. There are also shortcut keys for toggling these settings. Three, pressing CTRL slash anywhere within Google Drive, Docs, Sheets, or Slides,” hooo, that was a tongue twister, “brings up a list of shortcut keys. Pressing ALT slash allows you to search the menus, similar to ALT Q in Microsoft Word.”

“Four, the options available under the separate Accessibility menu are extremely helpful for working with comments, edit suggestions, misspellings, et cetera. Sometimes, when moving to the next suggestion or accepting a suggestion, I need to switch away from and back to the Docs window by pressing ALT Tab twice in order to get the screen reader to focus correctly. However, personally, I have found the Google Docs way of working with comments more responsive and intuitive than any approach I have tried with MS Word.”

“Five, when sharing documents with collaborators, you need to pay attention to whether you would like to give them edit, view or comment permissions. These options are available in the Share With Others dialog, which you can access by pressing ALT Shift F, then S, then S again.”

“Six, when receiving a document to review or comment on from someone else, you may want to change the viewing mode so that all your edits are tracked as edit suggestions. To do this, press ALT Shift V, then M, then choose one of the available sub-menu options. I am happy to try and answer any questions or research solutions to any issues via the podcast or in direct communication.”

“Best wishes to Bonnie in her new role.” Thank you so much. That is incredibly helpful stuff, and it does take me back to when I worked with Google Docs. If you stop using a piece of software for long enough, it all gets a little bit hazy. After all that, though, it turns out that the group she’s working with has decided to work in Word after all. While Bonnie’s up for a new challenge, I think she’s also glad to be back in the tried, true, and familiar, and can just get on with doing the work.

[music]

Jonathan: I love to hear from you. If you have any comments you want to contribute to the show, drop me an email written down or with an audio attachment to Jonathan, J-O-N-A-T-H-A-N@mushroomfm.com. If you’d rather call in, use the listener line number in the United States, 864-606-6736.

Automated Voice: Mosen At Large Podcast.

[02:01:02] [END OF AUDIO]