The following is a transcript of AT Banter Podcast Episode 152 – The Return of the Google Accessibility Team
Ryan: 00:00:00 Wow.
Rob: 00:00:00 Aren’t you excited about what we’re doing today?
Ryan: 00:00:01 I am excited about what we’re doing today.
Rob: 00:00:03 Let’s frame it up. Hey Ryan-
Ryan: 00:00:04 But we get these guys on every year. This is really exciting. An exciting time of year. Are you ready? Are you ready?
Rob: 00:00:08 Let me frame it up, let me tee it up for you.
Ryan: 00:00:09 Frame it up, tee it up.
Rob: 00:00:11 Hey Ryan.
Ryan: 00:00:11 Rob.
Rob: 00:00:12 What are we doing today?
Ryan: 00:00:13 I don’t know.
Rob: 00:00:13 You suck. You suck, sir.
Voiceover: 00:00:28 This is the AT Banter podcast, a balanced and entertaining look at assistive technology, accessibility, and its importance in people’s lives. Join Rob Mineault, Ryan Fleury, and Steve Barclay as they banter with people around the world about anything and everything regarding assistive technology in the disability community. Now, on with the show.
Rob: 00:00:57 Hey, and welcome to another episode of AT Banter.
Steve: 00:01:02 Holy smokes, I’m out of practice, hold on.
Rob: 00:01:04 That’s right.
Steve: 00:01:04 Wait, wait, wait, wait, wait for it. Banter banter. Whew. Let’s see, one week of vacation, and all of a sudden, useless.
Rob: 00:01:12 It’s okay. And we didn’t have any rehearsals. My name-
Ryan: 00:01:16 Keeps himself off the phone tree. I’m not working this week.
Rob: 00:01:25 My name is Rob Mineault, and joining me today, Mr. Ryan Fleury.
Ryan: 00:01:30 I’m Ryan Fleury.
Rob: 00:01:31 And back from the sunny shores of Mexico, Mr. Steve Barclay.
Rob: 00:01:39 So, how the heck was it? How was the vacation?
Steve: 00:01:44 Cancun is amazing. It is beautiful. It is tropical. It is green. It is lush. There are critters running around all over the place. Amazing, absolutely amazing.
Rob: 00:01:58 Now, were you at a resort?
Steve: 00:02:00 Yeah, I was at an Iberostar resort in Cancun.
Rob: 00:02:03 Oh, look at you, plugging away.
Steve: 00:02:06 Well, you know what?
Rob: 00:02:07 And you get 10% off your next stay.
Steve: 00:02:09 I did reviews both on Google and TripAdvisor for them because I was so impressed with this resort. It was awesome. The folks there were just super nice, and the food was good. The services are amazing. Just a terrific resort.
Rob: 00:02:26 I’m telling you, you should email them and see if you can get 10% off your next stay, because you just gave them a huge plug, and with our listenership?
Steve: 00:02:32 I know, I know.
Rob: 00:02:32 That’s going to be a big boost in-
Steve: 00:02:33 They’ll be flocking to the shores of Cancun, no question.
Ryan: 00:02:37 Or ask them for a referral fee.
Steve: 00:02:42 I saw a news article the other day about a hotel. I can’t remember where it was. I think it was in the Philippines, but they basically put out an announcement that they will no longer be providing any special opportunities to social media influencers. Because I guess this is a big scam now, right? All of these people contact the hotel and say hey, we’re a social media influencer. You should put us up for free so that we can tell people all about your resort. And they’ve had so many people try and pull this on them now that they’re just making it a flat-out policy, no social media influencers.
Rob: 00:03:16 Just ridiculous.
Steve: 00:03:17 Go away, people.
Rob: 00:03:20 Ridiculous. Anything else exciting going on, boys? As Ryan yawns.
Ryan: 00:03:26 Exciting going on, no, not at all.
Rob: 00:03:28 Oh, really?
Ryan: 00:03:29 Nope.
Rob: 00:03:30 Not even what we’re doing today?
Ryan: 00:03:32 Today we are talking with three members from the Google Accessibility team who just got back from Google IO.
Steve: 00:03:38 So just for clarity here, we’ve got Patrick Clary, who’s a Product Manager on the Accessibility team.
Rob: 00:03:43 Whew!
Steve: 00:03:43 We’ve got Victor Tsaran, who’s a Technical Program Manager on the Accessibility team, and I hope I said his last name right.
Ryan: 00:03:49 Victor!
Rob: 00:03:50 Yeah, you-
Steve: 00:03:51 And we’ve got Brian Kemler, Product Manager on the Android Accessibility team.
Rob: 00:03:55 Yup. All three.
Ryan: 00:03:58 You bet your booties.
Rob: 00:03:59 Packed three guests-
Ryan: 00:04:01 I did indeed.
Rob: 00:04:02 Into one episode.
Ryan: 00:04:03 Yup.
Rob: 00:04:04 This is going to be epic. This is going to be bigger than Game of Thrones this week.
Steve: 00:04:10 Oh wow.
Rob: 00:04:12 There’ll be more dragon fire.
Ryan: 00:04:16 More menace.
Rob: 00:04:17 Yup. More characters doing things that don’t make any sense at all.
Steve: 00:04:22 More random violence.
Rob: 00:04:24 Yay. You guys want to talk about some news before we launch into the Google guys?
Ryan: 00:04:30 You got news?
Rob: 00:04:31 Yeah, I got some news.
Ryan: 00:04:32 All right.
Rob: 00:04:32 We always have news.
Ryan: 00:04:33 Spread the news.
Choir: 00:04:39 (singing)
Rob: 00:04:39 Hey, did you hear about this? Microsoft has patented a Braille controller for visually impaired gamers.
Ryan: 00:04:46 Really.
Rob: 00:04:47 Yeah, it’s just a patent, mind you. It’s nothing that they’ve actually announced, but the patent’s out there. Not long ago, they did the-
Ryan: 00:04:54 Adaptive controller.
Rob: 00:04:55 Of course, the adaptive controller.
Steve: 00:04:57 Right.
Rob: 00:04:58 This one looks to be specifically for the visually impaired, and it looks very cool. The patent itself shows a standard Xbox controller, but this particular controller has a back touch pad that raises Braille characters depending on the feedback from the game. So you might get in-game text that would be hard or impossible for somebody to read on-screen, and that would get sent to these Braille cells, and you’d get refreshable Braille right there to match the text.
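To make the refreshable-cell idea a little more concrete, here is a minimal Kotlin sketch of the kind of text-to-dot-pattern mapping such a controller would need. Nothing here comes from Microsoft's patent; the partial alphabet and the function names are purely illustrative, using the standard six-dot patterns for a handful of English letters.

```kotlin
// Illustrative sketch only: mapping a few characters of in-game text to
// six-dot braille cell patterns, the kind of data a refreshable cell array
// would need. Dots are numbered 1-3 down the left column and 4-6 down the
// right column. Partial alphabet, for brevity.
val brailleDots: Map<Char, Set<Int>> = mapOf(
    'a' to setOf(1),
    'b' to setOf(1, 2),
    'c' to setOf(1, 4),
    'd' to setOf(1, 4, 5),
    'e' to setOf(1, 5),
    'l' to setOf(1, 2, 3),
    'o' to setOf(1, 3, 5),
    'r' to setOf(1, 2, 3, 5),
    ' ' to emptySet()          // blank cell between words
)

/** Convert a snippet of game text into one dot pattern per braille cell. */
fun toBrailleCells(text: String): List<Set<Int>> =
    text.lowercase().map { brailleDots[it] ?: emptySet() }

fun main() {
    // e.g. "reload" -> one dot pattern per refreshable cell
    toBrailleCells("reload").forEachIndexed { cell, dots ->
        println("cell $cell raises dots $dots")
    }
}
```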
Steve: 00:05:38 Very interesting.
Rob: 00:05:40 Supposedly it would work with chat, as well, so you could actually communicate that way, although I don’t know why you wouldn’t just voice chat. I mean, that’s what everyone does. They scream insults about their mothers through their microphones, so I don’t know really why you’d have to Braille that to somebody. You suck!
Steve: 00:05:58 Well, it would be for somebody deaf-blind, probably.
Rob: 00:06:01 Well, there you go.
Rob: 00:06:03 So the controller is also a little bit different because it has paddles on the back, similar to the Xbox Elite controller, but there are six of them. So that’s basically to help with accessibility. That keeps the player’s hands on the back of the controller near the Braille cells, so that you wouldn’t be constantly going from the buttons to the Braille cells and stuff. All your fingers and stuff would already be there on the back of the controller. It looks-
Steve: 00:06:31 I wonder if you could input text through them as well, to-
Rob: 00:06:34 Yeah, I don’t know.
Steve: 00:06:35 So when you’re getting owned by some 12-year-old and they’re cussing you out, and you’re getting all their cussing in Braille, you can cuss back at them.
Rob: 00:06:43 That’s right. Well, it leads me to think that you could also, with that, build in some sort of, I don’t know, almost like a haptic? It wouldn’t be a haptic system, but some sort of a system where, if you were steering your character or whatever, and you hit a wall, could you not send that signal to some of the Braille cells to raise, to indicate that you’ve hit a wall? There might be all kinds of ways that they could perhaps utilize that in a way that would let somebody who’s blind actually navigate in the game.
Rob: 00:07:22 It has all kinds of potential. That’s pretty cool. Again, this is nothing that’s been announced. Microsoft hasn’t even said that they’re going to be releasing it anytime soon, but they’re looking at it. So once again, kudos to Microsoft Gaming for another really innovative, and potentially incredibly powerful, controller for people who like to game even though they’re visually impaired.
Ryan: 00:07:52 Okay, but wait wait wait. Let me ask you this, though.
Rob: 00:07:55 Okay.
Ryan: 00:07:55 As you guys who have gamed. I’m totally blind.
Rob: 00:07:59 Yup.
Ryan: 00:07:59 If I had an Xbox here, and I threw a game in, how do I even know where the “Load Game” is or the “Start Game”? Are all games going to have the Narrator speech available? I know Narrator works with some of the games, but these menu systems, are they going to be read aloud?
Rob: 00:08:15 I don’t know.
Steve: 00:08:16 I can give you an unqualified no idea. I’m not a console gamer. I’m strictly a PC gamer. I’ve played a handful of times on consoles, but I don’t have the skill set for it, so I just stick to my PC gaming.
Rob: 00:08:31 But on all the controllers, there’s always a Start button, right? And that generally always starts the game, for example. And the X button, the button at the bottom part of the diamond configuration of the buttons, is always move forward through a menu. The circle, which is to the right, is always back out. So there is a logic to the buttons and everything, so in that sense, it wouldn’t be hard to plug a game in and just start it up, because Start always starts the game and always pauses the game, while the X button always advances through a menu. Yeah, I don’t know. Who knows? We could get to a point where there’s an on-board screen reader that works with these games.
Ryan: 00:09:27 Well, I do know that Narrator is reading some of the menus, I believe, off the Xbox system, so I just don’t know if it works with the game menus.
Rob: 00:09:35 I don’t know. Microsoft is doing pretty good.
Ryan: 00:09:37 Yup. For sure.
Rob: 00:09:38 Pretty good lately.
Steve: 00:09:39 Yeah, they’re definitely making an effort, and that’s nice to see. It’s nice to see.
Rob: 00:09:44 It’s a good question.
Ryan: 00:09:45 And for our guest in June who is all about gaming.
Rob: 00:09:49 Oh, okay, you’re plugging future shows again.
Rob: 00:09:54 Well, we just talked to Amy Cavanaugh last week, and she was saying how she enjoyed the odd game even though she had to sit really close to the television and stuff, and she could only game for maybe an hour at a time because of her vision. But yeah, everybody likes to game these days. There’s a lot. There’s something for everybody out there.
Rob: 00:10:15 Yeah, we should look that up, Ryan. I want to play Grand Theft Auto V with you online. That’d be fun.
Ryan: 00:10:19 Yup. Alrighty.
Rob: 00:10:22 But hey, speaking of Microsoft, let’s continue on talking about Microsoft in not so favorable of a manner, in my humble opinion.
Steve: 00:10:30 Uh-oh. Uh-oh.
Rob: 00:10:32 Well, no, I shouldn’t say that. This story is positive, but it’s just, Windows updates.
Ryan: 00:10:39 Oh, are you scared?
Rob: 00:10:40 Windows updates.
Ryan: 00:10:41 It’s coming. Any day. Any day.
Rob: 00:10:43 Windows updates have been borked for what, four years? Ever since Windows 10 came out, and they just decided, you know what? Everybody needs to be on Windows 10, and they just started pushing updates without any sort of input from the user. The good news is, as of the new update that’s coming in May, all of that goes away, and users will now have more control over additional Windows updates, whether or not they want to install them, and when they want to install them. It’s giving users that functionality again.
Rob: 00:11:23 So one of the biggest features of the upcoming Windows 10 May 2019 Update is the promised safety of the Windows update process itself. Microsoft has assured users that they will no longer be surprised with automatic installation of new feature updates, and that they will be given more control over the entire process. The company has apparently started to deliver this much-needed feature to devices running Windows 10 version 1903, with some users spotting a new “Additional Updates Available” section. The May 2019 Update is currently in the Slow and Release Preview rings, but anyone can install this update before the public release.
Rob: 00:12:04 Beginning with the Windows 10 May 2019 Update, users will be in more control of initiating the Feature OS Update, Microsoft said back in April. While it was expected that this new Download and Install Now button would only appear for Feature Updates, currently it is showing up for the cumulative updates as well. The addition of a new button should hopefully rid Windows 10 users of surprise upgrades that happened when they just clicked on the Check for Updates button to see if there were any new updates available, not always with an intention to actually install them.
Rob: 00:12:40 Okay, so I see the problem. I see what they’re doing. Okay. Generally, if you were to go, oh, I’m going to check to see if there are any security updates or anything for … So you’d click on the Check for Updates. It would go. It would check for updates, and then it would just automatically install it.
Ryan: 00:12:56 It used to. It used to do that, yeah.
Rob: 00:12:58 So now what they’re saying what it will do is it will go, it will check for updates, and then it’ll say hey, there’s updates available. Would you like to install them? That’s what I got out of that.
Ryan: 00:13:10 Yes and no. Again, I think with the Feature Updates it’s going to show you what’s available, cumulative updates, security updates, and feature updates, and it sounds like you’re going to have the choice to download and install now, but I’d be willing to bet that you’re still going to have a limited time before that update gets pushed to you.
Rob: 00:13:25 Maybe. You know what, you may be right.
Ryan: 00:13:26 Because Windows 10 is a service now. It’s not an optional, install-the-updates-when-you’re-ready thing anymore.
Rob: 00:13:31 This is so aggravating.
Ryan: 00:13:32 I know. And they’re doing this twice a year.
Rob: 00:13:33 Just go back to the way you did it. It was fine. It was fine.
Ryan: 00:13:40 You have no problems now. You took out your wifi card.
Rob: 00:13:47 Yeah, but … No, but just think of that.
Ryan: 00:13:47 Oh, twice a year is way too much, for starters.
Rob: 00:13:48 It’s way too much. It’s a real … I would be so frustrated if I had any sort of assistive tech on my computer that, every time there’s one of these updates, there’s a chance, a good chance, that it’s going to break whatever software or hardware you’re using.
Ryan: 00:14:06 Well, that’s not even assistive technology. In the October update that came out last year, people started installing it, and it started wiping out their files.
Rob: 00:14:14 Oh, I know.
Ryan: 00:14:15 And their documents and downloads or whatever it was. Two days later, Microsoft retracted that update, and then re-released it a week and a half later, right?
Rob: 00:14:22 But that’s a whole nother issue. That’s just-
Ryan: 00:14:23 Yeah, that’s got nothing to do with AT.
Rob: 00:14:23 That’s just-
Ryan: 00:14:26 That’s the quality of Microsoft.
Rob: 00:14:27 How about you release some updates that don’t break stuff? How about you take more time to actually quality check the updates that are coming out?
Ryan: 00:14:37 And they should. They’ve got, like you read, they’ve got the Slow ring, the Release Preview ring, a Technical Release ring or something. They’ve got all these beta test rings, with millions of people in there using them. They just need to start paying attention to what the bug reports are, and not releasing these updates when they’re not ready.
Rob: 00:14:54 Honestly, it’s just baffling to me. They had it down to a science. It was perfect. I remember back in the day, Windows 7. Rock solid. Windows 7 was rock solid. You updated it. Everything worked. You never heard stories of, oh hey, you may not want to install that Windows update, because it’s going to break a bunch of stuff. You never heard of that.
Ryan: 00:15:19 Microsoft’s got until the end of the year, now, because as of next year, January 2020, Windows 7 is no longer supported.
Rob: 00:15:27 Well yeah.
Ryan: 00:15:28 So they’ve got until the end of the year to sort it out.
Steve: 00:15:32 I like the way it works now.
Rob: 00:15:33 No, really?
Steve: 00:15:35 Yup.
Rob: 00:15:36 Really.
Steve: 00:15:37 On every computer that I’ve got, it’s been flawless. It’s just installed, don’t worry about it, just reboot the computer and away you go.
Rob: 00:15:43 Yeah, but you do a lot of computer configuration in terms of AT stuff, though. And you’ve talked to a lot of people who are scrambling because now their ZoomText or something doesn’t work because Windows updated on them. That’s got to be a frustration there, in that sense.
Steve: 00:16:04 The developers have an opportunity to get out ahead of this. The problem is that some of the developers haven’t done that. So there’s pre-release versions of this that they get to play with, to make sure that there aren’t problems, and honestly that’s their job, is to do it. And if they’re not doing it, then the consumers really need to get on their case about it, because it is mission-critical software for a lot of people.
Rob: 00:16:33 Yeah.
Steve: 00:16:35 But I don’t view that as Microsoft’s problem.
Rob: 00:16:40 No, no.
Steve: 00:16:40 Microsoft has to continue to deliver a solid product, a secure product, make sure that everybody’s safety is covered. That’s their job. Now I think what they’re talking about now is features that they’re adding.
Rob: 00:16:55 Right, versus security updates.
Steve: 00:16:59 Versus security updates and such. So if they add a new feature, and you don’t want it for whatever reason, you can skip it. And this could actually be a result of some of the lawsuits in Europe around them, because Microsoft has taken a lot of heat for doing things like forcing people to have Edge as a browser, for example. The Europeans took them to court to basically say no, you can’t force people into taking your browser. You’ve got to give them the choice as to whether they want to use your browser or not. And I think this may be just them creating a larger policy that’s more global than they’ve had in the past.
Ryan: 00:17:45 Well, and these updates that come out twice a year now are actually called feature updates. Yes, they’re full-blown versions of Windows again, but it’s not necessarily just features like Edge or features like a new media player or whatever. They’re actually called the feature updates because there are some new features that come out, but you are actually getting a brand-new version of Windows 10 twice a year.
Rob: 00:18:06 Yes, so I get it. I get what they’re saying. They’re making that distinction between, okay, we’ll make the feature updates sort of “optional.” We won’t make them mandatory, because we want to make sure that people are getting the actual security updates, which are the important updates.
Ryan: 00:18:23 Exactly, yeah.
Rob: 00:18:25 Yeah, I get it. Okay. Well-
Ryan: 00:18:25 No more auto-install, which is what we needed.
Rob: 00:18:29 Just give some control back to the users. That’s all I say.
Steve: 00:18:31 You’re just mad because your crappy computer kept crapping out on us.
Rob: 00:18:34 Well it certainly didn’t help. Did not help. Every time my computer went to upgrade Windows, it hung. It was terrible. The update, I would have to roll it back, and-
Ryan: 00:18:47 But keep in mind this article is talking about this May update. This could be after we get this May update. This May update’s still going to get pushed out to everybody, and still automatically install.
Rob: 00:18:57 Yeah, right.
Ryan: 00:18:58 Right? So after this May update comes out, we’ll have the option to turn those on.
Rob: 00:19:03 Yeah, I guess. I don’t know.
Ryan: 00:19:04 Let’s see how this one goes.
Rob: 00:19:05 Fine, all right, well, you know what? It’s a step in the right direction.
Ryan: 00:19:08 Yup, for sure.
Rob: 00:19:08 So I won’t be too much of a curmudgeon about this, but-
Ryan: 00:19:12 Have you checked out the new Edge?
Rob: 00:19:14 No.
Ryan: 00:19:14 No? You should check it out. It’s based off Chromium now, [crosstalk 00:19:18].
Rob: 00:19:17 You know, the browser wars are very interesting. Firefox annoys the crap out of me now. I used to love Firefox. When it first came out, I was like this is the greatest thing, and now I hate that browser, and it’s all about Chrome for me.
Ryan: 00:19:31 Well and see, I use Chrome myself, but half the time I bring up Gmail in Chrome, and the page doesn’t load, so I have to Alt+Tab and then Alt+Tab back to it, so I’ve tried Edge. You got to almost have three different browsers now to be able to do what you want to do.
Rob: 00:19:48 Yeah, sort of, yeah.
Steve: 00:19:49 I flip back and forth between Edge and Chrome.
Ryan: 00:19:52 Yeah? You should try the new Chredge. It’s still in-
Steve: 00:19:55 The new Chredge?
Ryan: 00:19:56 Yeah. It’s in beta or public preview or whatever. It’s based off Chromium.
Rob: 00:19:59 Is that what it’s called? Chredge?
Ryan: 00:20:00 No, that’s what Paul Thurrott and Mary Jo Foley from All About Microsoft on Windows Weekly call it. Chredge, because it’s based off Chromium. So Chromium and Edge, Chredge?
Steve: 00:20:09 Okay.
Ryan: 00:20:09 Yeah.
Rob: 00:20:10 You’re such a nerd, Ryan.
Ryan: 00:20:11 I am. Yup. Got to stay up to date on this stuff.
Rob: 00:20:11 Nerd, nerd.
Ryan: 00:20:15 Mm-hmm (affirmative).
Rob: 00:20:17 Hey Steve, why don’t you tell the fine folks about Canadian Assistive Technology?
Steve: 00:20:21 Well, Canadian Assistive Technology is a Canadian-based distributor of guess what? Assistive technology.
Rob: 00:20:28 I would not have guessed that.
Steve: 00:20:30 Really? Oh, I got to work something better into the name, then. And we do all kinds of low-vision and blindness aids, as well as all kinds of physical access aids and accessible furniture, you name it. Visit our website at http://www.canasstech.com.
Rob: 00:20:50 Rick, let me ask you about this. Chaos Technical Services.
Rick: 00:20:54 Chaos Technical Services.
Rob: 00:20:56 Don’t sound so excited about it.
Rick: 00:20:57 Whew!
Rob: 00:21:01 Speaking of repairs.
Rick: 00:21:02 We are the sister company to CanAssTech. We do the repairs on low-vision devices, reading machines for libraries, Braille printers, and pretty well anything in between. We can be found at http://www.chaostechnicalservices.com.
Rob: 00:21:23 All right, shall we bring on the Google boys?
Ryan: 00:21:25 Bring on the Googlers. Hi, guys, are you there?
Patrick: 00:21:29 Hi, guys, how’s it going?
Ryan: 00:21:30 Good, good.
Ryan: 00:21:31 All right, so I guess if we’re all ready, we’ll start to introduce-
Rob: 00:21:34 Let’s do some intros.
Ryan: 00:21:35 Yeah, exactly.
Brian: 00:21:36 Hi, my name is Brian Kemler. I’m a Product Manager, and I work on Android accessibility.
Victor: 00:21:40 I’m Victor Tsaran. I’m a Technical Program Manager, also on Android accessibility.
Patrick: 00:21:47 Yeah, hi folks, Patrick Clary. I’m a Product Manager on Google AI and accessibility.
Rob: 00:21:53 Great, well, we want to welcome you guys back to AT Banter this year. I see that you survived another Google IO, so we’re looking forward to digging in and finding out what was announced.
Brian: 00:22:02 Yeah, I’m absolutely happy to kick us off. So we had a super exciting year, perhaps even the best year ever at Google IO, at least for accessibility. We had tremendous visibility into IO as a whole, and I think we’ve leaped out of the niche, accessibility-settings-type features into the mainstream, and proved that out with the new feature that we launched called Live Caption, which is a feature that applies captions to any audio on the Android device. Sundar spoke about that in his keynote, and the press really got and understood it. So as they spoke about the biggest or most important features within Android, we got a lot of uptake and a lot of coverage on that. That was super, super exciting, so that was one of the announcements we had.
Brian: 00:23:00 We also announced a bunch of new updates to TalkBack and heard from our friends Patrick and so forth from our central accessibility team, so it was really a great and exciting IO, and I think my summary would simply be it’s the year that accessibility became mainstream, and that to me was really an honor to be part of that.
Steve: 00:23:29 Yeah, we talk a lot on our show about Universal Design and the need for these sorts of solutions. It’s just really gratifying to us, after all our years in the industry, to see this stuff going mainstream, to see it just part of devices that you get off the shelf.
Brian: 00:23:50 Yeah, thank you, and I talked about that. I gave two talks. One was the one we give as a tradition, what’s new in accessibility, and one of the things that I talked about, speaking to the point of Universal Design, is that our mission is “universally accessible”: make the world’s information universally accessible and useful. And so one of the strategies that we took this year from a product standpoint was not bolting or hammering accessibility onto something that was built for another set of users first, but rather building purpose-built accessibility apps with accessibility use cases in mind first and foremost, and actually as the primary use cases, and I think that’s proven out in three of the new features that we have.
Brian: 00:24:44 So Live Transcribe, which is a captioning app for deaf and hard-of-hearing people that we launched in February; Live Caption, which is the system overlay to caption any audio on your Android device that we announced at Google IO and that will launch in the fall; as well as Sound Amplifier, which is a basic audio augmentation app that helps you improve or clarify the sound around you. So this app is for people who maybe don’t have, want to experiment with, or perhaps can’t even afford a hearing aid. So it’s not meant to be a hearing aid replacement, but it is meant to give anybody a little bit of a boost to the audio around them so it’s easier to follow conversations or television or movies and whatnot.
Brian: 00:25:38 The undergirding product philosophy behind each one of these three applications is building for accessibility first.
Steve: 00:25:47 Well, I don’t know about you guys, but I think that deserves three cowbells.
Brian: 00:25:55 That sounds great.
Rob: 00:25:58 That is very good. I don’t think anyone’s ever gotten more than two cowbells. You guys should, that is some high praise indeed from Steve.
Brian: 00:26:06 I’ll take that as an honor and as an indicator that we need to double down and continue to build on what we’ve done, but also take that philosophy, not just into the deaf and hard-of-hearing user space but also into low vision and screen reader users, and there’s a whole series of other users who have accessibility needs, and we want to bring that philosophy, that design philosophy and that focus on those users first directly to all those other spaces.
Rob: 00:26:40 From what I understand, you also talked a little bit about Google Lens, and the text-to-speech feature. Could somebody speak to that a little bit?
Victor: 00:26:53 Google Lens launched this feature where you’re prompted to take a picture of any object, and then it’ll try to give you some idea of what it is. But I think their primary focus right now is on text, scanning text. That’s basically pretty much the summary of it.
Rob: 00:27:14 That alone, there’s some really wide-ranging implications to that. There’s educational, for learning disabilities, that’s going to be an amazing feature. Same thing with translation. Any sort of communication, vision, there’s a lot there to unpack that can really be a benefit to users.
Patrick: 00:27:40 Yeah, I totally agree. It’s very exciting, and I’m very excited to see, too, how this technology in general is migrating. When we think about accessibility, a lot of times we think about how to make our devices accessible and our apps accessible, and that’s super important, but what’s also really exciting is seeing how we can use this technology to make real-world experiences more accessible and more meaningful for these users. Many times, AI plays a strong role there, and so I think with recent advancements that you see in AI, we’ll be unlocking a lot more of these key experiences, which are really interesting and exciting, and inspirational, too.
Victor: 00:28:39 And also to piggyback on what Patrick just said, there’s that whole idea of multi-sensory UIs, user interfaces. They’re great for most people, really, not just for blind or physically disabled people taken individually, but really for most users, because text prompts might be great for somebody who has difficulty seeing the screen, but they also might be great for somebody who is not looking at the screen. We’ve talked about this pretty much every year we’ve done a podcast with you guys: we’re trying to prove this and see how it actually scales in real life. And we’re seeing that people are responding really well. Haptic feedback is great for everybody, not just for blind people. Text prompting is great for everybody, and so on and so forth.
Steve: 00:29:29 There’s been all kinds of studies done around learning disabilities that have shown that multi-sensory reinforcement really helps people learn much better.
Rob: 00:29:43 What’s exciting, I think to me, is that we seem to be moving towards this world where potentially we could just have these devices in our pockets that are so powerful that they are this all-in-one communication aid, almost, that can literally break down almost any sort of sensory barriers that exist now. It seems to be within arm’s reach right now.
Victor: 00:30:18 We just need batteries that can last 20 times as long.
Steve: 00:30:21 Exactly.
Victor: 00:30:22 You are right. In terms of features, I’d even say we’re there already. A lot of the things we can do today with your phone, just a single phone, were not even possible five years ago. And so it’s not that we’re close. I wouldn’t even say we’re just so, so close.
Brian: 00:30:42 Another way I like to frame it when I talk about it is, traditionally the design philosophy behind accessibility has been focused toward making things that are on the device, be it the hardware or the software, whether it’s the OS or the apps themselves, more accessible. But that phone in your pocket is now much more like a supercomputer, and we have the ability to shrink down what used to take a data center or an array or a Borg cell in a data center, and put it on that phone in the form of a processing unit, and with that we could use the capabilities of the phone’s sensor and its camera and so forth, and its microphone, to really provide assistance not just to what’s on the device but also what’s in the real world. I like to talk about what we do on the device, but also what we can do in the real world, and there’s just such an opportunity now to not just make the device more accessible, but to make the world and the planet more accessible.
Steve: 00:31:55 I tell you, just as a traveler, something like Translate is ridiculously useful, and I just got back from Mexico last week, and I was using Lens to capture Spanish text and then translating it into English in a museum, because they only had Spanish on some of the plaques in the museum. That was unthinkable years ago, and it’s so handy now.
Patrick: 00:32:29 I want to comment on that, actually. I really like that example, because while that example you gave might be a case of where you needed a translation because you didn’t speak that language, I think it’s really interesting to think of this also as an accessibility need. That text wasn’t accessible to you because it was written in another language, in the same way that text might not be accessible to someone with a vision impairment. And so the way that technology can address all of these different accessibility needs when it comes to communication, as you said, is very exciting. To see, okay, so that’s what we can do now, and then where can we take it? Where can this go next? For people like us who are working on accessibility at a company, at Google, when we see these advancements in this type of technology, it’s really inspirational to think about where we can take this, and what other situations are there where something like this might be useful.
Rob: 00:33:47 I was thinking about this while I was watching the Live Transcribe video that you guys released. In college, I worked at a warehouse, and I worked with a couple of guys who were deaf, and I remember in communicating with them, what we basically would have to do is they would write down on a piece of paper what they wanted to say, and I’d have to write back. And to think that in the space of my lifetime, now here we are at a point where this little device in a pocket can just absorb speech and spit it out as text for them to see, all on the fly, live. It’s incredible.
Steve: 00:34:33 Yeah, think about it. How many years ago was it that we were selling the UbiDuo as a means of communication for deaf people? A dedicated device that you had to type into, and it was expensive. It was five or six thousand dollars, if I recall, and single-purpose device. This has really been a revolution for the deaf community.
Rob: 00:35:01 So sort of a technical question, and mainly just out of curiosity, but we are always hearing these terms, machine learning, AI, and I think a lot of people outside the technical community will hear those terms and have maybe a little bit of a perception of what it is and what it means. Is that the real core of all this, in terms of the linchpin that’s really driving a lot of this technology forward?
Brian: 00:35:31 I think it is, and what I’d say is I hear this all the time, and even when people are not doing machine learning or artificial intelligence, everybody wants to jump onto this. It’s become marketing slang, but I think at least in this context, when we say it, we mean it. And what it means to us is that, to give the examples of Live Transcribe and Live Caption, underlying our ability to recognize speech and turn it into text is a massive machine learning and artificial intelligence infrastructure, where we take massive sets of data and then run algorithms on that data over and over and over again to be able to infer speech, infer nuance in speech, and be able to do things like add punctuation, add capitalization.
Brian: 00:36:31 I can say something into Live Caption or Live Transcribe like, “I bought a new jersey in New Jersey,” and if I say that, it understands that the first “new jersey” is simply an adjective modifying a noun, and the second “New Jersey” needs to be capitalized because it’s a proper noun, the state of New Jersey.
Brian: 00:36:50 So that, to me, is really what I call a magical capability, and as cool and awesome as it is for me, I’m a hearing person. I’m a seeing person. It’s neat. It’s awesome. It helps me if I’m at a bar and it’s loud, like if I’m in a situationally-deaf kind of scenario, but for somebody who actually has hearing loss, who actually needs captions and who actually has to rely on captions, it’s really really really powerful.
Ryan: 00:37:23 And now with all these apps, is it correct, or am I correct in thinking that all the processing is done on the device? You don’t actually have to have a data connection to use these apps?
Brian: 00:37:34 It depends. We’re not in a world where on-device AI and ML is as mature as it is in the cloud, and that simply makes sense, because just a few years ago you couldn’t run any of this on a small device. You needed a data center. So it’s migrating, and it’s moving over there, and it’s going to have a lot of benefits, like offline access, like very fast response time, low latency, and quite honestly a better story for security and privacy. The data doesn’t have to go anywhere. That’s awesome.
Brian: 00:38:08 Now, as we get to that world, some of these apps still leverage the cloud, where the cloud can do things that are better and that provide better functionality for users. Some of them use on-device models. So to give you an example, Live Transcribe we started about two years ago, and so we’re using cloud models. Live Caption we started nine months ago. We birthed it at IO, and that uses an on-device model. So we expect the on-device models to get better and better and better over time.
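Google's own pipeline isn't public, but as a rough sketch of the on-device-versus-cloud distinction Brian describes, a third-party Android app can at least ask the platform speech recognizer to prefer a local model. This assumes API 23+, the RECORD_AUDIO permission, and an installed offline language pack; the wrapper class name is made up for the example, while the RecognizerIntent extras and listener callbacks are real Android APIs.

```kotlin
import android.content.Context
import android.content.Intent
import android.os.Bundle
import android.speech.RecognitionListener
import android.speech.RecognizerIntent
import android.speech.SpeechRecognizer

// Hypothetical helper, not how Live Transcribe or Live Caption are built:
// it just shows the public Android API for asking the platform recognizer
// to prefer an on-device model over the cloud one.
// Requires the RECORD_AUDIO permission and should be used on the main thread.
class OfflinePreferredTranscriber(context: Context) {

    private val recognizer = SpeechRecognizer.createSpeechRecognizer(context)

    fun start(onText: (String) -> Unit) {
        val intent = Intent(RecognizerIntent.ACTION_RECOGNIZE_SPEECH).apply {
            putExtra(
                RecognizerIntent.EXTRA_LANGUAGE_MODEL,
                RecognizerIntent.LANGUAGE_MODEL_FREE_FORM
            )
            // API 23+: ask for on-device recognition when a local model exists;
            // the platform may still fall back to the cloud.
            putExtra(RecognizerIntent.EXTRA_PREFER_OFFLINE, true)
            putExtra(RecognizerIntent.EXTRA_PARTIAL_RESULTS, true)
        }
        recognizer.setRecognitionListener(object : RecognitionListener {
            override fun onResults(results: Bundle?) {
                results?.getStringArrayList(SpeechRecognizer.RESULTS_RECOGNITION)
                    ?.firstOrNull()?.let(onText)
            }
            override fun onPartialResults(partialResults: Bundle?) {}
            override fun onError(error: Int) {}
            override fun onReadyForSpeech(params: Bundle?) {}
            override fun onBeginningOfSpeech() {}
            override fun onEndOfSpeech() {}
            override fun onRmsChanged(rmsdB: Float) {}
            override fun onBufferReceived(buffer: ByteArray?) {}
            override fun onEvent(eventType: Int, params: Bundle?) {}
        })
        recognizer.startListening(intent)
    }

    fun stop() = recognizer.destroy()
}
```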
Steve: 00:38:42 Right. You know, this idea of voice recognition being housed on a cellphone really does melt my brain, because I’ve been working with voice recognition since pre-Dragon version one, and that was daughter cards that you had to stick into your computer, specialized processing cards. It was discrete speech, so you had to put a pause after every word. It was terribly inaccurate. It was very slow, but necessary for a lot of people. How far it’s come in the last few years is just staggering to me.
Rob: 00:39:24 Well, and the really important component of that is not only is it so much more powerful, but it’s also baked into this mainstream device.
Steve: 00:39:35 Yeah, it’s included. It’s just there.
Rob: 00:39:37 It’s no longer a specialized piece of technology that’s only good for one thing. And that’s the exciting part about it. Finally I feel like we’re actually, in terms of digital universal design, we’re getting real close.
Steve: 00:39:56 Yup. As it stands right now, if you’ve got Live Transcribe on, is there any way to indicate in Live Transcribe what text is being transcribed by which speaker? Do you have them color coded?
Brian: 00:40:12 Yeah, so the sort of technical jargon or parlance for that feature is called diarization. I find that a very technical term, so a simpler one is speaker separation, or speaker identification. There are some nuances there. That is a very difficult problem to solve, because we don’t know in advance that Brian’s voice is Brian’s, Victor’s is Victor’s, Patrick’s is Patrick’s, Ryan’s is Ryan’s, and so on, and determining that without hardware today is possible, but the algorithms that I’ve seen are not fully there and are not fully reliable. It is our absolute number one, not only requested but desired, feature. Sometimes we get requested features, and it’s like oh, I kind of don’t want to do that. But in this case, I want to do that more than I want to do anything else for Live Transcribe, and for Live Caption for that matter, and I can tell you we’re working on it. We’re working on it really hard, but it’s a hard problem to solve.
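For readers curious what "speaker separation" looks like in code, here is a toy Kotlin illustration of how the problem is commonly framed: each chunk of audio is reduced to a speaker embedding by some upstream model (not shown), and chunks are clustered online as they arrive. This is not Google's algorithm; the class name, similarity threshold, and embeddings are all assumptions made for the sketch.

```kotlin
import kotlin.math.sqrt

// Toy illustration of online diarization: assign each audio chunk's speaker
// embedding to the closest known speaker, or open a new speaker cluster if
// nothing is similar enough. The 0.75 threshold is arbitrary.
class OnlineDiarizer(private val similarityThreshold: Double = 0.75) {

    private val speakerCentroids = mutableListOf<DoubleArray>()

    /** Returns a label ("Speaker 1", "Speaker 2", ...) for one chunk's embedding. */
    fun assign(embedding: DoubleArray): String {
        val best = speakerCentroids.withIndex()
            .maxByOrNull { (_, centroid) -> cosine(centroid, embedding) }
        return if (best != null && cosine(best.value, embedding) >= similarityThreshold) {
            "Speaker ${best.index + 1}"          // close enough to a known voice
        } else {
            speakerCentroids.add(embedding)       // new voice: open a new cluster
            "Speaker ${speakerCentroids.size}"
        }
    }

    private fun cosine(a: DoubleArray, b: DoubleArray): Double {
        val dot = a.zip(b) { x, y -> x * y }.sum()
        val na = sqrt(a.sumOf { it * it })
        val nb = sqrt(b.sumOf { it * it })
        return if (na == 0.0 || nb == 0.0) 0.0 else dot / (na * nb)
    }
}
```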
Steve: 00:41:21 That’s something that I’ve been asked for by people for years, because they’ve wanted to use voice recognition to transcribe meeting notes, for example, and of course it just wasn’t possible, although we did have a competitor who kept telling people that it was possible, and it wasn’t. But yeah, when it hits the streets, that’s going to be a super popular feature, I think.
Brian: 00:41:49 Yeah, I think it’s going to be really game-changing. I’ll give you guys a tiny scoop. We’re going to have a nice announcement. It’s not diarization, but we’ll have some new feature and some exciting announcements on Thursday, Global Accessibility Awareness Day.
Rob: 00:42:11 Oh, excellent.
Brian: 00:42:12 Keep an eye out for our blog, just a little plug and a little scoop for you. But yeah, that would be just so transformative. You think about what we could call an ears-free experience, when you can’t hear or you have a lot of hearing loss, so it makes it difficult. It’s not just speech. Speech is only one component of the ambient audio. There’s I’m knocking, I can make noises, there’s all of this other stuff and all of this other context, like who is speaking, what they said, and so the vision I think eventually will be that we can represent most of that the best we can in a fashion that deaf and hard-of-hearing users can understand and consume that.
Ryan: 00:43:11 Neat. Well, we’ll definitely keep our ear to the ground for that one.
Rob: 00:43:14 Yup, I’m excited about Thursday.
Ryan: 00:43:17 That’ll be a lot of announcements on Thursday.
Rob: 00:43:19 Yeah, I think so. I think you’re right.
Rob: 00:43:23 One of the other projects that was mentioned was Project Euphonia. Could somebody give us a bit of a rundown of what that is?
Brian: 00:43:33 If you think about, let’s think about Live Transcribe. Live Transcribe gives us the ability to use a smartphone. Now if I cannot hear, I get that speech-to-text. When we train those models, we train them on YouTube data sets, lots of video, audio, and so forth.
Brian: 00:43:58 The challenge is there are a lot of people out there who have what we would call non-normative speech. I don’t like to say speech impediment, but that’s the kind of common vernacular. So for folks who have non-normative speech, those models aren’t going to work. So if they speak into the assistant, it’s just not going to pick it up. It’s like having a thick foreign accent.
Brian: 00:44:26 So that data is really really really sparse in our models, so it’s a really difficult technical and computer science problem to be able to train on such sparse data and such unique speech patterns. So Euphonia was envisioned, and the project manager, her name is Julie. We work together very closely, so if you ever wanted to do a follow-up interview with her, I’m happy to connect you. Julie and Euphonia’s mission has been how can we train on effectively a data set of one, one person, one accent, and this is very helpful for people who have conditions whereby they lose speech, so things like ALS, and that ability to speak degenerates over time, and those people can also lose motor and physical functions, meaning that it’s absolutely essential that they be able to interact with a device using voice.
Brian: 00:45:37 So if we use those standard models that we have in Live Transcribe and Live Caption, it’s going to fail, like if I spoke French into an English model it’s going to fail. So Euphonia was designed, and we’ve actually prototyped and proven out this concept, that we can train on a single individual, and have the algorithms understand what they’re saying. We’ve done this with one of our Googlers here who is deaf and has a very strong deaf accent. Those of us who work with him, we can understand him really well, but the algorithm wouldn’t be able to do it without this capability, without this technology. So this is really life-changing, and it’s bringing the power of AI and machine learning down to really small and individual use cases that heretofore have been just impossible to address.
Steve: 00:46:31 That’s really exciting, and for me personally, it’s exciting for two reasons. One, I have a father-in-law who has Parkinson’s, and ultimately he probably will have some degradation of his speech, but the other one is I have a birth father in Northern Ireland who has just an incredibly thick accent, and if you can make me understand him, that would be so cool, because I literally can’t talk to him on the phone because I can’t understand him.
Rob: 00:47:00 Wow.
Steve: 00:47:00 If I can’t be there in front of him looking at his face, I’ve got like zero chance of actually understanding what he’s saying.
Brian: 00:47:09 Totally understand that. I have to tell you, it’s not my product, so I don’t want to represent it incorrectly, but when it’s matured and when it can be productionized, at least theoretically, there is absolutely no reason why we would not be able to include it in Live Transcribe, and it would be useful not just for that individual person but for people who maybe can’t understand their accent. And that may be people you’re meeting for the first time, because it takes some time to get used to that non-normative speech, and especially to your point, if you’re not visually in front of that person or nearby them, it can be difficult to pick up on those cues. So we think it’s an immensely promising technology.
Rob: 00:48:09 Okay, Ryan, I’m going to let you off your leash.
Ryan: 00:48:12 I got more. I got more. Let me go. Let me go. Whew!
Rob: 00:48:13 I’m going to let you off your leash, so go.
Ryan: 00:48:14 All right, so this year there was a lot of tech on and for people with hearing impairments. Has there been any enhancements or changes to magnification, TalkBack, Switch Access, BrailleBack?
Victor: 00:48:29 Let me do a quick rundown on some of the TalkBack things. As you may know, we released TalkBack 7.3, I believe it was in March, if my memory serves me right. Time flies so fast, I can’t keep track of it. And so two big new features were announced: continuous reading and screen search. I guess for anybody who’s using the screen reader, it’s pretty obvious why they’re useful.
Victor: 00:48:55 The first feature allows you to read continuously any content on the page, be it a webpage or just any screen. What we’ve found is that it just reduces the number of swipes you have to perform to get around the content, because normally you’d swipe left and right, or have to drag your finger. In this case, you can simply launch continuous reading, and then as the content is being read you can quickly skip through things that you are not interested in, and while you’re doing that, it will continue reading, as opposed to just stopping because you decided to swipe.
Victor: 00:49:29 And the screen search, it allows you to search any screen for content that you might be interested in. I personally use it for something like, let’s say I’m in a list of apps, and I need to quickly locate an app that I’m interested in opening. So I just type in a few letters, pick from the list of matches, double tap on it, and it takes me straight to the place I need to go. And that works on websites. It works on Android native screens. It works in email programs, so it’s a very powerful productivity feature. And the fact that you don’t have to type full words obviously saves you a lot of not just swipes but a lot of typing. And while you are in the screen search, you can also use voice, because we basically pull up the stock Android keyboard, which also has voice dictation features. So if you don’t want to type, you can just simply speak what you would like to search for, and it’s going to perform the search for you.
Victor: 00:50:30 So these are two features we launched in TalkBack.
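As a hedged illustration of the mechanism behind a screen-search-style feature (not TalkBack's actual code, which is far more involved), any Android accessibility service can query the current window for nodes whose text matches the typed letters and move accessibility focus to a match, using only public APIs. The class and function names below are made up for the sketch.

```kotlin
import android.accessibilityservice.AccessibilityService
import android.view.accessibility.AccessibilityEvent
import android.view.accessibility.AccessibilityNodeInfo

// Rough sketch of a "screen search" style lookup built on public
// AccessibilityService APIs; a real service also needs a manifest entry,
// a query UI, and event handling.
class ScreenSearchService : AccessibilityService() {

    /** Focus the first on-screen node whose text or label contains [query]. */
    fun focusFirstMatch(query: String): Boolean {
        val root = rootInActiveWindow ?: return false
        val matches: List<AccessibilityNodeInfo> =
            root.findAccessibilityNodeInfosByText(query)
        val target = matches.firstOrNull() ?: return false
        // Moving accessibility focus is what makes the screen reader
        // announce the match and land the user on it.
        return target.performAction(AccessibilityNodeInfo.ACTION_ACCESSIBILITY_FOCUS)
    }

    override fun onAccessibilityEvent(event: AccessibilityEvent) {
        // A real service would react to window and content changes here.
    }

    override fun onInterrupt() {}
}
```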
Victor: 00:50:34 There haven’t been any big changes to BrailleBack, but this is going to be our focus for the next year going forward. I don’t think I’ll have much to say at this point, but hopefully a year from now when we talk to you guys, we’ll have a lot more to say.
Ryan: 00:50:49 Okay. Before we move on, Victor, I did want to ask you. I notice in the Play Store the other day that BrailleBack is still a separate app. Are there plans to move it into the Android Accessibility Suite?
Victor: 00:50:59 There are many plans. At this point, yes, BrailleBack is currently a separate app, because we would like users to have the experience they are used to, and when there are new changes, we will definitely make sure that everybody knows about them, but we do have big plans for Braille.
Ryan: 00:51:15 Great.
Victor: 00:51:16 So just stay tuned.
Ryan: 00:51:17 Perfect, thank you.
Brian: 00:51:19 Yeah, I think directionally, and we’d love your feedback on that, so if you ever want to reach out offline and tell us what you love, tell us what you hate, tell us your top feature requests, or if you want to get it from your audience and provide it to us, we’re definitely super open to that. We’re aware of the fact that basically the Braille and screen reader experiences are fragmented, right? You need two different apps, they’re separate apps, it’s not easy to jump from one to the other. We acknowledge that, and a big part of our focus in the next letter release cycle is going to be not only on the screen reader, the blind, the deaf-blind type use cases, but also on low-vision use cases. So I’m super super super excited about bringing the kind of innovation you’ve seen in the deaf and hard-of-hearing space into blind, low-vision, deaf-blind.
Patrick: 00:52:22 And I’ll pick up with the services we have for users with motor impairments and dexterity impairments, which are Switch Access and Voice Access. Switch Access, we had an update where we improved basically text editing functionality. We know that editing text and composing text is something not just people with different accessibility needs, but anyone, does pretty frequently, and it’s a core thing you do on your device to just connect with someone else. So this is why we’re really interested in improving text-editing functionality for switch users, because we know that’s also a big pain point that requires a lot of scanning with switches and a lot of frustration to correct text and get something right. So some of the additions that we made are, while you’re editing text, while you have a text field open, there are new menu items that allow users to easily move the cursor, go back to previous words or the beginning of the sentence, copy and paste, and insert new things. So all of these features that you might expect for making text editing much easier and faster.
Patrick: 00:53:56 And then the other service I mentioned, Voice Access. Voice Access launched in September, and this is basically an app that users will download from the Play Store. It’s not part of the Android Accessibility Suite, and it is meant for users that have dexterity or motor impairments, or who find touching the screen to be inconvenient or difficult. It allows you to have full device control with your voice. So you basically vocalize your touch gestures by saying “scroll up,” “scroll down,” “go back,” “go home,” click on this or that, and enter text and so on. This was originally released in English, and the update to Voice Access is that now it’s available in not only English but also French, Spanish, Italian, and German. So we’re building out new language support.
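Again purely as a sketch, the core idea of vocalized gestures can be approximated with the global actions exposed to any Android accessibility service. The phrases and the helper function below are illustrative assumptions, not Voice Access's actual command grammar.

```kotlin
import android.accessibilityservice.AccessibilityService

// Hypothetical sketch of the idea behind a Voice Access style service:
// recognized phrases are mapped onto the same global actions any
// accessibility service can perform.
fun handleVoiceCommand(service: AccessibilityService, phrase: String): Boolean =
    when (phrase.trim().lowercase()) {
        "go back"       -> service.performGlobalAction(AccessibilityService.GLOBAL_ACTION_BACK)
        "go home"       -> service.performGlobalAction(AccessibilityService.GLOBAL_ACTION_HOME)
        "notifications" -> service.performGlobalAction(AccessibilityService.GLOBAL_ACTION_NOTIFICATIONS)
        "recent apps"   -> service.performGlobalAction(AccessibilityService.GLOBAL_ACTION_RECENTS)
        else            -> false   // "click <label>", "scroll down", etc. need node lookups
    }
```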
Patrick: 00:54:59 We’re also working on some new features which I can’t announce yet, but there’s some really exciting and cool stuff coming.
Rob: 00:55:08 Every time you guys come on, you have announcements you can’t announce.
Patrick: 00:55:15 Yeah, and actually one other product I wanted to mention is Lookout, which is another app that users can go download from the Play Store. Lookout was announced last year at IO, but not actually launched until March, so it just recently launched, and Lookout allows people with vision impairments to basically detect what’s around them and gain more independence during real-world situations and tasks. So I would basically open up Lookout and move my device around the room in front of me, and the camera sees a variety of objects and text and different things like that, and then my phone speaks those items back to me. As I mentioned, we launched Lookout in the U.S. on Pixel devices in March, and then the exciting news at IO was that we’re expanding support beyond Pixel to also support top-tier Samsung and LG devices. So we’re very excited to get that in the hands of more people and start getting additional feedback and see how people use it.
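Lookout's real pipeline isn't public, but the general "camera frame in, spoken description out" loop Patrick describes can be sketched with the public ML Kit text recognizer and the platform text-to-speech engine. The class name and wiring below are assumptions for illustration only, and this only handles text, not object detection.

```kotlin
import android.content.Context
import android.graphics.Bitmap
import android.speech.tts.TextToSpeech
import com.google.mlkit.vision.common.InputImage
import com.google.mlkit.vision.text.TextRecognition
import com.google.mlkit.vision.text.latin.TextRecognizerOptions

// Conceptual sketch only, not Lookout's implementation: run OCR on a camera
// frame with ML Kit and read any recognized text aloud with TextToSpeech.
class SpeakWhatYouSee(context: Context) {

    private val recognizer = TextRecognition.getClient(TextRecognizerOptions.DEFAULT_OPTIONS)
    private var ttsReady = false
    private val tts = TextToSpeech(context) { status ->
        ttsReady = status == TextToSpeech.SUCCESS
    }

    /** Run OCR on one camera frame and speak any recognized text. */
    fun describe(frame: Bitmap, rotationDegrees: Int = 0) {
        val image = InputImage.fromBitmap(frame, rotationDegrees)
        recognizer.process(image)
            .addOnSuccessListener { result ->
                val text = result.text
                if (ttsReady && text.isNotBlank()) {
                    tts.speak(text, TextToSpeech.QUEUE_ADD, null, "frame-ocr")
                }
            }
    }
}
```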
Ryan: 00:56:24 Is there anything new in magnification?
Brian: 00:56:26 We haven’t released anything new in magnification, so I’m just going to be very clear and direct about that. And as I alluded to, for this next cycle of Android, the needs of low-vision users are really really really important to us. So we’re going to be focusing a lot on that over the next year, so I think it’s going to be a really exciting time for not just people with low vision, but people with other vision conditions, such as photophobia, light sensitivity, color blindness, and so that whole category is something that we’re looking into, trying to better understand user needs, and trying to see quite frankly how we can apply machine learning and artificial intelligence and all of these underlying capabilities, the Google magic that I talk about, to improving this, not only on the device, but also for real-world experiences. So I think when we talk again at this time next year, it’s going to be another really exciting conversation.
Ryan: 00:57:28 Okay, Steve, we all need to go to IO next year, and do the podcast live from the show floor.
Steve: 00:57:34 Where’s IO next year?
Ryan: 00:57:36 It’s probably Mountain View, probably California.
Brian: 00:57:38 It’s usually here. It’s usually at Shoreline Amphitheatre here in Mountain View.
Rob: 00:57:42 Road trip. Road trip.
Rob: 00:57:48 Just out of curiosity, then. When you guys are developing new products, because it seems to me that Live Caption and Live Transcribe are very much related, and now you’re talking about going forward for next year, you’re going to work on the vision aspect. Is that generally how the development works? Because there are these products that are very much related and sort of using the same foundations, is that just kind of how things roll out, that they roll out in themes?
Brian: 00:58:25 You know, I would love to make myself seem like somebody who’s that methodical and deliberate and put all of these things together for efficiency and stuff like that, but that wouldn’t actually be totally true.
Brian: 00:58:41 I think what happened in the deaf and hard-of-hearing space is we really didn’t have a lot of features for deaf and hard-of-hearing users. So it felt like that was the biggest area where we could have a massive user impact. And then two, we had all of these extant technologies that were simply like fire waiting to be discovered, to apply to these really transformative use cases, like captions and transcriptions for real-world conversation. Captions for any audio on the device. Like, captions have been on your TV and on movies for a long time, right? But it never expanded out beyond that, and so now you have captions on Android for everything, because that technology was there.
Brian: 00:59:32 Now, what we learned from going through that process is, we had a lot of benefits to having partnerships in the deaf community, like with Gallaudet University, which is the world’s premier deaf university and one of the world’s only deaf universities. We’re working with those users, so there were these natural synergies, and it also helped us. Like I mentioned before, I’m a hearing person. I’m a seeing person. So I have to build empathy, and I have to rely on other people when I build products, and so it’s simply easier to do that if you’re focused. And we figured that out. We kind of figured it out. I don’t think it was … I want to be humble. I don’t think it was part of a grand design, but I think it’s something we learned from, and that we can apply across the board.
Rob: 01:00:22 Guys, anything that we didn’t touch on that you guys want to bring up or talk about?
Brian: 01:00:27 We’d love to hear from you. We’d love to hear from your listeners. We’re super open to talking, happy to connect you to Julie for Euphonia if you wanted to do a show on that, so just be in touch, and we really appreciate the opportunity to speak to you, to get questions and so forth, and yeah, thank you.
Rob: 01:00:46 Okay, great, so where can people go to find the Accessibility Team and to reach out and bring anything up to you guys?
Victor: 01:00:55 We do have lots of new communication channels for our users, and probably the best website to remember is g.co/disabilitysupport. If people have feedback or they want to get connected to our customer care on any of the accessibility issues, I would probably go to that website. We just launched support through a Be My Eyes partnership. There is email support. There is even chat support. This is probably the most consolidated resource that I can think of. People of course can follow our blog, The Keyword accessibility blog, which is probably what Brian mentioned before. This is where our products get announced, but g.co/disabilitysupport is the website I would definitely send people to, to send feedback.
Rob: 01:01:43 Perfect. And we’ll include all those in the show notes as well. Guys, listen, we wanted to really thank you for taking time out of your day to chat with us once again this year. It sounds like it was a really, really landmark year for you guys this year, and so you guys need to top those three cowbell-
Ryan: 01:02:02 I’m setting a standard for you for next year, guys. You need-
Rob: 01:02:06 So we expect four out of you.
Ryan: 01:02:06 Yeah, at least four.
Patrick: 01:02:09 Great, thank you guys. Appreciate it.
Ryan: 01:02:12 Thank you so much for joining us.
Rob: 01:02:13 Take care. Man.
Ryan: 01:02:15 I wish we could all go to California.
Rob: 01:02:22 Get those cold calls going, Ryan. Let’s go. We got to get the profit margin up.
Steve: 01:02:26 Come on, listeners. Buy Braille things. There’s good margins on that.
Rob: 01:02:31 We’ll just start a Kickstarter, or a GoFundMe.
Ryan: 01:02:33 A GoFundMe, there you go.
Rob: 01:02:35 Send Ryan to California.
Ryan: 01:02:36 So I can go hang out with the Google team.
Rob: 01:02:39 That project Euphonia is, that’s amazing. That is melting my brain, thinking about that.
Ryan: 01:02:45 Well, and I think there are some videos out there. Unfortunately, the link we had wasn’t working, but I think it’s actually using the whole facial features, right?
Rob: 01:02:52 That’s nuts.
Ryan: 01:02:52 The shape of your mouth, and the lips, and the eyes, and just all that AI is taking in all that information to help formulate the speech, and it’s incredible.
Rob: 01:03:00 That could be, that’s earth-shaking.
Ryan: 01:03:04 It’ll unlock a world of people.
Rob: 01:03:07 Yeah, like a lot of people who are … Well, certainly there’s all kinds of levels and gradients of communication aids, but that could eliminate the need for communication aids for a whole segment of that community.
Steve: 01:03:27 It could, yeah. It very well could. But even just in the broader world sense, that technology will be able to help improve face-to-face translation.
Ryan: 01:03:40 Man.
Steve: 01:03:41 I mean, think back to when the Google Home devices started coming out, and the videos that were there of, oh, here’s a little old Scottish lady trying to use Google Home.
Rob: 01:03:56 That’s … yeah.
Steve: 01:03:57 That potentially goes away, right? Those sorts of issues go away.
Rob: 01:04:02 Absolutely.
Steve: 01:04:02 You think about, everybody thinks the human brain is such an amazing thing, and it can do all kinds of different things, but here we are. We’re hitting the point where it’s possible that a machine will do a better job of understanding us than we do of understanding each other.
Ryan: 01:04:18 Yeah, there’s a heady concept.
Rob: 01:04:22 And on that note-
Ryan: 01:04:24 And they’ll start figuring out how to do everything else better than we do, and then robot overlords.
Steve: 01:04:29 I, for one, welcome our robot overlords.
Ryan: 01:04:31 I don’t think we have a choice but to welcome them, because they’re coming.
Rob: 01:04:35 Listen, as long as it means I get a comfy Matrix cocoon, and I just get in that little gel, and you have, like that honestly, I’m all for it. Go plug me into the Matrix. You can use my energy as a battery or whatever, as long as I’m comfy and I get fed through my feeding tube, I’m happy. I’m happy.
Steve: 01:04:58 Eveready Mineault.
Rob: 01:05:01 Anyways, we digress. No, so what do you think, Ryan? Are you excited?
Ryan: 01:05:06 I was actually really excited about the announcements this year. Like I said in the show, it sounded like it was a year for evolution and innovation in products for the hearing impaired, so I’m excited to hear, being blind, excited to hear that they’re working on the Braille side of things, because that’s always been a sticking point with a lot of blind users on Android.
Rob: 01:05:25 Sure, yup.
Ryan: 01:05:28 So yeah, I’m going to be really excited to see what comes out next year.
Rob: 01:05:30 Yeah, for sure.
Ryan: 01:05:31 But they’re definitely working hard.
Rob: 01:05:32 Where can people find us?
Ryan: 01:05:34 Oh, they can find us at atbanter.com.
Rob: 01:05:37 They can also drop us an email at cowbell@atbanter.com.
Steve: 01:05:44 And we can be found on the Twitters, on the Facebooks, on the Instagrams.
Rob: 01:05:49 All right, everybody, I think that’s going to about do it for us this week. Thanks to everybody for listening, and we will see everybody next week.
Voiceover 2: 01:05:57 This podcast has been brought to you by Canadian Assistive Technology, providing low-vision and blindness solutions across Canada. Find us online at http://www.canasstech.com. That’s C-A-N-A-S-S-T-E-C-H dot com, or call us toll-free at 1-844-795-8324. For all your assistive technology servicing needs, call Chaos Technical Services at 778-847-6840, or find them online at chaostechnicalservices.com. Music provided by bensound.com.
Speaker 11: 01:06:34 Whoa, look at that. That is sure the one take.