Lectures
2024 Lighthouse Guild Awards Lecture
Watch the 2024 Lighthouse Guild Awards Lectures, featuring Bressler Prize awardee Dr. Thomas C. Lee discussing his extraordinary work innovating to cure childhood blindness; Pisart Award recipient Dr. John-Ross (JR) Rizzo exploring the future of assistive technology: trends, realities, and predictions; and Morse Lecture recipient Dr. Hoby Wedler inspiring the audience to turn disadvantages into advantages with moving stories of advocacy through collaboration.
Transcript
Dr. Roberts: All right. Good afternoon. Welcome to the 2024 Lighthouse Guild Awards lectures. We are delighted to have those of you here in our technology center, a hub for developers, vision scientists, academics and physicians, and those of you who are joining us virtually. Lighthouse Guild acknowledges and celebrates outstanding achievements in vision research, technological innovation, and dedicated advocacy through a series of prestigious annual awards.
These awards and associated lectures are a tribute to those who have excelled in translating research into practical treatment and rehabilitation, pioneering technological advances and fervently championing the rights and well-being of individuals with vision impairment.
The remarkable accomplishments of our award recipients play a pivotal role in advancing Lighthouse Guild’s mission of providing exceptional services that inspire people who are visually impaired to attain their goals.
Today, we have the opportunity to hear from three remarkable pioneers in their respective fields as they share their work that is positively impacting the lives of individuals who are blind or visually impaired.
Since 2003, the Bressler Prize annually recognizes an individual whose research has translated medical or scientific knowledge into significant advancements in treating or rehabilitating people with vision loss. Today’s recipient is no exception.
Doctor Thomas Lee has been dedicated to advancing the understanding of pediatric retinal disease and childhood blindness. He serves as the Director of the Vision Center, Chief of the Division of Ophthalmology at Children’s Hospital Los Angeles, and Associate Professor of Clinical Ophthalmology at the USC Keck School of Medicine.
Doctor Lee leads a team of physicians and basic scientists developing new treatments for conditions such as retinoblastoma, and retinopathy of prematurity. In addition to building a robust research program, Doctor Lee has helped create an innovative education platform to teach doctors in other countries remotely via the Internet.
In 2010, he initiated a program in collaboration with the Armenian Eye Care Project, which deploys digital retinal cameras in Yerevan, the capital city of Armenia. This initiative allows US-based physicians to supervise Armenian doctors who are treating infants with retinopathy of prematurity. More recently, he has helped create a neonatal nursing education program to support the nurses caring for these children.
Honoring Doctor Lee today is a true personal pleasure as I have known him since he was an ophthalmology resident at Weill Cornell and later we were faculty together at Cornell.
It is my pleasure to introduce the recipient of the 2024 Bressler Prize in Vision Science: Doctor Thomas Lee.
Dr. Lee: I’d like to start off by saying how honored I am to be a recipient of this prestigious award, and I want to thank the Lighthouse Guild for all the work it has done in the past and will do in the future.
When I was a resident, we had low vision training. We had an opportunity to visit the Lighthouse and the Jewish Guild for the Blind. And I can tell you, even back then, it was very much an underserved and underappreciated service and population. And so, I’m very grateful for the opportunity to come back to New York and share my story of how we’ve used innovation to try and address childhood blindness. And you know, in this world we all follow a mission, and the reality is we want to do more for that mission.
But we’re doing it in a resource-constrained world where there never is any new money or more money to be had. And so, the dilemma is like, how do we move things forward when the resources available are sometimes even less than what we had when we started the journey? And the story I’m going to tell you today is about how we’ve leveraged innovation to try and get more care to more children without increasing the overall cost burden.
And so, this is the home base for this story. This is Children’s Hospital Los Angeles. It’s about a 450 bed hospital just for children. It’s a private hospital, but it’s a mission-driven hospital, meaning we perform care on a needs blind basis. It doesn’t matter what the insurance status of the child is, we will see that child and we will take care of that child.
And coming from LA, these are the number of children in LA County. 2.3 million children. And these are the pediatric populations of some of the other cities and major metropolitan counties in the country. And to give you an example of how significant that patient population burden is, if you were to take the 2.3 million children in LA and just carve them out, they would represent the 33rd largest state in the country.
So, the 2.3 million children is greater than the total city population of Boston and Philadelphia combined. And given that we’re the only safety net hospital in LA for kids who are 70% Medicaid, that would be like having a single tertiary care facility for Boston and Philly. And you can imagine how impacted that would make things.
So, as much as our mission is to address any child who walks through our doors and as I like to say, we love all children equally, independent of zip code, that mission sometimes seems impossible. That Mission Impossible that we’re trying to achieve in our mind, we think that we’re Tom Cruise. Dapper, in control, always on top of things, always pulling a rabbit out of the hat.
The reality is far from that. We feel like this: we’re barely hanging on by our fingernails to that airplane taking off.
So, the question is, what do we do? And for us, the main reason we have this problem is that we are trying to address a societal problem of disparity. Now, here we have a picture, in a developing country, of a very wealthy neighborhood separated from a very poor neighborhood by a single fence. And this is actually like many major U.S. cities, where we have poor and wealthy living side by side.
Which side of the photograph do you think has more children? It’s the poor side. So that’s the dilemma that we’re trying to address. Fundamentally, we can have the best doctors, the best facilities, but we are addressing a societal problem and that makes it challenging.
Now, here’s a picture of all these cute babies. That’s actually my practice. Now this will generate feelings of warmth. Warm fuzzy feelings. They’re so cute. I see that very differently. I don’t see lots of young babies in diapers. I see a storm cloud on the horizon.
This is how I feel every time I walk into the clinic, in part because many of these kids took six months to a year to get into my practice. And that’s despite having a clinical template like this.
So, my day starts at 7:30 in the morning and by an hour later we’ve had almost 10 patients clocked in and that pace keeps up until I end my day with my last patient checking in at 7:15 PM. I have no lunch break, and I’ve seen north of 70 patients in that day. So, I’m tapped out. My clinic space is tapped out and that’s a real dilemma. I can’t figure out how to get more kids into our system and avoid the one year wait.
This is a picture of my two bosses, the CEO of the hospital is on the right and the CEO of the Medical Group, our practice plan is on the left. And their goal is to solve that problem by expanding the care that we can deliver and by expanding the footprint of the hospital.
So, my CEO wants more. That’s plain and simple. He wants more. And if my CEO of my Medical Group has his way, it’s much more. The reality is they’re not the ones that actually determine it. It’s my staff.
So, at the Vision Center, we see about 28,000 kids annually through our system. And this is our triage folder where we get all the new referrals in for a day, which is north of 50. And the triage folder says that more than half of my doctors are scheduled out more than a year. And the other doctors, the sub-specialists are scheduled out six months, which means if a patient shows up sick, they have to cancel. Their next available appointment that I can get them back in is 6 months to a year out.
So, as much as we want to do more, really, our feeling is: no more. We don’t have the capacity; we don’t have the bandwidth. So, when we want to scale our hospital, which we can do, we cannot scale the workforce at the same rate. So, the question is, how do we address that bandwidth issue, the delta? Fundamentally, it comes down to a question of who we are as a quaternary care hospital.
And that’s this. It’s a Hummer SUV, fully decked out. Elevated. We can take any kid on a journey to their destination, it doesn’t matter how rough the terrain is, we will get that kid there. We’re an over engineered beast of a hospital. But is that the scalable model or is it this, a Prius? Much more economical. Higher mpg. Not nearly as expensive.
Is that the way we scale? We have to choose, in theory. And so, the question is, do we scale our existing model as a Hummer or do we try and become a Prius? Or do we try and become both and in the process become the spork of healthcare. No one likes using a spork, right? No one likes that, so that clearly isn’t an option. So, we need to think differently as my favorite company would say.
And so, if we look at these two solutions and look at what their strengths are, the Hummer is the initial evaluation, the acute management, the surgery, the active pathology. The Prius is the follow-up exam, stable management, low acuity, mild disease. So, if we’re not going to become the Prius and we’re not going to become the spork, and we’re going to maintain our Hummer status, we need to find a partner to offload some of these kids.
And so, for us, we found that that is in federally qualified health centers. And these get an enhanced reimbursement from the state and federal government to see underserved patients and children. And do it in a way that they can break even in what’s called a cost plus model and provide comprehensive services.
So, the model that we proposed initially was: why don’t we go ahead and share those patients with the FQHC? Our stable patients go to them; any active patients come to us, all with the goal of increasing efficiency. But we recognized that this would actually do the opposite, because kids living near the FQHC might be seen by their local optometrist there and then be referred back unnecessarily.
So, we wanted to prevent that from happening, and we came up with this telehealth model, where we would use technology and innovation to co-manage them in a virtual system. We would send our pediatric optometrists to the FQHC, which had the space. They had the revenue model for it; they just didn’t have the patients or the expertise. So we would lend them our doctors and our patients, to be seen in their physical facility. And whenever a problem or a referral question would come up, rather than migrating that kid back out to CHLA, we could take care of them on site.
Here’s an example of what that future looks like. (video shown)
So, we’ve done a lot of research on this. We have the world’s largest prospective study using real-time telemedicine, validated against an actual in-person exam performed immediately after the telemedicine exam by the same reviewing ophthalmologist.
So, we’ve identified one partner, which is an FQHC, which is great. But there’s a very famous quote about a bank robber, I believe Willie Sutton. He was asked, “Why do you keep robbing banks?” and his answer was, “Because that’s where the money is.” And the reality is, when you’re talking about children, the bank that holds all the children is schools. And so, what we’re working on right now is a grant with Compton Unified in LA to perform on-site exams virtually for children who screen positive for eye disease through a program called Vision to Learn.
They have vans. They have optometrists. And we’re now equipping their vans with a Starlink satellite dish, along with real-time video equipment: video slit lamps and video indirects that allow us to do assessments on site and decide whether or not that kid has to come into Children’s. If that’s the case, then we have a caseworker who will shepherd that child in.
The goal is if we can validate that platform we’re going to decant our stable patients just like we would to an FQHC, but this time into the school. If you’re a parent and you have to take a full day off of work to bring your child in for a routine eye exam, that may have to be performed two or three times a year, not even talking about trying to find parking, it’s a huge burden. And so if we can bring that service on site where the parent just shows up in the parking lot after school, that would allow us to create a whole new platform for healthcare delivery. So we’re actively pursuing this now, and we’ve already started to see some of our first patients, which is great.
So, I call this the sharing economy. So, we know that Airbnb, Uber, you take an existing need and you address it with resources and assets that you have. And Airbnb is a great model where you have people who want to come into a town and they want to have a place to stay. So, this is the healthcare version. The FQHCs in the schools have the rooms. We have the patients and the expertise. So, if we join forces and join that sharing economy model, we can take existing resources, existing overhead that’s already being paid for and use it and free it up to provide care for kids who would otherwise have to wait a year to be seen by one of our doctors.
So by doing that method we can avoid the spork and maybe even come up with a whole new model. So in this case, it’s a Porsche Cayenne hybrid SUV, which I wouldn’t mind having myself, but that’s the aspirational model for what we’re talking about: trying to take the best of both worlds and combine them into a unified system.
Now, because of the nature of our hospital, we’re a global hospital. We try and meet a global need if and when possible, so I’m going to take you into a deeper look into that future.
Around 2009, I was approached by an NGO, the Armenian Eye Care Project. LA happens to be the largest diaspora of Armenians living outside of Armenia, and it’s a very poor country. It would probably be called a second-world country, one that now has enough resources to build hospitals and neonatal intensive care units, at least to the equipment level that we have in the US.
And over those years, the number of neonatal intensive care units has doubled every five years. During that time, however, not a single baby was ever screened for a devastating form of childhood blindness called retinopathy of prematurity. Those children will generally go blind in the NICU before they even reach 39 weeks, and they go blind within 48 to 72 hours.
No kid had ever been screened. So, around 2009 I got a call from them, because there were about 100 babies showing up annually in a pediatrician’s office, blind from a disease that their doctors had never seen before but had heard about. And the reason was that none of those kids had ever survived before. Now they were surviving, only to go blind, and in the process they had to figure out a way to train their doctors. And so that started a journey with this child, born on June 29th, 2010, in Yerevan.
Yerevan, Armenia, just for the record, is halfway around the world. It’s exactly 12 time zones away. I had no idea where that country was. But the founder of the Armenian Eye Care Project was literally like the Pied Piper, and he could convince and persuade anyone to join his cause. And what he didn’t know was that in Korean culture, and I’m Korean, guilt is a weaponized feeling; parents use it on their kids all the time.
So, I was already sensitized to being guilty from birth and when he started to tell me about this problem, every excuse I could come up with in my head not to go to Armenia fell by the wayside because the guilt just kept getting bigger and bigger.
So eventually, in 2010, we took this disease, first described in 1947 with the advent of the first incubator and now treatable with a 94% success rate, and we created a remote learning platform where we gave them about four days of intensive lectures on ROP from start to finish. We left these retinal cameras behind, which allowed us to literally round on every premature infant in the capital city for a year, and we had three doctors helping us do that: myself; Paul Chan, who was also a Cornell resident; and Michael Chiang, who was then Columbia faculty and is now director of the National Eye Institute.
And the three of us reviewed these images over the course of the year. By the end of the year their proficiency in diagnosing and treating ROP was the same as US experts here, all within one year literally done on my iPhone. Once a week, I would review all these images.
And with that, there was so much success. The Minister of Health was really excited, and he said, well, we still have some children who go blind, and we would love for you to train them over the Internet on how to do the surgery. And I said, well, that’s actually kind of hard.
The reality is we actually figured out a way. The first version used something called a Slingbox. This was before streaming services like Netflix: you would plug it into the back of your cable box, and it was basically a server that would stream your cable subscription programs to your laptop. So we put one of those on the back of one of their microscopes in the capital city and started our first real-time telesurgery training program, with a device that cost only $120.
Fast forward several years later, we partnered with Microsoft, came up with a different solution that I’m going to present to you and what you’re about to see is the video at their 2017 keynote address in Washington, DC.
===========================
(Man in video from keynote address)
The importance of why we’re all here is the mission. It’s what gets us out of bed every day. In short, it’s about making a difference. And this final story, the surprise I have to share with you is truly just that.
I want to show you a video to tee up the story and then I’m going to invite on stage some very special guests. Please roll the video.
(woman and man speaking Armenian)
We were not able to have children for 20 years.
I fought to make my dream come true.
And succeeded in having twin babies Liana and Kevork.
Liana is well behaved, very well behaved.
She’s always looking for new adventures and she’s a very determined girl.
As my children were born in the seventh month they were considered premature.
And just a day after their birth they were diagnosed with retinopathy of prematurity.
She was just two to three days old when she underwent her first surgery.
(Dr. Lee)
The children of Armenia were experiencing a very aggressive form of childhood blindness, which in this country is almost 100% treatable. We were able to partner with SADA Systems to help us implement a cutting-edge solution using Polycom devices and Microsoft Skype for Business to stream high-fidelity images out of the OR into our office here, so that we could actually observe and guide surgeons who are halfway around the world. And not only was it a point-to-point conversation, it was a multicast, so that we could have multiple experts forming a small community around this trainee to provide a better future for that child on that operating table.
(woman and man speaking Armenian)
Liana was able to undergo a successful surgery.
Today I have the utmost respect for those who did so much for my family.
Liana has now regained her sight thanks to their efforts.
All parents have high hopes for their children.
I’m excited for Liana to see our country’s beauty with her own eyes.
(Man in video from keynote address)
So Doctor Lee, even though you saved Liana’s sight, you never actually had an opportunity to meet her in person. And I’m delighted to share with you that Liana and her family have traveled from their home in Armenia to right here in Washington, DC, to meet you for the first time. Please welcome Liana and her family.
Together, we will make a difference, and together we will change the world. Thank you so much for everything you do. We’re proud and humbled to be your partner. Thanks.
========================
Dr. Lee: So, what we’re talking about is using innovation and technology to do more with less, so that we can accomplish the mission of serving our patients, whether that means preventing childhood blindness or assisting and supporting patients who already have some form of impairment.
Almost 25 years ago, when I was last here in New York, I attended the first low vision therapy training program, and it was very old school. Now, looking at where the Lighthouse Guild has taken the field, AI and assistive technology are clearly going to allow you to do that very same thing, which you’re already doing. And of all the programs I know out there, this is the only one able to validate, and really look critically at, what the right solutions are for those patients, through your technology group and your social workers. The whole integrated vertical you’ve created is the place to achieve that mission and that goal.
Thank you so much. I’m very honored and I want to congratulate the other awardees as well. It’s really an honor to be in their presence. Thank you.
Dr. Roberts: 43 years ago, in 1981, the Pisart Award was established to recognize an individual, group, or organization that has made significant contributions to vision science. Now referred to as the Pisart Award in Technological Innovation, the award recognizes those whose technological innovations improve the lives of people with vision loss.
Doctor John-Ross Rizzo is the recipient of the 2024 Pisart Award for his pioneering work in utilizing technology to eliminate barriers for individuals with vision impairments. He serves as the Health System Director of Disability Inclusion and Vice Chair of Innovation and Equity at the Rusk Institute of Rehabilitation Medicine at the NYU Grossman School of Medicine. He also holds positions in the neurology and ophthalmology departments and in the mechanical and aerospace engineering and biomedical engineering departments at the NYU Tandon School of Engineering.
Doctor Rizzo leads the Visual Motor Integration Laboratory, which focuses on motor control, with a particular emphasis on visual guidance and biomechanics related to blindness. He is also at the helm of the Rehabilitation Engineering Alliance and Center Transforming Low Vision laboratory, or REACTIV as they call it, where he investigates bio-inspired, multi-sensory assistive technology, particularly advanced wearable devices.
As an advisor to multiple startups and early ventures, Dr. Rizzo is dedicated to technological innovation for individuals with various types of visual impairments. He holds numerous board and advisory roles at leading organizations, including the MTA, the National Academies, the Foundation Fighting Blindness, CRF, City Access, and here at Lighthouse Guild.
Drawing upon his personal experience with vision loss, Doctor Rizzo integrates this insight into all aspects of his work. I am pleased to introduce the recipient of the 2024 Pisart Award in Technological Innovation, Doctor John-Ross Rizzo.
Dr. Rizzo: Thank you so much. It’s such an honor to receive this award and for a couple of decades I felt like I’ve always been part of the Lighthouse Guild family, so this is a real homecoming to me and I’ve always known Lighthouse Guild as a clearinghouse for all things blindness and low vision. So thank you all so much for everything you do.
It’s also so exhilarating to win this award in the company of such luminaries in the blindness and low vision world. So, I can’t thank my co-awardees enough for all they do.
A little bit of housekeeping. Again, my name is JR Rizzo. I’m a legally blind white male with a receding hairline and a sharp 5 o’clock shadow at about 12:35 in the afternoon. And to level set, I’m wearing a yellow shirt, a blue tie, a blue suit, a brown vest, brown shoes, and thick brown-rimmed glasses, and for those in the back, I am wearing pants. You have to double-check post-COVID, so I assure you of that.
I’m going to do my best to describe a lot. I’ve prepared a tour de force, but feel free to interrupt at any time. I will not see your hands. I promise you. So, just interrupt me like a Sunday dinner at the household, OK?
We’re going to be talking a little bit about the future of assistive technology, so, some trends and realities and we’re going to do a little bit of a show and tell to try to demonstrate and exemplify some of those trends.
So I have a bit of an ambitious agenda, but I do move very quickly. We’re going to talk a little bit about AI and how AI is everywhere. Thank you, Doctor Lee, for the tee up. Tough act to follow for sure.
Then we’re going to talk about some of the weaknesses and how we’re trying to cover ourselves for some of those weaknesses. Then we’re going to move into some of the work we’ve been doing in assistive technologies to highlight some of these trends. I’m going to provide a series of examples and navigation, obstacle negotiation and scene exploration, which I think will be really fun and then hopefully I’ll have time, I wanted to briefly touch on some of what we’re doing on the back end as it relates to data policies that I think is really supporting the next generation of assistive technologies and then we’ll wrap up with some conclusions. OK, background.
Let’s talk a little bit about AI. So AI is everywhere, and there’s been an unprecedented uptake of new AI tools. I love to highlight this adoption slide here. This shows how long it took a company to achieve one million users. You see Netflix here at 3 1/2 years. Guess how long it took ChatGPT to reach one million users. Five days. OK. And I’m pretty sure everyone here has looked at Netflix streaming. I also wanted to briefly highlight the predicted market growth up to 2030. By 2030 the AI market will be $1.5 trillion, with a “T,” so it’s really absolutely incredible.
AI offers a lot to us in blindness and low vision, but I would argue there are a lot of gaps we need to solve for and that’s why we need to have a firm partnership with the community. For one, I have some examples here looking at AI classifying different objects, detecting different objects, segmenting different objects. And one of the problems is it’s firing on everything in these images. So, it’s not focused or filtered and you know this creates problems. At the end of the day the user needs to select the right tool in the right moment and very often sift through lots of results in order to find what they actually want, and I think any user can attest to this, especially using some of the new AI.
Another problem is that I would argue, and I’m going to belabor these points, but a lot of what we get in terms of AI based tools, that control logic is just too simple. So, often times we may have an offering that has several features or functionalities, but we can only use them independently, one at a time, okay? And that’s problematic. Why can’t we start to move towards multivariate control or basically turning on multiple functions or features simultaneously? So we’ll talk more about that in a second.
So, I don’t want to bemoan AI. I think AI is incredibly powerful, but I would argue it’s like an unbridled mustang. So, what can we do to put the bridle on that mustang, to really realize that horsepower? Well, I think for one we can talk about making AI more tailored and precise, making it intent- and context-aware.
One example I love to give is, here I have visualized the blind end user using some smart glasses with some optical character recognition, text recognition basically, and he’s interested in getting this menu read. But if we actually do a profile on this end user, he’s actually a vegetarian, so pardon my French, he doesn’t give a shit about the meat options on the menu. He only cares about the veggie based options, Okay? And so can we filter these OCR results to only show him the vegetarian options?
Apologies for the expletive.
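The intent-aware filtering Dr. Rizzo describes can be sketched in a few lines. This is an illustrative toy only; the menu text, the meat-keyword list, and the profile structure are assumptions made for the example, not part of any real product’s API.

```python
# Toy sketch: filter OCR'd menu lines against a user's dietary profile,
# so a vegetarian user only hears the options relevant to them.
# Keyword list and profile format are illustrative assumptions.

MEAT_KEYWORDS = {"beef", "chicken", "pork", "bacon", "lamb", "turkey"}

def filter_menu_for_profile(ocr_lines, profile):
    """Return only the OCR lines consistent with the user's dietary profile."""
    if profile.get("vegetarian"):
        return [
            line for line in ocr_lines
            if not any(word in line.lower() for word in MEAT_KEYWORDS)
        ]
    return list(ocr_lines)

menu = [
    "Grilled chicken sandwich ... $12",
    "Garden salad ... $8",
    "Bacon cheeseburger ... $14",
    "Veggie wrap ... $9",
]
user = {"vegetarian": True}
print(filter_menu_for_profile(menu, user))
# → ['Garden salad ... $8', 'Veggie wrap ... $9']
```

In a real system the keyword match would be replaced by a learned classifier, but the principle is the same: the user’s profile distills the raw OCR output down to what they actually care about.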
Another thing we’re trying to do is making that AI more focused, right? And part of what we’re doing is we’re trying to think about how can we integrate top down control or higher order control to these AI systems.
So we’ve heard a lot in recent days about generative AI. One key aspect that we’re working on is building memory, or learning from prior experiences, to help with that filtering. We’re also thinking about the user putting in a deeper profile of their preferences, so we can learn more and offer a more distilled product to the consumer.
I’m going to briefly introduce some of what we’re doing on the back end with data policies, but a lot of what we’re seeing with the new smart glasses and a lot of these new AI-based offerings is that you know, they’re all camera based and they use high resolution video streaming. And so we’ve been trying to think how we can leverage more from that video streaming and thinking a little bit about how we handle that big data in order to make the AI more powerful. So, I’ll get to some examples in a second.
So let’s move into some trends and realities. I told you we’re going to do a bit of show and tell to exemplify some of these trends and realities, and I’m going to start off each of these little mini-sections with a problem-landscape slide that will hopefully help me tell some of these little stories, as a raconteur.
So the first problem statement: I would argue assistive technologies are often single-purpose and lack interoperability. Pick your weapon: Seeing AI; Supersense from Mediate, an MIT spin-out; MyEye from OrCam; etc.
Some of the common pain points. Existing solutions run individual services in isolation, and they don’t offer interoperability. So this is a big missed opportunity. So what we’ve been trying to do is create platform technologies, really think of it as a framework that you can layer different services within or on top of to offer more to the end user. Think of creating multi-purpose solutions.
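The platform-versus-single-purpose idea can be illustrated with a minimal sketch: a framework that layers multiple services over the same camera input instead of running each tool in isolation. The service names and the interface below are hypothetical stand-ins, not the actual VIS4ION design.

```python
# Minimal sketch of a multi-service assistive platform. Each service
# consumes the same input frame; the platform fans the frame out to
# every layered service instead of forcing one tool at a time.
# Class names and outputs are illustrative assumptions.

class Service:
    def process(self, frame):
        raise NotImplementedError

class TextReader(Service):
    def process(self, frame):
        return f"text in {frame}"          # stand-in for OCR

class ObstacleDetector(Service):
    def process(self, frame):
        return f"obstacles in {frame}"     # stand-in for detection

class Platform:
    """Layers services on a shared input, the 'framework' approach."""
    def __init__(self):
        self.services = []

    def add(self, service):
        self.services.append(service)

    def process(self, frame):
        # Every enabled service sees the same frame concurrently.
        return {type(s).__name__: s.process(frame) for s in self.services}

platform = Platform()
platform.add(TextReader())
platform.add(ObstacleDetector())
print(platform.process("frame_001"))
# → {'TextReader': 'text in frame_001', 'ObstacleDetector': 'obstacles in frame_001'}
```

The point of the sketch is the shape of the architecture: adding a new capability means layering one more service into the framework, rather than shipping another standalone app.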
So, I’m going to run through a brief example of what we’ve been doing in this space, and I can’t thank Lighthouse Guild enough. We’ve been collaborating with Doctor Roberts and Doctor Seiple for years. We now have $30 million in grant in our grant portfolio to help drive some of this work forward, which has been so exciting.
I’m going to introduce briefly our VIS4ION wearable platform. VIS4ION stands for Visually Impaired Smart Service System for Spatial Intelligence and On-Board Navigation. It took us a year to create that acronym and two years to learn how to say it. But I finally did it. Mom and Dad, thanks so much.
The top is our VIS4ION wearable, which is basically a sensorized backpack. It has a small microcomputer and a battery, and it has little sensors integrated into the shoulder straps and the back of the backpack itself. The human-machine interface is the feedback you would receive: binaural bone conduction, like an AfterShokz headset, for those end users who are familiar with that, and then a custom haptic interface, basically a vibratory feedback belt.
We’ve also created a mobile version of this that we call VIS4ION Mobile, which is that same sort of platform technology approach, but all delivered through an integrated mobile application. And once I have these frameworks or these platforms, we can then start layering in different services. Think of Seeing AI, but getting all of the modes of the channel simultaneously, concurrently.
We’re going to talk more about processing in a second. So again, I wanted to tell some smaller stories about different key services, you know, in-vogue topics in blindness and low vision. We’re going to talk about navigation. That’s all the rage these days.
We’re going to talk about some new technologies as it relates to supporting indoor and outdoor travel. We’re going to talk a little bit about travel safety or classically electronic travel aids for those who are familiar with the terminology, specifically highlighting obstacle negotiation.
We’re going to talk a little bit about trip hazard identification through some cool curb detector programs that we’ve created. And then lastly, scene exploration: learning more about user interaction and thinking about how I can learn more, or garner more, from a scene of interest.
So again, let me start off with a problem landscape. Let’s talk about navigation. I would argue navigation is often unsafe and inefficient due to inaccessible spaces. In addition, we have unreliable tools that only create an accessible experience for part of the built environment, leaving you to jump from island of accessibility to island of accessibility, which is really problematic.
One big pain point here: there are no clean handoffs. I’m not sure who’s used any of the outdoor navigation apps, but a lot of times there’s not a clean handoff to an indoor navigation solution. Oftentimes these navigation technology solutions require extensive mapping with fancy LIDAR cameras or laser scanners, for those who are familiar with that.
Those scanners often cost close to $100,000 apiece, and they don’t work very well.
They’re very finicky.
We often have a fragmented solution space, and here’s one of the key pain points: with environmental changes you often require remapping, the Achilles heel of the navigation industry. So what are we doing?
We’ve stood up UNav, from the Latin ubique navigare, UNav for short, and we basically use commercially available, off-the-shelf 360-degree cameras, GoPro cameras that you would put on your helmet if you were a cyclist, and we create these lightweight digital twins, these spatial maps, and then we use those digital twins to help localize end users. I’m going to give you some behind-the-scenes on this briefly.
So, our answer to this, and the trend, is moving into a camera-based, infrastructure-free navigation solution, providing step-by-step, bite-size guidance.
I’m going to give you a quick backstory here. How does this work? The 10,000-foot view is we walk in a space with a GoPro camera, one of these 360s. We acquire footage from multiple perspectives. Almost as fast as you can walk to the bathroom, we can map that hallway, and then we reconstruct what are called feature maps in 3D, or point clouds, if you’ve heard of those terms.
And then, uniquely, we register those 3D point cloud maps to a floor plan. That provides the geo-referencing that you need, similar to what happens with satellites and GPS for outdoor navigation solutions. And then each image is uniquely tagged with a 64-digit ZIP code, if you will; we create a numerical ID, and this allows us to search for these particular visual features faster in the big databases we’ve created.
So, let’s say you wanted to actually use this technology. How would it work? This is actually real data from our hospital on E 38th St. You go into a space, you load up our mobile application, you take a photo, we do a feature analysis just like we did on the back end previously, and we do a quick search process based on the visual features that we’ve geo-referenced, as I mentioned before. And within usually one second, I can infer your location to within 2 feet.
Then, once you figure out that inferred location, you can then tell me, I want to go to Doctor Seiple’s office, and using a shortest-path planner algorithm I can support that journey with step-by-step feedback and guidance. We’re very excited about that.
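To make the shortest-path planner step concrete, here is a minimal Python sketch. The waypoint names, edge distances, and graph structure below are illustrative assumptions, not the actual UNav data or code:

```python
import heapq

def shortest_path(graph, start, goal):
    """Dijkstra's algorithm over a floor-plan graph.

    graph: dict mapping node -> list of (neighbor, distance_m) edges.
    Nodes are hypothetical waypoint names; edge weights are walking
    distances in meters.
    """
    # Each frontier entry: (cost so far, current node, path taken).
    frontier = [(0.0, start, [start])]
    visited = set()
    while frontier:
        cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return cost, path
        if node in visited:
            continue
        visited.add(node)
        for neighbor, dist in graph.get(node, []):
            if neighbor not in visited:
                heapq.heappush(frontier, (cost + dist, neighbor, path + [neighbor]))
    return float("inf"), []

# Hypothetical floor-plan graph: elevator bay -> hallways -> office.
floor_plan = {
    "elevator_bay": [("hallway_a", 8.0), ("hallway_b", 12.0)],
    "hallway_a": [("office_312", 9.0)],
    "hallway_b": [("office_312", 4.0)],
}
cost, route = shortest_path(floor_plan, "elevator_bay", "office_312")
```

In practice the graph would come from the registered floor plan, with waypoints at doors, elevator bays, and hallway intersections, and each edge along the returned route would be voiced as one bite-size instruction.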
You may be saying, JR that sounds super compelling, but I don’t think it can work behind the scenes. And I’m going to tell you a little bit about behind the scenes in terms of how we’re making this work really well and robustly.
Here’s what we call our map aligner for the registration. It’s basically like playing a video game. What I have visualized here is an incoming picture from a Thailand project. We’re using AI to pull out those features. I then have a floor plan, and we match each of these 2D visual features to the 3D landmark location on the floor plan.
After we do this three times, I learn the transformation matrix and all of those visual features snap to the floor plan, and I know where all the visual features are. This is the key and how we’ve made it very agile.
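For readers curious how three correspondences pin down that snap-to-floor-plan transform, here is a minimal sketch assuming a 2D affine model; the point values are toy numbers, and the system’s actual math may differ:

```python
import numpy as np

def affine_from_3_points(src, dst):
    """Estimate the 2D affine transform mapping src -> dst.

    src, dst: three (x, y) correspondences, e.g. visual-feature
    locations and their hand-matched floor-plan landmarks. Three
    non-collinear pairs determine the six affine parameters.
    """
    src = np.asarray(src, float)
    dst = np.asarray(dst, float)
    # Homogeneous source coordinates: one [x, y, 1] row per point.
    A = np.hstack([src, np.ones((3, 1))])
    # Solve A @ M = dst for the 3x2 parameter matrix M.
    M = np.linalg.solve(A, dst)
    return M  # apply with: np.hstack([pts, ones]) @ M

# Toy example: the true transform is a translation by (+10, +5).
src = [(0, 0), (1, 0), (0, 1)]
dst = [(10, 5), (11, 5), (10, 6)]
M = affine_from_3_points(src, dst)
# Snap any other feature to the floor plan with the learned matrix.
snapped = np.array([[2.0, 3.0, 1.0]]) @ M
```

Once `M` is learned from the three clicked pairs, every remaining visual feature can be mapped onto the floor plan in one matrix multiply, which is what makes the alignment so fast.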
We also have another sort of fun video game that we’ve created, which I’m showing here, where we can automatically, through AI, learn the boundaries of a space of interest, and then I can edit those boundaries with these little red edit boxes that you can see. On the right hand side, I’m able to drop in specific destinations: here’s Doctor Roberts’ office, here’s Doctor Seiple’s office. And in this new coordinate system that I’ve created, I can support travel between your origin and that target. And again, this happens on the order of a split second. Very exciting work.
So, what would this look like? Here you can actually see some incoming data from one of our engineering students testing this. He only walked into two walls. No, I’m kidding. That didn’t happen. But you can see us doing this visual feature analysis and then matching each of those visual features to their floor plan location. You can see how rich it is by all of this webbing going back and forth. What would the experience look like on the right hand side? You can actually see data here that we collected with Doctor Seiple and his team and you can see examples moving from the elevator bay to Doctor Seiple’s office.
There were three photos taken with the mobile application, and you can see one, two, three, you get bite-sized feedback and guidance. We’re really excited about this pipeline. But let me go to the videotape, as Warner Wolf used to say, and I’m gonna do my best to describe this.
So, here we are in Lighthouse Guild.
(app voice)
Turn left at 8:00 direction and walk 8 meters.
It just inferred your location in 500 milliseconds. I’m now giving you step-by-step guidance. I’m walking through a hallway. This is the automatic photo acquisition mode, so I’m acquiring query images.
(app voice)
Go straight at 12:00 direction and walk 9 meters.
The phone is sitting in a lanyard. You can see the next set of directions. Now I’m walking towards sort of a room with a table and chairs and I’m going to get the next set of directions.
(app voice)
Go straight to 12:00 and walk 4 meters. Then turn right.
We’re getting messages from the back end, talking about that step by step instructional guidance and also successfully confirming the receipt of an incoming image.
(app voice)
Turn right at 3 o’clock.
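The clock-face instructions voiced above can be generated from a relative bearing with a simple mapping. This is an illustrative sketch; the app’s actual wording, thresholds, and angle conventions are assumptions here:

```python
def bearing_to_clock(bearing_deg):
    """Map a relative bearing (degrees, 0 = straight ahead,
    positive clockwise) to the nearest clock-face direction."""
    # Round to the nearest 30-degree "hour"; 0 degrees is 12 o'clock.
    hour = round((bearing_deg % 360) / 30) % 12
    return 12 if hour == 0 else hour

def guidance(bearing_deg, distance_m):
    """Compose a step-by-step instruction string (hypothetical wording)."""
    hour = bearing_to_clock(bearing_deg)
    if hour == 12:
        return f"Go straight at 12:00 direction and walk {distance_m} meters."
    # Bearings in (0, 180) are to the walker's right, otherwise left.
    turn = "right" if 0 < bearing_deg % 360 < 180 else "left"
    return f"Turn {turn} at {hour}:00 direction and walk {distance_m} meters."
```

For example, a bearing of -120 degrees at 8 meters yields "Turn left at 8:00 direction and walk 8 meters," the style of prompt heard in the demo.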
I’m not going to go through the full demo. We’re very proud of these results. I’m going to walk through some of the validations we’ve done. Just a little exciting sort of update here. I’m not supposed to be talking about this, but I’m displaying some mapping data that we’ve done at the United Nations by invitation a few years ago. In fact, we probably have the most robust maps of UN headquarters here off the FDR, which is really exciting.
So, I’m showing some point clouds. This is actually the Secretariat building where the diplomats walk. This is my video feed, and I’m actually showing my real-time trajectories on the right hand side. This stuff is really built for action.
I wanted to briefly highlight the fact that we’ve technically validated this. You may be saying it sounds good, maybe too good to be true. How spatially accurate is it? If I were to take a picture and localize the end user, it’s within one to two feet, which is 200 to 300% better than any of the beacon-based tools you may hear about currently available in the marketplace. We can also provide orientation, or angle positioning, bearing angle, which is very important for navigation, within two to three degrees of orientation error, and it’s highly reliable across an entire floor plan. Here’s an example from some testing we did in the hospital using compasses and rulers and tape measures.
Very excitingly, we just finished our first clinical trial. We actually have an NIH grant. This is in Thailand outside of Bangkok at a disability college. I’m showing an end user wearing some of our backpack equipment. This is the Vis4ion system and he has some extra cameras here in order for us to do trajectory analysis to figure out how well he or she is navigating.
I’m not going to go into the weeds on this clinical trial. It took us a couple of years to complete, but the bottom line is we set this up where UNav was compared against what we called in person instructional guidance offered by an informed chaperone. So think of someone you know by your side giving you step by step guidance on a floor plan. And we compared that to UNav. This is called a non-inferiority trial.
Not only were we just as good, we were better than that in-person instructional guidance, and of the 10 measures or metrics that we assessed, we were better on nine out of the 10. So, we were extremely impressed, and this was actually better than we had hoped for.
I know that was sort of a whirlwind. I’m going to move into some work we’re doing with obstacle negotiation. So, the problem statement for obstacle negotiation is that environments frequently present a diverse range of obstacles, both static and dynamic and I have to continually negotiate around them. A failure to negotiate these obstacles creates a wide range of inefficiencies and potentially injuries, right?
So where are some of the common pain points? Unfortunately, most of the systems do not prioritize hazards or obstacles by relevance in terms of your intended path or the direction that you’re walking. And a lot of them often provide very low spatial resolution.
So some examples: things like the Sunu band that you may have heard about on the market, or the WeWalk, which has two contact points providing low-resolution spatial feedback, tactilely.
So, what are we doing? Well, how are we optimizing this, and what are some trends in the marketplace or in the research space as it relates to sensory substitution, specifically tactile sensory substitution? For those who are unfamiliar with sensory substitution, this is taking visual media and remapping it into an existing sense for folks who are blind or have low vision. We affectionately call this program our virtual whiskers program, because we’re creating virtual whiskers for our consumers.
So I’m going to have to go into a little bit of background. Here are some of the basics: how to create a higher-definition tactile alerting system. I have incoming camera imagery, and I’m going to show you how we analyze that in a moment. And then I basically have this feedback belt creating a virtual whisker coming out of your waist, if you will.
We’ve iterated on this about 10 times over the last decade and we now have these really cool clips that I’m showing on the right hand side with little vibratory actuators, and you just clip them on your belt. You turn however many you want on and it creates a mesh network with the core device or your cell phone and immediately connects as a network.
So we have a couple of different flavors of this. The first, you can imagine this version as feeling your way through space. We’re taking objects, based on proximity, the most proximal objects, with a depth analysis and computer vision, and we then trigger the actuators in a one-to-one correspondence from this pixelated view of the ambient field to that actuator grid, based on the belt and how many of these units you’ve turned on.
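A minimal sketch of that pixel-to-actuator correspondence, assuming a single row of four belt-clip actuators; the grid shape, alert radius, and intensity scaling are illustrative choices, not the real device’s parameters:

```python
import numpy as np

def depth_to_actuators(depth_m, grid_shape=(1, 4), alert_radius_m=2.0):
    """Map a depth image to vibration intensities for a belt of actuators.

    depth_m: 2D array of per-pixel depths in meters (e.g. from a depth
    camera). grid_shape: rows x columns of actuator units worn; (1, 4)
    models four clips across the waist. This sketches the one-to-one
    pixel-grid-to-actuator idea, not the actual firmware.
    """
    h, w = depth_m.shape
    gh, gw = grid_shape
    intensities = np.zeros(grid_shape)
    for i in range(gh):
        for j in range(gw):
            # Pool the image region corresponding to this actuator.
            cell = depth_m[i * h // gh:(i + 1) * h // gh,
                           j * w // gw:(j + 1) * w // gw]
            nearest = cell.min()
            if nearest < alert_radius_m:
                # Closer obstacle -> stronger vibration, scaled to 0..1.
                intensities[i, j] = 1.0 - nearest / alert_radius_m
    return intensities
```

So an obstacle half a meter away on the walker’s left would buzz only the leftmost clip, and harder than a more distant one, which is the "virtual whisker" effect.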
You can see a picture of this on the right hand side, which is a woman wearing the backpack with a camera and this haptic feedback belt. Perhaps more provocatively, we created what we call Open Path, which is follow-the-buzzing. You can think of this more as a guide dog version of the same system.
It looks at traversable space, or where more space is available for me to walk through. We’re actually playing a video here of one of our end users walking through a city park, and I hope it’s very obvious, but you can see that there’s a bunch of people being segmented, the path and the walls or boundaries of the path are being segmented, and we’re pushing the end user directly into the center of that space, where there’s more openness, more traversability.
We’ve tested this now, and the results are very positive. I’m not going to go into the weeds on this, but we just tested about a dozen blind subjects in an obstacle negotiation task. Not only did this take just about the same time as cane negotiation, but we cut down on the amount of hesitation that we saw during this obstacle testing. We also improved the safety window, that means your distance from the obstacles that you negotiated. And critically, we cut down on the number of cane contacts you needed, making travel more efficient.
I’m going to briefly touch on curb detection. The problem statement here, and we hear this frequently in our qualitative interviews with blind and low vision subjects: curbs are essential but challenging to detect for persons with blindness and low vision. To understand and negotiate curbs, you need a lot of spatial information, like the distance of the curb, the orientation of the curb, the type of the curb. Is it a ramp? Is it an up curb, is it a down curb? Etc.
Most of the approaches, and I have some examples here in terms of market competition, do not convey all of these key spatial properties. So we thought, could we come up with, using generative AI, a new curb detector that would segment curbs and then display multiple spatial features of that curb to an end user, to make them more successful?
So, here’s a video demonstration of this. The way it basically works is we draw a tracing, we segment the curb itself. We then project safety zones; you can think of this as like a bumper sensor on a luxury car: as I get closer, the beeping scales up. But what we’re also doing is sonifying the orientation of the curb, and that’s that rising and falling tone that you’re hearing in the background. So, you have a beat for distance and you have a tone for orientation.
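A rough sketch of that beat-for-distance, tone-for-orientation mapping; the frequency values, the 5-meter sensing range, and the linear scalings are assumptions for illustration only:

```python
def curb_audio_params(distance_m, orientation_deg,
                      max_range_m=5.0, base_hz=220.0, span_hz=440.0):
    """Map curb distance and orientation to audio parameters.

    Sketches the two-channel sonification described in the talk: a
    beep whose rate scales with proximity (like a car's bumper
    sensor) and a tone whose pitch encodes the curb's orientation
    relative to the walker.
    """
    # Beep rate: 1 Hz at max range, rising to ~10 Hz right at the curb.
    proximity = max(0.0, min(1.0, 1.0 - distance_m / max_range_m))
    beep_rate_hz = 1.0 + 9.0 * proximity
    # Tone: orientation in [-90, +90] degrees mapped linearly onto a
    # rising/falling pitch starting at base_hz.
    frac = max(0.0, min(1.0, (orientation_deg + 90.0) / 180.0))
    tone_hz = base_hz + span_hz * frac
    return beep_rate_hz, tone_hz
```

The two channels stay independent, so a user can track "how close" and "at what angle" at the same time without one sound masking the other.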
We just tested this in another clinical trial, which was just accepted for publication excitingly. Guess what? We’re able to improve the safety window so people can travel more confidently. Very excited with these results and I can come back to this if there are questions.
The last sort of brief example I want to give before we get into some of the background work is scene exploration. The big problem statement here is that environments are often rich in content, and it is very difficult to describe a space in a pithy and meaningful manner. Current approaches are usually generically applied across the entire scene of interest without any focus.
There are several AT (assistive technology) solutions that are triggered by gestures. For example the OrCam MyEye with pointing and finding text. But I would argue they’re very low resolution and limited, and some of the anatomical models they’re using in order to help with that gesture recognition are very basic.
So, we created what we call the point-to-tell-and-touch system, which is basically again our approach to using generative AI to improve gesture recognition and to provide more focus to the models that are supporting this. You could think of this as audio feedback, tactile feedback, but also assessing in real time your joint position sense or what we call in medicine proprioception. So, this is a trimodal sensory substitution exercise, which is a new trend in the scientific world or milieu.
So, how does this work? I’m going to provide some background here, some basics. Essentially, I do a scene analysis using computer vision with depth scaping or doing depth mapping. I can detect specific objects. I detect the hand pose and specifically the fingertip, and then I compute or I calculate the spatial distance between that fingertip and the objects of interest in the background scene. And then I describe whatever’s underneath the fingertip.
You can think of this as almost putting a digital laser pointer on your finger, and then us describing the environment for whatever that digital laser pointer would be touching. We can give spoken feedback, and we also created a paired vibratory wrist strap that can help you with obstacle navigation.
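A simplified sketch of that pointing logic, assuming an illustrative detection schema (the field names, pixel threshold, and spoken wording are all hypothetical, not the actual system’s):

```python
import math

def object_under_fingertip(fingertip_xyz, detections, max_offset_px=40):
    """Pick the detected object the user is pointing at.

    fingertip_xyz: (u, v, depth_m) - pixel location of the fingertip
    from a hand-pose model, plus its depth. detections: list of dicts
    with 'label', 'center' (u, v), and 'depth_m' from object detection
    combined with depth mapping.
    """
    u, v, _ = fingertip_xyz
    best = None
    for det in detections:
        du, dv = det["center"]
        # Image-plane offset between fingertip and object center.
        offset = math.hypot(u - du, v - dv)
        if offset <= max_offset_px and (best is None or offset < best[0]):
            best = (offset, det)
    if best is None:
        return None
    det = best[1]
    # Spoken feedback in the style of the demo, e.g. "Backpack. 4 meters."
    return f"{det['label'].capitalize()}. {det['depth_m']} meters."
```

The real pipeline computes a true 3D fingertip-to-object distance from the depth map; this 2D version just conveys the "describe whatever is under the fingertip" idea.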
So, here are some examples of this. I’m playing a quick video of one of our end users in a library and what’s happening is we’re able to analyze the fingertip and describe the object underneath the fingertip. Super helpful for efficiently learning new spaces or exploring a new space. This end user was extremely happy with this innovation and on the right hand side we actually did a clinical trial a couple years ago to see if this would pass muster.
Not only were we able to maintain accuracy of figuring out these objects on a test case where we sort of created a mock shopping exercise compared to digital exploration, but the exciting part was that we cut the total time down to complete that task by 50%. So you can imagine applying that to a shopping experience and how profound that would be.
Here’s some new work we’re doing with a second prototype that we just submitted in a big grant. I’m going to play a brief video. This is one of our engineers. He’s basically showing the Spider-Man Spidey-sense sign. We have a full anatomic model of the hand. He’s then pointing his index finger, and he’s literally grabbing certain objects digitally in the environment, like Spider-Man shooting Spidey webs.
(video voice)
Backpack. 4 meters.
(video voice)
Chair. 4 meters.
(video voice)
Suitcase. 4.5 meters.
I have another version on the right hand side where we’re describing the objects and specifically their distance, and we can customize this in the end and figure out whatever they’d like to hear as it relates to the object identity and spatial properties.
One more sort of fun plug here, and this is some of our work with dynamic gestures. So here we have another end user in a city park. What’s happening is they’re waving their hand like a windshield wiper across the scene, and here’s what we get:
(video voice)
This image shows an outdoor park area with pathways, plants and seating far left. A tree and bench for resting; clear and accessible space.
So, what’s happening is we’re doing the hand analysis across image frames, and we’re analyzing the scene in the same direction as the hand swipe, just like a windshield wiper. So, we’re very excited about that.
Let me talk a little bit about the back end service. Some of the behind the scenes magic.
So, I think I convinced you there’s a lot of potential with AI, specifically generative AI. Most of it’s based on computer vision. I’d like to argue that video quality or image quality really does matter. Just like it would hold back Doctor Lee if he were remotely supervising an operation.
There are ways to think about video compression. I’m sure many of us can attest to seeing a video frame on Zoom or Netflix get caught up and buffer and not work properly. What we’re basically trying to do is think about that big video data and come up with stronger data policies.
So where does the rubber hit the road on this? I have two video demonstrations here. On the left hand side you can see raw video being ported from one of our backpacks directly to a server for analysis, and you can see all this jitter and artifact happening.
On the right hand side you can see our efforts to do video editing in real time to assess the local bandwidth environment to figure out what my Wi-Fi looks like, and then to transcode that video or sort of reconfigure the video with down sampling and compression to make sure it fits the available data highway properly.
Again, what’s super exciting about this: if I do it right, which is this middle option, I end up preserving the integrity of the image frame itself, and then my obstacle detection is a lot more robust; I have more objects identified there. If I don’t do that transcoding, with the raw video you get what’s called the snowflake effect, and everything gets distorted and limits my impact. And on the bottom, if I just compress the hell out of these images or down-sample the video, which is what most approaches on the market do, I end up limiting the quality of that image frame, and then the obstacles I identify are probably 50% or less of what I’m able to do with the high quality.
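A toy sketch of the bandwidth-aware decision: probe the local uplink, keep some headroom, and pick the highest rung of a bitrate ladder that still fits. The ladder values, headroom factor, and fallback behavior here are hypothetical, not the system’s actual policy:

```python
def pick_transcode_profile(measured_kbps, headroom=0.8):
    """Choose a video profile that fits the measured uplink.

    Rather than always sending raw video (jitter, dropped frames) or
    always applying fixed heavy compression (degraded detections),
    adapt the transcode target to what the Wi-Fi can actually carry.
    """
    # (label, required kbps, width, height) - an illustrative ladder.
    ladder = [
        ("1080p", 5000, 1920, 1080),
        ("720p", 2500, 1280, 720),
        ("480p", 1000, 854, 480),
        ("240p", 400, 426, 240),
    ]
    budget = measured_kbps * headroom  # leave margin for fluctuation
    for label, kbps, w, h in ladder:
        if kbps <= budget:
            return label, (w, h)
    # Below the lowest rung: fall back to on-device processing only.
    return "local-only", None
```

The same decision also governs where the compute happens: a healthy network unlocks the remote-server services, while a poor one keeps processing on the backpack’s onboard computer.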
Here is a zoom-in to demonstrate this point. I have poor quality on the left, high quality on the right. This is an image frame outside the engineering school, and we’re firing object detection on it. You can see I’m able to pull up five or six objects that may be pertinent to your travel, and on the right hand side, look at how many more objects I’m able to detect with these arrows.
It wouldn’t be a presentation if I didn’t trip at least once. So how does this work as it relates to, you know, being in the wild? I have good networks; I have bad networks. I may have good Wi-Fi strength or bad Wi-Fi strength. So what we’re doing is we’re trying to figure out where to send this big video data: which services to compute or unlock locally, depending on the computer I have in the backpack, versus what we’re doing remotely. So, can I actually figure out that I do have a good Wi-Fi network, send some of this video data over the web, and then process and unlock advanced features on remote servers?
This is some of our work now and it’s becoming very powerful and basically creating that multi purposing that I was talking about before.
So, I want to end by highlighting a little bit about application architecture. Briefly, the pain point: most applications suffer from simple designs that lack integration between assistive services. And when we think about using these big language models, a lot of times they result in this sort of meandering, irrelevant, and diluted feedback that I think we can make more precise and more impactful. One of the biggest issues or pain points is that they’re not tailored for the blind and low vision community.
So I think this is fairly obvious, but we said can we actually train a vision language model to make sure it is tailored for the blind and low vision community, and then put a bunch of other services along with it through a platform technology to make it more impactful.
So, we have a few grants supporting this effort. Here is our project called VisPercep which is actually a large vision language model that’s uniquely trained and tuned to better support blind and low vision needs. You can see some scene understanding work we’re doing. We’re also doing things like risk assessment and we can also offer object localization. I’m not going to go through the results, but you can see how we can offer much more pithy and focused results with this AI.
So lastly, what am I able to do with this tailored vision language model? Well, now I can go back to my platform technology and layer in all of those micro services I just created. Obstacle navigation or negotiation, outdoor navigation plus indoor navigation, curb detection with scene understanding and scene exploration. And this becomes a more powerful product where I could offer simultaneous services.
Let me wrap up with some conclusions because I think I’m at time or perhaps a minute or two over. Bottom line, AI assistance is undergoing a massive shift from a tool to an assistive companion. Some of the strategies we’ve used in order to help move that forward involve top down control, unique filtering, and specific initiatives to help with focus and intent as it relates to the AI.
I showcased a series of key services or modules we’ve created that show trends in the market, including things within navigation, obstacle negotiation, virtual whiskers, curb detection and scene exploration. There’s some very exciting work happening behind the scenes as it relates to data processing with video coding and application architectures. Again some of that platform technology work creating those new frameworks.
The bottom line is the next generation of AI-boosted platform technologies holds great promise for persons with blindness and low vision.
Thank you so much for your attention. I hope some of that resonated with you and I’d be happy to take any questions.
Dr. Roberts: Doctor Hoby Wedler is this year’s recipient of the Doctor Alan R. Morse Lecture in Advocacy for his outstanding work in enhancing employment and education opportunities for people with visual impairments and other disabilities.
The lecture is named for Doctor Alan R. Morse, Lighthouse Guild’s President Emeritus, for his dedicated leadership of Lighthouse Guild and advocacy for people with vision loss. Established in 2022, it acknowledges individuals who, through leadership, raising awareness, and addressing barriers, are working to make a world where no person is limited by their sensory capacity.
As a chemist, entrepreneur and sensory expert, Doctor Wedler has actively paved the way for others to join him in his quest to follow their passions regardless of the challenges that lie ahead. He has been blind since birth. In 2011, he founded a nonprofit organization to lead annual chemistry camps for blind and visually impaired students throughout North America.
That same year, he opened Tasting in the Dark, a truly blindfolded wine experience, in collaboration with Francis Ford Coppola.
Doctor Wedler is a passionate educator who advocates for inclusivity in business, inspiring a broad audience through lectures, workshops, and mentorship programs. A co-founder of four companies, Doctor Wedler views entrepreneurship as a means to address challenges, solve complex problems, and improve the world.
I am honored to introduce the 2024 Doctor Alan R. Morse Lecture in Advocacy recipient Doctor Hoby Wedler.
Dr. Wedler: Well, thank you so much. What an incredible honor to be here. I can’t thank the Lighthouse Guild enough, and let me tell you, after reading the bio of Doctor Alan R. Morse, I am so humbled and honored to be receiving an award in his name. What an incredible person. What an incredible leader. Someone who persevered. Someone who saw the future for blind people, possibly before they saw it for themselves. I’m also incredibly honored to share this experience and this opportunity and these honors with my fellow awardees, Doctors Rizzo and Lee.
Let me tell you those two presentations are a hard act to follow and I’ll just, spoiler alert, tell you this one’s a little lower tech.
So, I didn’t show slides on purpose because I just want to tell you some stories. Anyone have a problem with stories? Stories are good. Stories are fun. I just want to tell you stories about why blindness is OK.
You know, I think it’s great to do amazing work to try to eliminate vision loss if we can, because we use our eyesight to take in 85 to 90% of our surroundings, right? That’s a lot of surroundings for one sense. It means we use all four of our perfectly good other senses to take in only 10 to 15% of our surroundings, right? So eyesight is important. And if we have it, we should certainly use it. But if we don’t have it, there’s a lot of life to live out there.
People fear disability because if we think about disability it’s the only minority group that any of us can join at any given time. And because our eyesight, I don’t call it vision, I think all of us have vision, some of us just might lack eyesight – Our eyesight is used so prevalently and it’s such a non-vulnerable sense, it is the most feared of the disabilities out of all of them. Many studies have shown that, right? But as someone who was born this way, I figure let’s just live life to the fullest and help anyone else with blindness or low vision live their best lives and enjoy as much as possible that life has to offer.
So, let me bring you back to 1987, the year I was born. My parents did all their genetic testing.
Did everything they needed to and everything turned out normal. They didn’t know blind people. I think my dad helped a blind guy across the street one time when he was like 20 years old. He didn’t know what blindness meant. They didn’t fear. They didn’t think about blindness. It was just something that was around them.
They didn’t, again, really know that much about or think that much about it. And then I was born and the doctors looked at my eyes and said very quickly, oh, there’s a problem here.
And within a few hours, they realized, yeah, Hoby is – he’s born. He’s alive, he’s healthy.
But he’s going to be blind and he’s going to be blind for the rest of his life most likely. They were absolutely devastated. They didn’t know how to deal with that. What are we going to do? How’s he going to grow up? How’s he going to make friends? You know, all the worst thoughts go into your mind, right?
My mom decided when I was about 12 hours old to call her best friend, Barb Morgan. And Barb’s husband answered the phone. And all he was saying were things like, oh, no. Oh, this is terrible. What are we going to do about this?
And Barb, being somewhat of a person who wants to know everything right away, grabbed the phone from her husband’s hand and said, what is going on? And my parents explained, yeah, Hoby was born. But he was born blind. And Barb said, oh, what a relief. I thought he was dead.
Blind we can deal with. If I ever write a book, it’s gonna be called “I’d Rather Be Blind Than Dead.”
Now, why was Barb OK with my blindness? Why was this something that she did not see as a problem? Well, it’s because her father’s best friend when she was growing up, was a professor of psychology in the early 20th century, who was totally blind.
He was a handyman. When the dishwasher broke, they called John. When a car needed fixing, they called John. When anything around the house needed mechanical fixing, they called John Hudson. Barb, as a little girl around blindness, saw John as a perfectly capable guy, someone who she respected, revered, loved. He just so happened to be blind. So she learned early on that blindness was not a lifelong problem, but rather a nuisance that you have to deal with.
And if from a young age we give people the ability to realize that whatever hand they’re dealt, whatever they deal with is something that’s overcome-able, something they can deal with, they take it and run with it, right? If we tell someone they can do whatever they want, they’ll believe us. If we have high expectations in someone, they’ll believe us. They’ll understand that anything is possible.
But if we tell them, oh, this is a big problem. You aren’t the same as everybody else.
You are going to suffer. This is going to be a difficult life to live. They’re going to take that to heart. And then life will be hard.
Boy, that conversation with my mom’s friend when I was a day old or whatever really inspired them to never lower the bar. My parents, from a very young age, told me that anything is possible. And they also told me and my sighted brother, two years older, you need to take responsibility for yourself and for the actions you take, so that when you succeed, the success that you feel is all yours. And when you fail, you need to take the blame for that failure. So, take the honor and the credit when you do something well, and don’t be afraid to take the blame when things don’t work out the way you want them to work out.
They also taught us that there is no substitution for hard work. My chores might have been a little different than my brother’s, but boy, I was held to the same exact standards that he was. I guess I spent some time in the kitchen as a kid, and my parents realized that I had a desire and, you know, an interest in cooking. So, when I was about 8 years old, they said, OK, your job is to make large pots of soups and stews that we will freeze in smaller aliquots and either use as, you know, easy weeknight healthy meals, or take to work with us as a healthier option rather than eating out.
When I was 10 years old, my birthday present was a 42 quart soup pot. So, that’s what I did. I spent time in the kitchen, I thought of it as a chore. What I didn’t realize is that my work building flavors, combining flavors, seeing what worked together and what didn’t work would shape my entire career moving forward.
I learned that celery and carrots and onions work well together. If I had read a French cookbook, I would have known that they figured that out centuries before. Ginger and turnips don’t work. Don’t mix root vegetables and ginger. The flavor isn’t right.
What I realize today is that I was using my skills as a young artist, as someone learning how things mix together, and I was learning exactly the same set of skills that my friends were learning by drawing on their Etch A Sketches and making pictures, right, and learning about colors. So, while they were learning what colors fit together, how they fit together, what would make things look good and what would make things not look good, I was learning the same thing about flavor, aroma, and texture. And this experience shaped my future as a non-visual sensory guide to the greatest things life has to offer.
I might be a food scientist. I might be a chemist. But at heart I want people to get the most out of life. And I feel like our eyesight is a crutch to that. We’re so busy looking at our phones and looking at what’s happening around us and nervous about what people think of us. We’re distracted by our eyesight, so we can’t pay attention to the little things that the world has to offer.
The sound of a brook babbling next to you, the way the air smells as you walk or drive by food vendors on a Saturday morning, the way a glass of wine in front of you tastes, or a soup or stew that you might enjoy. These are the things we don’t let ourselves think about. Why a chemical reaction works the way it does. These are the things that interest me, that I try to get other people excited about.
That’s my life dream, is to get people excited about the life they live without worrying about their eyesight and without using it as a crutch because eyesight can be totally blinding to everything else that we do in life.
So, I was a weird kid. I think I’m still a weird adult and you know, I was blind and I fell in love with the most visual subject out there or what people told me was the most visual subject out there, that being chemistry.
Why did I love chemistry? I just always had a knack for thinking about how atoms fit together to form molecules. Why one thing reacted with another to form an interesting product. And it’s all because I had a great high school chemistry teacher. A teacher who told us that chemistry is everywhere you go, everywhere you look. We eat it, we drink it, we stand on the earth. The Earth is made of chemicals. We breathe chemistry. Every time you take a breath of air, think about how the air is processed through your body. This is all chemistry.
And she would tell us as a class, I think you should think of chemistry as something more than the prerequisite boring class that you have to take to study what you really want. Think of chemistry as something you can pursue and use as a tool as you live your lives. Pursue chemistry further than you think you might want to.
I’d go up to her classroom and talk to her and learn things that might have been confusing, that she was writing on the board. We’d talk through them and I’d say, you know, I love chemistry. I actually want to go study it in college and get a degree in it and maybe teach it one day. She would say, oh, Hoby. Chemistry is so visual. That’s never going to work for you. You should think of something more practical, something like being a literature major or a historian.
I took that one to heart. I got an undergrad degree in history. Because I love stories of how people existed and what they did. But, I knew that chemistry was my thing. I wanted chemistry. I wanted to study chemistry and teach it and get other people excited about chemistry, and I thought there’s got to be a way for me to describe why I should be able to study chemistry to her.
I came up with a story, or a line, I should say. I remember the day like it was yesterday when I went up to her classroom on a cold California morning in January, which means nothing in New York. It was the second week of the second semester, and I went to her classroom at about 7:30 in the morning, when I knew she was there prepping for the day but no other students would be there.
She said, Hi, how can I help you? I said, well, you tell me all this time that blind people shouldn’t study chemistry because it’s visual. Can you see atoms? She said, “No, no, it’s chemistry. I told you that atoms are microscopic. We can’t see them.”
Well, neither can I. Chemistry is a cerebral science, in our mind. Sure, our eyesight is amazing for telling us whether there’s a color change, whether a gas is being evolved from a reaction, what chemical we’re measuring if we don’t label them in ways that are accessible. We can look at graduated cylinders and see what quantity of chemicals we’re measuring. But what is happening in that flask in the laboratory is stuff that we think about. That’s the point.
Chemistry is in our mind, and let me tell you, we look at the entire electromagnetic spectrum, from waves that are meters long, like microwaves and radio waves, to waves that are tiny, femtometers long. A femtometer is one quadrillionth of a meter. That’s a huge range of a spectrum. The visible part, the part that we can see, only ranges from 400 to 700 nanometers.
Let’s say I were to draw a line representing the electromagnetic spectrum from New York City all the way out to San Francisco, CA near where I’m from. The part that we can see, the visible part would be about half a mile long. It’s tiny.
So we’ve spent a lot of money, not me personally, but the industry has spent a lot of money creating eyes that can see chemical phenomena that we can’t see with our eyes.
Think about the X-ray machine in medicine. You don’t have X-ray vision; you can’t see bones. Think about nuclear magnetic resonance, which we use to spin the nuclei of atoms and see what’s happening. We’re even using radio waves for that.
Think about all the different things that we might do in a laboratory that we can’t see.
We build instruments for that. Chemistry is a science of the mind.
And then I realized I had an advantage in organic chemistry that my friends didn’t have. Organic chemistry is the most visual part of chemistry overall, and it’s the part that I fell in love with and earned my PhD in. The reason organic chemistry made sense to me is that it’s an understanding of how atoms fit together to form molecules and how molecules fit together to form larger structures.
And I realized, wait a minute. I’m a very spatial person. I’ve never been able to see light in my life, so everything is in my mind’s eye. How you all are sitting in this room, I imagine. Where this building is, somewhat confusingly positioned between West 64th, Thelonious Monk, and West End, is in my mind.
Thinking about how to get from here to the local subway station, I map it out in my mind. So thinking about how to get from here to the subway station, or back to the hotel I’m staying in, is not very different from the process I think about when I ponder how to add a chlorine atom to a benzene ring. I use my mind to visualize the things that you can see.
But there’s no reason we can’t take those things that we can see, at kilometers and meters and whatever, and shrink them down into nanometers and angstroms, and think about how streets fit together exactly the way we think about atoms and molecules coming together to form chemistry. That’s what we do with our lives.
Now, I had amazing mentors. I’m going to get to this in a second. I had amazing mentors as I grew up. Amazing people who supported me starting with my parents. But all along I had a great support system and I advocated for myself and I found that great support system.
But when I was an undergraduate student, I realized that there were a lot of blind people in my community who were told they cannot, who were told you can’t do this, you’re blind. You shouldn’t study math. It’s too confusing. You shouldn’t study particle physics. No, it’s not for you. And I realized that there was a huge need to bring together a program to teach blind and low vision students that they can do whatever they want.
And I did this starting in 2011, and I utilized chemistry merely as a lens to show students that society might tell them that what they want to do seems too visual, but they really can do whatever they want. So, for many years, and I still do this as a consultant, the nonprofit that I co-founded put these camps together and taught students how to do hands-on organic chemistry.
We, of course, didn’t do chemistry that changed colors. We did chemistry that changed smell. We mixed really stinky carboxylic acids and alcohols together to form sweet smelling esters that they identified in fruits that we let them experience as well. We used onion and garlic as indicators of acid base titration rather than things that change color.
But the bottom line with all these camps is that we showed students that they should just do whatever they want in life. A very important part of these camps was that we had, and still do have, blind mentors, blind role models if you will, to work with our students. We had a beautiful camp, by the way, run by the Lighthouse for the Blind in San Francisco, called Enchanted Hills Camp, just in the hills above Napa, CA. It’s 311 acres of space, some wild wilderness, some built-up space, for people to come and do things that feel uncomfortable in a safe environment.
So, I would tell all my mentors the day before, I’d say I want you to take your students out and I want you to genuinely get lost with them, go for long hikes and the more lost and turned around you get the better. Because life as blind people is all about solving problems. Hey, where did I go wrong? Where am I now?
Now, with Doctor Rizzo’s technology, we wouldn’t have to get lost, because it’s all just there, right? But I believe in lower-tech solutions like this white cane, because it’s how we learn. It’s how we think. It’s how we progress. And when I get lost and find myself at the top of a 30-foot drop-off, I realize, okay, I don’t really want to go down there. But I wanted my students to go out and get turned around and get lost.
To this day, any workshop that I do for blind students involves cooking. Partially because I love the chemistry of food and drink, but mainly because when I asked the cohorts of students who came in long ago what they liked to do, many said they would love to cook but were not allowed in the kitchen. Students who were 18, 19, 20 years old were not allowed in the kitchen or near the kitchen, because their parents thought the kitchen was too dangerous and they might burn themselves.
So, I deliberately do not invite parents to my camps, and I have the students cook their dinner on large open BBQ grills with flames going everywhere. Because that’s how we learn. If you get a burn, you get a Band-Aid. That’s what we do. Don’t tell the insurance company about this. Please keep that one to yourselves.
Just to tell you a few stories from this program: we had a student by the name of Newton. Newton came to us very timid. He didn’t want to step into the laboratory space at all. I had to go to him and say, come on, Newton, let’s do this together.
So put your hands over mine and we would go together and we would pour chemicals in properly labeled bottles. He eventually came around and told me, I’ve always loved science, but nobody has told me that I should pursue it or that I can pursue it. I’ve always loved learning about and reading books that are on tape about how the universe is formed, how things come together, and when I told my parents I wanted to study science, they said No, no. You should be a geographer. That’s the closest to science we want you to get.
He said because of that, he was never allowed to do anything in the laboratory, wasn’t even allowed to wash dishes with hot water.
Newton came through our camp and went home and told his parents what he could do. They called me and said, what did you do to our son? I said, well, I told him what he could do, and I think he wants to do it. And they said, well, we’re going to let him. The tables turned. Let’s see what can happen.
Newton didn’t know what he wanted to study, but he knew he wanted to get an undergraduate degree in some sort of science, and I’m proud to tell you he graduated from UC Berkeley with a degree in astrophysics and then went on to pursue a PhD in meteorology, and he is this amazing weather scientist now. Just an incredible individual.
Someone who I’m so incredibly proud of.
We’ve taught over 200 students through these programs, from, I think, six or seven countries. I’m so excited to say we’ve trained people who are now PhD scientists. One of our students, Catherine, is about to get her PhD in plant science from the University of California, Davis. Veterinarians, vet techs. These are all people who love science but didn’t know what they could do with their love of science, and their desire is as strong as ever.
I’ll just give you a few thoughts about advocating for oneself. Whenever I work with students, this is what I teach them, but it’s also how I made it. Advocacy is about looking at what people tell you is a disadvantage and turning it into a huge advantage.
I might be at a disadvantage in the eyes of a lot of people as a blind chemist. But I love fitting atoms together to form molecules, and I trained my palate to be really powerful. And that all started when I began working with Francis Ford Coppola, when he called and asked me to do a blind wine tasting for him and I said yes. And then I hung up. Whenever Francis Ford Coppola asks you to do something, you’re going to say yes, right? And you hang up and freak out about what you agreed to do. But that was the most powerful phone call of my career, because it inspired me to fall in love with the three things I love: people; food and beverage, and how flavor, aroma, and texture fit together; and chemistry. To this day I bring these things together in my career.
So, I use the chemistry that I know and the chemistry that I do to develop products and to be a nonvisual sensory guide, as I said. So it’s all about finding something that people think you can’t do, and this is advice for everyone here, and turning it into your superpower, using the things that you know you’re good at along the way.
I always tell the folks I work with: educate, don’t litigate. There are, unfortunately, so many people in the disability community who think they deserve everything handed to them on a silver platter. I didn’t get served right by this teacher. I didn’t get this done right in my place of employment. It’s not because people don’t like you. They just don’t know how to work with you.
So, rather than suing them and making a huge bad name for the disability community, show them what you can do. Work with them and make it exciting to work with you.
What I have done throughout my life, whether I’m working with an assistant or I’m working with a client or I’m doing anything, is I make it fun for people to work with me. This life that we live is a two way street. Sure it’s got to work for us, but if we don’t make it fun, exciting, and rewarding for the people we work with, it’s not going to work for us, right?
So, when I was in college, it was about finding lab assistants, sure, and working with them and showing them what I needed. But it was also about having a good time and letting them show me things they knew. I believe that life, business, personal, whatever, is a give and take, a collaboration that is always about learning.
When I had professors who were really nervous about what to do with a blind student, I would go to them and say, look, you teach me chemistry and I’ll teach you how to work with a blind student. It’s not the end of the world. I’d sit in the front row and ask a lot of annoying questions. When they would write something on the board and say, you see this, I’d say, no, I don’t. What are you talking about? They’d explain it, and they learned from me, I hope, just as much as I learned from them.
When I work with clients in the field, the same is true. One of my most sincere mentors and clients, Jackie Summers, is here with us today. We are working on a project, a difficult project, mind you, developing a liqueur that has existed in history for thousands of years, and we have fun with each other.
I like to think that Jackie thinks of me as a decent food scientist who just happens to be blind. Blindness should not be something that is a lifelong detriment. My blindness makes life fun. I can tell you right now, my blindness is the biggest blessing and the biggest gift I have ever received. And even if Elon Musk and Neuralink wanted to give me my sight back, I would say no, because I don’t want to relearn the world. That sounds like work.
I know the world as a blind person, and I love it. And most importantly, I love helping everybody, blind or sighted, realize that life is about enjoying what we do, enjoying what people do around us, and making the most of it. You know, who knows how long we all are going to be blessed to be on this earth? So, we might as well live every day not in disappointment, in sadness, in upset, but living every day to its fullest potential.
And those of you who work with people, with students, patients, clients, whatever you call them, here at Lighthouse Guild every day: remember to put that joy in their lives and let them know that just being alive on Earth, no matter what hand we’re dealt, is a blessing in and of itself. That’s the point.
You guys do amazing work and you are mentors to the people who you work with. That is something that is so important to me because I have been mentored by amazing, amazing people. And boy, I seek out mentors that tell me exactly what they think and never lower the bar because if they don’t tell me what they think, and they don’t make life hard, what good is it? Right. Our mentors need to challenge us. But they also are so special because they see a future for us before we see that future for ourselves.
One of my greatest mentors is someone who passed away earlier this year, whom some of you may know: Mr. Dan Callahan, here in New York City. Dan worked with the Jewish Guild for the Blind for many years, and I actually met him through my deep involvement with the Lavelle Fund.
Dan helped me so much in my professional career by telling me, don’t ever call me asking, should I do this? Just get out there and do it, and then call me and tell me how you got it done. Most importantly, I have so many mentors in this world; I just hope that I can pay it forward and give back the mentorship that I’ve received from so many.
I think we all, in every stage of life that we live, need to remember to find those mentors and never stop learning. Never stop your quest for enthusiastic curiosity, right? To me, the research that we do, that you all do here at Lighthouse Guild, that a lot of people in this room do, is just based on curiosity, and if we have an innate sense of curiosity, we never stop learning. We’ll never stop discovering what this world has to offer.
So it’s just so important to me. I can’t stress it enough, how crucial it is to make life for everybody you’re with fun, enjoyable and interesting, right? Everybody has superpowers.
It’s just our job to find those superpowers and draw them out of people. And that, to me, in honor of Alan Morse, is what advocacy is all about. It’s just about finding superpowers in people, pulling them out, and giving people what they need to be the best possible selves they can be in this world. And remember, we all have disabilities in this room, right? Some of us might have a hard time walking. Some of us might have different sized feet. Whatever the case may be, some of us might be blind. Disabilities are just a nuisance. That’s what it comes down to.
I’ll end with what I tell everybody I work with, everyone I mentor, all my students. First, I always say you are enough. What you do in this world, as long as you do your best and don’t be defiant, is enough. Second, you’re not alone. This world is here. It’s made up of a bunch of us.
Let’s not live our lives individually. Let’s rely on others to do things collaboratively, because when we think about things together and solve problems with a lot of voices at the table and embrace diversity, that’s when we win.
You’re not alone. And third, you’re amazing. Boy, I never tell blind people that they’re amazing the way society likes to, because we’re just people who happen to be blind. But I tell everybody in this world that there is no substitute for hard work and everybody has a superpower. You just have to figure out where you’re amazing. But you all are. That’s the point.
In honor of Alan Morse and with great thanks to the Lighthouse Guild, I am so deeply honored to accept the Alan R. Morse Lectureship Award for Advocacy. And I just want to say thank you so much to everybody in this room.
Thank you to my fellow awardees. It’s such an honor to share this platform with you and go forth and conquer. Remember, be happy.