On Tech & Vision Podcast

AI Revolutionizes Vision Tech, Ophthalmology, and Medicine as We Know It

In 1997, Garry Kasparov lost an epic chess rematch to IBM’s supercomputer Deep Blue, but since then, artificial intelligence has become humanity’s life-saving collaborator. This episode explores how AI will revolutionize vision technology and, beyond that, all of medicine.

Karthik Kannan, co-founder of AI vision-tech company Envision, explains the difference between natural intelligence and artificial intelligence by imagining a restaurant recognizer. He describes how he would design the model and train it with positive or negative feedback through multiple “epochs” — the same process he used to build Envision. Envision uses AI to describe the world to blind and visually impaired users through only smartphones and smart glasses.

Beyond vision tech, AI enables faster and more effective ophthalmic diagnosis and treatment. Dr. Ranya Habash, CEO of Lifelong Vision and a world-renowned eye surgeon, and her former colleagues at Bascom Palmer, together with Microsoft, built the Multi-Disease Retinal Algorithm, which uses AI to diagnose glaucoma and diabetic retinopathy from just a photograph. She also acquired for Bascom Palmer a prototype of the new Kernel device, a wearable headset that records brainwave activity. Doctors apply algorithms to that brainwave activity to stage glaucoma, for example, or to identify the most effective treatments for pain.

Finally, AI revolutionizes drug discovery. Christina Cheddar Berk of CNBC reports that thanks to AI, Pfizer developed its COVID-19 treatment, Paxlovid, in just four months. Precision medicine, targeted to a patient’s genetic information, is one more way AI will make drugs more effective. These AI-driven innovations promise to lower drug costs, but the value to patients of having additional, targeted, and effective therapies will be priceless.

Podcast Transcription

Livingston: A human cannot beat a chess computer that is essentially trying, just straight up.

Roberts: Zephin Livingston writes about artificial intelligence for the technology magazine eWeek.

Check

Livingston: There have been occasional matches between the top-ranking chess players and top chess computers, and the player loses. It just is what it is. Chess is not a solved game in the way Tic-Tac-Toe is completely solved. There are too many variables to fully solve chess, it looks like.

But the matter of machine versus man in chess has more or less been solved in the 21st century, and the machine wins.

Checkmate

Roberts: “Swift and Slashing: Computer Topples Kasparov.” This is the 1997 New York Times headline, when world chess champion Garry Kasparov lost in an epic rematch to IBM supercomputer Deep Blue. But our relationship with AI is no longer a competition.

Kannan:  I strongly believe in a future where man and machine work together rather than machine supplanting the man in the food chain. I think it’s going to be the two sort of working synergistically.  I think man is just going to adopt machine.

Roberts: Karthik Kannan is co-founder and CTO of Envision, an assistive technology company. He is using his background as a researcher and programmer in AI to develop technology solutions that help people with vision impairment access the world around them. 

Kannan: We’re seeing AI summarizing articles for us wonderfully. We’re seeing it answer basic questions really well. I think it’s going to get really good, but I think reaching human-level sentience is not going to happen.

Roberts: Zephin agrees.

Livingston: AI can process information at levels that we can’t really come close to nowadays. And that’s why they’re so good at chess. They can process all this information and they can kind of extrapolate that information out much further than we can. That’s a feat of engineering, but it isn’t the same as an existential crisis for humanity, so to speak.

Roberts: And it may be the opposite of an existential crisis for humanity. It may be a boon for humanity. One that promises to make us all much healthier.

I’m Dr. Cal Roberts and this is On Tech & Vision. Today’s big idea is how artificial intelligence is poised to revolutionize vision technology, and beyond vision technology — all of medicine. It’s a process that has already begun.

As I mentioned, Karthik Kannan, along with his partner Karthik Mahadevan, founded Envision. Their work draws upon advancements in artificial intelligence, computer vision, and machine learning to expand access for people who are visually impaired to the many dimensions of the world around them.

Their extraordinary software was the winner of the 2019 Google Play award for Best App. It provides instant, real-time audio descriptions of a person’s surroundings, including written materials, objects, people, spaces, and on and on. It’s so user friendly: it can be accessed through smartphones people already have in their pockets and smart glasses that people will soon be wearing.

Karthik, welcome.

Let’s start off with natural intelligence, because I think if we understand natural intelligence, it really helps us in the area of artificial intelligence. So, I’m going to give you an example. Tell me what’s happening in my brain as I drive through a city I’ve never been to before. How would I recognize a place where I could have breakfast?

Kannan: Definitely, you’ve been to restaurants in the past and that would serve as sort of a reference point for what a restaurant or a diner could look like. So usually, you’re looking for smells, you’re looking for sights of people coming out of a place, you’re looking for something that resembles a restaurant and that’s your reference point. And, if you’re somebody who is really completely alien to a city and looking for a place to eat you have a lot of visual, auditory, and maybe even like olfactory, like smells as well, all of these put together, give you this abstract concept of a restaurant or a place to eat in your head.

From that basic abstraction or that basic concept of what a restaurant is you would then probably look for the words “diner,” look for the words “breakfast.” So, it’s like kind of building up from a basic concept of a place to eat and then specializing it from that point on to what would be like a place to have breakfast.

Roberts: So, my eyes are acting as input devices. My brain in some way is digitizing that input into information that I understand. And now, in combination with the other senses, I’m putting this all together in my mind that this would be a place for breakfast.

Kannan: That doesn’t represent the entire spectrum of human intelligence, but this exact point that you talk about, being able to identify a restaurant given a few basic inputs, is actually something that computers can also do today in the world of artificial intelligence. But there is a huge difference between the way artificial intelligence approaches the problem, in this case, and the way humans do. In an average human’s lifetime, if you’re a foodie like me, you probably see maybe a hundred restaurants. An AI would need millions of images, millions of restaurants. It needs to see so many restaurants before it can start to form the basic concept of what a restaurant actually is.

And even then, it’s just a bunch of pixels that are put together a certain way that resembles a restaurant. It doesn’t have memories of your favorite restaurant or restaurants from your childhood. It just matches bits and bytes together and says, okay, based on how pixels are spread out on this particular image, it looks like a restaurant to me, or I would just label that as a restaurant.

And that’s what I think makes human intelligence so incredible: the mind just makes up these abstractions or concepts from just a few examples.

Roberts: I think you once said computer vision gives AI a pair of eyes that can be taught with machine learning, taking all three and putting them into one sentence, which I think is great.

Kannan: So, let’s say for the sake of simplicity that, okay, we’re going to teach the AI how to recognize a restaurant on the Envision glasses, right? Which are just smart glasses that you wear, so they see what a human being would be able to see. The first thing I would do is go ahead and try to understand in what scenarios this is going to be used. Is someone going to be walking on the curb, looking at the front of a restaurant?

81% confident that this is a restaurant

Or are people going to walk into a building and then want to know if that’s a restaurant or not? You try to do a lot of research into the exact scenarios in which a particular user would want to use this restaurant recognizer. And after that, the next step would be to try to understand what are the possibilities of those scenarios happening, and then trying to collect data accordingly.

And that’s where most of the work happens. Like a lot of people assume when trying to build AI, that it’s about trying to make the AI smarter with some kind of programming technique or the other, but the secret sauce is always in the data. And it’s not just collecting the data, but it’s also labeling the data. Even though AI has gotten really advanced, it still depends a lot on humans to tell it what is the right data and what’s the wrong data.  So, we need a human to actually sit down and draw boxes in the image saying, this is a restaurant.

91% confident that this is a restaurant

This is not a restaurant. This is the inside of a restaurant. This is the outside of a restaurant.

98% confident that this is a restaurant

And that takes up like 90% of the work. And that’s where most of the cost factor is. And that’s where most of the time goes in. And of course, you also have to clean the data, because if I just feed it like 10,000 images of a restaurant from the outside, then it’s going to have a lot of false positives, right? It might just recognize any room as a restaurant if I don’t give it context. And so, all of that goes into the model itself. Once we push it into a model, which is a very fancy term for a black box, you know, it’s just a black box.

Calculating likelihood of being a restaurant

I give it a bunch of inputs. And then once I give it a bunch of inputs, it does a bunch of unknown mathematical transformations. It’s just a mathematical black box that you feed all the images in. 

So, you don’t stop there. You always have something called a validation data set, which is basically a small part of your training data set that the model has never seen, that you use to tell the model, okay, you’re doing better, or you’re doing worse on the predictions. So you put this into a training and validation loop, taking the model through multiple iterations called epochs. Eventually, once you’re satisfied with the accuracy of the model, you stop, and it’s also an art knowing when to stop training it. Because if you keep training a model for too many epochs, it starts to recognize everything as a restaurant. It starts to overfit to the training data set.

So, you have to figure out when to stop it. And then once you stop it, you finally get a model. And then you can put that in an app or on the glasses or in the cloud, and then keep sending it images and have it make predictions. And of course, you also need to get feedback from the people who are actually using it. In the real world, models can always perform unexpectedly, because you never know what kind of data it’s actually seeing in the real world. So, you need the users to also tell you, is it doing well or is it not doing well?

And then you need to feed that back into the model. So, it is a marathon. An AI is never finished training, actually. That’s one of the great things about AI: the more it’s being used, the smarter it gets over time. But that’s how you build a restaurant recognizer: just get the data, feed it into the black box, work on it a lot, and then eventually you’ll have something that does decently in the real world.
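To make the training-and-validation loop Karthik describes concrete, here is a minimal sketch in Python using PyTorch. It is illustrative only: random tensors stand in for the labeled restaurant images, a tiny classifier stands in for the “black box,” and the early-stopping patience is an assumed value, not anything Envision actually uses.

```python
# Minimal sketch of the train/validate/early-stop loop described above.
# Random tensors stand in for labeled "restaurant / not restaurant" images.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Placeholder data: 640 "images" flattened to 256 features, with 0/1 labels.
X_train, y_train = torch.randn(640, 256), torch.randint(0, 2, (640,))
X_val, y_val = torch.randn(160, 256), torch.randint(0, 2, (160,))

# The "black box": a small classifier that maps inputs to two class scores.
model = nn.Sequential(nn.Linear(256, 64), nn.ReLU(), nn.Linear(64, 2))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

best_val_acc, patience, epochs_without_improvement = 0.0, 3, 0

for epoch in range(50):  # each pass over the data is one "epoch"
    model.train()
    optimizer.zero_grad()
    loss = loss_fn(model(X_train), y_train)
    loss.backward()
    optimizer.step()

    # Validation: data the model never trains on tells us whether it is
    # genuinely improving or starting to overfit.
    model.eval()
    with torch.no_grad():
        val_acc = (model(X_val).argmax(dim=1) == y_val).float().mean().item()

    if val_acc > best_val_acc:
        best_val_acc, epochs_without_improvement = val_acc, 0
    else:
        epochs_without_improvement += 1

    if epochs_without_improvement >= patience:
        print(f"Stopping at epoch {epoch}: validation stopped improving.")
        break
```

The validation set is the part of the data the model never trains on; when its accuracy stops improving, training stops, which is the guard against the overfitting Karthik warns about.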

Roberts: One of the things I found really interesting in what you had to say is the fact that you also have to show it what is not a restaurant. So, I know what’s a house. I know what’s an office building. And so with AI, you’d have to teach it not just what is, but what is not.

Kannan: Exactly, right? That’s because human intelligence is so holistic, right? We have so many sensors on our body – your eyes, your nose, your ears, the touch, all the memories associated with the place. All of this comes together in this incredible fusion. And then you have this concept of what is a restaurant, what’s a house, and so on and so forth. Whereas an AI is taught only images, right? If I have to think of the world only as pictures, then of course my view of the world is going to be so one-dimensional, right? Whereas with human intelligence, you have so many different types of sensors that come together and improve the overall accuracy of natural intelligence.

Which is why, as of today, this concept of sensor fusion in AI is actually very popular, where you don’t look at just one source of data. You look at other sources of data, and then you blend them together to create a much smarter model overall, right?

Sensor fusion happens all the time. For example, if you have one of these fancy new iPhones that have a LiDAR sensor, that is sensor fusion. So, what actually happens when you open the camera app on your phone now is that the LiDAR sensor basically spits out lasers and gives the computer vision model an additional source of input. And then the two come together and basically create a much more realistic depth effect than it would with just the camera itself.

We use that within the Envision app. In order to estimate the distance of an object from a user, we go ahead and use the LiDAR along with the camera input, because while it is technically possible to estimate the distance of an object from where you are using just the camera input, it becomes like 10 times more accurate when you also bring LiDAR or other kinds of physical sensors into the mix, and then also blend it with the model’s own predictions. And that is one of the most exciting areas of computer vision and machine learning right now, where you’re bringing all of these different sensors together to create a much more holistic picture, somewhat similar to how humans are in general.
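As a rough illustration of the LiDAR-plus-camera blending Karthik describes, here is a hedged sketch that fuses two distance estimates by weighting each one by how much it is trusted (inverse variance). The readings and variances are made-up numbers; Envision’s actual fusion method is not described in the episode.

```python
# Illustrative only: fuse two independent distance estimates by weighting
# each with the inverse of its variance (a simple, standard fusion rule).
def fuse_estimates(camera_m: float, camera_var: float,
                   lidar_m: float, lidar_var: float) -> float:
    w_cam = 1.0 / camera_var
    w_lidar = 1.0 / lidar_var
    return (w_cam * camera_m + w_lidar * lidar_m) / (w_cam + w_lidar)

# Hypothetical readings: the camera-only estimate is noisier than LiDAR,
# so the fused value leans toward the LiDAR reading.
camera_estimate, camera_variance = 3.4, 0.50   # meters, meters^2
lidar_estimate, lidar_variance = 2.9, 0.05

print(fuse_estimates(camera_estimate, camera_variance,
                     lidar_estimate, lidar_variance))  # ~2.95 m
```

Because the LiDAR reading is given a much smaller variance, the fused estimate sits close to it, which mirrors Karthik’s point that adding a physical sensor makes the camera-only estimate far more accurate.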

Roberts: Potentially, of course, this can do things that humans can’t do, because human eyes don’t have LiDAR. I know that when you play golf, every golf course has the flag on the green at the same height. And so once you mentally know how tall a flag is, by its relative size you know how far away it is. Wouldn’t it be great if we had a sensor that would help us do that? And I think that’s one of the great promises of technology: technology can supplement that which we are already able to do with our eyes, our ears, our nose, and our fingers.

Kannan: Yeah. There’s an entire movement around artificial intelligence called transhumanism, where there are people out there who believe that the future of the human race is basically blending with AI or becoming one with AI in general. Imagine you had eyes with built-in LiDAR sensors, for example, that could look out into the distance and know the exact distance, or eyes that could zoom 10x or 20x, just like you do with your phone camera. That’s really sci-fi stuff, but I strongly believe that technology on the horizon will benefit greatly from not just a LiDAR sensor, but other types of sensors as well.

For me, one of the most exciting things about AI in the short term – I’m talking about the next 10-15 years – is how monumental it’s going to be in the healthcare industry, right? I can see that it’s going to have its biggest impact in the medical space. And when we are talking about something like, for example, cancer, I’ve seen researchers build models that outperform radiologists when it comes to identifying cancerous tumors or stuff like that from X-rays.

In fact, it is now almost taken as a given in the medical AI space that AI is going to outperform humans in at least the basic diagnosis part of it. And I very much believe that computer vision will play a massive, massive role. Their AI model has about 99.5% accuracy. The same humans who have had about 15 years’ experience in the field of radiology have only about 96% accuracy. And they’re currently doing clinical trials, and when those kinds of innovations become mainstream, the cost of healthcare will drop dramatically. At the same time, it opens up access to healthcare, too. It also has the potential to catch a lot of illnesses very early on. So, it’s one of those areas where AI will have a massive impact in the short term.

So, in the medical AI space, the sensor fusion part is going to be literally the man and the machine fusing together, working as this one entity, where the machine tries to highlight the things that the man might miss. And the man or the woman is just going to be there trying to go ahead and identify: is it actually noise or a signal, right?

Roberts: We are seeing that right now in my field, in the area of diabetes and its effect on the eyes. As we all know, diabetic retinopathy is a very common condition that occurs among diabetics and can be blinding. And so, what we’re seeing is cameras: every diabetic is encouraged to have a picture taken by one of these cameras, no matter where they are. The camera then determines whether they have early signs of retinopathy or not, and sends all the positives to a central clearinghouse to just make sure that the diagnosis is correct. And in that way, the specialist’s office only gets referred the people who really need care, and the specialist is not put in a position of actually doing the screening. The device does the screening, and now the doctor just does the care.
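A minimal sketch of the screening-and-referral flow Dr. Roberts outlines, with hypothetical placeholder functions standing in for the camera’s on-device check and the central clearinghouse review; only confirmed positives reach the specialist.

```python
# Illustrative triage flow: screen everyone, send positives for central
# confirmation, and refer only confirmed cases to the specialist.
# screen_image() and central_review() are hypothetical placeholders for
# the camera's screening model and the clearinghouse's confirmation step.

def screen_image(image) -> bool:
    """Placeholder: True if early retinopathy signs are suspected."""
    return image.get("suspected_retinopathy", False)

def central_review(image) -> bool:
    """Placeholder: clearinghouse confirms or rejects the positive screen."""
    return image.get("confirmed", False)

def triage(patients):
    referrals = []
    for patient in patients:
        if not screen_image(patient["image"]):
            continue                         # negative screen: no referral
        if central_review(patient["image"]):
            referrals.append(patient["id"])  # confirmed: refer to specialist
    return referrals

patients = [
    {"id": "A", "image": {"suspected_retinopathy": True, "confirmed": True}},
    {"id": "B", "image": {"suspected_retinopathy": True, "confirmed": False}},
    {"id": "C", "image": {"suspected_retinopathy": False}},
]
print(triage(patients))  # ['A']
```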

Kannan: That’s the ultimate promise of machines, right? Like they’re going to come in and they’re going to help us take away the tedious, everyday tasks. And then, we are going to be left to do other things while the machines take care of the basic stuff for us.

Roberts: How do you determine what are the projects to work on? How do you evaluate ideas, or how do you find ideas of what the next great thing should be?

Kannan: I think some of the best work that we have done here at Envision has largely come from understanding people, you know? I spend a lot of time talking to visually impaired people. I speak to hundreds of visually impaired people each week and each month, to be more immersed in their environment, because at the end of the day, I realize I’m not someone who’s visually impaired. So, I have to be humble enough to understand their space.

And, as I spend time with them, I start to see patterns in the problems that they have. And then, I keep a very close eye on the research that’s happening in the AI space. It’s this merging of the two, like merging of the conversations that I have with people and looking at what’s happening in the field, putting it together, and then coming up with new ideas for features or products and so on.

Roberts: But there are unmet needs and there are unanticipated needs. I had a cell phone, a flip phone, for 10 years. And if you had asked me what I wanted, I don’t think I would’ve ever told you that what I wanted was a camera in my phone. That was not something I had ever thought of as an unmet need. Obviously, that was an unanticipated need. And you put a camera in my phone and all of a sudden, my life changes dramatically. So it’s not just meeting the needs that people tell you about. It’s anticipating needs that people hadn’t even thought about.

Kannan: I think it’s about putting it out there and seeing what sticks. So, for every really good idea that comes out, I think there are like a hundred or so ideas that get rejected. And I think it’s consistently putting yourself on the line, putting your thoughts out there, and then seeing it being accepted or seeing it being thrown away.

Just like how today we have our smartphones always with us, and how some of us wear smartwatches all the time with complex health sensors, I think humans are just going to get much better over time as a species.

Habash: Everyone asks me, oh, is AI going to take over for physicians? You and I both know that’s not the case, and we already use it in all of our daily lives.

Roberts: Dr. Ranya Habash is CEO of Lifelong Vision, a world-renowned eye surgeon, and a Visionary Innovation Mentor at Stanford University. She is on the FDA’s Digital Health Network of Experts and the former Medical Director of Technology Innovation at Bascom Palmer Eye Institute.

Habash: So, every good piece of technology comes across my desk. I see everything before it’s commercialized, and I know what’s going to work and what’s not going to work within 30 seconds of seeing it. You know, I’ve just gotten really good and honed in on what I can throw my weight behind and what won’t work.

If somebody comes in and they’re carting all this heavy device technology and plugs and wires everywhere, that’s an instant red flag. They need to show it with just a smartphone, and then I’ll really take it seriously.

Roberts: But when Microsoft approached her with the idea of using their supercomputers to train an algorithm to diagnose certain eye diseases, using machine vision and photographs, she was excited to try it.

Habash: So, they came to us and asked if we would be interested in showing how their supercomputers could help medicine. And I was put in charge of that project and led it. The thing is, they have the computing power and they have the technical expertise, but they don’t know what the diseases are, or what problems need to be fixed, or even the data sets to pull for it. So, they need to partner with medicine in order to give them that information.

We use their supercomputers, so just basically their computing power, and the Azure platform, which is their data collection platform. And it keeps all the data secure, protected, and housed with Microsoft, so that we don’t have to worry about data being exposed. We can rest easy and just do the research then.

So, we took 86,000 images from Bascom Palmer and we ran them through Microsoft’s supercomputers. So, for glaucoma or diabetic retinopathy, we kept showing it these pictures that are labeled by our retina specialists to say, okay, we agree, this is a picture of diabetic retinopathy, and we keep showing it to the machine over and over again until it gets it.

Roberts: Today, Bascom Palmer’s Multi-Disease Retinal Algorithm can accurately diagnose a number of eye diseases, including glaucoma and diabetic retinopathy, from just a photograph. And, Dr. Habash says, the more people who use the Multi-Disease Retinal Algorithm, the more representative and accurate the data set will become.
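To show what “diagnosis from just a photograph” can look like in code, here is a hedged sketch of multi-label inference: a classifier returns a probability per condition for a single retinal photo and flags anything above a screening threshold. The model, label set, image size, and threshold are all placeholder assumptions; nothing here reflects the internals of the actual Multi-Disease Retinal Algorithm, which are not described in the episode.

```python
# Illustrative only: run a stand-in multi-label classifier on one fundus
# photo and report which conditions exceed a screening threshold.
# The weights, classes, and threshold below are placeholder assumptions.
import torch
import torch.nn as nn

CLASSES = ["glaucoma", "diabetic_retinopathy"]  # assumed label set
THRESHOLD = 0.5

# Stand-in for a trained network: in reality you would load saved weights.
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 224 * 224, len(CLASSES)))
model.eval()

def screen_photo(photo: torch.Tensor) -> dict:
    """Return per-disease probabilities for a 3x224x224 retinal photo."""
    with torch.no_grad():
        logits = model(photo.unsqueeze(0))        # add batch dimension
        probs = torch.sigmoid(logits).squeeze(0)  # independent per label
    return {name: float(p) for name, p in zip(CLASSES, probs)}

photo = torch.rand(3, 224, 224)  # placeholder image tensor
results = screen_photo(photo)
flagged = [disease for disease, p in results.items() if p >= THRESHOLD]
print(results, "-> refer for:", flagged)
```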

Habash: So, now you have somebody in Australia or in Africa, taking a picture of their eye and we’re getting their data to add to our algorithm to make it more robust and more representative of the entire population.

Roberts: After Bascom Palmer’s success with the Multi-Disease Retinal Algorithm, Dr. Habash had the forethought to use a wearable device called the Kernel in her research. She wrote a grant to win one of the exciting new Kernel devices for her institution.

So, talk to me a little more about the Kernel device. Exactly what is it doing?

Habash: So, Kernel was founded by Bryan Johnson, and he was Venmo, you know, essentially. So, when he sold off Venmo to PayPal, he wanted to put aside some money to do something good for humanity. And so, he came up with Kernel, and that’s his project.

And because he worked with Elon Musk in the PayPal transaction, they had discussed brain-machine interfacing, and Elon Musk had wanted an implanted device, and Bryan Johnson thought, “Well, the world’s not ready for that yet. So, I’m going to create a wearable instead.” And that’s how Kernel was developed.

He put out an announcement that said that for the first seven really good proposals from academic institutions or from any companies, he would award one of the seven prototypes of Kernel, and we were one of the first to win it. I put together the proposal on behalf of Bascom Palmer, and that’s why we were awarded the Kernel device. So now we’re using it for a lot of our research.

So, the Kernel device is sort of like the X-Men helmet. That’s the fNIRS device. And so if you’ve ever had an MRI, this is even more precise than an MRI. And so, you wear it on your head and then do different activities, and it records brainwave activity, but then algorithms are applied to it to make sense out of that brainwave activity. So, it’s really useful in patients who are blind or whose vision you’re trying to improve, because you can actually see very specifically what magnifies the signal to the brain and what doesn’t, or what’s interfering.

And so, for Bascom Palmer, I put together a list and explanations of all the different research projects that we could apply it to. There’s Alzheimer’s research, diabetic research, glaucoma research, and then the pain research.

Roberts: As we talk about the eye and specifically vision, some of the reasons why people lose their vision, or lose some of their vision, are related to their eyes, and some are related to their brain.

Habash: So, one of the really easy things we do is to stage glaucoma using the device, where we see how strong that signal is to the visual cortex, the occipital cortex. And we can see the signal attenuating over time. So, we can tell very plainly if the patient is deteriorating or not. And oftentimes that’s helpful when they have other things, like they’re myopic, where the optic nerve appearance could mislead you into thinking they have glaucoma when it’s really just the myopic disc appearance that’s causing the issue.

Roberts: The Kernel device and AI algorithms that make sense of brain waves have allowed physicians at Bascom Palmer to pinpoint where pain occurs after eye surgery, so that doctors can more exactly apply treatments to block pain.

Habash: So, we know that patients with post-refractive pain, these are patients who have LASIK and then they come back and they have this intolerable pain that we really can’t put our finger on. And they are really miserable as soon as you turn on that light, for instance, and you just cannot see it on the exam.

So, what we are doing at Bascom is a pain study where we’re looking at patients with post-refractive pain using the Kernel device. We shine light in the eye, and we see that pain pathway light up like a Christmas tree. And, it’s actually really interesting because you see, wow, that patient really does have pain, even though I didn’t see it on the findings. But wait, there’s more, and here’s the better part. The better part of this is that now we try treatment, like a Botox injection, a botulinum injection, and we give it in different areas, for instance, and we see where the pain is knocked out.

And all of a sudden you do not see that pain pathway lighting up anymore, and the patient feels better and it’s not a placebo effect. It is because you actually cut out that pain pathway with the drug.  I just don’t think there’s anything more powerful in medicine than to be able to treat a patient and get rid of a problem that is plaguing them so much.

Roberts: Well, it’s amazing, because as physicians, we’re always looking to objectify subjective findings, right? So, a patient says “my vision’s blurry.” What does the doctor want to do? The doctor wants to come up with some kind of number that objectifies how the patient is seeing.

Habash: Yeah.

Roberts: We ask patients to objectify subjective symptoms. We ask them to grade their pain one to 10.  But, we would love to have something that we can measure because if you can measure it, it’s so much easier to treat it.

Habash: If you can measure it, you can control it. That’s the saying, right?

Roberts: Exactly. So, this is an ideal use for technology. It allows the patient to present their symptoms without feeling that they’re being disregarded or otherwise ignored. I love it. I love it. So, I learned something.

Habash: AI is used in every initiative. But if we think a little bit more about global healthcare and digital health tools for patients, I think this is one really important place that we’re headed. I’m already really trying to work on this, and my whole initiative these days is really to get screening tools and diagnostic tools into everybody’s hands, where they don’t need any fancy equipment to do anything, but they can still screen themselves for diseases, things that we could have fixed had we known about them sooner. That’s one really big initiative: getting those tools out to the whole population. But, just like we talked about before, that also feeds data from the whole population back into our algorithms, which makes them better.

Roberts: While access to broad data reduces the likelihood of algorithm biases, Karthik Kannan reminds us that it is the responsibility of the developers to work against their own biases as they train models.

Kannan: Ultimately, humans feed data into an AI model, and it learns from humans. And so it strongly takes on the bias of whoever is actually feeding that data to the particular model. And I think that has much, much greater potential for harm than even an evil overlord AI taking over humanity. I think that causes real damage. It has caused real damage in the past.

If we are not careful, if we are not cognizant of the fact that AI bias is real, then it could cause real damage in the future. I know in the Netherlands, the University of Amsterdam actually makes it mandatory that people do an ethics and bias course when they’re doing their master’s in artificial intelligence and machine learning. And I hope more people who are getting into this space are very aware of these biases and try not to amplify them in these systems.

Roberts: The other benefit of broad data access, says Dr. Habash, is precision medicine tailored for you.

Habash: We have your zip code, and we have your medical history, and we have any allergies that you have. We have your family history, like if you have any cancers or heart disease or anything like that in your family; all of those things would now be in the database. So, now when you’re taking a drug in the future, we can actually fast-forward and see what drugs would be best for you. What would work best for you? What would have the best outcomes for you? We can extrapolate insights down the road based on that. That’s precision medicine, or predictive analytics for healthcare. And I think that’s one of the most exciting places we’re headed with all of this.

Roberts: And thanks to AI, we may soon be able to discover and deliver targeted treatments that respond to individual genetic information. CNBC’s Christina Cheddar Berk recently wrote about how AI is poised to revolutionize drug discovery.

Maybe you’ve heard of a new COVID-19 treatment called Paxlovid.

Cheddar Berk: Pfizer said that by using AI to screen the protease inhibitor compounds to arrive at the targets, and then also using that to screen how they could deliver the drugs, that treatment was developed in four months. That, to me, is an incredible achievement.

What we’re talking about is going from having people in labs, like physically manipulating models, to being able to do that on the computer. It really, it does speed things up tremendously. So, I think that’s the exciting part to me.

Sequencing genomes has also become so much less expensive. So, we now are getting to the point where we’re developing these vast databases of information, and continuing to develop those is also less expensive than it used to be. So, we’re able to kind of keep building this database. I think it’s the scale that we’re getting to: the idea of it, the fact that we could be accelerating this and also lowering the cost so much.

And then, obviously, the value to patients for having those additional therapies available is something that’s hard to put a price on.

Roberts: Today, machine vision helps people who are blind get meaningful access to their world using just their iPhone camera. It lets patients be diagnosed remotely. AI illuminates the black box of the human brain for doctors – things that couldn’t be done without our current processing power and computing speed.

Doctors around the globe are applying AI to drug discovery for cancers, heart disease, Alzheimer’s, and more, just as big data, including genomic data, is allowing not only for quickly developed, inexpensive, and effective medications, but also targeted precision medicine made just for you.

It’s an understatement to say that we are witnessing the end of healthcare as we know it, that we are embarking on a new world of medicine, one that a few years ago we only dreamed of. 

But with AI as a partner at our side – a lab assistant that can cycle through 200 million or more potential solutions per second – the innovations in medical science become boundless.

And maybe, so do we. 

Did this episode spark ideas for you?  Let us know at podcasts@lighthouseguild.org.  And if you liked this episode please subscribe, rate and review us on Apple Podcasts or wherever you get your podcasts. 

I’m Dr. Cal Roberts.  On Tech & Vision is produced by Lighthouse Guild.  For more information visit www.lighthouseguild.org.  On Tech & Vision with Dr. Cal Roberts is produced at Lighthouse Guild by my colleagues Jaine Schmidt and Annemarie O’Hearn.  My thanks to Podfly for their production support.

Join our Mission

Lighthouse Guild is dedicated to providing exceptional services that inspire people who are visually impaired to attain their goals.