Apparently, there was an overwhelming response to my first blog so I get to write another one. Thanks again to both of you who read it.
Given all the doom and gloom about AI in the media recently, in this episode I’d like to talk (write) about some of the positive things going on with a branch of AI known as “deep learning”. But first, a little history. Way back in ancient times when computers were just getting started, while some scientists worked to perfect the lava lamp, others invented something called a neural network (the earliest versions date to the 1950s and ’60s). Neural networks were loosely modeled on how scientists thought the brain worked at the time: an input layer, an output layer, and in between, hidden layers of artificial neurons.
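For the curious, the “layers of artificial neurons” idea can be sketched in a few lines of Python. This is a toy forward pass with made-up weights, purely for illustration, not any particular historical network:

```python
import math

def sigmoid(x):
    # Squashing function applied by each artificial neuron
    return 1.0 / (1.0 + math.exp(-x))

def forward(inputs, hidden_weights, output_weights):
    """One forward pass: inputs -> one hidden layer -> single output."""
    hidden = [sigmoid(sum(w * x for w, x in zip(ws, inputs)))
              for ws in hidden_weights]
    return sigmoid(sum(w * h for w, h in zip(output_weights, hidden)))

# Hypothetical hand-picked weights: 2 inputs, 3 hidden neurons, 1 output.
hidden_weights = [[0.5, -0.3], [0.8, 0.2], [-0.6, 0.9]]
output_weights = [1.0, -0.5, 0.7]

print(forward([1.0, 0.0], hidden_weights, output_weights))
```

Real networks learn those weights from data rather than having them typed in, and a “deep” network simply stacks many hidden layers between input and output.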
Neural networks were really good at solving a lot of problems, but unfortunately were also similar to grilled cheese sandwiches (stay with me here). As problems became more complex, the networks needed more hidden layers to find a solution. However, once you stacked more than about two layers, training fell apart: instead of the cheesy goodness they were hoping for, researchers generally ended up with a mess someone had to clean up.
This was pretty much the situation until 2006, when Geoffrey Hinton and his team at the University of Toronto figured out that by pre-training the layers one at a time before tuning the whole network, they could overcome this problem and voila!, peanut butter and banana tall stack.
These systems with many layers were called deep neural networks and could be used to solve much more complex problems and without any cheese being scraped off the ceiling. Fast forward to today and deep learning is being implemented everywhere, including the voice and image recognition done by Apple, Google, and Amazon. It’s also the foundation of self-driving cars.
Now rewind back to last year.
I met Andrew Beck at a conference in DC. Andrew has an MD from Brown and a PhD from Stanford. He has started three successful companies and is also an associate professor at Harvard (you know, the kind of person you secretly hope has some awful dark secret like a third foot or something). Dr. Beck also appeared to be a genuinely nice guy (unfortunately) as he presented some research he had done on pathology. The first set of research presented was done to determine which factors were most important in getting accurate pathology results.
Care to take a guess? Anyone? Type of equipment? Experience of the pathologist? Day of the week?
Who said “day of the week”? Ding! Ding! Ding! You win! You win, that is, unless your sample hits the pathologist’s desk first thing Monday morning, then not so much. It turns out, pathology labs are really busy on Mondays and Tuesdays, when a pathologist may have upwards of a thousand slides to review in a single day. By Wednesday, however, things have slowed down enough that you can get a more reliable analysis.
For those not sure how important this is, it’s really, really important. The pathology results can determine the next steps in care ranging from no treatment at all to aggressive cancer treatment. I’m not authorized to dispense medical advice, but as your friend, if you need any tests done, I suggest they occur after taco Tuesday.
So back to Andrew Beck and team (oh, and before I forget: he only had the two feet, as far as I could tell anyway). They went to work on the problem with a deep learning AI. After fine-tuning the system, they compared it against an actual pathologist using samples with known results. The AI had an error rate of 7.5%. The pathologist, taking his time, achieved an error rate of 3.5%.
So good news: people are still better at some stuff, right? Sort of, but the better news was that working together, the AI and the pathologist cut the error rate to 0.5%. It became apparent that in pathology, people and computers make different types of mistakes, so combining them improves the accuracy of both. The whole process could also be sped up, with the AI flagging features on the slide for the pathologist to focus on rather than having to review the entire sample.
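A quick back-of-envelope check shows why “different types of mistakes” matters so much. If the AI’s 7.5% errors and the pathologist’s 3.5% errors were completely independent, a team that only fails when both fail would have an error rate around their product. This is an idealized assumption, not the actual study’s methodology, but it puts the reported 0.5% in the right ballpark:

```python
ai_error = 0.075     # 7.5% error rate for the AI alone
human_error = 0.035  # 3.5% error rate for the pathologist alone

# Idealized case: mistakes are fully independent, and the combined
# system only errs when BOTH the AI and the pathologist err.
combined_if_independent = ai_error * human_error

print(combined_if_independent)  # ~0.26%
```

The reported 0.5% sits a bit above this idealized floor, which is consistent with the two sets of mistakes being different but not perfectly independent.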
Now fast forward to a few weeks ago.
I can’t be more specific because I don’t know when you’re reading this. If it’s 2275, then it’s obviously been more than a few weeks, and I’m more concerned that my words are the ones that posterity has passed on to a future society and suspect that your utopia is consequently in serious danger of collapse. Anyway, I had never heard of Taryn Southern until a few weeks ago (in 2017). According to my daughter, who is an expert in such things, Taryn is a YouTube personality who was on American Idol and is now also a pop singer. Taryn’s new album (or whatever you call them now) was just released and the single “Break Free” is doing well.
I’m sure you’re beginning to wonder how this is relevant. Trust me, it is. Taryn’s new album was the first to be written and produced entirely by an AI. Taryn found that, in most cases, working with the AI was preferable to working with human collaborators, and she said it sped up the creative process 20-fold.
So, yes, finally, we have the technology to streamline the creation of pop songs.
I’m running out of time, so, to sum up: from pathology to the Top 40, humans and AI working together are finding new and better approaches to old problems.
So remember: the next time you hear that AI is going to destroy mankind, it might someday. But more optimistically, AI may also be responsible for getting you a more accurate diagnosis or producing that new hit song you can’t get out of your head.
If you have any questions about AI or machine learning, or need a good grilled cheese recipe, drop me a line.