Artificial Intelligence, Machine Learning and the White Coats

Rick Rader, MD, FAAIDD, FAADM - Editor in Chief


On a recent flight home from San Francisco, I was seated next to a third grader who was getting his butt kicked by a computer tic-tac-toe opponent. Over the four-hour flight, I think the “human” managed to finish with a handful of ties.

Games like tic-tac-toe can be traced back to ancient Egypt around 1300 B.C. The British called the game “noughts and crosses.” It has traditionally been one of the first games we learn to play as children. The difference between the paper-and-pencil version and the computer-screen version is that the former required two players. Not only did it require two players, but they sat eyeball to eyeball with each other. Today’s version doesn’t require a breathing opponent, just a video screen. To make matters worse, the odds of the human beating the screen become slimmer and slimmer as the programmed strategies grow more evolved and proficient; against a perfect player, the best a human can manage is a draw.

There did come a time in my childhood when we were introduced to the idea of playing the game against an invisible opponent. In 1952, the game OXO was developed by British computer scientist Sandy Douglas for the EDSAC computer housed at the University of Cambridge. It stands as one of the earliest known video games. The computer player could consistently play perfect games.
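How does a machine play a perfect game? Tic-tac-toe is small enough that a program can look ahead to every possible ending before it moves. Below is a minimal sketch, in modern Python rather than anything resembling Douglas’s original EDSAC code, of the minimax search idea that makes perfect play possible.

```python
# A minimal sketch of minimax for tic-tac-toe (illustrative, not the
# historical OXO program). The board is a list of 9 cells: 'X', 'O', or ' '.

WIN_LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
             (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
             (0, 4, 8), (2, 4, 6)]              # diagonals

def winner(board):
    """Return 'X' or 'O' if that player has three in a row, else None."""
    for a, b, c in WIN_LINES:
        if board[a] != ' ' and board[a] == board[b] == board[c]:
            return board[a]
    return None

def best_move(board, player):
    """Return (score, move) for `player`: +1 win, 0 draw, -1 loss."""
    if winner(board):             # the previous move already won,
        return -1, None           # so the player now to move has lost
    moves = [i for i, cell in enumerate(board) if cell == ' ']
    if not moves:
        return 0, None            # full board: draw
    opponent = 'O' if player == 'X' else 'X'
    score, move = -2, None
    for m in moves:
        board[m] = player                     # try the move,
        opp_score, _ = best_move(board, opponent)
        board[m] = ' '                        # then undo it
        if -opp_score > score:                # opponent's loss is our gain
            score, move = -opp_score, m
    return score, move

# From an empty board, perfect play can guarantee a draw but never a win.
print(best_move([' '] * 9, 'X'))  # (0, 0): score 0 (draw), opening in a corner
```

Because perfect play by both sides always ends in a draw, a program like this can never be beaten, which is exactly why my seatmate’s best results were a handful of ties.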

An early version of the “electric tic-tac-toe” game was the creation of James Mason Prentice, who founded the Electric Game Company in 1926. He invented both electric baseball and football games, which were the first board games to use electric relays. The electric “tic-tac-toe” game was recently a finalist for induction into the National Toy Hall of Fame. It was nosed out by the Magic 8 Ball.

Then came a major milestone in the “computer vs. human” competition for intellectual bragging rights.

Writing in The Atlantic, Marina Koren takes us back to that game-changing day: “There was a time, not long ago, when computers – mere assemblages of silicon and wire and plastic that can fly planes, drive cars, translate languages, and keep failing hearts beating – could really, truly still surprise us. One such moment came on February 10, 1996, at a convention center in Philadelphia. Two chess players met for the first of six tournament games. Garry Kasparov, the Soviet grandmaster, was the world chess champion, famous for his aggressive and uncompromising style of play. Deep Blue was a 6-foot-5-inch, 2,800-pound supercomputer designed by a team of IBM scientists.” Deep Blue won that first game, and the genie was never put back in the bottle.

Over the years, human intelligence ran neck and neck with motherboards and CPUs. Then, with a sudden burst of speed, the human brain, our three-pound universe, was left in the dust. We humans used to be the unchallenged champions of creativity and intelligence. Artificial Intelligence (AI) has seemingly brought us to our knees in the “smarty pants” department.

Beyond games, we can see the impact when machine learning dons the white coat. Reports from the Vodafone Institute for Society and Communications share that “Researchers at an Oxford hospital have developed an AI that can use heart and lung scans to diagnose deadly diseases. While this has also been routinely done by human doctors, computer programs promise to yield clearly better results than even the best medical professionals, as it is estimated that at least one in five patients is misdiagnosed. Earlier, more accurate prevention and fewer unnecessary operations could lead to enormous cost reductions.”

Further evidence that AI can outperform MDs comes from AI engineer George Zarkadakis. For human physicians, the challenge of making a correct diagnosis is huge. It is estimated that, in order to stay on top of current medical knowledge, a human doctor would have to spend 160 hours per week reading new research papers. IBM Watson’s AI does that in a fraction of the time. On top of this, it has the ability to search through millions of patient records, learn from previous diagnoses, and improve the reasoning links between symptoms and diagnosis. The result? IBM Watson’s reported accuracy rate for lung cancer is 90%, compared to a mere 50% for human physicians.

One of the reasons that AI may have the upper hand at making an accurate diagnosis is that machines are not anchored to the conventional, predictable patterns human physicians follow. Humans are hardwired to follow the facts. Machines can use what is known as “counterfactual methodology,” reasoning about what cannot be true in order to narrow down what must be. This pattern of thinking was famously promoted by, of all things, a human physician: Sir Arthur Conan Doyle, both a physician and the author of the Sherlock Holmes mysteries. Holmes constantly adhered to his creator’s dictum: “When you have eliminated the impossible, whatever remains, however improbable, must be the truth.”
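To make the Holmes dictum concrete, here is a toy sketch of diagnosis by elimination. The conditions and findings are entirely hypothetical placeholders invented for illustration, not clinical knowledge: strike out every candidate inconsistent with an observed finding, and whatever survives, however improbable, stays on the list.

```python
# A toy illustration of eliminative ("Holmesian") reasoning. The
# conditions and findings below are hypothetical, not clinical data.

# For each candidate diagnosis, the observed findings it cannot coexist with.
ruled_out_by = {
    "condition_A": {"normal_ecg"},
    "condition_B": {"no_fever"},
    "condition_C": set(),          # nothing observed so far excludes it
}

observed = {"normal_ecg", "no_fever"}

# Eliminate the impossible: keep only diagnoses compatible with every finding.
remaining = [dx for dx, exclusions in ruled_out_by.items()
             if not (exclusions & observed)]

print(remaining)  # ['condition_C']: the improbable survivor must be the truth
```

A human clinician tends to reason toward the likeliest familiar pattern; the machine is just as content to let an unlikely survivor stand.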

One of the drawbacks of Artificial Intelligence is its inability to draw on general reasoning. That remains the domain of us mortals. General reasoning has perhaps been the single most important instrument in conjuring up strategies to assist people with disabilities in doing what appeals to them. We have not yet been able to convince, influence, or program machine-learning systems to appreciate, imagine, or grasp the potential, capacity, motivation, and determination of people with different abilities.

Decades ago, I set out to provide a simulation experience to my medical students and residents, to give them a taste of disabilities. Of course, the obvious ones were easy to replicate — earplugs for hearing impairments, blindfolds for vision impairments, gloves for haptic impairment, wheelchairs for mobility impairments, and rice in the shoes for neuropathic conditions. But I was unable to figure out a way to replicate intellectual disabilities.

I consulted with IBM software engineers. While they had helped send humans to the moon, they were stumped by how to program a computer to respond to everyday trials and tribulations the way people with limited cognitive reserves do. They were unable to figure out how to program computers to think with only a fraction of their accumulated knowledge. It was the nature of the computers to reach into their vast reserves of collective wisdom from every discipline and use them to emerge as winners. The engineers could not program the computers to discard available information; they could not compel the computers to think with one arm tied behind their back. They finally raised the white flag and admitted that in their pursuit of perfection, there was no allowance for suboptimal thinking.

I think the human brain still remains the winner in the clash with the machine titans. For one thing, we have to give the brain credit for accepting, acknowledging, and recognizing the inherent shortfalls of the human condition. Until the day AI and machine learning can incorporate that into their programs, I hope humans remain in charge and continue to exercise and pursue that most human of human qualities — acceptance and inclusion.
