Use your words! Sorting through the confusing terminology of artificial intelligence

By Mike Miliard | November 5, 2018

In Orlando on Feb. 11, HIMSS will be hosting its second annual Machine Learning & AI for Healthcare event. That ampersand is important, because there is a distinction between artificial intelligence and machine learning, even if they’re often confused or conflated.

Think of it this way: All machine learning is AI, but not all AI is machine learning.

To make it even more interesting, however, there’s a healthy handful of other, equally confusing terms associated with AI that even the most tech-savvy healthcare professionals would be forgiven for scratching their heads over. There’s cognitive computing and deep learning and neural networks and many others.

What does it all mean? What is legitimate technological terminology and what is just marketing jargon? (In healthcare AI, as you no doubt have noticed, there’s no shortage of marketing.)

Those questions matter. As Leonard D’Avolio, Harvard Medical School professor and CEO of Cyft, a healthcare machine learning company, has noted: “If I describe what I do as cognitive computing, but a competitor describes what they do as AI or machine learning or data mining, it’s hard to even understand what problems we are trying to solve.”

For a healthcare industry set to be transformed fundamentally by AI, whether it’s ready for it or not, such confusion is not helpful. It’s important to have a clear understanding of what these terms mean – or at least what the common consensus is about what they mean.

Cognitive computing

If AI is the umbrella term, machine learning and cognitive computing are two bits of phraseology that often cause confusion.

As Steven Astorino, VP of development, private cloud platform and z analytics at IBM, explained in a blog post, “Think of machine learning as a set of libraries and an execution engine for running a set of algorithms as part of a model to predict one or more outcomes. Each outcome has an associated score indicating the confidence level at which it will occur.”

Cognitive computing, meanwhile, refers to “the ability of computers to simulate human behavior of understanding, reasoning and thought processing,” he explained. “The ultimate goal is to simulate intelligence through a set of software and hardware services to produce better business outcomes.”

At the HIMSS Big Data and Healthcare Analytics Forum in San Francisco this past year, Zeeshan Syed, Director of the Clinical Inference and Algorithms Program at Stanford Healthcare, offered an explainer of his own for distinguishing between these computer science terms.


In an accompanying interview for Healthcare IT News, Syed explained that, at a high level, “AI is basically getting computers to behave in a smart manner. You can do that either through curated knowledge, or through machine learning.”

Curated knowledge, he explained, referred to the basic ability to hardwire specific data references into clinical decision support software. For example, if a patient’s temperature rises above 102 degrees, the system sends an alert that there’s a fever: “That’s getting the computer to behave in an intelligent manner, but it’s using existing knowledge embedded in the system.”
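
To make that concrete, here is a minimal sketch in Python of that kind of curated, rule-based alert. The 102-degree threshold comes from the example above; the function name, alert wording and structure are purely illustrative, not taken from any real clinical decision support product.

```python
# A minimal sketch of the "curated knowledge" approach Syed describes:
# the fever rule is hard-wired by a human rather than learned from data.
FEVER_THRESHOLD_F = 102.0  # illustrative threshold from the example above

def check_temperature(temp_f):
    """Fire an alert if the reading crosses the hard-coded fever rule."""
    if temp_f > FEVER_THRESHOLD_F:
        return "ALERT: temperature %.1f F suggests fever" % temp_f
    return None

print(check_temperature(103.2))  # alert fires
print(check_temperature(98.6))   # no alert
```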

Machine learning: Supervised, unsupervised and more

With machine learning, the technology “derives knowledge from the data,” he explained, “to uncover new insights.”

Or, as another IBMer put it, machine learning refers to computers’ ability to get smarter “without being pre-programmed after a manual.” That could be through any number of algorithmic models that can “learn from data and create foresights based on this data,” as Copenhagen-based IBM exec Peter Sommer explained.

But wait, there’s more. Within machine learning, there are several specific subtypes: supervised, unsupervised, semi-supervised and reinforcement learning. Again, it’s OK if you’re saying, “Huh?”

With supervised machine learning, the insights derive from both existing data and a specific outcome that might be associated with that data, computer scientist John Guttag, head of the Data Driven Inference Group at MIT’s Computer Science and Artificial Intelligence Laboratory, told Healthcare IT News in 2017.

For example, “We’re given all the people who have Zika infections and then we know which of the women have children with birth defects and which don’t – and maybe from that we could build a model saying that if the woman is pregnant and has Zika, what’s the probability that her baby has a birth defect,” he explained. “We have a label about the outcome of interest.”

In other words: “You have data about a problem, and information about certain outcomes; you’re essentially trying to predict or classify or diagnose the outcome from the data you have access to,” as Zeeshan Syed phrased it. “That’s why it’s called supervised: You’re learning with the knowledge of what the outcome is.”
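
Here is a toy sketch of that idea, using scikit-learn and entirely synthetic data in the spirit of Guttag’s Zika example: every record comes with a known outcome label, and the model learns to predict that label for new cases. The feature names and values are hypothetical.

```python
# Supervised learning sketch: labeled examples in, a predictive model out.
# All data below is synthetic and purely illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Each row: [maternal_age, zika_confirmed (0/1)] -- hypothetical features
X = np.array([[24, 1], [31, 1], [29, 0], [35, 1], [22, 0], [40, 1]])
# The "outcome of interest": 1 if the baby had a birth defect, 0 if not
y = np.array([1, 0, 0, 1, 0, 1])

model = LogisticRegression().fit(X, y)

# Predict the probability of the outcome for a new, unseen patient
new_patient = np.array([[28, 1]])
print(model.predict_proba(new_patient)[0, 1])  # P(birth defect | features)
```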
   
Unsupervised learning leaves a bit more to the imagination. “We just get data, and from that data we try to infer some hidden structure in the data,” said Guttag. “Typically the nice thing about unsupervised learning is you find things you weren’t even looking for.”


Or, as Syed explained, “you basically just have a bunch of data, and the goal is to find interesting structure in that data. It’s not necessarily related to any particular outcome, but it’s just what are the interesting characteristics of it, what are the anomalous records in a set of data you have.”
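
A comparably small sketch of unsupervised learning: no outcome labels at all, just records, with a clustering algorithm left to find structure on its own. Again, the data and feature names are made up for illustration.

```python
# Unsupervised learning sketch: no labels, the algorithm looks for groupings.
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical records: [age, systolic_bp]
records = np.array([
    [25, 118], [30, 122], [28, 120],   # one natural grouping
    [68, 150], [72, 155], [70, 148],   # another grouping
])

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(records)
print(kmeans.labels_)           # which hidden group each record falls into
print(kmeans.cluster_centers_)  # the structure found without any labels
```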

And semi-supervised, as its name suggests, is a bit of an amalgam of both approaches. “It’s sort of in the middle,” he said. “You’re trying to learn an outcome and understand what the relationship is between different parameters and data on that outcome, but in addition to having small amounts of labeled data.”
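
One way to picture that middle ground, sketched with scikit-learn’s LabelPropagation (one of several possible semi-supervised approaches, again with synthetic data): a handful of labeled records sit alongside many unlabeled ones, and the model uses both.

```python
# Semi-supervised learning sketch: a few labels plus mostly unlabeled data.
import numpy as np
from sklearn.semi_supervised import LabelPropagation

X = np.array([[1.0], [1.2], [0.9], [8.0], [8.3], [7.9], [1.1], [8.1]])
# Only the first four points carry labels; -1 means "label unknown"
y = np.array([0, 0, 0, 1, -1, -1, -1, -1])

model = LabelPropagation().fit(X, y)
print(model.transduction_)  # labels inferred for the unlabeled points too
```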

Reinforcement learning, meanwhile, is a specific type of ML that “typically focuses on being able to sequentially interact and learn from things, and then factor that in to iteratively improve your decision-making over time,” Syed explained.
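
A very small sketch of that act-observe-update loop, using a toy two-armed bandit with made-up reward probabilities rather than any clinical scenario: each decision produces a reward, and the estimates that drive the next decision are nudged accordingly.

```python
# Reinforcement-learning flavor in miniature: act, observe a reward, update,
# and let the updated estimates improve later decisions. Toy values only.
import random

reward_prob = {"A": 0.3, "B": 0.7}   # hidden from the learner
value = {"A": 0.0, "B": 0.0}         # learned estimate of each action's value
counts = {"A": 0, "B": 0}

for step in range(1000):
    # Mostly exploit the best-looking action, occasionally explore
    explore = random.random() < 0.1
    action = random.choice(["A", "B"]) if explore else max(value, key=value.get)
    reward = 1.0 if random.random() < reward_prob[action] else 0.0
    counts[action] += 1
    # Incremental average: nudge the estimate toward what was just observed
    value[action] += (reward - value[action]) / counts[action]

print(value)  # estimates should approach the true reward probabilities
```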

Deep learning, neural networks and beyond

There’s no shortage of other terms that are often confused, or used interchangeably, of course.

Deep learning, for example, is where “software attempts to mimic the activity in layers of neurons in the neocortex, the wrinkly 80 percent of the brain where thinking occurs. The software learns, in a very real sense, to recognize patterns in digital representations of sounds, images, and other data,” according to MIT Technology Review.

There are neural networks, which have been “going in and out of fashion for more than 70 years,” as another MIT article notes. Such a network “consists of thousands or even millions of simple processing nodes that are densely interconnected. Most of today’s neural nets are organized into layers of nodes, and they’re ‘feed-forward,’ meaning that data moves through them in only one direction.”
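
For readers who want to see what “feed-forward” means in practice, here is a tiny illustrative forward pass in NumPy: data enters one layer of nodes, flows to the next, and never moves backwards. The weights are random, purely to show the shape of the computation.

```python
# A tiny feed-forward pass: input -> hidden layer -> output, one direction only.
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0, x)

x = rng.normal(size=3)                           # an input with 3 features
W1, b1 = rng.normal(size=(4, 3)), np.zeros(4)    # input layer -> 4 hidden nodes
W2, b2 = rng.normal(size=(1, 4)), np.zeros(1)    # hidden layer -> 1 output node

hidden = relu(W1 @ x + b1)   # data moves forward through the first layer...
output = W2 @ hidden + b2    # ...and then through the next, never backwards
print(output)
```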

It’s a lot to keep straight. There are certainly layers of overlap among those and other terms – paraphrase detection, object recognition, natural language processing and more – that have quickly come to prominence across healthcare, impacting everything from EHR documentation to radiology reading.

It’s probably a good bet that such terms will continue to be misapplied and misconstrued for the near future as the software evolves and providers adjust to this brave new world. So perhaps it’s best not to get too hung up on terminology, and to focus instead on the technology and what it can do for healthcare, now and in the future.


“With every new emerging discipline there’s always a level of confusion around vocabulary, and people having different meanings for the same word,” said Sam Hanna, associate dean of graduate and professional studies and program director in healthcare management at American University. “That is not unusual.”

What’s more important than what words we call things, he said, is “knowledge and the knowledge translation.”

By any other name?

Even the generic term AI itself can be confusing. As Harvard’s Len D’Avolio has noted: It’s neither artificial nor necessarily intelligent.

That’s why Sam Hanna prefers to think of AI in terms of the catch-all phrase “adaptive intelligence,” he said.

“You think of intelligence, as humans, we’re always learning new things, and we are adapting our knowledge as we learn new things: Behavior and thoughts are always being adapted to new contexts,” he explained. “The same is true with machine learning and artificial intelligence: The more you learn, the more adaptive you become to the learning.

“So I think it’s very important to use that word, ‘adaptive,'” he added. “Artificial means synthetic. But if we really want to achieve the true power of AI, then we have to continue to teach it for it to continue to learn. And we need to be able to understand what we want it to learn.”

Indeed, in comments sent to the White House just last week urging continued and conscientious funding for AI research, the American Medical Informatics Association embraced yet another twist on those two omnipresent letters – one that also sought to keep the emphasis on carbon-based lifeforms rather than silicon chips.

“In medicine, we tend to frame AI as ‘augmented intelligence,’ given that there is surely no better example of a scientific discipline so enmeshed with and influenced by the human condition,” said AMIA. “Given this view, the art and science of medicine will surely be impacted greatly by AI. Questions regarding how clinicians interact with AI or how AI will influence clinical decision-making represent daunting challenges for which federal R&D funding should be leveraged.”

Twitter: @MikeMiliardHITN
Email the writer: mike.miliard@himssmedia.com

News from healthcareitnews.com