Scientists have dubbed the 1970s and 1990s as two distinct “AI winters,” when sunny forecasts for artificial intelligence gave way to gloomy pessimism as projects failed to live up to the hype. Earlier this year, IBM sold its AI-powered Watson Health to a private equity firm for what analysts are calling its residual value. Could this transaction herald a third AI winter?
Artificial intelligence has been with us longer than most people realize, reaching mass audiences with Rosey the Robot on the 1960s TV show The Jetsons. That application of AI, the omniscient maid running the household, is the sci-fi version. In healthcare, artificial intelligence is far more limited.
In practice, AI is designed to work in a narrow, task-specific way, as in real scenarios such as a computer-controlled machine beating a human chess champion. Chess is structured data with predefined rules for where to move, how to move, and when the game is won. Electronic medical records, the data on which healthcare AI depends, do not lend themselves to the tight confines of a chessboard.
Collecting and reporting accurate patient data is the problem. MedStar Health sees shoddy electronic health record practices harming doctors, nurses and patients. The hospital system took first steps to raise public awareness of the issue in 2010, and efforts continue to this day. MedStar’s awareness campaign takes the acronym “EHR” and turns it into “Mistakes happen all the time” to make the mission clear.
In analyzing software from leading EHR vendors, MedStar found that data entry is often counterintuitive and that displays make information confusing for physicians to interpret. Medical records software often bears little relation to the actual work of doctors and nurses, leading to still more errors.
Examples of medical data errors appear in medical journals, the media and court cases, and range from faulty code that deletes important information to mysterious patient gender changes. Because there is no formal reporting system, there is no definitive number of data-driven medical errors. The high probability of erroneous data being funneled into artificial intelligence applications derails their potential.
The development of artificial intelligence starts with training an algorithm to recognize patterns. Data is fed in, and once a large enough sample has been collected, the algorithm is tested to see whether it correctly identifies certain patient attributes. Although the term “machine learning” implies a constantly evolving process, the technology is tested and deployed much like traditional software. If the underlying data is correct, properly trained algorithms can automate functions that make doctors more efficient.
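The train-then-test loop described above can be sketched with a toy nearest-centroid classifier. This is a minimal illustration, not any vendor’s actual method; all data, labels and feature names are synthetic.

```python
# Toy sketch of the "train, then test" loop described above.
# All data and feature names are synthetic, for illustration only.

def train_centroids(samples):
    """'Training': average the feature vectors seen for each label."""
    sums, counts = {}, {}
    for features, label in samples:
        acc = sums.setdefault(label, [0.0] * len(features))
        for i, value in enumerate(features):
            acc[i] += value
        counts[label] = counts.get(label, 0) + 1
    return {label: [v / counts[label] for v in acc]
            for label, acc in sums.items()}

def predict(centroids, features):
    """'Testing': assign the label of the nearest learned centroid."""
    def sq_dist(centroid):
        return sum((a - b) ** 2 for a, b in zip(features, centroid))
    return min(centroids, key=lambda label: sq_dist(centroids[label]))

# Synthetic "patient attribute" vectors: [glucose_scaled, age_scaled]
training_data = [
    ([0.9, 0.6], "diabetic"), ([0.8, 0.7], "diabetic"),
    ([0.2, 0.5], "healthy"),  ([0.3, 0.4], "healthy"),
]
model = train_centroids(training_data)
print(predict(model, [0.85, 0.65]))  # prints "diabetic"
```

The key point matches the article: the model is fixed once trained. If the training records were wrong, every later prediction inherits the error.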
Take, for example, diagnosing diseases based on eye images. In one patient the eye is healthy; in another, the eye shows signs of diabetic retinopathy. Images are taken of both healthy and “sick” eyes. When enough patient data is fed into the artificial intelligence system, the algorithm learns to identify patients with the disease.
Andrew Ray, a Harvard University professor with private-sector machine learning experience, presented a disturbing scenario of what could go wrong without anyone knowing. Using the eye example above, assume that as more patients are examined, more eye images are fed into the system, which is now integrated into the clinical workflow as an automated process. So far, so good. But suppose the images include patients already treated for diabetic retinopathy, and that laser treatment leaves a small scar. The algorithm is now tricked into looking for scars rather than the disease itself.
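The confounder in this scenario can be reproduced with a few lines of synthetic data: if every diseased example in the training set happens to carry the treatment scar, even a trivial learner latches onto the scar instead of the disease. The features (“lesions,” “scar”) and data are invented purely for illustration.

```python
# Synthetic illustration of the scar confounder described above.
# "lesions" = the actual retinopathy sign; "scar" = laser-treatment mark.
# Flaw: every diseased eye in training was already treated, so "scar"
# correlates perfectly with the label, while subtle lesions can be missed.
training_set = [
    ({"lesions": 1, "scar": 1}, "diseased"),
    ({"lesions": 0, "scar": 1}, "diseased"),  # subtle lesions, missed
    ({"lesions": 0, "scar": 0}, "healthy"),
    ({"lesions": 0, "scar": 0}, "healthy"),
]

def best_single_feature(samples):
    """A trivial 'learner': keep the one feature that best predicts the label."""
    features = samples[0][0].keys()
    def accuracy(f):
        hits = sum((x[f] == 1) == (label == "diseased")
                   for x, label in samples)
        return hits / len(samples)
    return max(features, key=accuracy)

rule = best_single_feature(training_set)
print(rule)  # prints "scar" - the confounder wins

# An untreated patient with real retinopathy: lesions, but no scar yet.
new_patient = {"lesions": 1, "scar": 0}
print("flagged" if new_patient[rule] == 1 else "missed")  # prints "missed"
```

The learner scores 100 percent on its own training data while silently failing on exactly the patients who most need a diagnosis, which is why nobody would notice the problem from the inside.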
Adding to the data confusion, physicians disagree among themselves about what thousands of patient data points actually mean. Human intervention is required to tell the algorithm what data to look for, and those decisions are hard-coded as labels for the machine to read. Other concerns involve EHR software updates, which can introduce bugs, and hospitals switching software vendors, which leads to what is known as data migration, when information is moved from one system to another.
That is what happened at MD Anderson Cancer Center, and it was the technical reason IBM’s first partnership there ended. IBM’s CEO at the time, Ginni Rometty, described the agreement, announced in 2013, as the company’s health care “moonshot.” MD Anderson explained in a press release that it would use Watson Health in its mission to eradicate cancer. The partnership fell apart two years later. To move forward, both parties would have had to retrain the system to understand data from the new software. It was the beginning of the end for IBM’s Watson Health.
Artificial intelligence in healthcare is only as good as the data behind it. Accurately managing patient data isn’t science fiction or a moonshot, but it is essential for AI to succeed. The alternative is a promising health technology frozen in time.
Photo: MF3d, Getty Images