
Critics of Watson for Oncology Get it Right (and Wrong)

By Yina Moe-Lange  

 

Recently, STAT News published a long and informative article analyzing the progress, or rather the lack of progress, of Watson for Oncology. It highlighted several key criticisms of the program, based on conversations with doctors around the world about how Watson was working for them.

 

The point I want to make here is that the article is correct, but that it is not Artificial Intelligence (AI) itself that is to blame. Rather, the blame lies with the wrong application of AI combined with extreme marketing hype. AI succeeds in many areas; it simply has not demonstrated clear benefits in this particular application. Let us take a closer look at why that may be.

There are two conclusions one can draw from the article about why Watson for Oncology has failed to live up to expectations.

 

The first has nothing to do with technology. Watson had a great marketing blitz that brought IBM to the forefront as a major player in the AI world. The marketing team overhyped Watson’s skills and overpromised what the project could do for humankind. One might even dare to say they made false promises about what Watson was able to do at the time and what it would be able to do in the very near future. As the article puts it, “the company could become a victim of its own marketing success – the unrealistic expectations it set are obscuring real accomplishments.” Watch for yourselves…

 

One of the ads for Watson that IBM put out, featuring many high-profile celebrities.

 

This leads to the second conclusion: they should have had the AI do something else. IBM set up Watson for Oncology to help doctors diagnose and treat cancer patients more effectively and efficiently. But the way they set up their AI was not right, and this setup would never lead to the fulfillment of the promises made by the marketing teams. A short and very simplified explanation: the way Watson for Oncology works is that a board of top doctors in the US sits down with a group of IBM engineers, they go through cancers and the treatments these doctors recommend, and this data gets put into Watson. Along with these direct recommendations, Watson also has a database of medical journals and other written information. A doctor anywhere with a Watson for Oncology machine should then be able to input their patient’s information and get a recommendation for treatment, along with extra resources such as medical journal articles on the patient’s disease and treatment options. This is highly supervised: the doctors are training the system and making the decisions.
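To make that concrete, here is a deliberately simplified sketch, in Python, of what an expert-curated recommendation engine looks like in spirit. None of the names, rules, or articles below come from IBM; they are invented purely to illustrate that such a system can only echo what its curating doctors put into it.

```python
# Hypothetical, simplified sketch of the expert-curated approach.
# The "knowledge" is whatever the training doctors entered, plus linked literature.

EXPERT_RECOMMENDATIONS = {
    # (cancer type, stage) -> treatment the training doctors recommended (illustrative only)
    ("lung", "II"): "surgery followed by adjuvant chemotherapy",
    ("lung", "IV"): "platinum-based chemotherapy",
    ("breast", "I"): "lumpectomy plus radiation",
}

LITERATURE_INDEX = {
    # cancer type -> related reading to surface alongside the recommendation (illustrative only)
    "lung": ["Journal article A on staging", "Journal article B on chemotherapy regimens"],
    "breast": ["Journal article C on early-stage breast cancer"],
}

def recommend(cancer_type: str, stage: str) -> dict:
    """Return the doctors' canned recommendation and related reading, if any."""
    treatment = EXPERT_RECOMMENDATIONS.get((cancer_type, stage), "no recommendation available")
    articles = LITERATURE_INDEX.get(cancer_type, [])
    return {"treatment": treatment, "articles": articles}

print(recommend("lung", "IV"))
```

However sophisticated the natural-language layer on top, a system built this way is bounded by the judgment of the small group of doctors who trained it.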

 

What the AI should have been based on instead is patient outcomes. The machine should be fed data on exactly what a doctor did to treat a patient, the details of the patient’s disease, and the outcome of treatment. By feeding the system who lives and who dies, the AI can learn specifically which treatments work and when to use them, and then make recommendations on care based on those outcomes. Complex pattern recognition is one of the things AI systems are very good at. Moving away from one specific group of doctors’ recommendations would be the best way to use AI capabilities to the fullest extent. Taking this approach would also help alleviate some of the criticisms doctors have about Watson for Oncology. For example, as stated in the article, patients around the world live in different environments and doctors have different resources; an outcome-driven system would take these factors into account more closely.
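Here is a minimal sketch, using scikit-learn and entirely made-up data, of that outcome-driven alternative: learn a model from (patient details, treatment given) to outcome, then recommend whichever candidate treatment the model predicts will do best for a given patient. The features, treatments, and synthetic signal are all hypothetical.

```python
# Minimal sketch of outcome-based learning: train on (patient features, treatment) -> outcome,
# then recommend the treatment with the best predicted outcome. All data is synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n = 1000

# Hypothetical patient features and the treatment each patient actually received.
age = rng.integers(30, 85, n)
tumor_size = rng.uniform(0.5, 6.0, n)
biomarker = rng.uniform(0.0, 1.0, n)
treatment = rng.integers(0, 3, n)  # 0, 1, 2 = three candidate treatments

# Hypothetical outcome label: 1 = good outcome, 0 = poor outcome (purely synthetic signal).
outcome = (((biomarker > 0.5) & (treatment == 1)) | ((biomarker <= 0.5) & (treatment == 2))).astype(int)

X = np.column_stack([age, tumor_size, biomarker, treatment])
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, outcome)

def recommend(patient_features):
    """Score every candidate treatment for this patient and pick the one
    with the highest predicted probability of a good outcome."""
    candidates = np.array([np.append(patient_features, t) for t in range(3)])
    probs = model.predict_proba(candidates)[:, 1]
    return int(np.argmax(probs)), probs

best, probs = recommend([62, 2.3, 0.7])
print(f"Recommended treatment: {best}, predicted good-outcome probabilities: {probs.round(2)}")
```

The crucial difference is where the knowledge comes from: not from a panel’s stated preferences, but from what actually happened to patients, which is exactly the kind of complex pattern an AI system can pick up at scale.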

 

The goal of Watson for Oncology, the democratization of cancer treatment across the world, is noble. There is nothing wrong with increasing access to health care for people who live in areas with no cancer specialists, or with trying to ensure that the treatment patients receive is the best option.

 

While the article is right in its criticisms of Watson for Oncology, it is fundamentally wrong when it blames AI for the shortcomings. AI has many different applications and future possibilities in medical diagnosis; this project would have had a much better outcome if the system had been trained on patient outcomes rather than doctor recommendations. AI is often accused of being overhyped, and usually that label is given incorrectly, but when it comes to Watson for Oncology, it definitely looks like a case of overhype.