Practicing physicians are faced with the need to make decisions and recommendations constantly and quickly throughout the day. They assess clinical situations, try to identify a coherent picture of the case at hand, compare that picture to the pattern of similar cases from experience and didactics, and come up with a proposed treatment plan. Many, many times every day.
With time, and under the pressure to perform, numerous mental shortcuts can develop, often unconsciously. Learned paradigms are used as shortcuts – information-processing rules referred to as heuristics – which are helpful in moving quickly through cognitive processes all day long. However, a number of cognitive biases can emerge and lead clinicians to erroneous conclusions that are often apparent only in retrospect.
There are many different kinds of cognitive biases that affect clinical decision-making, as in other fields that also require rationality and good judgement. A few are common enough in clinical medicine to be worth describing, in order to see how we might build supportive information systems that help overcome them:
1. Availability heuristic
Diagnosis of the current patient biased by experience with past cases. Example: a patient with crushing chest pain was incorrectly treated for a myocardial infarction, despite indications that an aortic dissection was present.
2. Anchoring heuristic
Relying on initial diagnostic impression, despite subsequent information to the contrary. Example: Repeated positive blood cultures with Corynebacterium were dismissed as contaminants; the patient was eventually diagnosed with Corynebacterium endocarditis.
3. Framing effects
Diagnostic decision-making unduly biased by subtle cues and collateral information. Example: a heroin-addicted patient with abdominal pain was treated for opiate withdrawal, but proved to have a bowel perforation.
4. Blind obedience
Placing undue reliance on test results or “expert” opinion. Example: a false-negative rapid test for streptococcal pharyngitis resulted in a delay in diagnosis and appropriate treatment.
These are examples of some of the kinds of biases that, when pointed out, are familiar to practicing clinicians. They are also the source of common medical errors.
AI systems can minimize these biases
Conceptually, an Artificial Intelligence (AI) system can overcome these cognitive biases and deliver personalized, evidence-based, rational recommendations in real time to clinicians (and patients) at the point of care. In order to do this, such a system would need to consider all the data about the patient – current complaints, physical findings, co-morbid conditions, medications being taken, allergies, and lab and imaging tests done over time. In short, the automated system would need to take into consideration all the things clinicians use to make a recommendation.
Once the data about the individual is gathered, it is compared to the experience derived from a large base of clinical data in order to match patterns and predict outcomes. A differential diagnosis (the different possible diagnoses at the time of observation, in order of probability) can be created, further testing to better distinguish between the possibilities can be suggested, and a treatment plan can be proposed – without the cognitive biases which dog healthcare delivery currently.
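To make the idea concrete, here is a minimal sketch of how a ranked differential diagnosis could be produced from patient findings using Bayes' rule. This is not how any particular product works; the diagnoses, priors, and likelihoods below are invented purely for illustration, and real systems would learn these values from large clinical data sets.

```python
# Hypothetical sketch: ranking a differential diagnosis with Bayes' rule.
# All probabilities here are invented for illustration only.

def rank_differential(priors, likelihoods, findings):
    """Return diagnoses ordered by posterior probability given the findings.

    priors:      {diagnosis: P(diagnosis)}
    likelihoods: {diagnosis: {finding: P(finding | diagnosis)}}
    findings:    observed findings (assumed conditionally independent).
    """
    posteriors = {}
    for dx, prior in priors.items():
        p = prior
        for f in findings:
            # Small default probability for findings not modeled for this diagnosis.
            p *= likelihoods[dx].get(f, 0.01)
        posteriors[dx] = p
    total = sum(posteriors.values())
    return sorted(((dx, p / total) for dx, p in posteriors.items()),
                  key=lambda kv: -kv[1])

# Invented example echoing the availability-heuristic case above: chest pain
# plus unequal arm blood pressures, a classic clue toward aortic dissection.
priors = {"myocardial infarction": 0.7, "aortic dissection": 0.3}
likelihoods = {
    "myocardial infarction": {"chest pain": 0.9, "unequal arm BP": 0.05},
    "aortic dissection":     {"chest pain": 0.9, "unequal arm BP": 0.6},
}
ranked = rank_differential(priors, likelihoods, ["chest pain", "unequal arm BP"])
```

Note how the less common diagnosis rises to the top once the distinguishing finding is weighed in, which is exactly the kind of correction an availability-biased clinician might miss.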
Do such systems exist currently? No. But they are evolving. For example, at Flow Health, we are building an end-to-end service intended to resolve the kind of cognitive bias that clouds medical judgement – a “vertical AI” approach. The limitations of AI in healthcare have more to do with the availability of data than with the sophistication of the algorithms. Deep learning has advanced considerably, largely in other domains where data is more accessible. In healthcare, breaking down the institution-centric and provider-centric data silos is still a work in progress, but is emerging. Access to data is key if deep learning is to yield truly important insights.
Nevertheless, some early gains are being made, even with relatively limited data sets. Mostly, in these early stages, machine learning is “taught” by human expert opinion. For example, when teaching a system to read images (such as mammograms), the system is taught by human experts – “this is normal, this is benign, this is suspicious for cancer.” Today, the benchmark is comparing machine learning to human doctors. The real goal should be to apply AI to identify patterns and associations not previously recognized, even by human doctors.
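The “taught by human experts” approach described above is supervised learning. The toy sketch below illustrates the idea with a nearest-centroid classifier: each image is reduced to a feature vector, experts supply the labels, and the system learns one prototype per label. The features, labels, and values are all invented for illustration; real imaging systems use far richer features and models.

```python
# Toy sketch of supervised learning from expert-assigned labels.
# Each "image" is reduced to an invented two-number feature vector.

from collections import defaultdict

def train_centroids(examples):
    """Average the feature vectors for each expert-assigned label."""
    sums, counts = {}, defaultdict(int)
    for features, label in examples:
        if label not in sums:
            sums[label] = list(features)
        else:
            sums[label] = [a + b for a, b in zip(sums[label], features)]
        counts[label] += 1
    return {label: [v / counts[label] for v in vec] for label, vec in sums.items()}

def classify(centroids, features):
    """Assign the label of the nearest centroid (squared Euclidean distance)."""
    return min(centroids,
               key=lambda label: sum((a - b) ** 2
                                     for a, b in zip(centroids[label], features)))

# Expert-labeled training set: (density, margin irregularity) -> label.
training = [
    ((0.1, 0.1), "normal"),
    ((0.2, 0.2), "normal"),
    ((0.8, 0.9), "suspicious"),
    ((0.9, 0.8), "suspicious"),
]
model = train_centroids(training)
prediction = classify(model, (0.85, 0.85))
```

A model trained this way can, at best, reproduce its teachers; surfacing patterns the experts themselves have not recognized requires moving beyond purely expert-labeled training data.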
But the goal is in sight. Healthcare organizations need to break down the barriers that prevent collaboration among AI researchers, healthcare institutions, and other stakeholders. This is needed in order to aggregate enough data to drive AI-based knowledge discovery and overcome the human cognitive biases that degrade care quality.
Within the next few years, medical data will likely be sufficiently aggregated to allow good machine learning to take place. Data collection at the point of care – from clinicians, from patients, from laboratory findings, from imaging studies, and from genomics as it becomes more widespread – can flesh out a good case definition, which can then be matched against patterns extracted from large data sets, and logical recommendations can be presented. Such a system can be integrated into the tools clinicians and patients already use (EHRs, portals, etc.), and can deliver on the vision of precision medicine. This is no longer science fiction. It will become our mainstream way of approaching healthcare in the near future.
This article is published as part of the IDG Contributor Network.