Radiologists: Artists, robots, and what is in between.

By Lior Gazit

Radiologists manifest their skills in reports. They observe, examine, assess, compare, opine, and apply unique expertise, typically all summed up in a written report. When putting together a radiology report, the radiologist maneuvers between two poles: the liberty to describe any phenomenon in free text, and adherence to distinct, agreed-upon terms. Artist vs. robot.

The ability to use free text in an unrestricted manner is important: it is free from the constraints of time, institution, training, and so on. Conversely, agreed-upon terms allow the assessment to be communicated in a standard, more objective manner. For instance, the term "likely" could carry an agreed-upon probability, so that the sentence "...likely a new lesion in left lobe..." communicates a specific level of statistical certainty.
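To make the idea concrete, here is a minimal sketch of such a mapping. The terms and probability ranges below are purely illustrative assumptions, not an established standard:

```python
# Hypothetical mapping of certainty terms to agreed-upon probability ranges.
# The terms and numbers are illustrative assumptions, not a real standard.
CERTAINTY_TERMS = {
    "consistent with": (0.90, 1.00),
    "unlikely": (0.00, 0.30),  # checked before "likely", which it contains
    "likely": (0.70, 0.90),
    "possibly": (0.30, 0.70),
}

def certainty_of(sentence: str):
    """Return the (term, probability range) implied by the first term found."""
    lowered = sentence.lower()
    for term, prob_range in CERTAINTY_TERMS.items():
        if term in lowered:
            return term, prob_range
    return None

print(certainty_of("...likely a new lesion in left lobe..."))
```

With such a convention in place, the sentence above would carry a well-defined level of statistical certainty rather than a reader-dependent one.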

People like me complement the radiology space by applying mathematical models that take advantage of the patterns in the text and derive decisions from them. Our expertise is developing solutions using machine learning.

To a machine learning professional, the "robotic" approach presents great virtues: its terms can easily be picked up by a computer program and mapped to a particular interpretation. However, when processing a report section of the opposite nature, there may be more ambiguity than the program can handle, leading it to fail to properly perceive the text's content.

So there is a trade-off between the two approaches, artist and robot. However, one of the radiologist's decisions may actually lend itself very well to the "robotic" approach: the prescription of a follow-up exam.

Follow-up recommendations often have a fixed set of components, each with a distinct set of values. These components are typically the timeframe for the follow-up, the anatomy, and the imaging modality: "Recommend a follow up chest CT in 3-6 months to document resolution."
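Because the components are so regular, even a simple program can pull them out of a sentence like the one above. The sketch below is illustrative only: the vocabularies and patterns are assumptions for demonstration, not a clinically validated extractor:

```python
import re

# Illustrative sketch: extract the three typical components of a follow-up
# recommendation. The vocabularies below are assumed for demonstration.
MODALITIES = ["CT", "MRI", "ultrasound", "X-ray", "PET"]
ANATOMIES = ["chest", "abdomen", "pelvis", "head", "thyroid", "breast"]

def parse_followup(text: str) -> dict:
    result = {"timeframe": None, "anatomy": None, "modality": None}
    # Timeframe such as "3-6 months" or "12 months"
    m = re.search(r"(\d+(?:\s*-\s*\d+)?)\s*(day|week|month|year)s?", text, re.I)
    if m:
        result["timeframe"] = f"{m.group(1)} {m.group(2).lower()}s"
    for modality in MODALITIES:
        if re.search(rf"\b{re.escape(modality)}\b", text, re.I):
            result["modality"] = modality
            break
    for anatomy in ANATOMIES:
        if re.search(rf"\b{anatomy}\b", text, re.I):
            result["anatomy"] = anatomy
            break
    return result

print(parse_followup(
    "Recommend a follow up chest CT in 3-6 months to document resolution."))
```

For the example sentence this yields the timeframe "3-6 months", the anatomy "chest", and the modality "CT". The trouble, of course, is that real reports are far less uniform than this sentence.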

While this seems relatively simple for a human reader to interpret, it is more difficult for computers to infer the intent behind some of the “fuzzier” language. Instead, what if there were a national standard for writing a follow-up prescription?

Over recent years we’ve seen an adoption of standards across different institutions. Radiologists reference several such standards when writing their assessments: BI-RADS for imaging of the breast, TI-RADS for imaging of the thyroid, and so on.

These standards offer common guidelines for terminology and assessment, allowing for more coherent communication between different medical institutions. Moreover, when applying computer methods to radiology reports, these referenced guidelines can be identified and used as anchors for information that is consistent across different radiologists and institutions.

The notion of “structured reporting” is floating around the industry as a way to create a standard format for writing a follow-up prescription. How can we, as a community, help it gain adoption amongst radiologists?
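As a thought experiment, a structured follow-up prescription could look something like the sketch below. The field names and values are hypothetical; an actual standard would have to be defined by the radiology community:

```python
from dataclasses import dataclass, asdict
import json

# A sketch of a machine-readable follow-up prescription. The field names
# and allowed values are hypothetical, not an existing standard.
@dataclass
class FollowUpPrescription:
    modality: str     # e.g. "CT", "MRI"
    anatomy: str      # e.g. "chest"
    min_months: int   # earliest recommended follow-up
    max_months: int   # latest recommended follow-up
    reason: str       # free text remains, so the "artist" still has a voice

rx = FollowUpPrescription("CT", "chest", 3, 6, "document resolution")
print(json.dumps(asdict(rx)))
```

A record like this needs no fuzzy-language interpretation at all: downstream systems read the fields directly, while the free-text reason field preserves room for nuance.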

From an administrative perspective, blurring the line between radiologist-artist and radiologist-robot can in turn help the patient. Care coordinators will be able to identify radiologists’ recommendations with more ease and confidence, and in turn alert the necessary stakeholders to what needs to happen next in a more streamlined manner.
