HL7 is dead, long live HL7

By Vernon Marshall

How we handle interoperability

Interoperability is always tricky in the medical space. What standards exist are not universally applied, and nearly every tool out there lets people change and update information whether they should or not.

Our approach is to combine off-the-shelf interfaces, a highly scalable containerization strategy, and the latest Amazon healthcare services to create a normalization layer that stays flexible while maintaining the 'best available' interoperability.

Once the data is ingested, its original format is preserved, but we use data catalogs to make sure we are mapping consistently to our internal systems. In this way data provenance is maintained, and we are not hampered by partners' inconsistent conformance levels. It also allows us to scale in a way that would be difficult with legacy systems alone.
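The ingest step described above can be sketched roughly as follows. This is a minimal illustration, not our actual pipeline; the record fields and the `ingest` helper are hypothetical names chosen for the example. The key idea is that the raw payload is kept byte-for-byte alongside the normalized form, with the catalog version stamped on every record so any mapping can be traced back.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
import hashlib

@dataclass
class IngestedRecord:
    """Hypothetical record: raw payload preserved next to the normalized form."""
    raw_payload: bytes    # original message, byte-for-byte, never rewritten
    source_system: str    # which partner sent it
    catalog_version: str  # which catalog mapping rules were applied
    normalized: dict      # our internal representation
    received_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

    @property
    def raw_digest(self) -> str:
        # Content hash lets us prove the original was never altered.
        return hashlib.sha256(self.raw_payload).hexdigest()

def ingest(raw: bytes, source: str, catalog_version: str) -> IngestedRecord:
    # Stand-in normalization; a real mapping would consult the catalog itself.
    normalized = {"body": raw.decode("utf-8", errors="replace")}
    return IngestedRecord(raw, source, catalog_version, normalized)

record = ingest(b"MSH|^~\\&|...", "partner-pacs", "catalog-v12")
print(record.raw_digest[:12], record.catalog_version)
```

Because the catalog version travels with each record, re-running a newer mapping against the preserved raw payloads is always possible without asking partners to resend anything.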

With consistent data catalogs, we can always roll back to find where any new inconsistency crops up. This is one of the key ways we know our natural language processing system is working: we know exactly at which stage each event occurred, so we can constantly improve results.

Any search for Healthcare Interoperability will give you thousands of hits from companies claiming the 'One True Way', but the fact of the matter is that, for the most part, interoperability as currently defined has failed. I learned that the hard way when I was part of the team defining Yahoo Developer Network's API strategy. There are hundreds of ways to exchange data between systems. Some of them are brilliant. The history of SGML and HTML is a perfect example. Most people have forgotten, but HTML is a Document Type Definition (DTD) of SGML. SGML is hard for systems to process, so it begat XML, and with it the XHTML DTD. Beautifully elegant ways to design standards, but useless in the real world. Once you got hackers, competing companies, and people 'just shipping code', all of that got thrown out the window.

What we were left with was HTML5, which is a DTD of nothing and a hodgepodge of wishlists from various factions. It was difficult for developers to handle all the differences in HTML that various browsers would support, not to mention what happens when you throw Javascript into the mix, but we survived and the Internet still works. HTTP and JSON are simple and not terribly elegant, but they work.

If you look at the history of DICOM, it shares a lot of the same thinking as SGML, and it adheres to its own ruleset just about as well. It was kind of brilliant when it was designed, but these days parsing DICOM is more an exercise in which rules you can ignore, because a lot of the early assumptions are no longer true. It assumes a world of RS-232 ports and dial-up modems. HL7 requires complex on-the-wire processing to get correct, and it is passing messages like "Patient Arrived" and "Patient Left", all things that could be handled perfectly well by a simple JSON blob.
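To make the comparison concrete, here is a simplified, illustrative side-by-side: a "patient arrived" event as an HL7 v2-style ADT^A01 message and as a plain JSON blob. The HL7 segments below are abbreviated for the example (real messages carry many more fields), and the JSON field names are assumptions, not any standard.

```python
import json

# Simplified HL7 v2 ADT^A01 ("patient admitted/arrived") message.
# Segments are separated by carriage returns, fields by '|',
# components by '^' -- and that's before escape sequences and repeats.
hl7_adt = (
    "MSH|^~\\&|ADT1|HOSP|RECV|FAC|202401150830||ADT^A01|MSG0001|P|2.3\r"
    "EVN|A01|202401150830\r"
    "PID|1||12345^^^HOSP^MR||DOE^JOHN"
)

# The same event as a JSON blob (field names are illustrative only).
event = {
    "event": "patient_arrived",
    "patient_id": "12345",
    "timestamp": "2024-01-15T08:30:00Z",
}

# Even a naive HL7 parse means splitting on segment, field, and
# component separators; the JSON version is one library call.
segments = {seg.split("|")[0]: seg.split("|") for seg in hl7_adt.split("\r")}
print(segments["EVN"][1])                 # trigger event code: A01
print(segments["PID"][3].split("^")[0])   # patient identifier: 12345
print(json.loads(json.dumps(event))["event"])
```

The point is not that JSON is elegant; it is that the delimiter gymnastics above buy nothing extra for an event this simple.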

We will never have interoperability in any of the ways currently being discussed, and that's OK. We collectively already made that decision; we just haven't told each other yet, and it leads to countless people 'fixing' data all along the way. There are a few key things we can agree on; we should ditch the rest and learn to embrace that. It worked for the Internet.

Watch this clip, and think HL7/DICOM when you see the printer.
