Tuesday, May 12, 2009

Clinical Research

Some in the health professions may be asking themselves to what extent a student's fitness record should be made public. How many push-ups did Franny do, versus Zoe?

That shouldn't be public information necessarily, but then kids like to brag, talk about how much money their family rakes in from owning a Jack in the Box or whatever.

So what's to stop Franny from registering her feats on a school intranet, as a part of her profile, or even on Facebook if she wants her reputation to spread, inter-school?

Similar questions bedevil the medical community, as the concept of the legal medical record (LMR) translates into software. Many doctors are ravenous for more control over this technology, as it impacts their practice rather considerably. In working with peers, you expect nuance and story to enter into it sometimes.

Charting a patient is sometimes a prolix process, especially in psychiatry, where personality disorders may be flagged in terms of quoted remarks.

So is some AI robot supposed to digest all this stuff and convert it to DSM codes on the fly, then bill insurance? That seems unlikely, but then a lot of doctors see the state of the art on Star Trek and become willing victims of snow jobs.
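To make that skepticism concrete, here's a minimal sketch, in Python (my choice purely for illustration), of the kind of naive keyword matcher a hand-wavy demo might be built on. The tiny DSM lookup table and the function name are my inventions, not anyone's real product:

```python
# Naive sketch: flag candidate DSM codes by keyword match.
# The lookup table is a toy, not a real DSM excerpt.
DSM_KEYWORDS = {
    "persistent sadness": "296.2x (major depressive disorder)",
    "grandiosity": "301.81 (narcissistic personality disorder)",
    "panic attack": "300.01 (panic disorder)",
}

def naive_flags(chart_text):
    """Return a code for every keyword found in the narrative."""
    text = chart_text.lower()
    return [code for phrase, code in DSM_KEYWORDS.items() if phrase in text]

note = 'Patient denies any panic attack; reports "persistent sadness" has resolved.'
print(naive_flags(note))  # flags both codes -- negation and quoting defeat it
```

Negation and quotation, everyday features of a psychiatric narrative, fool it immediately, which is why "try before you buy" matters.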

Big companies (even some little ones) are only too happy to promise the moon, and will even open source their results, as long as they're able to gobble some grants in the process. "Beware of hand-wavy AI" would be my advice to the starstruck, especially when considering the price tag (make sure they give you a demo up front, try before you buy).

If all you know about computers you learned from Hollywood movies, then for your own protection, don't sign any checks, especially if your name is Uncle Sam (you've got no more money to waste, guy -- we want bang for that buck).

Abstracting meaningful clinical data, either in real time or post procedure, is a challenge for statistical researchers, not just medical doctors. Employing armies of people to sit down with the files, bubbling in scannable forms on the basis of what's been transcribed, is too slow and tedious a process.

Scannable forms and touch-pad devices indeed have a role to play (I've used both), but the game is less about chart abstracting than about harvesting data before it lands in narrative format.

Current medications, blood pressures, what was done, contributing factors (smoking and drinking habits), height, weight and maybe some idea of the DNA (e.g. sex), are all going to be of interest to the statistician, whereas billing address will be of no concern whatsoever, even less so the patient's true identity.
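A sketch of what such a stripped-down record might look like (Python again; the field names are invented for illustration, since any real registry dictates its own schema):

```python
# Hypothetical de-identified record: what the statistician sees.
record = {
    "patient_id": "A93F2C",       # meaningless ID, not an SSN
    "sex": "F",                   # about as much DNA as we need
    "height_cm": 165,
    "weight_kg": 70,
    "bp_systolic": 128,
    "bp_diastolic": 82,
    "smoker": False,
    "drinks_per_week": 3,
    "current_meds": ["lisinopril", "metformin"],
    "procedure": "CABG",
}
# Deliberately absent: name, billing address, SSN -- nothing
# that would let the record be traced back to a specific patient.
```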

Document management is not a new science. Xerox has a long history of photo-scanning, plus OCR has come a long way over the decades, even on handwriting (though doctors specialized in making theirs unreadable for a reason -- to cut down on forgeries based on notes to the pharmacy).

The design work is as much about workflow as about which tools to use. In support of the doctors, I advocate more self-sufficiency on the part of hospitals, less outsourcing to know-it-alls in faraway universities who like to promise the moon, but who never (or rarely) deliver.

As so often happens in medicine, a lot of the best practices get refined in triage situations where there's no time for frills. Basic charting still needs to happen, so what shall we use? Do we have a crackerjack geek team in the clinic? Was our recruiting drive successful in any way?

At the next level, pioneering research hospitals will afford a window into their processes, so that others might learn by example.

The idea that a busy gotham hospital might itself be a case study, in service of the advancement of health care, is not a new one. Indeed, hospitals have pioneered a lot of information technology (IT) over the years, including entire computer languages (MUMPS, born at Massachusetts General Hospital, being the classic example).

My focus has tended to be clinical research records (CRRs), as distinct from legal medical records (LMRs). I'd get between a gotham hospital's database and some national registry and make sure all the submitted data was squeaky clean, i.e. clinically significant without being traceable to specific patients (no social security numbers, only meaningless IDs).
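A minimal sketch of that in-between scrubbing step, again in Python. The function and field names are hypothetical, and real HIPAA de-identification involves far more than dropping a few fields, but the shape of the job is roughly this:

```python
import hashlib

# Hypothetical scrub step between the hospital database and a
# national registry: strip direct identifiers, attach an opaque
# study ID in their place.
DIRECT_IDENTIFIERS = {"name", "ssn", "billing_address", "phone"}

def scrub(row, salt):
    """Drop direct identifiers; substitute a meaningless study ID."""
    clean = {k: v for k, v in row.items() if k not in DIRECT_IDENTIFIERS}
    # Salted one-way hash: the same patient always maps to the same
    # study ID, but the registry can't work backward to the SSN.
    # The salt stays a secret on the hospital side.
    clean["study_id"] = hashlib.sha256((salt + row["ssn"]).encode()).hexdigest()[:12]
    return clean

row = {"name": "F. Glass", "ssn": "000-00-0000",
       "billing_address": "123 Main St", "ejection_fraction": 0.55}
print(scrub(row, salt="hospital-side secret"))
```

The salted hash is one design choice among several: it lets follow-up records from the same patient line up in the registry without the registry ever holding the identity.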

Other people around me were doing the same thing, abiding by HIPAA while getting on with the value-adding business of outcomes research.

Sometimes this work took me into the operating room itself (the CVOR, or cardiovascular operating room), always properly sanitized and usually with no one else present. My data collection tools needed field testing in a real-world environment, and this seemed the best way.