The NY Times article this week, "The Computer Will See You Now," written by a pediatric physician, complains that the electronic medical record has depersonalized her interactions with patients.
HISTalk commented on it in his morning update and highlighted the complaints:
- using the computer in front of patients is intrusive
- standard questions must be asked in order even when they clearly don’t apply
- the doctor might swear in front of patients when the computer does something wrong
- computers lose context because doctors can’t underline, write bigger, or otherwise highlight something important
As the author says:
"The benefits (of the EMR) may be real, but we should not sacrifice too much for them."

And the end result for her is:
"In short, the computer depersonalizes medicine. It ignores nuances that we do not measure but clearly influence care."

But the prescribed treatment of a hybrid approach using a tablet ignores most of the issues and concerns highlighted, and forgets the relative difficulty of interacting with tablet- or screen-based technologies while facing and talking to a patient. No doubt there are some circumstances where this does make sense, but the key to success is a hybrid approach or blended model that uses all the available methods and tools.
It is important not to turn our clinicians into data entry clerks; instead, we should utilize the finely honed skills of the medical editor/transcriptionist to convert this audio into the data necessary to drive the EMR. Technology can assist and bring some efficiency to the process, and Speech Understanding in particular can automate part of it. But this method of capturing the voice is repeatedly dropped or forgotten in this discussion. There are circumstances where the technique may not apply (a public forum within earshot of nosy eavesdroppers, for instance), but where it does, voice provides a ready and efficient method. Historically this created text that EMR systems had difficulty using (they are essentially data-driven repositories), but with the addition of tagged information that is linked to the narrative, all held in the complete Healthstory document, we bridge the gap. This not only allows for the inclusion of the fine detail that is essential and influences care; linked to and part of that same material is tagged, structured, and encoded data that can feed the data-hungry EMR.
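To make that last point concrete, here is a minimal sketch of how a dictated narrative and machine-readable coded data can travel together in one CDA-style XML section, so the EMR can consume the codes without losing the story. This is an illustration only, not the actual Healthstory/CDA specification; the element names, the narrative text, and the SNOMED CT code shown are assumptions chosen for the example.

```python
# Illustrative sketch: narrative text and a linked coded entry in one CDA-style section.
# Not the Healthstory/CDA4CDT schema; element names and codes are assumed for the example.
import xml.etree.ElementTree as ET

section = ET.Element("section")

# The human-readable narrative, as the clinician dictated it.
text = ET.SubElement(section, "text")
text.text = ("Patient reports worsening shortness of breath on exertion over the past "
             "two weeks; sleeps on three pillows at night.")

# A structured entry linked to that narrative: a coded observation the EMR can consume
# (here, a SNOMED CT-style code for breathlessness, used purely as an illustration).
entry = ET.SubElement(section, "entry")
observation = ET.SubElement(entry, "observation")
ET.SubElement(observation, "code",
              code="267036007",
              codeSystem="2.16.840.1.113883.6.96",
              displayName="Breathlessness")

print(ET.tostring(section, encoding="unicode"))
```

The point of the pairing is that neither half is discarded: the narrative preserves the nuance the author of the NY Times piece worries about losing, while the tagged entry gives the data-driven EMR something it can actually store and query.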