Showing posts with label Speech recognition. Show all posts

Wednesday, February 24, 2010

Data Input Is Difficult

A recent survey by the Texas Medical Association (TMA) (one-page summary here - pdf, and the survey results here - doc) shows an increase in the number of people reporting use of an EMR (43% in 2009, up from 33% in 2007). There is also a continued trend of physicians expecting to implement an EMR, helped along by the Health Information Technology for Economic and Clinical Health (HITECH) Act, with 59% of respondents looking to qualify for its incentive payments.

But it is the likes and dislikes of existing users that make for interesting reading. 76% of respondents like electronic charting, which I interpret to mean accessing clinical data in digital form and the ability to process, manipulate, and re-purpose it in different formats. And the feature clinicians don't like:
data input difficult or time consuming
Shock horror - clinicians don't like being data entry clerks (I Can't See My Patients Because I Am at the Screen Entering Data and Doctor, Please Look at Me, Not Your EMR). None of this is surprising, and this remains the most significant barrier to adoption of EMRs in the busy and complex clinical setting. Designing a brilliant user interface to capture clinician input into discrete fields may suit the needs of the data-driven EMR, but it falls well short of clinical needs and in particular the physician's need for information. Physicians are pushing the federal Department of Health and Human Services to include the narrative as part of the proposed regulations for electronic health records, and rightly so. As the eWeek article "Doctors Say Narrative Missing from Proposed EHR Regulations" stated:
"No matter how good [EHR records] are, you'll never get the flavored nuance of the patient's [situation] if you don't have an unstructured note," said Dr. Steven Schiff, the medical director and service chief of cardiology at Orange Coast Memorial Medical Center, in Fountain Valley, Calif.
The comparison between a template generated note:
The occurrence was one hour prior to arrival. The course of pain is constant. Location of pain: Head leg. Location of bleeding: None. Location of laceration: None. The degree of headache is mild. The other degree of pain is moderate. The degree of bleeding is negative. Mitigating factor is negative. Immobilization no backboard in place and no cervical collar in place. Fall description tripped. Intoxication: No alcohol intoxication. Location accident occurred was home
and the narrative created by a physician:
The patient is a 74-year-old female who presents with a complaint of fall, 74-year-old female presents with complaint of neck pain, headache. She states that she had mechanical fall at home where she tripped and fell downstairs, approximately 9 steps and landed on her back. She complained of shortness of breath right after the event. She noted that she had pain in her left ankle and left knee. She is not sure whether she had loss of consciousness and the patient further complains of the pain in the right wrist
makes this point, with 97% of the survey saying they prefer the human-generated note. It's unlikely that any EMR system will pass the Turing Test anytime soon!

Patients too will start to insist on getting the full Health Story, as Steven Schiff points out in his Huffington Post article "Have You Thought About Your Health Story?". As patients increasingly become partners in the care process rather than traditional bystanders, information will need to be transferred between the patient and the clinicians, and computer template-generated notes are not going to work. There's a good reason that the:
written patient medical record had its birth in the 19th century and as such, has remained almost entirely unchanged for well more than 100 years. During this time, literally everything else in medical care has evolved
It was a very effective means of communication and has served healthcare providers well, but the transition from paper to electronic remains a major issue, and preserving the narrative in the progress note remains essential:
From the outset, we need to agree on the critical importance of such notes. It is necessary to tell a patient's story, and to assess the significance of that history. At this time, it simply is unrealistic to think that all healthcare givers will develop the typing skills needed to function adequately in this environment. At best, it will require a full generation of doctors, nurses, technicians, and therapists to come and go before that is as ubiquitous a skill as handwriting is now. It is clear to me that the answer to many of the physician challenges that surround electronic medical record adoption and full patient utilization of these records lies in the use of voice recognition software

Electronic health records coupled with voice recognition technology allows me to document in the chart while I am seeing the patient. The note is often created with the active participation of patients and family members; and is then finished at the end of the patient encounter and is faxed to the referring doctor. Additionally, a copy is printed out at the checkout desk and handed to the patient as they leave the office. The notes are error-free for the most part, and are immediately available in the chart. There is no reading, correcting, signing, and mailing to be done. Most importantly, the notes can be highly descriptive, capturing not only the raw facts, but the nuanced details that are unique to that patient.

You're unique; your health record should be too
Right on! EMRs need speech as an integral part of capturing clinical data. Turning that into shareable information that can be accessed and consumed by the data-centric EMR is the function of the Healthstory Project, which sets out an open Clinical Document Architecture (CDA) standard for Common Document Types. Tie that to speech understanding and you bridge the divide and overcome the objection of the 50% of physicians surveyed who say they dislike the EMR because:
"Data input difficult or time consuming"
The pieces are all in place - we just need to put them together intelligently into the existing workflow and healthcare process.

What are you doing to capture or collect patient clinical documentation? Do you use voice or templates? What do you love or hate about your clinical system?

Thursday, November 12, 2009

Moving Transcription Back Into the Hospital

What's old is new again... A recent article in USA Today (High-tech 'scribes' help transfer medical records into electronic form) highlighted the latest innovation in healthcare documentation - "high-tech scribes" who help "transfer medical records into electronic form." Is it just me, or does that sound like something the electronic documentation industry has already been doing with medical transcription and editing for the last 20+ years?

The challenge of capturing clinical documentation in digital format has remained the same, and the continued struggle is highlighted by the poor adoption rate of electronic medical record systems:
Today, only 1.5% of hospitals have a "comprehensive" electronic health record, and 8% have a basic version, according to Jha's March study in The New England Journal of Medicine. Most hospitals are intimidated by the cost, which can range from $20 million to $200 million.
Pretty poor, given how long the industry has been working on this - and despite the length of time, there is still no clear exchange format or standard to facilitate the sharing of information:
because there are no common standards for these records, doctors who do implement electronic charts may not be able to share them with a hospital across the street
But the value of digitizing medicine and implementing these systems has been clearly established, as has the terrifying level of errors that occur, detailed in the landmark "To Err Is Human: Building a Safer Health System" published in 1999 and many follow-on reports and studies, including the 2005 Health Affairs study that reported:
The country could eliminate 200,000 drug mistakes and save $1 billion a year if doctors in all hospitals entered their orders on computers
Despite these drivers, we are still languishing at single-digit adoption rates and continue to struggle to roll out healthcare technology that effectively improves quality and safety without crushing the efficiency and effectiveness of clinicians. So doctors at the University of Virginia elected to employ "scribes" to document the clinical encounter - individuals such as:
Leiner, 22, a University of Virginia graduate who plans to apply to medical school
who follow the doctors around and capture the clinical interaction on laptops. The consensus appears to be that this will not catch on, although a recent debate on the AMDIS list server offered some differing opinions, including the potential to reduce mistakes given a second set of eyes reviewing the documentation and the capture of information in real time with the patient. There is complexity in this approach, made worse by gender conflicts (a female patient and male scribe, for example), but it appears to represent a modification of the current documentation process that uses dictation and transcription, and perhaps offers some potential to free up clinicians to interact with patients rather than focusing on the clinical documentation and the electronic health record.

Moving the medical editor out of the bowels of the hospital just formalizes one of the well-known methodologies in many transcription departments: linking doctors to the same transcriptionists so they learn to "work together" (albeit remotely). This approach would make the bond stronger and the connection greater, and would improve the opportunity for error reduction by using a trained and qualified "scribe" to document alongside the clinician. The medical editor is a qualified, experienced, and highly knowledgeable resource that is currently disconnected from the clinical process.

Technology just becomes a facilitator in this process, with speech understanding and speech recognition providing tools that can be used by one or both members of the documentation team. The electronic medical record becomes integral to the documentation process. And although the real-time aspects of alerts, evidence-based medicine, and the application of clinical knowledge to the interaction are still once removed from the physician, real-time team documentation can provide direct access to the clinician through a combined approach to capturing and recording the patient encounter.

How would you feel about a scribe being part of the clinician-patient interaction - as the patient, as the clinician, or as the scribe?

Thursday, November 5, 2009

Is Speech Recognition Ready for Prime Time - You Bet

In a posting on the American Medical News site titled "Is Speech Recognition Ready for Prime Time - You Bet", Pamela Dolan refers to the history of speech recognition and how the technology was cited as one of the best things to hit healthcare - 10 years ago. In fact, in 2005 I wrote an article for Health Management Technology Magazine (now available for purchase through Amazon): "Is Speech Recognition the Holy Grail":
Speech recognition technology has been lauded as the best thing to happen to healthcare technology since the advent of the computer, but is it really the Holy Grail? Speech recognition has the potential to overcome one of the most significant barriers to implementing a fully computerized medical record: direct capture of physician notes. Industry estimates from physicians and chief information officers at hospitals suggest that 50 percent of physicians will utilize speech recognition within five years. Coupled with this is the growing demand for medical transcriptionists, which is projected to grow faster than the average of all occupations through 2010
Pulling up the original article from my archive made for interesting reading, and while there were still problems with the technology in 2005, it had reached a tipping point; the summary at the end was pretty much on the money:
Speech recognition is good technology, but it is neither a panacea nor the Holy Grail. Speech recognition has been two years away for the last 10 years, but we may be approaching the Grail — finally.
Developments over the last several years have incrementally improved speech recognition systems to the point that some have intelligent speech interpretation—extracting the meaning, not just the literal translation of words—and producing high-quality documents with minimal human intervention. Further integration and embedding speech recognition with mainstream EMR solutions will allow for expedited capture of documentation as part of the clinical care process, offering clinicians a choice of methods to document creation. The last significant development in speech recognition technology was the recognition of continuous speech. The next big leap in this technology will be the merger of NLP and CSR to create natural language understanding. This development will take the technology to the next level and will offer a realistic opportunity to make speech recognition the de facto method of data capture for the medical community. The question is, When?
As the article from the American Medical News says:
"It (speech recognition) wasn't ready for prime time," Dr. Garber pointed out. "Now it is. No question"
But I disagree on the impediment to EMR usage that is linked to the lack of discrete data. This is true of old-style speech recognition - the process of simply converting the spoken word into text
The problem is when you talk into it, the data is not discrete ... it's still like a Word document
but not for speech understanding, which is the merger of speech recognition and natural language understanding - available today, already in use at many sites, and delivering data in the Healthstory CDA4CDT format.

So to answer the question - Is Speech Recognition Ready for Prime Time: You Bet!

So, are you using it? What are your experiences? Or would you rather be entering data using forms and computer screens?

Monday, July 20, 2009

Three Body Problem - Transcription Productivity and Speech Understanding


As an official space aficionado who "Applied to Ride" in an attempt to get a spot on a Russian rocket into space in the '80s, and was beaten to that spot by the scientist from "Mars" - the confectionery maker - I can't resist finding a link between current Apollo 11 memories and healthcare and clinical documentation...

The moon shot was a triumph in so many areas - the science alone was complex and challenging, and given the level of computer sophistication at the time, its success was even more incredible. Bear in mind that the Lunar Lander's computer had about the same power as a wristwatch today (actually, probably less). It is clear from this insightful op-ed piece in the NY Times, "One Giant Leap to Nowhere", that much of the drive and success of the moon shot was less about the technology and more about the vision of one individual. Wernher von Braun was the philosopher who created the vision and orchestrated the various components into place to successfully put a man on the moon and return him safely to Earth. The original drive was more military than scientific, despite the fact that any possible attack from space remains challenging by virtue of the "three body problem".

Clinical documentation needs to solve an equally complex three body problem of medical editors, productivity, and speech understanding. There are clear benefits to be had from implementing technology, but these benefits accrue not just from the technology but from addressing all the elements. Imposing requirements on physicians for how they dictate (pronunciation, terms, punctuation, etc.), for what they dictate with (audio quality is a big contributor to the accuracy of any speech understanding technology), and even simple workflow improvements that remove the need to dictate patient information, or to repeat information that is already captured and can be included automatically, are all key elements that can contribute to successfully using technology to improve efficiency. That said, I would advocate some variations, including less demand on changing physician behavior: have the technology adapt to the physician rather than the other way around - though not all technology is capable of this smarter approach.

In fact, Jay Vance talked about these points in a recent posting on his blog The XY Files in an MT World, "Transitioning to Speech Recognition Editing". As he rightly points out, there is more than just technology at play:
This leaves the impression that 100% of the permanent physicians' dictations are being successfully recognized by the system....I've never seen this level of successful implementation, ever
The point is well taken: there is more at work here than just technology. The medical editor remains a key resource in this equation and part of the three body problem. But just applying technology won't make medical editors more efficient, more productive, and, importantly, better compensated. Addressing the productivity gains and educating not just the clinicians but also the editors and management is essential.

I'd add an additional element to this equation, one I believe is essential to clinical documentation companies and specialists in this field: this is not just documentation, this is clinical knowledge and information. Generating "reports" or blobs of text, be they in RTF, PDF, DOC, or TXT format, is not solving the problem or addressing the needs of the sector. Clinical documentation specialists should be using their human intelligence and knowledge to generate "Meaningful Clinical Documents". We require vision and drive toward the creation of clinically actionable data from the documentation industry. We have the necessary infrastructure to help achieve that - I've talked extensively about Healthstory and the importance of preserving the narrative while making the information it contains semantically interoperable, or computer-interpretable, for consumption in our increasingly digitized world of medicine. The industry needs to rally around generating useful information, not plain old text.

In many respects I think the industry needs a philosopher-visionary who can, like Wernher von Braun, articulate why transcription remains an essential component of healthcare delivery and not a dying industry. His response to the frequently raised question of why robots were not the solution to space exploration:
there is no computerized explorer in the world with more than a tiny fraction of the power of a chemical analog computer known as the human brain
has much in common with healthcare, medicine, and in particular the process of documenting and capturing clinical information, where I would say:
There is no computerized system in the world with more than a tiny fraction of the power of a chemical analog computer known as the human brain, that can replace the knowledge workers in healthcare
Are you that resource, and can you be part of that vision - or even lead it? This is a rallying cry for clinical documentation to shoot for Mars and generate Meaningful Clinical Documents that contain the complete Healthstory.


Tuesday, March 24, 2009

Speech Recognition and MT Compensation

Speech recognition and its relationship to compensation took on a life of its own over at the MTChat message board in a thread titled MT Exchange: MTs and "Speech Wreck". There were strong words and a concerted attack on Julie Weight... Yikes! The confusion that ensued - linking and even blaming a technology for poor business practices, and in particular for compensation models that appeared to be unfair - missed the point.

But it was Jay Vance of The XY Files in an MT World who posted a thoughtful response to some of the criticism being leveled at speech recognition, in "Is Speech Rec Wrecked", which even featured actual data (thanks for sharing this!) from a survey he conducted in 2006 of speech recognition editors. The data was helpful in assessing the actual benefits, which even then (back in 2006 - a long time ago in technology terms!) showed:
a total of 51% of respondents - saw an average increase in productivity of between 25% and 50%. This confirms the anecdotal information I had collected via informal conversations with MTs working as SR editors in a variety of situations on a variety of SRT platforms.
I don't think it is a stretch to assume that this must have gotten better, and that productivity has improved beyond this for a greater proportion of editors. The survey included some review of compensation changes (there was a reduction in rate, but it is hard to determine whether this was a real reduction or one offset by increased productivity) and a final question on satisfaction with the technology:
31% said they were somewhat satisfied
26% said they were very satisfied. These two categories totaled 57%
Not great, but better than average. Overall:
there is a wide spectrum in terms of the impact of SRT on productivity, compensation, and overall satisfaction among MTs working as SR editors. Consequently, I don't believe there is enough objective evidence to conclude that speech recognition has proven to be a widespread disaster for the MT working class. As with any scenario involving people, technology, and money, mileage is going to vary widely. In my experience, there are simply too many factors that can influence productivity, compensation, and overall satisfaction with speech recognition technology to draw hard and fast conclusions about the impact SRT is having on working MTs on the whole.
And this was in part the point Julie Weight was trying to make on the MTChat board - there are many factors, and there is no use trying to stall the implementation of speech technology; that train has left, like outsourcing.

Both Jay and Julie make the point that this technology is in use, and although I am probably a stronger advocate and believer in speech technology, I think the overriding point is that this can and should be a good thing for the industry. Reducing the labor-intensive element of producing a report has to be a good thing, freeing up the medical editor to add value to the clinical information as part of the process of review, editing, and validation.

Recognizing that this is old data, there is good reason to update this information, and there is a survey currently ongoing from MTIA that can be taken here; I would encourage you to participate. It is an extensive survey and needs input, but if you don't have the time I have put a four-question survey here. If you can spare the time please take the full survey, but if not I'd welcome hearing your responses.



Monday, March 16, 2009

Reinvestment is not Just About Technology

There is lots of excitement, even frenzy, over the wave of investment coming down the pipe toward healthcare technology, but in this piece on the Huffington Post, "Workforce Development Essential to Obama's Health Care IT Initiative", Julian Alssid and Jonathan Leviss are quick to point out an essential element that must be included: human capital. Healthcare is unique, and transplanting technology from other industries is not a straightforward process:
Hospitals are not banks, or insurance agencies, or hotels. Healthcare's unique workflows -- including many physicians and nurses sharing computers in a busy emergency room, the challenges of maintaining working hardware in an intensive care unit, and the vast realm of data accessed to care for a sick human being -- require novel technologies and processes that cannot be easily translated from other industries.
While I agree that some technologies have stalled, many are being implemented and are delivering success today. Speech recognition did suffer problems in noisy environments (that's why the early adopters of this technology were radiologists, who mostly work in quiet reading rooms). But newer speech understanding is modelled on nature's success: it uses not only audio input but also information from the patient's previous history, demographics, prior reports, and any other elements that will help in understanding what was said.

But that's not enough
Physicians, nurses, and other health care providers routinely learn new skills and adopt new technologies....What is missing, however, is a parallel training track for a sufficient workforce to develop, implement, manage, and support advanced information technologies in hospitals, doctors' offices, and other health care venues.
So providing the infrastructure is one thing, but having the resources to support it is essential. This is especially true for the embattled medical transcription industry, which has been fighting declining rates of pay as hospitals and healthcare providers continue to push for lower and lower line rates. All this is driven by the perception that medical transcription is a cost, when in actual fact it is a value-added service that frees up clinical staff to focus on taking care of patients rather than the drudgery of data entry. There are lots of examples of systems trying to turn clinicians into data entry clerks, and while there are instances where this methodology makes sense, in many cases it does not. Technology will help (see above - speech understanding is moving speech into the 21st century), but even with this technology there is still the requirement to provide support and expertise to facilitate the process of capturing information that is essential to the new age of data-driven medicine. The medical transcriptionist is the knowledge worker who delivers the value-add of helping turn clinical information into structured clinical data: the fine detail in the free-form narrative that clinicians need and want to include, plus tagged structured data, delivering the full Healthstory for the patient's episode of care.


Tuesday, February 10, 2009

Why Speech Recognition is no Longer Sufficient

Speech recognition has been around for over 30 years and part of our consciousness since the mid-1960s, but it is only in the last 3-4 years that we have seen the technology really start to deliver some value to the much-beleaguered and overworked clinician. There are innumerable studies that demonstrate the savings linked to the efficiencies possible with faster report turnaround. Unfortunately, producing more reports faster is not always the best answer; oftentimes this simply makes the patient information haystack larger. This tsunami of data is overwhelming even the best-organized clinicians, and many are struggling to keep up with it alongside the explosion of diagnostic and treatment choices. Keeping up with the medical knowledge would be a full-time job if anyone had the time - but they don't.

Clinicians want to give great care - that's a universal maxim for the profession, and anything that enables or facilitates this will be successful. But that's not what has been going on with speech recognition, which has required a change in behavior - enunciating in special ways, dictating commands, speaking slowly, adding punctuation - and, in the ultimate punishment, has required the highly skilled and time-pressured expert to review and correct poorly drafted content. The output is a blob of text that cannot be read or interpreted by the electronic medical record (EMR), since it is not machine readable.

The last innovation in speech recognition came in 1993, when continuous speech recognition was rolled out. Since then the technology has stagnated, and while allowing clinicians to type with their tongue has provided some efficiencies and improvements, speech recognition has failed to address the underlying challenges facing clinicians today. Now that we have reached this point, what's next?

It is the capture of structured clinical data that can automatically feed the EMR that is the real goal. Achieving this requires an alternative approach to speech recognition: not just recognizing the words but actually understanding the meaning and context. Comprehending normal human speech is not a word recognition process but a speech understanding process, one that takes as input not just the phonemes or parts of words but the complete context of a conversation, including the intonation, the subject matter, and relevant prior information, all applied to the complete conversation. It is this process that enables humans to exhibit the "cocktail party effect", which allows us to listen in on more than one conversation at a time even though we are not fully participating in either. The added knowledge allows us to infer missed words, and understanding the content allows us to complete the picture, producing a fully understood interpretation of the speech. Speech understanding is the next frontier of innovation in clinical documentation.
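To make the idea concrete, here is a toy sketch (my own illustration, not any vendor's actual algorithm) of how prior context from the chart could re-rank acoustically similar transcription hypotheses; the scores, function name, and vocabulary are all invented:

```python
# Toy sketch: re-scoring acoustically similar hypotheses with chart context.
# Scores, names, and vocabulary are invented for illustration only.

def rescore(hypotheses, context_terms, context_weight=0.5):
    """Pick the hypothesis with the best combined acoustic + context score.

    hypotheses: list of (text, acoustic_score) pairs from a recognizer.
    context_terms: set of words already known from the patient's chart
                   (history, medications, specialty, etc.).
    """
    best, best_score = None, float("-inf")
    for text, acoustic in hypotheses:
        words = set(text.lower().split())
        # Each overlap with known chart context nudges the score upward.
        bonus = context_weight * len(words & context_terms)
        score = acoustic + bonus
        if score > best_score:
            best, best_score = text, score
    return best

# "hypotension" and "hypertension" sound alike; prior chart data disambiguates.
hypotheses = [("history of hypotension", 0.48), ("history of hypertension", 0.46)]
chart_context = {"hypertension", "lisinopril", "cardiology"}
print(rescore(hypotheses, chart_context))  # picks "history of hypertension"
```

A real engine works over lattices of hypotheses with statistical language models rather than a word-overlap bonus, but the principle is the same: knowledge from outside the audio shifts the balance between words that sound alike.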

This content can be stored as part of the full story - the Healthstory - which contains the computer-interpretable data AND the fine detail in the narrative that is the essence of clinical insight and judgment, and essential to the transmission and flow of useful clinical information between all the team members delivering care in our multidisciplinary model.

Monday, January 12, 2009

Plans to Computerize the US Healthcare Records

CNN Money features an article today on President-elect Obama's plan for digitizing the US health records system, featuring the proposal to modernize the health care system by "making all health records standardized and electronic."

The plan calls for computerizing all records within 5 years and is the subject of much discussion in the various communities I participate in, ranging from positive (great investment and resources allocated to help fix a broken US healthcare system) to negative (are we just spending money on technology rather than on improving outcomes and quality?).

One observer put it this way:
this is a bit like watching a train wreck that is too late to stop
and more worryingly:
I don’t think that even a free EMR is attractive enough for most docs right now
One source cited came from information published by the AAFP (now restricted to members) that showed substantial variation in satisfaction with current implementations:
....substantial variance in physician satisfaction with EMRs by product from “if I could get out I for zero cost I would” to “I’m not happy but my practice couldn’t live without it” to some actual satisfaction.....in large practices seldom rose above the “not happy, but …” level.
Current penetration and usage cited is at 8% of hospitals and 17% of physicians, so there is a long way to go. Estimates for the price tag range from $75-100 billion - a large percentage of any "bailout" that may or may not be approved, but a small drop in the ocean of the "$2 trillion a year the industry spends" today.

But it is usability and ubiquitous access that are required:
"Doctors cannot spend hours and hours learning a new system," said Castillo. "It needs to be a ubiquitous, 'anytime, anywhere' solution that has easily accessible data in a simple-to-use Web-based application."
I agree, but what is missing from this discussion is how to get information into these systems. If we had 100% adoption of EMRs today, this would be an enormous mouth to feed with clinical data. It is no use implementing these systems if we don't have the data, and the idea that clinicians will interact with the current technology - no matter how good its screens, feedback, menus, and intuitive interfaces - is just not going to happen.

Providing tools to capture the data naturally is going to be critical to the success of these systems, and there seems no better method than using voice. Our interactions are based on voice, and capturing it as clinical data can feed the data-hungry EMRs. Speech recognition has gone some way toward helping and automating this process, but these older engines output only text, which does not satisfy the EMR's need for structured, encoded, clinically actionable data.

Ensuring that technology does not take over the practice of medicine and replace bedside skills is a major concern, as detailed in a New England Journal of Medicine article covered here, where Dr. Abraham Verghese says:
In short, bedside skills have plummeted in inverse proportion to the available technology. I truly believe that good bedside skills make residents more efficient," Verghese said. Doctors who rely on hands-on skills tend to order tests more judiciously, reducing the number of unnecessary and expensive trips to the radiology department.
To that point, allowing for ready voice capture that generates the data required to make these clinical systems useful is essential, and that is precisely what speech understanding does: free-form narrative converted into structured, meaningful clinical documents that contain the full fine detail from the clinician but also carry encoded, structured data tagged against relevant controlled medical vocabularies including SNOMED, RxNorm, RadLex, LOINC and ICD-9, to name a few. All this can be output in the CDA format for Common Document Types that has been defined and approved through the HL7 balloting process, thanks to the tremendous work being done by the Healthstory Project: one document that delivers multiple outputs for different purposes while retaining complete and detailed clinical information. The open nature and flexibility of the standard allows for ready adoption by multiple stakeholders, quickly creating immediate value for participants by generating a flexible, rich clinical document that provides useful output.
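To make the idea concrete, here is a minimal sketch of the kind of output speech understanding might produce: the clinician's narrative preserved as human-readable text, with a structured entry coded against SNOMED CT alongside it. The XML below is a simplified illustration of a CDA-style fragment, not a schema-valid CDA document, and the element names and code value are examples for illustration only.

```python
import xml.etree.ElementTree as ET

def build_cda_fragment(narrative, code, display):
    """Build a simplified, CDA-style section pairing narrative text
    with a coded entry. Illustrative only - not schema-valid CDA."""
    section = ET.Element("section")
    # Human-readable narrative, as the clinician dictated it
    text = ET.SubElement(section, "text")
    text.text = narrative
    # Machine-readable entry coded against a controlled vocabulary
    entry = ET.SubElement(section, "entry")
    obs = ET.SubElement(entry, "observation")
    ET.SubElement(obs, "code", {
        "code": code,                   # example SNOMED CT concept id
        "codeSystemName": "SNOMED CT",
        "displayName": display,
    })
    return ET.tostring(section, encoding="unicode")

xml_out = build_cda_fragment(
    "Patient reports intermittent chest pain on exertion.",
    "29857009", "Chest pain")
print(xml_out)
```

The point of the sketch is simply that both representations travel together in one file: the `<text>` element keeps the full nuance of the dictation, while the coded entry is what the EMR can actually compute on.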

The conversation on digital health records is going in the right direction and I think it is exciting, but it must include the capture of information. While speech understanding is not a panacea, it is an essential part of the equation of making digital records work.


Friday, December 19, 2008

Speech Recognition no Panacea to Change Work Habit

A study published at RSNA 2008 and reported on by AuntMinnie, SR Technology no panacea for reporting work habit change (registration required), reviewed the implementation of speech recognition technology at University of North Carolina Hospitals in Chapel Hill. This was not a review of the effectiveness of speech recognition overall, since:
It's a well-known fact that implementing speech recognition (SR) technology can revolutionize report turnaround time and dramatically enhance the workflow efficiency of radiology departments.
But the question for this study was "can it improve the work habits of individual radiologists?". Not surprisingly, technology does not change work habits. Radiologists who were slow to report before the implementation of speech were slow to report after it. Installing technology that speeds up the overall process does not change reporting behavior: the rank order of radiologists' turnaround times did not change pre- and post-implementation.

The learning point: using technology to change behavior tends not to be successful. Technology should adapt to individual behavior rather than trying to change it. Providing tools and technology that do not require a change in behavior is more likely to succeed. Behavior has often been refined over time to be optimal for that individual and circumstance; change is not always better or more efficient.


Wednesday, December 17, 2008

Why Doctors Don't Like EMR's

Mr HISTalk is on the money in his latest blog
Doctors, like 99% of people, want to be consumers of information, not creators of it
Doctors want to give great care - that's a universal maxim for the profession, and anything that enables or facilitates it will be successful and will get used. But that's not what has been going on:

The model of forcing doctors to share their thoughts through manual electronic documentation is fatally flawed. There is no industry … none … where someone with the education and time value of a physician is expected to peck on a computer, especially in front of a client who’s only going to get seven minutes of time (I’ve never seen a CIO typing meeting minutes into a PC, yet they’re often the ones beefing about computer-avoiding doctors).
and my personal favorite part of this piece - philosophic jihad:
....trying to force those small business owners to use computers based on some kind of naive philosophic jihad against the inefficiency of paper-based recordkeeping
He is right: "speech recognition" (or better yet the newer and more relevant speech understanding) is ready for prime time.....

Gathering the data should not be the focus - it should be a natural by-product of the interaction, and speech can help achieve this. The real value comes from driving clinical information to support decision making, allowing clinicians to focus on the healthcare process.


Monday, August 4, 2008

Medical Transcription Knowledge Based Workers - Increasing Demand

The work-from-home blog "Undress4Success - Work From Home" posted an interesting article on the medical transcription industry and the increased demand for medical transcriptionists:
.... (Overseas) rates are going up too, particularly in India, because they’ve realized that they can demand higher prices thanks to growing need and scarce availability of experienced MTs
The author is right on target - medical editors are going to be in high demand. They are, and will increasingly become, key knowledge workers in healthcare. As Tom Harnish says in the blog:
...qualified medical transcriptionists (MTs) are in short supply
Good news for those who fear the flattening of the world and the application of technology. Speech recognition will improve productivity by automating the rote task of converting the spoken word into text:

The (speech recognition) technology may increase costs by 15% to 20%, but it can increase output 100% to 200% according to one MTSO owner
But to add even more value to this process, knowledge-based workers will need to do more than just listen to the audio and convert it into text (either by typing or by editing/proofing a draft output from a speech recognition engine). Adding clinical data that is machine readable and semantically interoperable across the clinical systems being implemented in our healthcare system will become a must. That process is mostly manual today, and much information is lost in the avalanche of text-based documents that contain the information only in human-readable form. Knowledge-based workers will need to add data elements and structure to these documents, turning them into data that can be fed into clinical systems.

CDA4CDT provides an ideal common environment, designed to cope flexibly with varied levels of data encoding while still providing the healthcare system with a text-based document that can be printed and used as it is today. But the additional information incorporated into the file allows for semantic interoperability and data exchange at the level EMRs want and need, turning the huge volume of clinical text documents into medical record data inputs that can be shared and exchanged between systems.
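This "one file, two audiences" idea can be sketched in a few lines: the same document yields printable narrative for humans and coded entries for the EMR. The element names below are simplified stand-ins for the real CDA schema, and the codes are illustrative examples, not asserted mappings.

```python
import xml.etree.ElementTree as ET

# A simplified, CDA-style document: element names are illustrative
# stand-ins for the real CDA schema, and the codes are examples only.
DOC = """
<document>
  <section>
    <text>Started lisinopril 10 mg daily for hypertension.</text>
    <entry><code code="386873009" codeSystemName="SNOMED CT"
                 displayName="Lisinopril"/></entry>
    <entry><code code="38341003" codeSystemName="SNOMED CT"
                 displayName="Hypertensive disorder"/></entry>
  </section>
</document>
"""

root = ET.fromstring(DOC)
# Output 1: the human-readable narrative, ready to print or display
narrative = [t.text for t in root.iter("text")]
# Output 2: the machine-readable coded data, ready for the EMR
codes = [(c.get("code"), c.get("displayName")) for c in root.iter("code")]

print(narrative)
print(codes)
```

A consuming system that understands only text can render the `<text>` elements and ignore the rest; a data-driven EMR can pull the coded entries. That is the level of encoding flexibility the paragraph above describes.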

Medical editors can provide this manually, by tagging and encoding documents using the CDA4CDT standard, or by using speech understanding technology. Speech understanding outputs a document that is already tagged and structured with clinical data, merging the role of medical editor with a true knowledge-based function of reviewing and correcting the clinical data embedded in the file and clinical document.

Medical Editors are knowledge based workers and are in short supply......

Wednesday, July 2, 2008

Speech Understanding will Bring More Information to the Doctor

Came across an interesting post by Steven F. Palter, MD from the docinthemachine blog. Specifically the blog he wrote on EMR=Clonewars
He notes that there is a hidden danger in EMRs: the inadvertent cloning of patients.
I don't think it is so much hidden or inadvertent - it's human nature, and doctors are like everyone else: we always look for the path of least resistance. Copying from a previous note, especially one using templates with a series of choices, can be helpful.

But here is what he gets in his practice:
..... is EMR records from other practices .... and the patients look identical.....Instead of all the details of a past treatment cycle it will list drug dose and failure with no detail of WHY it did not work. The diseases all look the same. There is never any detail on the nuances and subtle aspects of that individual’s condition. So when a group uses these records and they review a treatment every single person with the same disease (the “patient clones”) end up looking identical and treated identically. Cookie cutter assembly line medicine.
There's hope: speech understanding, and in particular the use of CDA4CDT documents, which make narrative notes interoperable with electronic medical records - bridging the divide between where we are today:
  • More than 60% of clinical content produced, stored and locked in narrative documents
and where we want to get to
  • Structured encoded information that is semantically interoperable and can be automatically processed and used by computer systems to help apply the best knowledge of healthcare diagnosis and treatments available today
What this means is that, at the most basic level, virtually any clinician can produce a minimal CDA document using the simplest form of the structure, which includes the important uniform metadata that allows all documents to be indexed, searched, and integrated in a meaningful way into the EMR.
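A minimal sketch of why that uniform header metadata matters, assuming a handful of representative fields: even documents whose bodies are pure narrative become findable. The field names and values here are illustrative, not taken from the CDA specification.

```python
# Each document carries the same small set of header metadata,
# even when its body is nothing more than narrative text.
# Field names and values are hypothetical examples.
documents = [
    {"doc_type": "Consultation Note", "patient_id": "P-001",
     "author": "Dr. Smith", "date": "2008-06-15",
     "body": "Narrative consultation note..."},
    {"doc_type": "Discharge Summary", "patient_id": "P-001",
     "author": "Dr. Jones", "date": "2008-06-20",
     "body": "Narrative discharge summary..."},
    {"doc_type": "Consultation Note", "patient_id": "P-002",
     "author": "Dr. Smith", "date": "2008-06-21",
     "body": "Another narrative note..."},
]

def search(docs, **criteria):
    """Return documents whose header metadata matches all criteria."""
    return [d for d in docs
            if all(d.get(k) == v for k, v in criteria.items())]

# Uniform metadata makes the unstructured bodies findable:
notes_for_p001 = search(documents, patient_id="P-001",
                        doc_type="Consultation Note")
print(len(notes_for_p001))
```

No encoding of the body is needed for this first level of value; the shared header alone is enough to index and retrieve the note within an EMR.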

And at the high end, lab systems, pharmacy systems and EMRs can produce richly structured, fully machine-processable CDA documents that remain human-readable as well, which will satisfy Steven's need for:
the nuances and subtle aspects of that individual’s condition
So, as Steve rightly points out, quoting from a 2006 AHIMA study:
....65 percent of chief information officers planned to get it (Speech Recognition) by 2008. It’s being touted as a natural add-on to the electronic medical record, since doctors are used to recording their notes, says Harry Rhodes, director of practice leadership for the American Health Information Management Association.
Voice can help solve the cloning of patients, and both the technology and the standard are available today.

Saturday, June 28, 2008

Healthcare driving speech recognition technology growth

No big surprise here - healthcare is deriving huge benefits from speech recognition, and a new report from Datamonitor just reaffirms it. You can see the press release here:
Healthcare currently represents 85% of the market for PC- and server-based speech recognition technologies.
Good news for the providers of speech recognition and speech understanding:
Datamonitor estimates in its report, Automating and Enhancing Processes through Voice in Desktop and Back Office Environments, that the global market for speech recognition in healthcare is currently worth an estimated $170 million. It projects that between 2008 and 2013 the market will more than double in size.
Which seems very conservative when you consider the current size of the transcription market, estimated at anywhere from $6 to 12 billion. That industry's content is currently pouring documents into Electronic Medical Records (EMRs), filling some 60% or more of the content in these systems today. The problem with documents is that they are really only human readable (it is possible to apply some level of Natural Language Processing (NLP) to them, but that process remains difficult and has no significant market penetration today). What these systems need and are crying out for is machine-readable data, and therein lies the real opportunity for speech, and in particular speech understanding, to deliver clinical data directly into the EMR from the dictation process....
