In the fast-moving world of health IT, there can be too much of a good thing, says John Mattison, chief medical information officer (CMIO) at Kaiser Permanente. Speaking recently at a health-tech conference, Mattison touted the potential for big data to improve patient outcomes and population health, while at the same time warning that without proper governance models, interoperability standards and developer platforms, the flood of medical information being collected and stored could become unmanageable.
“We have a lot of people who are traditional data scientists,” Mattison says, who are “freaked out about big data.”
“When I say ‘freaked out,’ that’s an understatement,” he adds.
That’s because data scientists know all too well the pitfalls that can emerge when the source of a dataset is uncertain, or, when multiple sets are pooled, the challenges of managing the identities of the individuals behind the numbers.
“You can get into a lot of trouble not paying attention to the details, the metadata,” Mattison says.
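The risk Mattison describes is concrete: pool two datasets whose metadata disagree, and the numbers can silently mean different things. As an illustration only (not Kaiser's actual tooling, and with made-up dataset names and a single `units` field standing in for richer provenance metadata), a minimal sketch of pooling that checks metadata first might look like this:

```python
from dataclasses import dataclass

@dataclass
class Dataset:
    """A dataset tagged with provenance metadata, per Mattison's advice."""
    name: str
    records: list
    metadata: dict  # e.g. source system, units, collection method

def pool(datasets, key):
    """Pool records, but refuse to merge when a metadata key conflicts.

    Silently combining, say, glucose readings recorded in mg/dL with
    readings recorded in mmol/L is exactly the kind of error that
    untracked metadata invites.
    """
    values = {ds.metadata.get(key) for ds in datasets}
    if len(values) > 1:
        raise ValueError(
            f"metadata conflict on {key!r}: {sorted(map(str, values))}")
    pooled = []
    for ds in datasets:
        pooled.extend(ds.records)
    return pooled

a = Dataset("clinic_a", [5.4, 6.1], {"units": "mmol/L"})
b = Dataset("clinic_b", [98, 110], {"units": "mg/dL"})
c = Dataset("clinic_c", [5.0], {"units": "mmol/L"})

print(pool([a, c], "units"))  # same units: safe to pool
try:
    pool([a, b], "units")     # conflicting units: flagged, not merged
except ValueError as e:
    print(e)
```

The point of the sketch is the check, not the data model: the pooling step consults the metadata before touching the records, so a mismatch surfaces as an error rather than as a subtly wrong analysis downstream.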
And with the proliferation of electronic health records (EHR) across the landscape of payers and care providers, the production of health data has been soaring. But how to make sense of all that data without getting buried by it?
Mattison cites a number of recent studies analyzing the development and use of EHRs and other data-driven health initiatives. If there is a single lesson to be learned from all of those reports, he says, it's that data alone isn't sufficient to deliver on health IT's promise of better patient outcomes. Instead, much like the developer platform reshaped the mobile world, health IT demands "open APIs so we can begin to tie together the ecosystem," Mattison contends.
Interoperability has long been a central barrier to the expansion of EHRs across the industry. But that’s really only scratching the surface. With an ever-growing array of wearable devices and applications that can capture real-time data about any number of health issues, the utility of those processes hinges on the usability of the information they collect.
Mattison argues for a more sophisticated approach to the creation and management of data than many healthcare organizations have embraced in the past, one that relies heavily on metadata to help break down siloes between information assets that come from a disparate array of sources.
“It is a radical departure from the data warehousing models that we’re all used to using in the past,” he says.
So who is going to serve as the custodian of big data in a healthcare organization? Many enterprises have been creating new senior roles bearing titles like chief analytics officer. But to achieve the type of actionable insights health IT evangelists envision, is it enough simply to turn the information assets over to an army of data scientists?
“Not really,” Mattison says. “We need to have subject matter expertise.”
Ideally, then, it would be a team effort involving tech-savvy data experts working alongside the practitioners who had a hand in creating the datasets. And each pool of health data would have a subject-matter expert who could serve as a point person to help researchers, developers and others interpret that information and use it appropriately.
“What I propose is there should be a data concierge,” Mattison says, describing someone who knows the context of the datasets and their origins, and can serve as a resource “to prevent false conclusions from being drawn.”
"When you pool the data it's really easy to misinterpret the data you're not familiar with," he says. "You really need to know where the data has been from source generation."
Source: Associated Press