Data Innovators: Mollie Ullman-Cullere, co-chair of Health Level 7's Clinical Genomics program.

Published on August 30th, 2013 | by Travis Korte


5 Questions for Genomic Data Standards Expert Mollie Ullman-Cullere

With the costs of genomic data analysis plummeting, standards setting organizations like Health Level 7 (HL7) have a key role to play in ensuring that aggregate knowledge can be gleaned from the latest genomic research. Biomedical informaticist Mollie Ullman-Cullere, co-chair of HL7’s Clinical Genomics program, spoke with the Center for Data Innovation about the state of genomic data and what needs to be done at the federal and state levels to get the most utility out of ongoing standardization efforts.

The interview below has been lightly edited for conciseness.

Travis Korte: In your opinion, what is the current state of genomic data standards for clinical and translational medicine?

Mollie Ullman-Cullere: Currently, multiple data standards are in use, depending on the people involved, their backgrounds, and their tools. Fortunately, there is a federally mediated workgroup with broad stakeholder participation focused on developing a clinical-grade VCF/GVF format [Variant Call Format and Genome Variant Format, standards for storing data on genetic variation]. Although the workgroup focuses on standards used within one portion of the total testing process, its participants include many people working on standards and public infrastructure for the extended genomic medicine workflow, and these collaborative conversations are helping to bridge gaps in those other areas.
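To give a sense of what these formats standardize, a VCF data line encodes one variant as tab-separated fields. The sketch below (the example record and helper function are illustrative, not drawn from HL7 or workgroup materials) shows the fixed fields a minimal parser would recover:

```python
# Minimal sketch of reading one VCF data line. Field names follow the
# VCF specification; the example record itself is invented.
def parse_vcf_line(line: str) -> dict:
    """Split a tab-separated VCF record into its fixed fields."""
    fields = line.rstrip("\n").split("\t")
    chrom, pos, var_id, ref, alt, qual, filt, info = fields[:8]
    return {
        "chrom": chrom,         # chromosome
        "pos": int(pos),        # 1-based position
        "id": var_id,           # variant identifier (e.g., a dbSNP rsID)
        "ref": ref,             # reference allele
        "alt": alt.split(","),  # alternate allele(s)
        "qual": qual,           # quality score
        "filter": filt,         # filter status
        "info": dict(kv.split("=", 1) if "=" in kv else (kv, True)
                     for kv in info.split(";")),
    }

record = parse_vcf_line("7\t140453136\trs113488022\tA\tT\t60\tPASS\tGENE=BRAF")
```

Because every producing and consuming system agrees on this layout, the same record can flow from a sequencing pipeline into reporting systems and EHRs without ad hoc translation, which is the point of a clinical-grade profile of the format.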

TK: What are some of the problems this standardization effort is trying to solve?

MU: Lack of standardization increases the likelihood that findings will be misinterpreted, makes it difficult to associate findings with medical knowledge, and prevents advanced EHR functions (e.g., clinical decision support and quality initiatives). Without a standard representation of genomic data, we can't even accurately collect statistics on mutation frequency within specific patient populations (e.g., mutation x is found in 90% of patients with prognosis y).
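The frequency statistic mentioned above becomes a trivial aggregation once every record names a mutation the same way. A hypothetical sketch, with invented patient records and an HGVS-style naming convention assumed for illustration:

```python
from collections import Counter

# Hypothetical standardized patient records: each names the mutation
# with the same identifier, so records can be aggregated directly.
patients = [
    {"mutation": "BRAF:c.1799T>A", "prognosis": "y"},
    {"mutation": "BRAF:c.1799T>A", "prognosis": "y"},
    {"mutation": "KRAS:c.35G>A",   "prognosis": "y"},
]

# Share of patients with prognosis y carrying each mutation.
cohort = [p for p in patients if p["prognosis"] == "y"]
freq = Counter(p["mutation"] for p in cohort)
rates = {m: n / len(cohort) for m, n in freq.items()}
```

If the same mutation instead appeared under several ad hoc spellings across source systems, the counts would fragment and the statistic would be silently wrong, which is exactly the failure mode standardization prevents.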

TK: What sort of end state do you envision in the medical community once these standards have been settled?

MU: Standardization of genomic data within clinical and translational medicine will enable:

1) Diagnostic and patient care teams (including the patient), along with public health, quality/performance, and research initiatives, to know which variants/mutations were identified in the patient, whether inherited or found within a cancer or infectious disease;

2) Association of these specific variants with medical knowledge, both current and emerging as it becomes available;

3) Greater efficiency, reduced costs, and prevention of medical error through full integration of genomics into the EHR clinical workflow; and

4) Genomic-based healthcare reform.

TK: Can you list some applications you foresee in the future that will only be possible through genomic data standardization?

MU: High-throughput, broad use of genetics/genomics in patient care will only be economically feasible with genomic data standardization. Currently, significant resources are expended to transform and integrate data from even a couple of disparate systems, and our volume and flow of genomic data is still very, very low. This isn't scalable. For instance, a common clinical workflow for laboratory data involves instruments, analysis algorithms, reporting systems, EHRs, clinical knowledge resources, clinical and research data marts, specialized systems for the care of specific patient populations, public health reporting, clinical decision support, problem lists, allergy lists, drug and follow-up procedure order-entry systems, and so on. Continual manual transcription and translation of this data is not economically feasible and would significantly increase the risk of medical error.

TK: Are genomic data standards making it into federal legislation at the level you’d like to see?

MU: The Health Information Technology for Economic and Clinical Health (HITECH) Act and the Genetic Information Nondiscrimination Act (GINA) laid a critical foundation and enabling technology for genomic medicine. Today, many states have genetic privacy laws enacted prior to GINA, at a time when genetic testing was primarily performed on patients from higher socioeconomic classes. Across the nation, these genetic privacy laws vary greatly, and the impact they may have on patient care and EHR requirements is not being discussed. If clinically relevant genetic information cannot be shared across the clinical care team, continuity of care is disrupted over the patient's lifetime.

Yes, this information may be shared with patient consent; however, do EHRs support consent-based viewing of genetic/genomic test results? If the test was initially ordered by a clinician the patient does not have an ongoing relationship with, who will help the patient make informed decisions about what information to disclose, and when? Will genetic exceptionalism, which places the burden of genetic healthcare data management on the patient, lead to greater healthcare disparity and unnecessary repeat testing? Can clinical decision support rules be triggered if the clinical genetic findings are private and the reason for a guideline cannot be provided to the treating physician? Could the burden of adhering to these laws prevent genomic-based healthcare reform? For these reasons, federal and state legislatures need to reevaluate genetic privacy laws and their impact on public health, and the Office of the National Coordinator for Health Information Technology (ONC) needs to define universal requirements for managing clinical genomic data in the EHR.



About the Author

Travis Korte is a research analyst at the Center for Data Innovation specializing in data science applications and open data. He has a background in journalism, computer science and statistics. Prior to joining the Center for Data Innovation, he launched the Science vertical of The Huffington Post and served as its Associate Editor, covering a wide range of science and technology topics. He has worked on data science projects with HuffPost and other organizations. Before this, he graduated with highest honors from the University of California, Berkeley, having studied critical theory and completed coursework in computer science and economics. His research interests are in computational social science and using data to engage with complex social systems. You can follow him on Twitter @traviskorte.



