As many as five out of 10 clinical trials fail to make it past phase three of research, even though it costs almost $40m (around 50% of the average trial budget) to reach that stage. And trial complexity continues to grow, with the number of procedures per protocol rising significantly in recent years.
To add to the woes of pharmaceutical and biotechnology companies, medical device manufacturers, and contract research organizations (CROs), regulations governing trial data quality oversight keep getting stricter. For example, the 21 CFR Part 11 rule issued by the U.S. Food and Drug Administration (FDA) requires sponsors and CROs to implement robust controls over electronic records and electronic signatures (ERES).
These controls, ranging from system validation and authorized system access to secure, computer-generated audit trails, are designed to guarantee the integrity, authenticity and confidentiality of clinical data. A major reason for the tougher compliance regime is the recent failure of many submissions during first-cycle review, on account of data discrepancies and high-profile fraud cases.
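To make the audit-trail requirement concrete, here is a minimal sketch of a tamper-evident, time-stamped audit entry in Python. All names and the chained-hash design are illustrative assumptions, not drawn from any particular EDC system or from the regulation's text.

```python
import hashlib
import json
from datetime import datetime, timezone

def make_audit_entry(user, action, record_id, prev_hash):
    """Build a time-stamped audit trail entry.

    Chaining each entry's hash to the previous one makes later
    modification of earlier entries detectable (field names here
    are hypothetical).
    """
    entry = {
        "user": user,
        "action": action,
        "record_id": record_id,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "prev_hash": prev_hash,
    }
    # Hash the canonical JSON form of the entry, then attach the digest.
    payload = json.dumps(entry, sort_keys=True).encode()
    entry["hash"] = hashlib.sha256(payload).hexdigest()
    return entry

# Usage: chain two entries so the second is linked to the first.
e1 = make_audit_entry("jdoe", "CREATE", "SUBJ-001", prev_hash="0" * 64)
e2 = make_audit_entry("asmith", "UPDATE", "SUBJ-001", prev_hash=e1["hash"])
```

Because each digest covers the previous entry's digest, silently altering an old record would break every subsequent link in the chain.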
Clearly, life sciences organizations today face rising pressure from consumers and policy makers to make sure trial data and published findings are accurate and credible, and to protect the rights and privacy of subjects.
Data drives outcomes
For the industry to address these imperatives successfully, access to accurate information is crucial. Provisioning high-quality data helps clinical teams make better-informed, proactive decisions that shorten the trial lifecycle and reduce associated risks.
Besides accuracy, the integrity of data is vital, too. After all, data quality issues may compromise the reliability and validity of study findings, which in turn makes decision making around medicine usage and marketing-related product evaluation trickier.
Moreover, a failure to effectively manage the subject and operational data used during the formulation of new medical approaches can significantly impact the overall design, execution, quality and management of trials.
Sponsors recognize the importance of institutionalizing robust trial data quality. However, many research teams still manually compile crucial trial-related data from disparate sources into reports, which are outdated by the time the clinical operations decision maker reviews them. These disparate systems also represent silos of information, used by a variety of stakeholders to gather, access and manage data in duplicate.
A second notable challenge faced by clinical data managers is the recent explosion in the volume of unstructured data. Trials once relied almost entirely on structured, clinically sourced data that was comparatively easy to process and analyze. Companies are now increasingly leveraging a wide range of unstructured data from multiple real-world sources such as electronic medical records (EMRs), genetic and phenotypic profiles, and connected mHealth devices, and sourcing, aggregating and mining this data effectively has become an arduous task for many organizations.
How, then, can pharma companies, medical device makers and CROs address these challenges and mitigate data quality issues arising from fraud, misconduct, deliberate or inadvertent noncompliance, or significant carelessness?
- Identify resources: To begin with, companies need to locate all relevant trial data and identify the enterprise users across departments who are responsible for that data.
They should then compile a list of the diverse software, hardware and processes deployed for collecting, storing, mining and sharing the data. For the inventory of data management processes and tools to be comprehensive, it must include both paper-based collection methods that rely on manual data entry and electronic data capture (EDC) tools.
Finally, companies should use this information to establish a clinical data management (CDM) team responsible for processing trial data and supporting the collection, cleansing and management of subject or patient data.
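The inventory described above can be sketched as a simple structured catalog. This is a rough illustration only; every source name, department and tool below is hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class DataSource:
    """One entry in the trial data inventory (illustrative schema)."""
    name: str
    department: str       # enterprise users responsible for this data
    capture_method: str   # "EDC" or "paper" (manual data entry)
    tools: list = field(default_factory=list)

# Hypothetical inventory covering both electronic and paper-based capture.
inventory = [
    DataSource("adverse_events", "Pharmacovigilance", "EDC",
               ["edc_system_a"]),
    DataSource("site_visit_logs", "Clinical Operations", "paper",
               ["crf_transcription"]),
]

# Group sources by responsible department to scope the CDM team's remit.
by_department = {}
for src in inventory:
    by_department.setdefault(src.department, []).append(src.name)
```

Even a flat catalog like this makes gaps visible, for example departments whose data is still captured only on paper.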
- Devise data management strategy: Data quality improvement projects must be underpinned by an overarching, organizational data management strategy. Such a strategy should assign ownership for the governance of specific data sets to different users, and also delineate responsibility for approving access to shared data.
By doing so, life sciences firms can fully comply with trial data regulations introduced by government agencies such as the FDA and the Medicines and Healthcare products Regulatory Agency (MHRA). Deploying technologies for fast, error-free electronic export of data is particularly helpful in this regard.
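The ownership and access-approval rules in such a strategy can be sketched as a small governance registry. All data set names, roles and the approval logic here are assumptions for illustration, not a reference to any real governance product.

```python
# Hypothetical registry: each data set has one owner who must approve
# requests before access to shared data is granted.
OWNERS = {
    "subject_demographics": "clinical_data_manager",
    "lab_results": "biostatistics_lead",
}

access_log = []  # auditable record of every approval decision

def request_access(dataset, requester, approver):
    """Grant access only when the named approver owns the data set."""
    approved = OWNERS.get(dataset) == approver
    access_log.append((dataset, requester, approver, approved))
    return approved

# The owner of lab_results can approve a request; anyone else cannot.
ok = request_access("lab_results", "trial_monitor", "biostatistics_lead")
denied = request_access("lab_results", "trial_monitor", "site_coordinator")
```

Logging every decision, approved or not, is what makes the delegation of ownership auditable rather than merely declared.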
- Integrate data silos: Unfortunately, many data management systems located in various functions or sites across the organization are not compatible with each other, rendering data interoperability challenging. One way companies can address this issue is by developing a technology-agnostic platform that supports the myriad applications and systems, as well as the structured and unstructured data, in use today.
By facilitating hassle-free and on-demand aggregation, integration and harmonization of data across different systems, such a platform can empower enterprise users to easily create trial subject registries, clinical data repositories, and so on.
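At the heart of such harmonization is mapping source-specific field names onto one common subject-registry schema. The sketch below assumes two made-up source layouts (an EMR feed and an mHealth feed); all field names are hypothetical.

```python
# Illustrative field mappings from two incompatible source systems
# into one common subject-registry schema.
EMR_MAP = {"pt_id": "subject_id", "dob": "birth_date", "sex": "sex"}
MHEALTH_MAP = {"device_user": "subject_id", "birthDate": "birth_date"}

def harmonize(record, mapping):
    """Rename source-specific fields to the common schema; drop the rest."""
    return {mapping[k]: v for k, v in record.items() if k in mapping}

emr_row = {"pt_id": "S-001", "dob": "1980-04-12", "sex": "F", "site": "12"}
mh_row = {"device_user": "S-001", "birthDate": "1980-04-12", "steps": 9421}

# Records from both systems land in one registry under shared field names.
registry = [harmonize(emr_row, EMR_MAP), harmonize(mh_row, MHEALTH_MAP)]
```

Once both feeds share a `subject_id` key, building the trial subject registries and clinical data repositories mentioned above reduces to joining on that key.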
Clinical trials will only grow more complex as the consumerization of health care gains further momentum. Life sciences companies must aggressively revamp the core business processes and systems behind clinical data management in order to reduce costs, shorten time to market, improve the veracity of results, and conform to onerous regulations. Improving trial data accuracy and integrity is the first step in that direction.