EDC Overview
Depending on the source, EDC systems have existed for over 20 years in a variety of data capture modalities. By having the investigator enter data directly into the clinical database, the send-and-receive steps inherent to paper were eliminated. It was postulated that EDC technologies would yield cleaner data and make the source data more readily apparent. EDC systems began with terminal-based clients, progressed to thick-client applications, and eventually reached the web-based applications that are most common today. Back-end databases ranged from Oracle and Microsoft SQL Server to a variety of homegrown flat files, and even the flat-file structures of SAS datasets. Although a variety of system types and architectures still exist, by last year's review of EDC companies at the Drug Information Association (DIA) annual meeting most companies had moved to a web-based architecture. These web-enabled technologies can support more complex and flexible trial designs.
ePRO Overview
Electronic Patient Reported Outcomes (ePRO) is simply the collection of data directly from patients into a database. The idea is not new; innovators such as Health Hero Network and a variety of handheld diary systems have tried the market over the past 20 years, but adoption of ePRO technologies is still nascent and undergoes much scrutiny. At a recent ePRO conference, an open discussion on ePRO methods was described as a volleyball match and handled as such by the attendees. The development and deployment of ePRO is also still fairly new, slowly replacing paper diaries, mailed forms, and Interactive Voice Response Systems (IVRS). ePRO adoption comes at a time when smartphones, tablets, and the internet are readily available to the majority of the population: internet availability in North America currently stands at over 75% penetration, with cellular phone penetration now over 80% for the same area. With this new technology, study sponsors are pushing the market toward an inevitable dominance of ePRO to save time and money and improve data quality.
Integration of Modular and Disparate Systems
Anyone searching the internet or the available journals will find dozens of pages advocating the integration of EDC and ePRO data and offering sponsors exciting ways to implement these integrations through new tools and a series of seemingly never-ending database manipulations. Although we will not examine any specific integration, it can safely be stated that integrating technologies in a clinical system is not a simple matter. Approaches to integration are the topic of dozens of whitepapers and can be a validation specialist's dream or nightmare. From a validation perspective, integration requires a significant amount of testing, verification, and paperwork.
The general approach of integrating modular or disparate systems can be less complex when planning begins early. A common data naming convention, or at least well-defined structures, creates a series of import maps that can be used to combine the two systems into a single clinical database made available to the statisticians. Some large pharmaceutical companies have created massive data definition libraries to address integration issues, while many companies are beginning to use standardized conventions (e.g., CDISC CDASH).
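The import maps described above can be pictured as simple field-renaming tables applied to each source system's records. The sketch below is illustrative only; the field names and CDASH-style target names are assumptions, not taken from any particular EDC or ePRO product.

```python
# Hypothetical import maps: source-system field name -> common convention.
# The CDASH-style target names (SUBJID, VISITDAT, ...) are illustrative.
EDC_MAP = {"subj_id": "SUBJID", "visit_dt": "VISITDAT", "sys_bp": "SYSBP"}
EPRO_MAP = {"patient": "SUBJID", "entry_date": "VISITDAT", "pain_score": "PAINSC"}

def apply_map(record, field_map):
    """Rename a source record's fields to the common naming convention,
    dropping any field that has no mapping defined."""
    return {std: record[src] for src, std in field_map.items() if src in record}

# One record from each system, mapped into a single combined structure.
edc_row = {"subj_id": "1001", "visit_dt": "2012-03-01", "sys_bp": 128}
epro_row = {"patient": "1001", "entry_date": "2012-03-01", "pain_score": 4}
combined = [apply_map(edc_row, EDC_MAP), apply_map(epro_row, EPRO_MAP)]
```

Once both systems emit records under the same names, they can be loaded into one clinical database; the map itself becomes a controlled document that the validation effort can test against.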
Even with the necessary data mapping complete, the need for proper testing and validation still exists and can become costly if communication and planning are not well documented and well executed. Without proof that mapped data is being passed correctly, errors could be introduced into the clinical database, creating significant risk for the sponsor.
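One way to generate that proof is an automated field-for-field comparison between the source export and what actually landed in the target database. This is a minimal sketch of such a check; the record shapes and the `SUBJID` key are assumptions for illustration.

```python
def verify_transfer(source_rows, loaded_rows, key="SUBJID"):
    """Compare a source export against the loaded target data and
    return a list of discrepancies as (subject, problem) tuples."""
    loaded_by_key = {row[key]: row for row in loaded_rows}
    issues = []
    for src in source_rows:
        dest = loaded_by_key.get(src[key])
        if dest is None:
            issues.append((src[key], "missing from target"))
            continue
        for field, value in src.items():
            if dest.get(field) != value:
                issues.append((src[key], field))
    return issues
```

A run that returns an empty list is one piece of objective evidence for the validation file; any non-empty result pinpoints exactly which subject and field failed to transfer.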
The process of data integration can also add considerable complexity to any study. Often the data from both systems is combined inside a single data warehouse repository with no clean cross-reference to the source data or the source records. Again, unless careful data handling is planned and executed, there are massive risks. At the forefront is the time factor (when were the two exports to the data warehouse made?), but other questions need to be considered in any data integration: What are the common data elements? Which data wins in the event of duplication? Is only clean data exported? How do we handle data that is out of range? How is free-text data handled?
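The "which data wins" question has to be answered by an explicit, pre-agreed rule. The sketch below assumes a "most recent export timestamp wins" rule purely for illustration; the record fields and the rule itself are assumptions, not a recommendation.

```python
from datetime import datetime

def reconcile(records):
    """Keep one record per (subject, field) pair, preferring the record
    with the most recent 'exported' timestamp (illustrative rule only)."""
    winners = {}
    for rec in records:
        key = (rec["SUBJID"], rec["field"])
        if key not in winners or rec["exported"] > winners[key]["exported"]:
            winners[key] = rec
    return winners

# Two exports containing the same data point for the same subject.
records = [
    {"SUBJID": "1001", "field": "SYSBP", "value": 128,
     "exported": datetime(2012, 3, 1, 9, 0)},
    {"SUBJID": "1001", "field": "SYSBP", "value": 130,
     "exported": datetime(2012, 3, 2, 9, 0)},
]
result = reconcile(records)
```

Whatever rule is chosen, it should be documented before the first export is made, since the rule itself becomes part of what the validation effort must prove.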
Even with a solid plan for integration, proving that each integration activity produced the same results can be daunting, particularly given the regulations that may be in effect. A proper validation would consist of proving multiple integrations to ensure the import was consistent, or reviewing documentation from a vendor showing that an integration tool functioned flawlessly. Even if such documentation exists, it may still be necessary to test on the study data at least once during the validation and UAT portion of study startup.
The end result of integration, though potentially accurate, is additional cost as the systems or modules are compared against the data warehouse.
The Single System Approach
With the growing need for post-market studies, and the need for data as rapidly as possible to prove endpoints, publish, and satisfy requirements, new technologies are being released that eliminate the initial need for data integration. These systems represent multiple technologies existing in a single implementation, sharing common definitions and a common data export.