
What’s it like to be a Clinical Research Monitor? A Day-in-the-Life With a Top CRA

Disclaimer: The examples noted in the following article are all firsthand accounts of the author, but for the purpose of this article have been combined into a single site visit. The visit depicted below is an aggregate of many sites and studies and is not necessarily an exact representation. The original version was written entirely on a smartphone while flying from Philadelphia to Puerto Rico.

Day in the Life

As a clinical research monitor, a typical work week starts with me catching an Uber to the Philadelphia airport. I breeze through security with TSA PreCheck and board early with my frequent flyer status. While I am never in a rush to sit on an airplane any longer than necessary, boarding early to avoid fighting for overhead space for my suitcase is an easy tradeoff. It’s a typical week for me, so I will only be in two cities: Miami, then Charlotte. Both are major cities with major airports, so there are no connections to worry about and plenty to do if I happen to have some time to myself.

Spoiler alert, I won’t.

If there are no flight delays, I will get to my hotel at about 6 p.m. This will give me enough time to eat, hit the hotel gym to work off all the fast food I eat while traveling, and prepare for my site visit the next day. I’m preparing a bit more than usual considering I am covering tomorrow’s visit for another CRA. The research site’s original monitor recently quit– a common occurrence in a profession with a 29% turnover rate!1 I look over the report from the previous visit and everything at the site looks in order. The site had an issue in the beginning of the study with documenting the time of consent, but their monitor did some retraining and everything seems to be fine now. I have been on the study for several months, so I am confident the visit will go smoothly. I might even have enough time to eat lunch outside– wishful thinking.

I wake up at 7:30 a.m. and arrive at the site at the appointed time, only to find the site isn’t ready for me. Their previous monitor was usually an hour late, so they are surprised I arrived right at 9 a.m. About a half hour later, the site brings out the two 6-inch-thick binders that make up their investigator site file and the 1-inch binders they use for each of their 20 subjects.

Time to do the real work of a CRA…

On-Site

The previous monitor finished the first 6 subjects and only checked the next 3 subjects for enrollment eligibility, so I start with patient 10. I will have to go through the 3 subjects who were partially monitored, but I prefer to start my day with a fresh subject to familiarize myself with how the site captures their data.

The Case for Standardized Source

The CRO (Contract Research Organization) I work for doesn’t provide source documents for the study, so every site does things a bit differently. The varied source creates inconsistencies in data capture and increases the monitoring burden by forcing the CRA (me) to learn how each site operates. This study is small for a phase III, with only about 50 research sites in the U.S. and Canada conducting visits and capturing data according to their own, differing SOPs. Phase III studies are sophisticated operations and are increasing in complexity,2 so the chances that 50 different sites perfectly capture all the data we need are close to zero. In addition to the complexities of the study itself, these 50 sites were trained by 10 different CRAs on a protocol that has been amended twice before the study even started– we are now focused on capturing just the minimum data we need to have the test product approved.

This is not a criticism of my employer. I have worked for 1 small CRO and 2 of the largest CROs, and they are all essentially the same. Not providing source documentation to sites for consistent data capture is industry standard practice.

This is a criticism of the industry. There are two reasons I have identified for why CROs don’t provide source to sites:
1. Less work for the CRO.
2. Liability– if there is a deficiency in a site’s source data, it is the site’s fault instead of the CRO’s.

Personally, I believe a CRO is hired for their expertise in clinical trials and should be responsible for providing adequate source documentation to ensure the study goes as smoothly as possible by promoting consistency in data capture. Adopting and providing standardized source to sites reduces the workload not only on research sites, but also on the CRO’s own staff of CRAs.

Having to familiarize myself with each site’s method of source data capture not only eats into my limited time on site, but also makes me less likely to recognize trends across my sites, since they all capture the data differently. Noticing trends is a crucial part of clinical trial monitoring and vital to patient safety and good data quality. Because this is my first time at this site, it will be difficult to identify any trends while I acclimate myself to the site’s patient visit binders.

The Case for Technology in Patient Consent

I begin with subject 10’s consent form, which documents the procedures and study visits a patient must adhere to in the study. The consent form lists the possible benefits, side effects, and alternative treatments. It also informs patients that they can withdraw at any point and provides contact information in case they need more information.

The consent form is usually around 20 pages but can vary depending on the complexity of the study. A patient must be consented before any study procedures are performed.3 The consent process is a critical element of any legitimate research study, and ensuring proper consent is one of the main reasons I have a job. Proper consent protects patients by keeping them informed of risks and alternatives. When I monitor the consent process, I check that each page of the most recently approved consent form is present in the patient’s chart and that the signature date matches the patient’s first visit date. Hopefully the site jots down a note with some details about the consent conversation with the patient, or at least uses a checklist that hits the bare minimum points.

I can never truly know by simply checking a signature and a note if the proper consent process happened; there is no way to know if the dates on the signature are accurate. Even if the date is accurate, I have no way of knowing if the consent process happened before any other study procedures that day. As for the note about the consent conversation with the patient, it does not take a monitor long to notice that sites use standard language. I suspect many sites have become much more proficient in documenting a proper consent process than actually performing proper consent.

The previous monitor already identified an issue with this site’s consent process– the site did not include a note detailing the consent discussion with the patient. The site was retrained to include a note for each subject detailing this conversation. The patient I am looking at has the following note:

“Patient seen in office today for possible research study. Patient was given ample time to review the consent form. Patient did not have any questions. Consent signed and copy provided to patient. Consent performed prior to any study procedures.”

Not perfect, but better than a lot I have seen.

I take a quick peek back at patients 1 through 9 to see if a note for each of their consent discussions was added. No surprise, they each have the exact same note posted and dated to the day the previous monitor was last on-site. While these notes may be sufficient from a documentation standpoint, I find it hard to believe that the site was able to remember 9 different conversations spanning over a month. It is even harder to believe that the 10 patients didn’t have any questions about an investigational medicine for treatment of their HIV, especially since the treatment is for patients who were recently diagnosed with HIV and likely to be naive regarding the disease.

I can ask the site to clarify their consent process since I am having trouble believing the accuracy of the notes, but if the site says none of the subjects had any questions, I don’t have any evidence to the contrary. I am forced to accept the industry standards for documenting consent.3

India recently addressed the consent issue by requiring the consent process to be captured on video. Filming consent is much more effective at ensuring the process was followed, since a monitor can easily re-watch the entire process. However, the video consent requirement has been met with resistance. Some doctors in Indian clinical trials argue that being videotaped makes a patient less likely to enroll in a study and hurts clinical trial enrollment.

As an industry skeptic, I believe the push-back from doctors and the decrease in enrollment rates due to video consent have a different source: fraud.

It is much more difficult to fabricate patient data and entire patients when video consent is required. The truth is likely somewhere in between, but ample evidence exists across the globe proving that some patients are fabricated.4, 5 As a monitor I have seen patient fabrication first hand and suspect there have been instances I missed. Industry leaders may argue that a video consent process has potential to unblind patient data or increase the time of monitoring.

To those experts I pose the following questions:
1. Do adaptive and remote monitoring not address the concern that monitoring the full consent process takes too much time?
2. Are you willing to risk patient safety, rights and well-being by not having a complete video consent process in the interest of saving time/money and expediting enrollment?

Currently there are no video consent requirements in the U.S. or for FDA submission, so I am forced to accept paper documentation at face value and move on to the patient’s visit data.

The Case for BYOD

The study I am on-site for today uses an eDiary that is provided to patients to complete some assessments but does not track dosing or side effects. Both dosing compliance and side effects are essential data, so not capturing them as accurately as possible in real time can be problematic.

As I read through patient 10, I notice that at her last visit, she returned almost all of her study medication unused. There are 30 days between each study visit and the patient returned 28 pills. Proper dosing is once a day, so there is an obvious non-compliance in dosing but no way to determine exactly when the patient stopped properly dosing. The site reports that the subject stopped dosing 2 days after her previous visit and that the site was not aware until the patient came in for her most recent visit 30 days later.

If the study recorded patient dosing electronically, the system could have been set up to automatically notify the site of dosing noncompliance so the site could have followed up with the patient in real-time. Non-compliant dosing is particularly dangerous in studies such as HIV, as non-compliance in dosing can cause the patient to develop resistance to the treatment and potentially future treatments as well.8
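To make that concrete, here is a minimal sketch, in Python, of the kind of compliance check an eDiary backend could run against daily dose logs. The function names, threshold, and alert mechanism are hypothetical illustrations, not features of any particular eDiary product.

```python
from datetime import date, timedelta

# Hypothetical sketch only: flag dosing non-compliance from eDiary dose logs.
# The names (missed_doses, notify_site) are illustrative, not from a real system.

MISSED_DOSE_THRESHOLD = 2  # alert after two consecutive missed daily doses


def missed_doses(logged_dose_dates, today):
    """Return the dates in the recent window with no dose logged."""
    window = [today - timedelta(days=i) for i in range(1, MISSED_DOSE_THRESHOLD + 1)]
    return [d for d in window if d not in logged_dose_dates]


def notify_site(subject_id, missed):
    # A real system would email or message the site coordinator here.
    print(f"ALERT: subject {subject_id} missed doses on {', '.join(str(d) for d in missed)}")


# Example: the subject logged her last dose two days after her study visit.
logged = {date(2018, 3, 1), date(2018, 3, 2)}
missed = missed_doses(logged, today=date(2018, 3, 5))
if len(missed) >= MISSED_DOSE_THRESHOLD:
    notify_site("SUBJ-010", missed)
```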

I read on to determine why the patient stopped taking her medicine. At her 30-day visit, the patient reported that 28 days earlier she felt the medicine was making her nauseous. This side effect isn’t uncommon in the study, but it can require some follow-up. In this instance, it will require a lot of follow-up.

The site performs a pregnancy test at each visit and patient 10’s most recent test is positive. Her nausea was not due to the medicine, but caused by her pregnancy. Now the site has a pregnant woman at risk of developing resistance to HIV treatment for both herself and her unborn child. All of this could have been avoided if the eDiary reported dosing and side effects to the site in real-time.

The patient, site, and CRO would have been aware of the pregnancy within 3 days, and the patient could have continued treatment.

At this point, many sites and CROs are familiar with some kind of eCOA (electronic clinical outcome assessment) device, but current solutions present their own challenges. Supplying a large number of sites with adequate diaries in a study with unpredictable enrollment can result in supply shortages. There is an adage in clinical trials that 80% of study enrollment will come from 20% of the sites on the study. With such a discrepancy in enrollment between sites, it can be difficult to forecast accurately enough to ensure an adequate supply. Study supply shortages delay enrollment and greatly increase the costs of the study.6

BYOD (bring your own device) mitigates the problems associated with supplying sites with an eCOA device.

Critics of BYOD will argue that many of the patients in clinical trials are economically disadvantaged and are unlikely to have a smartphone necessary for BYOD. However, data7 suggests that 50% of U.S. adults making less than $30,000 per year still own a smartphone.

Critics of eCOA argue that older patients have difficulties utilizing smart devices, but research shows 46% of U.S. adults 65+ own a smartphone.7

BYOD should further mitigate concerns about patients being unable to correctly capture eCOA data by allowing them to use devices they are already familiar with. Not having to carry two smart devices also improves the chance of patients remembering to complete their assessments as required. BYOD is not without its own challenges. Any BYOD application needs a tested and proven UI (user interface) to ensure a diverse patient population will be able to complete all required assessments. While data suggests most patients do have access to the smartphones required for BYOD, it is crucial not to exclude patients who do not own a personal smartphone. The best solutions incorporate both BYOD and sponsor-supplied diaries to ensure all potential patients can enroll.

While eCOA does not yet have the capability to send out real-time alerts, early adoption of this technology is a step in the right direction. A temporary workaround for real-time alerts could be text-message reminders sent directly to patients, prompting them to take their medication and fill out their diaries (Editor’s note: check out our text message reminder feature!). Patients can respond to these text messages if they are experiencing any side effects, such as nausea, which can then trigger follow-up visits.
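As a rough sketch of that workaround (not a depiction of CRIO’s actual feature), a daily reminder job could send the text and route any side-effect reply back to the site for follow-up. The SMS call below is a placeholder for whatever messaging gateway a study actually uses.

```python
# Rough sketch only: send_sms stands in for a real SMS gateway, and the keyword
# handling is a simplified example of routing side-effect replies to the site.

REMINDER = ("Reminder: please take today's study medication and complete your "
            "diary. Reply NAUSEA or OTHER if you are having any side effects.")


def send_sms(phone_number, message):
    # Placeholder: a real implementation would call an SMS provider here.
    print(f"SMS to {phone_number}: {message}")


def handle_reply(subject_id, reply):
    # Any side-effect keyword creates a follow-up task for the site to review.
    if reply.strip().upper() in {"NAUSEA", "OTHER"}:
        print(f"Follow-up task created for {subject_id}: patient reported '{reply}'")


send_sms("+1-555-0100", REMINDER)
handle_reply("SUBJ-010", "nausea")
```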

The Case for eSource

I will spend the bulk of my day going through every data point the site has collected for each subject. I verify that the information is complete, accurate, makes logical sense, and was properly entered into the EDC (electronic data capture) system. When the EDC is properly set up prior to study start, this process can be as simple as checking that the numbers on the page match what’s in the EDC system.

However, it is seldom this easy. In my experience, sites rarely have a fully functional EDC with good data validation and system queries in place prior to study start. Tight timelines and poor study foresight result in the implementation of deficient study management systems, and the CRA is responsible for working with the site to mitigate the errors that result from any shortfalls. Errors are compounded by the fact that sites often enter data into the EDC several days after patient visits occur. It is not uncommon for a site to miss crucial study data points at the beginning of enrollment.
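For illustration, the kind of entry-time edit check a well-configured EDC can run might look like the sketch below. The field names and rules are hypothetical examples, not this study’s actual validation plan.

```python
from datetime import date

# Illustrative only: simple entry-time edit checks of the kind an EDC can run so
# obvious problems are queried immediately rather than at the next monitoring
# visit. Field names and rules are hypothetical.


def edit_checks(record):
    queries = []
    visit_date = record.get("visit_date")
    if visit_date and visit_date > date.today():
        queries.append("Visit date is in the future")
    if visit_date and visit_date.year != 2018:
        queries.append("Visit date year looks wrong for this study period")
    if record.get("viral_load") is not None and record.get("viral_load_date") is None:
        queries.append("Viral load recorded without a collection date")
    return queries


record = {"visit_date": date(2017, 3, 14), "viral_load": 45000, "viral_load_date": None}
for q in edit_checks(record):
    print("QUERY:", q)
```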

On-site monitoring of early study data is crucial to ensure research sites are capturing all required data. Industry standards tend to require a visit within the first 2 weeks of enrollment, but unpredictable enrollment and a large site load can make it difficult for a CRA to meet this crucial requirement. With delayed EDC entry and required on-site monitoring, it can be anywhere from several days to months before a research site is even aware of a data deficiency, which could potentially affect all of a site’s patients up to that point.

As I start to monitor patient 10’s study charts, I notice that the patient is missing a date for her last HIV viral load. The site has recorded values based on patient-reported data but does not have confirming lab reports with a viral load. While it is not uncommon to work from patient-reported medical history, this study requires lab confirmation prior to enrollment. I take a quick peek back through subjects 1 through 9 and see this happened for 5 patients. The EDC system required a value and the site did not notice the error since they used the patient-reported value. The previous monitor missed it because the original protocol did not specify that sites needed to confirm viral load prior to enrollment. The update in protocol revision 1 clearly specifies the change, but the site was activated on the original protocol and was never retrained by their previous monitor.

Because this error was not caught, I now need to tell the site that over half of their patients were enrolled in error and the site will not be paid for the work they did with those patients.

The news gets worse.

The site was already paid for those patients, so the CRO will now be asking the site for over half of that money back. Protocol enrollment requirements are designed very specifically to protect patient safety: any patient enrolled in error potentially endangers both the patient and the integrity of the study.

The enrollment issue has now endangered 5 patients, provided unusable data, and cost the site and CRO a lot of time and money.

The worst part is that this problem can be easily avoided.

A relatively new solution has arrived on the clinical trials scene: eSource. eSource comes in many forms and there currently isn’t a one-size-fits-all solution. Each study in clinical trials presents unique problems that require unique solutions.

Recently, more complete eSource systems have emerged. New systems seek to eliminate all of the issues caused by the delay in CRA monitoring (and the costs those delays incur for both sites and CROs). The site I am at today does not utilize eSource, so I will be paging through multiple patients’ visit binders all day. Each error I find, from simple mistakes like using the wrong year in a date to larger issues such as missing data, needs to be addressed by the site staff while I am physically on-site. The site staff have a regular workload while I am on-site, a workload that will be continuously interrupted each time I find a new issue that needs to be addressed. This tension often leads to poor relations between a site and its CRA, reducing the CRA’s ability to serve effectively.

I notice that the site has incorrectly completed all the dates on every visit for every patient using 2017. Recording the incorrect year is a common error with no real impact on the data, but it still needs to be corrected. This error means I will be ruining the study coordinator’s lunch by having her correct the date to 2018 on a couple hundred pages. These corrections are a fruitless task that would never be necessary if the site used a good eSource solution. A good eSource system timestamps each data point automatically– the staff don’t even have to enter a date, as the audit log captures all the required information.

Timestamps that include user signatures go a step further. With unique user accounts, every data point is traceable back to its originator. Site staff do not have to waste time signing and dating; instead, they can focus on performing the patient visit as efficiently as possible. Expediting patient visits is critical as the industry moves towards more patient-centric trials.
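A minimal sketch of that idea is below; the data structure is illustrative only, not any specific eSource product’s schema. The point is that the timestamp and the originating user are attached automatically at save time, with no extra work from site staff.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Illustrative only: each saved data point carries a server-generated timestamp
# and the logged-in user, so site staff never hand-write dates or signatures.


@dataclass(frozen=True)
class AuditedDataPoint:
    subject_id: str
    field_name: str
    value: str
    entered_by: str  # unique user account stands in for a wet-ink signature
    entered_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))


dp = AuditedDataPoint("SUBJ-010", "weight_kg", "64.2", entered_by="coordinator.jsmith")
print(dp)  # timestamp and originator captured without any extra data entry
```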

Effective eSource reduces the workload on sites by decreasing visit time and transcription errors, thus freeing up site overhead to take on additional studies.

As I sit at the small desk in the makeshift office I was given, poring through pages of data to ensure the site does not have any transcription errors, it occurs to me that eSource renders this entire process obsolete.

The greatest beneficiary of eSource is the CRO. Too much of a monitor’s time is spent fixing transcription errors that simply do not exist when eSource is properly implemented. eSource enables the data to be pulled directly into the EDC. This dramatically reduces the workload on sites by eliminating data entry and the need for QC/QA of the data entry process. Source data verification can easily account for over 80% of a monitor’s time. Once electronic source eliminates the need for on-site source data verification, the monitor can focus on the larger issues of site enrollment, performance, and patient compliance, which can all be overlooked under a high source verification workload.

Because the site I am at did not utilize eSource, I will have to page through hundreds of papers to ensure nothing is missing or incomplete. As I scramble to ensure I checked the bare minimum before I need to rush off to catch my flight, only to do it all again tomorrow in another city, I am struck with this thought:

I love being a CRA, but the role as it exists today is obsolete.

If the industry adopts completely electronic systems, 95% of the issues I address today won’t exist. Say goodbye to transcription errors; say goodbye to follow-up calls for missing documentation. Most monitoring issues should be eliminated. The future of monitoring will be to focus on educating sites on the latest available systems developed to reduce workload, improve patient safety, and increase patient engagement.

To the sites and CROs that haven’t started to look at eSource, my advice is simply: Start today.

I believe that in 10 years clinical trials will be completely paperless. Complete eSystems will eliminate the existing inefficiencies. Average study length will decrease, and study workload for sites and CROs will drop drastically. Data safety and trends will be tracked in real time using advanced analytics, making investigational trials as safe as possible for our patients. I believe the latest technology will significantly reduce the cost of bringing new treatments to market, and it is my sincerest hope that the savings generated by more efficient clinical trials will be passed on to the people who truly matter in clinical trials: patients.

 

References:

1. Brennan, Zachary. “Survey: CROs See Rise in Employee Turnover Rate, Less Retention Bonuses.” Outsourcing-Pharma.com, William Reed Business Media SAS, 27 Mar. 2013. Web. 19 June 2017.
2. Getz, K. “Rising Clinical Trial Complexity Continues to Vex Drug Developers.” ACRP Wire (www.acrpnet.org), 13 May 2010. Accessed 20 June 2017.
3. United States. FDA. 21 CFR Part 50: Protection of Human Subjects. FDA, n.d. Web. 26 June 2017.
4. Staff, RFA. “Chinese Clinical Trials Data 80 Percent Fabricated: Government.” Radio Free Asia, 27 Sept. 2016. Web. 26 June 2017.
5. Patel, M. (2017). “Misconduct in Clinical Research in India: Perception of Clinical Research Professional in India.” J Clin Res Bioeth 8:303. doi: 10.4172/2155-9627.1000303.
6. Alsumidaie, Moe. “Non-Adherence: A Direct Influence on Clinical Trial Duration and Cost.” Applied Clinical Trials, 24 Apr. 2017. Web. 26 June 2017.
7. Pew Research Center. “Mobile Fact Sheet.” pewinternet.org/fact-sheet/mobile/. Pew Research Center Internet & Technology, Feb. 2018.
8. Smith, R. J. “Adherence to Antiretroviral HIV Drugs: How Many Doses Can You Miss before Resistance Emerges?” Proceedings. Biological Sciences. U.S. National Library of Medicine, 07 Mar. 2006. Web. 26 June 2017.

Author: Takoda Roland (CCRA, CCRP), Founder of Philadelphia Pharmaceutical Research, is a clinical trials futurist. He has experience working in multiple clinical trials positions and is a member of SOCRA (the Society of Clinical Research Associates) and ACRP (the Association of Clinical Research Professionals). His experience has enabled him to provide consulting services to multiple clinical research start-ups.

Editor: Anna Krauss is a Project Manager at Clinical Research IO. She has experience in the health field through her work as a Research Assistant with the MaineGeneral Hospital system, Hospital General de Agudos Bernardino Rivadavia in Argentina, and through her experiences working as an EMT.
