Prakriteswar Santikary, Ph.D., VP of Engineering and Technology at ConcertAI

Researchers might be skeptical and, with people's lives at stake, that's understandable. Here's how to take advantage of the advances while ensuring safeguards are in place.
As artificial intelligence (AI) and machine learning drive innovation and disruption in clinical trials and precision medicine, questions around the responsible use of AI have moved to the forefront of debates among healthcare stakeholders – patients, providers, regulatory authorities, payers and sponsors. To what extent, for example, can we rely on AI algorithms when it comes to matters of life and death or personal health and well-being? How do we make sure these algorithms are used for their intended purposes? How do we monitor their results?
How do we ensure that AI does not inadvertently discriminate against specific cultures, minorities, or other groups and perpetuate inequalities in clinical trial participation including screening, recruitment, retention and adherence? Are we putting enough safeguards and governance in place for AI-driven decision-making, particularly in areas of automated machine learning? Are humans in the process loop to verify both inputs and outputs of AI predictions?
These and other questions are compelling us, as AI practitioners, to think differently in terms of how we can best advance clinical research and precision medicine using responsible AI that benefits patients, providers, payers, pharmaceutical companies and medical care professionals while avoiding unintended consequences, including bias.
Consequently, clinical researchers in general, and clinical oncology researchers in particular, have been understandably skeptical of using AI-enabled solutions. But even such AI skeptics acknowledge that real-world data (RWD) and clinical evidence generated from RWD contain a treasure trove of information and value that can help cancer patients significantly in terms of advancing precision medicine. Clinical researchers just want to make sure that this value is harnessed responsibly – as it impacts patient lives. They have a professional and moral obligation to be skeptical.
Still, the need to develop new cancer drugs and therapies in a cost-effective and expedited manner is more pressing than ever.
Clinical trials faced hurdles even before the pandemic. Eligibility criteria are complicated and narrow. Patient recruitment is difficult, as are retention and adherence, especially because patient statuses, particularly in oncology trials, can change rapidly. Patient screening, inclusion/exclusion criteria and site selection are equally time-consuming, as is source data verification. Only around five percent of cancer patients get a chance to join a trial. And even a successfully launched trial may fail to prove that a drug or therapy is useful. Covid-19 has only made things more difficult, as many cancer patients have delayed care. This industry is ripe for disruption.
When RCTs fall short, look to RWD
While randomized clinical trials (RCTs) remain the gold standard for clinical research studies that seek to answer narrowly framed questions for selected volunteers and patients in tightly controlled settings, they often fall short of informing the clinical implementation of drugs and therapies in real-world environments. As a result, regulatory bodies, including the U.S. Food and Drug Administration (FDA) and the European Medicines Agency, often rely on post-marketing studies to guide clinical use and measure drug effectiveness across wider populations.
RWD plays a very important role in this regard. It is collected from real-world sources, including electronic health records (EHRs), insurance billing and open claims, medical imaging (CT, MRI), disease and medication registries, patient-reported outcomes, and biometric monitoring devices such as smartphones and smartwatches, sleep trackers, smart glucose monitors and smart blood-pressure monitors. Using these RWD sources, therapies can now be evaluated under real-world conditions across broader patient populations at a much lower cost than is ever possible in conventional RCTs.
That’s why the theme of a recent FDA-Project Data Sphere Symposium was entitled “The Art of the Possible.” It’s why the U.S. government has been incentivizing and streamlining the secure sharing of health data since 2016. More recently, regulators have been steadily addressing how RWD and AI fit responsibly into clinical research. The FDA is now seeking public comments, for example, on its draft guidance for using RWD to evaluate the effectiveness and safety of drugs and curating real-world data in registries to support regulatory decisions.
RWD integration challenges and how AI is coming to the rescue
The primary challenge in working with EHR-derived RWD is that critical patient information is often buried in unstructured documents such as clinical notes, images and pathology reports, making key outcomes of interest difficult to extract and analyze. Inclusion and exclusion criteria such as those based on histology, gene alterations and metastatic status are often found only in those unstructured documents.
Often, the most effective approach to leveraging such unstructured data from EHR and EMR systems for cohort selection is to pair AI-driven technology, primarily natural language processing (NLP), with human review. NLP can read and comprehend doctors' notes and other free text from EHR/EMR systems. Optical technologies can similarly read radiological scans and other images with high accuracy and speed. Modern data processing platforms, built on cloud computing, software-as-a-service (SaaS) and data-as-a-service (DaaS) technologies, enable these innovations at scale, performing data ingestion, curation, enrichment and integration that help bring new life-saving drugs and therapies to market more quickly and cheaply.
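One way to sketch the "machine flags, human confirms" pattern is with a hypothetical keyword rule for metastatic status. Real cohort-selection pipelines use trained clinical NLP models rather than keyword matching; the pattern and field names below are purely illustrative.

```python
import re

# Hypothetical rule for one inclusion criterion: metastatic disease
# mentioned in a clinical note. A production system would use a trained
# clinical NLP model; this sketch only shows the review workflow.
METASTATIC_PATTERN = re.compile(
    r"\b(metasta(?:tic|sis|ses)|stage iv)\b", re.IGNORECASE
)

def flag_note(note: str) -> dict:
    """Return a machine-generated flag plus the evidence a reviewer needs."""
    match = METASTATIC_PATTERN.search(note)
    return {
        "flagged": match is not None,
        "evidence": match.group(0) if match else None,
        "needs_human_review": True,  # every flag is routed to a clinical expert
    }

notes = [
    "Pathology consistent with Stage IV adenocarcinoma.",
    "No evidence of metastasis on follow-up CT.",  # negated mention!
]
for note in notes:
    print(flag_note(note))
```

The second note illustrates exactly why human review matters: a naive matcher flags the negated mention "No evidence of metastasis," and it takes either a negation-aware model or a human expert to catch it.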
Responsible AI and best practices
As AI practitioners, we scrutinize AI algorithms and their behavior very closely. Based on that experience, we have laid out the following four principles for applying AI responsibly in clinical research. These principles, incidentally, apply to many other industries as well.
Data security, privacy, governance, and quality – Patient data must always be kept safe and secure, and processed and enriched in compliance with all applicable privacy and data-protection regulations. Implementing a clear and informed data governance strategy focused on data quality, lineage, auditability and provenance is crucial.
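A minimal sketch of one such safeguard is pseudonymization with a keyed hash, so patient records can still be joined across datasets without exposing direct identifiers. The key name and record fields here are illustrative; a production system would pull the key from a managed key store, never from source code.

```python
import hashlib
import hmac

# Illustrative only: in production this secret lives in a key-management
# service, with access logged for auditability.
PSEUDONYM_KEY = b"replace-with-key-from-your-kms"

def pseudonymize(patient_id: str) -> str:
    """Replace a direct identifier with a keyed, irreversible pseudonym.

    A keyed HMAC (rather than a bare hash) prevents dictionary attacks
    on the small space of medical record numbers.
    """
    return hmac.new(PSEUDONYM_KEY, patient_id.encode(), hashlib.sha256).hexdigest()

record = {"patient_id": "MRN-0012345", "diagnosis": "C34.90"}
safe_record = {**record, "patient_id": pseudonymize(record["patient_id"])}
print(safe_record)
```

Because the mapping is deterministic under one key, the same patient pseudonymizes to the same token everywhere, preserving linkability for analysis while keeping the identifier itself out of the curated dataset.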
Fairness, trust, and transparency – AI-enabled solutions in clinical research must be trained, developed and validated using patient data that is representative of the target group for the intended use. AI is only as objective as the data we feed into it. This approach minimizes bias and ensures fairness with fewer unintended consequences.
AI Training and education – Training and education go a long way in safeguarding proper use of AI in clinical research. It’s vital that every user has a keen understanding of the strengths and limitations of a specific AI-enabled solution and can explain the results.
Humans in the loop – The results derived from AI algorithms must be validated by human experts with domain knowledge in clinical science. AI-enabled clinical research solutions exist to augment and empower people, under appropriate supervision.
Using RWD and responsible AI, powered by modern data platforms, cloud computing, SaaS and DaaS technologies, we now have a unique chance to transform clinical research and bring life-saving drugs and therapies to market faster, more cheaply, and in a safe and effective way. Let's seize this opportunity to drive innovation in clinical research and precision medicine.
About Prakriteswar Santikary
Dr. Santikary is an accomplished technology executive with over 25 years of industry experience in modern data architecture, distributed computing, cloud computing and AI/ML. He has held technology leadership roles across a variety of industries, including pharmaceuticals, clinical trials, financial services and e-commerce. In his current role as VP of engineering and technology at ConcertAI, a leader in enterprise artificial intelligence (AI) and real-world data (RWD) solutions for life science companies and health care providers, Dr. Santikary leads a global engineering team developing data products to transform clinical oncology research and precision medicine.
Dr. Santikary earned his PhD in computer simulation at the Indian Institute of Science, Bangalore, India, and completed post-doctoral research at the University of Michigan, Ann Arbor. He is a regular invited speaker at Chief Data Officer summits at MIT and IBM and at Big Data and AI/ML conferences around the world.