
AI’s Changing Role in Oncology: Recapping the NCCN Summit

Rachel Radwan


October 3, 2025

On September 9, the National Comprehensive Cancer Network (NCCN) hosted an oncology policy summit to explore the evolving landscape of artificial intelligence (AI) and its role in cancer care. The summit provided an opportunity for patients, providers, payers, patient advocacy organizations, and industry to discuss policy and strategy focused on the current application of AI in practice, its promise for improving care, and ways to mitigate the risks AI poses in the health care sphere. 

Read Part 1 of this blog series for more information about the morning panel and breakout group discussions. 

Navigating New Frontiers in Policy and Technology

The afternoon kicked off with a keynote address from Travis Osterman, DO, MS, FAMIA, FASCO, associate vice president of research informatics at Vanderbilt University Medical Center, who outlined his center's experience with AI, regulatory updates, implementation challenges, and policy opportunities. 

In 2023, Vanderbilt conducted an internal AI inventory, which identified 131 unique projects and systems in use across clinical operations and research. These technologies were sorted into several categories: improving clinical operations, professional development and recruitment, platforms and frameworks (rather than individual applications), and expanding data assets with the intent of building better models. 

Dr. Osterman discussed the most impactful implementations Vanderbilt has made so far, starting with ambient scribe technology, which uses a live microphone to attribute speech to each person in the room and summarize the conversation into a finished visit note. The center has also used AI for surgical planning, specifically augmented learning that lets surgeons in the operating room communicate in real time with pathologists during surgery to avoid tumor re-resections. 
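As a rough sketch of how an ambient scribe pipeline is typically structured (two stages: speaker-attributed transcription, then summarization into a draft note), the Python below uses hypothetical helper functions. The function names and data shapes are illustrative assumptions, not Vanderbilt's implementation.

```python
from dataclasses import dataclass

@dataclass
class Utterance:
    speaker: str  # e.g., "clinician" or "patient"
    text: str

def transcribe_with_diarization(audio_path: str) -> list[Utterance]:
    """Hypothetical helper: convert a live-microphone recording into
    speaker-attributed utterances (speech-to-text plus diarization)."""
    raise NotImplementedError("Plug in an ASR/diarization service here.")

def summarize_to_visit_note(utterances: list[Utterance]) -> str:
    """Hypothetical helper: condense the attributed transcript into a
    draft visit note for clinician review."""
    raise NotImplementedError("Plug in a summarization model here.")

def ambient_scribe(audio_path: str) -> str:
    # 1) Attribute each stretch of speech to the correct person in the room.
    utterances = transcribe_with_diarization(audio_path)
    # 2) Summarize the attributed conversation into a finished visit note.
    return summarize_to_visit_note(utterances)
```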

Another key innovation for Vanderbilt has been optimized infusion scheduling. “This is the biggest win for our nurses,” said Dr. Osterman. “They are no longer being asked to start or disconnect 2 patients at once, because workload is evened out throughout the shift.” Optimizing patient scheduling in advance also helps the center better determine staffing needs 3 to 4 days out, rather than calling in a nurse on the day of a shortage. Finally, radiology critical alerts have played a key role in compensating for the high demand for radiologists. “Rather than using AI to interpret findings, we’re using it to examine incidental or planned findings to shorten wait times for these patients,” Dr. Osterman explained. 
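To make the workload-leveling idea behind optimized infusion scheduling concrete, here is a minimal greedy sketch, not the system Dr. Osterman described: it spreads infusion starts across the shift so that no single time slot (and therefore no single nurse) is asked to handle too many starts at once. The slot granularity and the cap of 2 starts per slot are illustrative assumptions.

```python
from collections import defaultdict

def level_infusion_starts(appointments, slots, max_starts_per_slot=2):
    """Greedy sketch: assign each infusion a start slot so that starts
    (and the disconnects that follow them) are spread across the shift.

    appointments: list of appointment IDs
    slots: ordered list of candidate start times, e.g. ["08:00", "08:30", ...]
    """
    load = defaultdict(int)  # number of starts already placed in each slot
    schedule = {}
    for appt in appointments:
        # Pick the earliest slot that currently has the fewest starts.
        best = min(slots, key=lambda s: (load[s], slots.index(s)))
        if load[best] >= max_starts_per_slot:
            raise ValueError("Not enough slots to keep starts below the cap.")
        schedule[appt] = best
        load[best] += 1
    return schedule

# Example: 6 infusions spread across a morning, at most 2 starts per slot.
print(level_infusion_starts([f"pt{i}" for i in range(6)],
                            ["08:00", "08:30", "09:00", "09:30"]))
```

A production scheduler would also weigh infusion duration, chair availability, and nurse assignments, but the same leveling principle applies.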

Regulatory Considerations and Policy Opportunities 

In the realm of regulatory updates, Dr. Osterman shared the FDA’s recently released guidance on predetermined change control plans (PCCPs) tailored to AI-enabled devices. This guidance offers guardrails that allow AI models to be updated and algorithm drift to be managed without additional FDA approval for each new iteration of the model. According to the guidance, a PCCP must describe the planned device modifications, the modification protocol, and an assessment of their impact. Dr. Osterman emphasized the importance of the FDA signaling confidence that AI models will be updated and improved over time: “This is a big step in helping AI get into clinic.” 

The keynote then shifted to challenges in AI implementation. Dr. Osterman discussed a recent study on endoscopy de-skilling, which found that continuous exposure to AI-assisted colonoscopy decreased providers’ adenoma detection rate when AI was removed. This research cautions against overreliance on AI due to the risk of reducing clinical skills and, ultimately, worsening patient outcomes. 

Dr. Osterman concluded by noting several opportunities for policymakers to safely extend the utility of AI. One example is licensing AI and granting certain models a higher level of practice rights, such as handling medication refills and other administrative tasks, to reduce provider burden. There is also an opportunity to “re-skill” with AI by letting a model monitor procedures and practices, identify targeted knowledge gaps, and provide personalized updates on practice changes. 

The Policy Landscape for AI in Cancer Care 

Following the keynote, the final session of the NCCN Summit was a panel targeting challenges and opportunities in AI policies. Regina Barzilay, PhD, distinguished professor, AI and health, Massachusetts Institute of Technology School of Engineering, pointed out that few emerging AI technologies are translated into standard-of-care terms. “The gap is growing between the capabilities of AI and care guidelines, and technology is improving every day. If we don’t integrate it into the guidelines, the only ones suffering are the patients,” she argued.  

Another point of consideration is the type of evidence needed to verify the efficacy of different AI models. “Does it demonstrate clinical utility? Does it affect medical decision-making? These are the questions we need to be asking,” Dr. Barzilay said. 

To that end, Dr. Osterman underscored the importance of being responsive in creating regulations. “We need to move from regulation to legislation and memorialize some of these decisions, so that implementers can be confident that safeguards won’t change even as administrations change,” he stated. 

Eric J. Gratias, MD, FAAP, national physician executive, medical benefit services, EviCore by Evernorth, added that wording is particularly important to consider in legislation: “How do we make sure we word this in a way that solves the problem we’re getting at without creating new problems?” He also distinguished AI from augmented intelligence: the former autonomously replaces a process performed by a human, whereas the latter enhances that process while maintaining human involvement. Dr. Gratias argued that the goal of AI should be to continuously create more accurate, elegant models that remove barriers that slow down processes, getting the right information to decision-makers as quickly as possible. 

FDA Regulation and Its Impacts 

Shifting to the FDA’s role in AI policy, Warren Kibbe, PhD, FACMI, deputy director of data science and strategy, National Cancer Institute (NCI), acknowledged the agency’s help in determining approval pathways for integrating AI software and devices into research. Dr. Barzilay echoed this appreciation for the work of the FDA in AI regulation, while noting that there is a need to clearly define bias in AI models—namely, what constitutes a model that is not biased? “Right now, testing for bias is a process of negotiation,” she said. “We need to determine how much testing is enough.” 

It is also important to consider AI model drift: the decline of model performance due to changes in data or in the relationship between input and output variables, leading to faulty decision-making. Asserting that FDA regulations must take this decline into account, Dr. Barzilay added that AI models should alert humans when they are not equipped to make accurate predictions so that guardrails are in place. 
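As a simple illustration of that kind of guardrail (a sketch, not a regulatory requirement), the code below compares a model's recent prediction scores against a reference window using the population stability index and flags the model for human review when the shift exceeds a threshold. The metric choice and the 0.2 cutoff are illustrative assumptions, not clinical standards.

```python
import numpy as np

def population_stability_index(reference, recent, bins=10):
    """Compare two score distributions; larger values indicate more drift.
    Scores falling outside the reference range are ignored in this sketch."""
    edges = np.histogram_bin_edges(reference, bins=bins)
    ref_pct = np.histogram(reference, bins=edges)[0] / len(reference)
    new_pct = np.histogram(recent, bins=edges)[0] / len(recent)
    # Avoid division by zero and log(0) for empty bins.
    ref_pct = np.clip(ref_pct, 1e-6, None)
    new_pct = np.clip(new_pct, 1e-6, None)
    return float(np.sum((new_pct - ref_pct) * np.log(new_pct / ref_pct)))

def check_for_drift(reference_scores, recent_scores, alert_threshold=0.2):
    """Flag the model for human review when drift exceeds the threshold."""
    psi = population_stability_index(reference_scores, recent_scores)
    if psi > alert_threshold:
        print(f"ALERT: PSI={psi:.3f} -- route predictions to human review.")
    else:
        print(f"OK: PSI={psi:.3f} -- model operating within expected range.")
    return psi

# Example with simulated scores: the recent window has shifted upward.
rng = np.random.default_rng(0)
check_for_drift(rng.beta(2, 5, 5000), rng.beta(4, 3, 5000))
```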

AI in Academic vs Community Settings 

Dr. Osterman acknowledged the widening gap between the use of AI in NCI-designated cancer centers compared with community cancer centers. “We already struggle to get oncologists to perform all available genomic testing,” he noted. “With AI, there’s so much effort involved in vendor reviews and working with revenue cycle teams. Smaller centers simply lack the resources to make the significant investments of time and effort that AI usage requires.”

Dr. Kibbe agreed, stating that when community cancer centers are given the resources to run a trial, they perform at the same level as academic cancer centers. “As we develop more technologies, we have to think about how we can make them available for everybody,” he said. “Otherwise, we only increase the gap.” This is where legislation plays a key role in deploying standard-of-care technology across the US. 

Looking to the Future 

In closing, the panelists shared what they would like to see AI accomplish in the next 5 years: 

  • A more sustainable workforce 
  • Increased adherence to screening guidelines 
  • Increased patient empowerment and engagement via patient-reported outcomes 
  • More personalized screening and treatment. 

While AI poses a unique set of risks and challenges to overcome, its opportunities to personalize and improve the patient experience are boundless. It is imperative, however, that providers and patient advocates remain at the forefront of policymaking conversations, so that patient safety and efficacy inform future regulations and legislation.  


