Jul 12, 2023
The Cigna Group’s approach to ethical AI practices
How artificial intelligence leverages data to improve patient care

Artificial intelligence has the potential to dramatically improve the experience for all those involved in the health care journey.

From clinical predictions to understanding health care utilization and trends, The Cigna Group has been using AI for a number of years. Our use of AI falls primarily into these three areas:

  1. Data and insights: AI allows us to better leverage our data to improve the care experience and health outcomes of those we serve.
  2. Empowering providers: We are leveraging AI to help reduce administrative burden, allowing providers to focus more on patients.
  3. Efficient operations: We are also using AI to transform the way we work, making the health care system as a whole more efficient.

One example of our use of AI is identifying certain cancer diagnoses earlier. Our breast cancer identification model helps identify customers on average 27 days earlier, and our lung cancer model 22 days earlier, bringing identification closer to the initial diagnosis.1 This personalized, proactive support empowers patients to make more informed care decisions that result in better health outcomes and greater cost savings for both customers and their health plans.

“AI gives us the power to improve the health care system and patient experiences, as well as patient outcomes,” said Katya Andresen, chief digital and analytics officer at The Cigna Group. “Just as there are tremendous opportunities with AI, there are also risks.” This is particularly true today, she added, given the advent of generative AI and how accessible it has become.

Operationalizing ethical AI

At The Cigna Group, we are committed to ethical AI practices. Our AI Center of Enablement assesses and governs guardrails, systemic controls, and processes, providing oversight to ensure the responsible use of AI. This cross-functional group brings together individuals from across technology, privacy, security, legal, compliance, marketing, and more to assess new AI use cases against our AI ethics principles:

Transparency: We must be able to identify, explain, and share how we are using artificial intelligence, and ensure there is collective understanding of our use of AI.

Accountability: We must ensure that we are using AI to benefit our stakeholders (customers, clients, providers, and our employees) in a meaningful way. We hold ourselves accountable for the thoughtful use of AI across our business.

Safety: We must guarantee our AI solutions are fair, compliant, safe, secure, auditable, and human-centered.

“Our AI ethics principles are a set of guardrails we’ve set up, which help ensure ethical use of AI in our organization,” Andresen said. “Our principles — which are transparency, accountability, and safety — help us ensure that we are using AI responsibly and in a way that upholds our mission to improve the health and vitality of all those we serve.”

These guardrails also ensure we are incorporating useful, meaningful human interactions into the system.

“Above all else, a human must be in the loop,” said Andy Fanning, vice president of intelligent automation, AI enablement, and business transformation at The Cigna Group. “Our AI solutions must be used to augment, never replace, the human experience — allowing experts to spend more time in the areas where they can apply their expertise.”


1. Based on the analysis of a pilot of model performance conducted on a population of breast cancer customers identified for oncology case management from April 2021 to June 2021 using a predictive model (experimental group) vs. a control (standard identification triggers). Based on the analysis of a proof of concept of model performance conducted on a population of lung cancer customers identified for oncology case management in March 2021 using a predictive model (experimental group) vs. a control (standard identification triggers).