Davos, Agentic AI, and the Trust Problem We Keep Avoiding
I returned from Davos 2026, where, as an AI keynote speaker, AI ethicist, and founder, I spoke on Agentic AI, women’s health, and trust. And if I’m honest, the most important thing I heard was not a technical breakthrough or a new model release. It was a recurring human question that kept showing up in different accents, industries, and agendas:
Who do we trust now?
Not “do we trust AI?” That framing is already too narrow.
Davos is a concentrated mirror of the world’s current operating system: institutions under scrutiny, leaders under pressure, citizens exhausted by misinformation, and technology accelerating faster than governance. In that environment, “trust” stops being a soft theme and becomes a hard constraint. Even the World Economic Forum’s own analysis of AI’s economic upside is explicit that the value is gated by whether we can build verifiable trust, with traceability, transparency, and accountability. (World Economic Forum)
And it was not only AI leaders saying this. In Davos remarks covered widely, BlackRock CEO Larry Fink made a blunt point: capitalism has not spread wealth equitably, and AI might not either. He also acknowledged something many people feel: Davos itself has lost public trust and needs to earn legitimacy again. (Business Insider)
That is the backdrop for Agentic AI.
Why Agentic AI Changes the Trust Stakes
Most organizations have grown comfortable with a certain type of AI: models that recommend, predict, summarize, and score. They can be wrong, biased, or messy, but the output is still “advice.” A human can ignore it, debate it, override it.
Agentic AI is different.
Agents do not just generate an answer. They can pursue an outcome. They can plan steps, call tools, retrieve information, trigger workflows, and initiate actions. In other words, agency moves from the user to the system.
That is exhilarating. It is also where trust stops being a slogan and becomes infrastructure.
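To make “trust as infrastructure” concrete, here is a minimal sketch of one guardrail pattern that Davos conversations kept describing in the abstract: an action gate that lets an agent execute low-risk steps on its own, routes consequential ones to a human, and logs every decision for audit. Every name here (ProposedAction, ActionGate, risk_score, human_review) is my own illustrative assumption, not any particular agent framework’s API.

```python
# A minimal sketch of an agent action gate. Hypothetical names throughout;
# this is an illustration of the pattern, not a specific framework's API.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ProposedAction:
    name: str          # e.g. "send_email", "schedule_appointment"
    payload: dict      # the parameters the agent wants to act with
    risk_score: float  # 0.0 (harmless) to 1.0 (high consequence), set by policy

def human_review(action: ProposedAction) -> bool:
    # Stand-in for a real review step (a UI, a queue); here we deny by default.
    print(f"Human approval required for: {action.name}")
    return False

@dataclass
class ActionGate:
    approval_threshold: float = 0.5        # at or above this, a human must approve
    audit_log: list = field(default_factory=list)

    def execute(self, action: ProposedAction, approve_fn=human_review) -> bool:
        """Decide whether the action may run, and log every decision."""
        needs_human = action.risk_score >= self.approval_threshold
        approved = approve_fn(action) if needs_human else True
        self.audit_log.append({
            "time": datetime.now(timezone.utc).isoformat(),
            "action": action.name,
            "risk": action.risk_score,
            "needed_human": needs_human,
            "approved": approved,
        })
        return approved  # the caller performs the action only on True

# Usage: a low-risk lookup passes through; a high-risk action stops for review.
gate = ActionGate()
gate.execute(ProposedAction("retrieve_guidelines", {}, 0.1))
gate.execute(ProposedAction("schedule_procedure", {"patient_id": "…"}, 0.9))
```

The design choice that matters is not the threshold itself but that approval and auditability are enforced in code, where they can be tested, rather than promised in a policy document.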
At Davos, I kept hearing versions of the same concern: we are moving into an era where the question is not only “Is the model accurate?” but “Is the system safe to act in the world?”
Legal and policy voices at the forum put it plainly: the next era’s real challenge is trust and alignment with human values, not simply tighter control. (The Economic Times)
As an AI ethicist and keynote speaker, that message lands with me in a very practical way: trust is now a product requirement, a governance requirement, and a leadership requirement. It also has a data requirement, which is where my Davos message on women’s health becomes urgent.
The Photo With Yann LeCun and What It Represents
I heard Yann LeCun, former Chief AI Scientist at Meta, speak. Whether or not you agree with his views on every frontier question, he embodies a core trust debate that is shaping how AI evolves: openness versus opacity.
LeCun has repeatedly argued for openness as a counterweight to concentrated power and as a mechanism for scrutiny, iteration, and shared progress. (Six Five Media)
That matters because trust is not only about whether a model can do something impressive. It is also about whether the ecosystem around it allows for independent evaluation, contestability, and accountability. When the systems steering decisions are closed, trust becomes a matter of belief. When they are open to meaningful review, trust becomes something you can earn.
That tension is alive in the Agentic AI conversation. The more capable agents become, the more stakeholders will demand to know: Where did that decision come from, and who is responsible for its downstream impact?
The Conversations I Facilitated and the Subtext That Could Not Be Ignored

Across the conversations I facilitated, one subtext could not be ignored: trust is becoming the most strategic asset in AI.
In consumer contexts especially, trust is fragile. If AI touches identity, appearance, health, or personal data, people don’t just evaluate outcomes. They evaluate intent. They evaluate respect. They evaluate whether the system is exploiting them or serving them.
This is where I think leaders get stuck. They want to move fast, because the market is moving fast. But they also know that one trust failure can erase months of progress. Agents make that risk nonlinear, because action compounds consequences.
So yes, Davos talked about trust as a concept. But underneath the concept is a blunt operational reality: without trust, adoption stalls, regulation hardens, and social backlash grows.
My Keynote Message: Women’s Health Data Is Not “Good Enough” Data
I gave a brief keynote on data disparities in women’s health, and why we cannot rely on existing data to build accurate models.
Let’s name the problem without euphemisms: women are missing from the data that shapes medicine.
The gender health data gap is not a niche issue. It is structural. One peer-reviewed analysis describes two major patterns: missing or incomplete evidence for conditions affecting women due to underfunding and lack of inclusion, and the interpretation of existing evidence using men’s symptoms as the default “textbook” standard. (PMC)
And it is not just historical. Nature has reported on continuing underrepresentation, highlighting how few trials actively recruit women in some contexts, and how glaring the gap becomes for pregnant and lactating women. (Nature)
Now put that into an AI pipeline.
If the training data underrepresents women, models can appear “accurate” while systematically failing women. If the labels and clinical baselines were built on male defaults, models can be “validated” against biased ground truth. If the datasets are outdated, models can amplify yesterday’s assumptions about today’s bodies.
This is why I used the phrase “fresh data.” Not because novelty is trendy, but because stale, incomplete, and biased data produces confident systems that are wrong in predictable directions.
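To show how those confident, predictably wrong systems hide behind a headline metric, here is a minimal sketch with invented numbers (synthetic, not real clinical data): a model evaluated on a cohort that is 80 percent men can report a reassuring aggregate accuracy while performing far worse for women.

```python
# Illustrative sketch: synthetic labels and predictions, invented to show
# the arithmetic of aggregate vs. subgroup accuracy. Not real data.
def accuracy(labels, preds):
    correct = sum(l == p for l, p in zip(labels, preds))
    return correct / len(labels)

# A skewed cohort of 100: 80 men, 20 women, mirroring underrepresentation.
men_labels,   men_preds   = [1] * 80, [1] * 76 + [0] * 4   # 76/80 correct
women_labels, women_preds = [1] * 20, [1] * 12 + [0] * 8   # 12/20 correct

overall_labels = men_labels + women_labels
overall_preds  = men_preds + women_preds

print(f"overall accuracy: {accuracy(overall_labels, overall_preds):.0%}")  # 88%
print(f"men:              {accuracy(men_labels, men_preds):.0%}")          # 95%
print(f"women:            {accuracy(women_labels, women_preds):.0%}")      # 60%
```

The remedy is as unglamorous as it is necessary: report subgroup metrics alongside the aggregate, because the aggregate is exactly where this bias hides.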
The World Economic Forum has underscored the scale of the women’s health research gap and the economic and wellbeing cost of leaving it unresolved. (World Economic Forum)
For Agentic AI in healthcare, this becomes existential. When an agent can recommend, schedule, triage, route, or prioritize care, data gaps turn into decision gaps. And decision gaps become harm.
If we are serious about trustworthy AI, we have to be serious about the dataset’s omissions, not only the model’s outputs.
The Unstoppable Women in Web3 and AI Breakfast: Collaboration as a Trust Strategy

I spoke about something I feel deeply: we will not make meaningful progress in women’s health by competing with each other.
Competition can drive excellence, yes. But there is a kind of competition that comes from scarcity, the sense that there is not enough funding, attention, or oxygen to go around. That mindset does not just slow progress, it fragments it.
Women’s health needs collaboration because the problem is too distributed to solve in silos:
- Data is fragmented across systems that do not talk to each other.
- Women’s experiences are diverse across age, geography, race, income, and environment.
- Symptoms can present differently and are too often dismissed as “normal” or “stress.”
- Funding patterns have historically favored other priorities.
So if we keep competing for the same narrow lanes, we keep reinforcing the same narrow datasets.
Collaboration is not a feel-good value. It is a technical requirement for better models.
Because better models require better data. Better data requires coordinated collection, shared definitions, thoughtful privacy approaches, and inclusion across cohorts that have been historically invisible.
And that requires women working together.
My Opinion: Trust in AI Will Be Won or Lost on What We Choose to Measure
Here is my take coming home from Davos:
We are having a trust conversation that is still too model-centric.
We debate hallucinations, alignment, explainability, and safeguards, and those are real issues. But if we do not talk about the upstream truth, the truth that determines what the model can even learn, we will keep building systems that sound authoritative and behave inequitably.
Trust in AI models is built on four uncomfortable questions leaders need to face:
- Whose reality is represented in the data?
- Who decides what “normal” looks like?
- Who gets the benefit of automation, and who gets the risk?
- What happens when the system is wrong, and who pays the price?
Agentic AI intensifies every one of these.
Because when AI can act, errors do not stay in the chat window. They move into workflows, care pathways, financial decisions, hiring funnels, and real lives.
So when I say “trust,” I do not mean a brand statement. I mean an evidence-based confidence that the system is grounded in reality, accountable in practice, and built with the populations it will serve in mind.
What I Suggest Leaders Do Next
If you are deploying Agentic AI, especially in health or wellness, here is the opinionated advice I am carrying from Davos:
- Treat data gaps as a safety issue, not a diversity issue.
- Invest in women’s health datasets with the same seriousness you invest in model performance.
- Build coalitions. If competitors can align on safety standards and shared measurement, women’s health deserves the same approach.
- Stop outsourcing trust to PR. Trust is earned through design choices and measurable behavior.
Davos reminded me that trust is the currency of the next decade. Not because leaders want it to be, but because the public is demanding it, regulators are responding to it, and markets will reward the companies that can prove it.
And in women’s health, trust will not come from louder claims. It will come from better data, better collaboration, and better outcomes for the people who have been treated as an afterthought for far too long.
I invite you to participate in this discussion by sharing this blog on your social media and tagging me.