“Right now, the real frontier of big tech isn’t just the next AI model—it’s whether people can still trust these systems when they don’t understand them, and whether someone inside the room is willing to say no when that trust is at risk,” says Modupe Akintan, a Privacy and AI Engineer. It is the kind of remark that lands with unusual force in 2026, when trust has become a scarce resource in a digital economy that depends on ever‑larger data flows and increasingly opaque automated decisions.
From a distance, Modupe’s résumé, including a first‑class engineering degree in Nigeria, a security‑focused master’s from Stanford, privacy roles at major technology firms, and a growing portfolio of policy and standards work, resembles the classic big‑tech success story. Up close, the narrative is more complicated. Her chosen field, risk and privacy in data‑driven systems, is less about affirming that story than about interrogating the infrastructures and incentives that make data‑hungry systems work.
Trust as a Design Constraint
Modupe’s field sits at the intersection of privacy engineering, AI governance, cybersecurity, and technology policy, with a practical focus: turning legal and policy obligations into technical and organizational controls that real systems can live with. That means translating abstract requirements from data‑protection law, AI regulations, and security standards into decisions about what data is collected, how long it is retained, who can access it, and which models are permitted to infer from it.
Industry roadmaps for 2026 emphasize the same convergence. Analysts argue that AI governance, privacy, and security must be treated as a single discipline if organizations hope to navigate tightening enforcement and increasingly complex threats. Pieces on “data privacy in 2025 and beyond” describe a frontier shaped by shifting regulations, AI risks, and the growing importance of privacy‑by‑design approaches, warning that a single lapse can undermine confidence in entire classes of technology. For Modupe, these pressures are not reasons to slow innovation so much as arguments for treating trust as a design constraint from the outset.
From Nigerian Lecture Halls to Global Platforms
Her path began in far more traditional settings. In Nigeria, Modupe completed a Bachelor of Engineering in Computer Engineering at Afe Babalola University, graduating with first‑class honors and recognition as the best‑graduating student in her department, an achievement that drew local media attention and online celebration. Those accounts describe a student balancing heavy academic loads with leadership and editorial responsibilities, foreshadowing the multi‑front roles she would later assume in industry and policy.
A fully funded scholarship took her to Stanford University, where she earned a master’s degree in computer science with a concentration in computer and network security. At the Stanford Empirical Security Research Group, she evaluated risk‑rating methodologies used by third‑party risk management companies, examining how these vendors scored organizations’ security posture and how those scores shaped vendor relationships. The work highlighted how much institutional “trust” in complex ecosystems rests on models and metrics that outsiders find difficult to interrogate. “You realize that entire supply chains are leaning on scores they don’t fully understand,” she has said.
Frontiers of Big Tech, High-Level and High-Stakes
Today, Modupe works as a Privacy and AI Engineer at Amazon, a role she describes only in intentionally high‑level terms: she focuses on privacy, AI governance, and risk management for data‑driven systems, translating regulatory and compliance expectations into practical guidance without disclosing details of internal tools or proprietary architectures. The work unfolds in a context where large cloud and AI platforms are both the infrastructure of the digital economy and the subject of intensifying regulatory and public scrutiny.
At the same time, she contributes to the Cloud Security Alliance’s AI Safety and Data Privacy Engineering Working Group, part of a broader AI Safety Initiative that seeks to develop best practices and standards for responsible AI adoption. CSA publications on trust and security trends emphasize the growing importance of data privacy engineering as a discipline, offering technical and practical methods for protecting sensitive data and ensuring compliance in cloud‑centric, AI‑rich environments. “Trust is becoming less about reassuring statements and more about whether your controls can withstand independent scrutiny,” she notes. “That’s where engineering and policy have to meet.”
Trust, Inference, and the New Harms
One reason trust has become harder to define and easier to lose is the rise of inferred data. Strategic briefings on the 2026 privacy landscape describe a “governance nightmare” in which AI systems derive sensitive attributes from seemingly innocuous inputs, raising questions about consent, fairness, and the scope of regulation. These inferences can shape credit decisions, health recommendations, targeted advertising, and content moderation, often without users realizing what has been inferred or how it is being used.
For a privacy strategist like Modupe, this is where frontiers become visible. “When a system can infer more about you than you ever told it, the old consent model starts to break down,” she says. “Trust at that point depends on whether we’ve designed limits into the system, not just whether we asked you to click ‘I agree’.” Emerging AI regulations, including risk‑based frameworks in Europe and principle‑driven guidance elsewhere, increasingly demand impact assessments, documented safeguards, and demonstrable human oversight for high‑risk use cases. Modupe’s work seeks to turn those expectations into workable requirements for engineers and product teams.
Policy Fellowships and Contested Governance
As Director of Partnerships at the Paragon Policy Fellowship, she helped scope applied technology‑policy projects and foster collaboration with government agencies, academic institutions, and industry partners. She is a Fellow of CHAIRES, an initiative focused on AI, human rights, and emerging technologies, and a member of the Center for AI and Digital Policy’s AI Policy Clinic, which brings academic and practitioner perspectives to bear on global AI regulation.
Her contributions also include conference and standards work: serving on the program committee for the IFIP SEC conference, reviewing for IEEE initiatives such as IATMSI, and participating in the IEEE Tech Forum on Societal Harms, which examines how digital systems can amplify bias, manipulation, and other harms. When invited to judge early‑stage projects at events like Vibe Demo Day, she says she asks a simple question: “What happens to the most vulnerable person who touches this system?” If the answer is unclear or unsatisfying, she considers the design incomplete.
A Critic’s Challenge: Trust or Theater?
Yet as trust language proliferates, some observers warn that “trust” risks becoming a form of theater. “We’re seeing a wave of trust‑branded frameworks and privacy‑preserving technologies marketed as differentiators, but far less willingness to walk away from intrusive business models,” says a policy analyst.
The analyst points to predictions that, despite low baseline trust in AI for high‑risk scenarios, a growing share of consumers will still use generative tools for critical decisions in areas like finance and health. At the same time, privacy‑preserving technologies are being promoted as ways to protect data while still extracting value from it. “These tools are important,” the analyst says. “But if they’re layered on top of systems that still incentivize maximal data capture and opaque inference, they may end up extending the life of practices that should be questioned more fundamentally.”
Redefining Trust From the Inside
Modupe is not blind to this tension. “Trust can absolutely become theater if it’s just a new vocabulary on top of the same decisions,” she says. “The work, at least the way I see it, is to change which decisions feel possible.” In practice, that can mean advocating for shorter retention windows, narrower data use, or more conservative model‑deployment criteria, and accepting that some features may never ship if the risks cannot be responsibly managed.
She measures progress in small shifts: a system designed to avoid collecting sensitive attributes it does not need; a vendor contract that constrains secondary use; a governance process that forces discussion of societal harms before deployment. Those changes are rarely visible to users, but they shape the environment in which trust can either harden or evaporate. “The most meaningful trust work is often invisible,” she says. “A model we decide not to train, a pattern of inference we decide not to allow—those are the frontiers that matter.”
A Reflection on the Frontiers of Big Tech
When asked what it means to be described as a “rising privacy strategist” at the frontiers of big tech, Modupe pushes back on the heroic framing. “I don’t think of it as rising,” she says. “I think of it as joining a long line of people—lawyers, advocates, researchers, engineers—who have been arguing for a more honest relationship between technology and the people it governs.”
For her, redefining trust at those frontiers is less about personal trajectory than about collective responsibility. “If, ten years from now, people can rely on AI‑driven systems without feeling constantly watched, profiled, or manipulated, it won’t be because trust suddenly ‘improved’ on its own,” she reflects. “It will be because enough of us, in enough rooms, decided that risk and privacy were not afterthoughts but the rules of the game—and refused to move the frontier forward unless trust came with it.”
