The Invisible Builder Inspiring Deeper Conversations Around Global Poverty Issues

edited by Entrepreneur UK | May 11, 2026
Shekhar Natarajan

An AI ethicist is encouraging a broader conversation about how poverty is understood — and what questions the field may still need to ask.

Ask Shekhar Natarajan to define poverty, and he does not begin where audiences expect him to.

He does not begin with the streetlight in South Central India, the one he studied under as a boy because his family had no electricity at home. He does not begin with the thirty rupees his mother received the day she pawned her wedding ring to pay his school fees, or with the thirty-four dollars he carried into the United States as a young man. The biographical arc travels well. He uses it when the room asks for it.

It is not, he says, what poverty actually means.

“Poverty is being completely invisible,” Natarajan says, returning to a definition he has been refining for some time. Not unseen in the casual sense, he clarifies — overlooked at a meeting, passed over for a role — but unseen structurally. Invisible to the systems that decide whose problem deserves a solution. Invisible to the metrics that determine which lives a market is built to serve and which lives it is built to skip. A child without electricity, in his framing, is not poor because the lights are off. The child is poor because the grid was designed as if the child were not there.

That distinction, several people who have worked with him say, is the one to follow. It runs underneath the technical work, underneath the patent filings, underneath the company. It is, in his telling, the spine of his own story.

The Angelic Journey

He has a name for the arc now. He calls it the angelic journey, and he describes it as a single movement in one direction: from invisible to visible.

The phrase travels with the technical project. Orchestro.AI, the Dublin, California–based company Natarajan has been building since 2023, is organized around a system he calls Angelic Intelligence — twenty-seven specialized AI agents, each embodying a virtue drawn from a different cultural tradition, deliberating to consensus before the system acts. The architecture has been discussed in a range of academic, policy, and industry settings, where Natarajan has contributed to broader conversations about AI ethics over the past two years.

But when Natarajan talks about the angelic journey, he is not, in the first instance, talking about the architecture. He is talking about the personal arc that produced it. The technical project, in his account, is a generalization of the human one — an attempt to encode at scale what he learned about being seen.

“The journey did not require me to become someone else,” he has said. “It required the world to look at the person who had already been there.”

The Builder’s Mindset

If invisibility is the diagnosis, the builder’s mindset is what Natarajan calls the response. He returns to the phrase often, and he uses it in a specific way.

A builder, in his definition, is not a person waiting to be discovered. A builder is a person who refuses the terms of their invisibility by making something the world is eventually forced to see. A patent is a builder’s argument. A working system is a builder’s argument. A company is a builder’s argument. Each one, he says, is a refusal, repeated in material form, of the proposition that the builder does not exist.

It is the lens through which he reads his own corporate years. Across two and a half decades at Walmart, Coca-Cola, Disney, PepsiCo, Target, and American Eagle Outfitters, Natarajan accumulated more than seventy patents in supply-chain technology. He does not describe the patents as the rewards of having arrived. He describes them as the mechanism of arrival itself.

“I did not climb a ladder,” he says. “I built rungs the ladder did not have.”

The same disposition, he argues, is what is missing from much of the current AI conversation. The dominant builders of frontier AI, in his view, are people who have always been visible — institutionally, financially, geographically. They are building, however earnestly, from inside a system that already accommodates them. The questions they ask the technology to answer are, by virtue of their position, the questions the technology was always going to be asked. Whose problem deserves a solution? The answer drifts, almost without anyone choosing it, toward the problems of the visible.

Seeing Beyond What You Could See

There is a phrase Natarajan uses to define what super-human capability actually is, and the definition is not the one most of the field uses.

“Super-human is one thing,” he has said. “Seeing beyond what you could see.”

Not computing faster. Not remembering more. Not optimizing harder. Seeing beyond. Recognizing what the prior frame of reference would have skipped over. He often illustrates the point with a logistics example from his earlier career: a luxury handbag and an urgent medical parcel sitting side by side in a warehouse. Conventional orchestration software, built around efficiency and margin, sets the priority on commercial terms, and the handbag wins. An Angelic Intelligence system, in his description, would weigh the urgency of the medicine alongside the economics of the bag rather than rely on commercial optimization alone.

It is, by the standards of the field, an unusually moral definition of super-human capability. Most contemporary benchmarks define super-human against measurable contests: the model that beats the doctor, the model that beats the lawyer, the model that beats the chess grandmaster. Natarajan’s definition is different. A system is super-human, in his framing, when it can see what its builders, on their own, could not.

It is, not coincidentally, the same thing he says about people. The builders he respects most, he has said, are the ones who saw something — a problem, a person, a possibility — that the room had been organized to overlook. The builder’s mindset, in his account, is ultimately a vision problem. It is the discipline of looking longer at the place where everyone else has stopped looking.

Why the Argument Cuts

For the past two years, the global AI conversation has run on a fairly narrow track. How fast. How big. How dangerous. How aligned. Natarajan does not dismiss the questions. He argues that they are, at root, questions asked by the visible about the visible — that they presume a world in which the system being built will be deployed across populations and institutions already legible to its makers, and that they do not seriously interrogate what happens to the people the makers cannot see.

His argument is that a different kind of intelligence is required precisely because the existing kind, no matter how aligned, will keep optimizing for what its builders can see. A frontier lab in San Francisco, he points out, can be staffed by the most thoughtful researchers in the world, and the system it produces will still inherit the visual field of that lab. Angelic Intelligence, as he describes it, is an attempt to engineer a wider field — to build into the architecture itself a deliberation across moral traditions that no single team, however well-intentioned, would generate on its own.

The pitch is grand, and Natarajan does not pretend otherwise. He has called the project a thousand-year endeavor, a language better suited to cathedrals than to quarterly earnings. But the thousand-year frame is, in his telling, the only frame honest to the problem. A machine designed to see beyond what its builders could see, he says, is not going to be finished in a release cycle. By definition, it will be finished by people who are not yet in the room.

The Builder Who Was Once Invisible

What gives the argument its force, in the rooms where it is delivered, is that the speaker is the proof of the case.

Natarajan is, demonstrably, a builder who was once invisible — to the system that designed schools without electricity, to the system that did not anticipate the immigrant arriving with thirty-four dollars, to the supply-chain orthodoxy that did not know it was missing the patents he eventually filed. He did not wait for the system to see him. He built until it had to.

The angelic journey, in his own definition, is that movement: from being unseen to being unable to be unseen, accomplished not by petition but by construction. The architecture he is now trying to build into AI is, in a real sense, a generalization of that journey. A machine designed to see the people that the previous machines could not. A system whose super-human quality is not its speed, but the width of its attention.

Whether Angelic Intelligence will work at the scale he is proposing is a question the field will eventually have to answer. The architecture is built. Forty-three patents have been filed, by the company’s count. Deployments are coming. What is already true, regardless of how the technology resolves, is that Natarajan has reframed the question that surrounds it.

The question is no longer how powerful the machine can become.

The question is what, and whom, the machine is finally able to see.
