AI Leaders Discuss How to Foster Responsible Innovation at TIME100 Roundtable in Davos

January 22, 2026, 14:30

Leaders from across the tech sector, academia, and beyond gathered at a roundtable convened by TIME in Davos, Switzerland, on Jan. 21 to explore how to implement responsible AI and ensure safeguards while fostering innovation.

In a wide-ranging conversation, participants in the roundtable, hosted by TIME CEO Jess Sibley, discussed topics including the impact of AI on children’s development and safety, how to regulate the technology, and how to better train models to ensure they don’t harm humans.

Discussing the safety of children, Jonathan Haidt, professor of ethical leadership at NYU Stern and author of The Anxious Generation, said that parents shouldn’t focus on restricting their child’s exposure entirely, but on the habits their children form. He suggested that children don’t need smartphones until “at least high school,” and that they don’t need early exposure to the technology in order to learn how to use it at age 15. “Let their brain develop, let them get executive function, then you can expose them.”

Yoshua Bengio, professor at the Université de Montréal and founder of LawZero, said that scientific understanding of the problems posed by AI is necessary to solve them. He outlined two mitigations: first, designing AI with built-in safeguards to avoid harming a child’s development. This could be brought about by demand, noted Bengio, who is known as one of the “godfathers of AI.” Second, he said, governments should play a role; they could potentially implement mechanisms such as using liability insurers to indirectly regulate AI developers by making insurance mandatory for developers and deployers of AI.

While the U.S. AI race with China is often cited as a reason to support limiting regulation and guardrails on American AI companies, Bengio argued: “Actually, the Chinese also don’t want their children to be in trouble. They don’t want to create a global monster AI, they don’t want people to use their AI to create more bio-weapons or cyberattacks on their soil. So both the U.S. and China have an interest in coordinating on these things once they can see past the competition.” Bengio said international cooperation like this has happened before, such as when the U.S. and the USSR coordinated on nuclear weapons during the Cold War. 

The roundtable participants also discussed the similarities between AI and social media companies, noting that AI is increasingly in competition for users’ attention. “All the progress in history has been about appealing to the better angels of our nature,” said Bill Ready, CEO of Pinterest, which sponsored the event. “Now we have, one of the largest business models in the world has at its center engagement, pitting people against one another, sowing division.” 

Ready added: “We’re actually preying on the darkest aspects of the human psyche, and it doesn’t have to be that way. So we’re trying to prove it’s possible to do something different.” He said that, under his leadership, Pinterest has stopped optimizing to maximize view time and started optimizing to maximize outcomes, including those off the platform. “In the short term, that was negative, but if you look long term, people would come back more frequently,” he said.

Bengio emphasized the importance of finding a way to design AI that will “provide safety guarantees as the systems become bigger and we have more data.” Setting sufficient conditions for training AI systems to ensure they operate with honesty could also be a solution, Bengio posited. 

Yejin Choi, professor of computer science and senior fellow at the Institute for Human-Centered Artificial Intelligence (HAI) at Stanford University, added that AI models today are trained “to misbehave, and by design, it’s going to be misaligned.” She asked: “What if there could be an alternative form of intelligence that really learns … morals, human values from the get-go, as opposed to just training LLMs [large language models] on the entirety of the internet, which actually includes the worst part of humanity, and then we then try to patch things up by doing ‘alignment’?” 

Responding to the question of whether AI can make us better humans, Kay Firth-Butterfield, CEO of the Good Tech Advisory, pointed to ways we can make AI a better tool for humans, including by talking to the people who are actually using it, whether that’s workers or parents. “What we need to do is to really think about: how do we create an AI literacy campaign amongst everybody and not have to fall back on organizations?” she said. “We need that conversation, and then we can make sure AI gets certified.”

Other attendees at the TIME100 Roundtable included Matt Madrigal, CTO at Pinterest; Matthew Prince, CEO of Cloudflare; Jeff Schumacher, Neurosymbolic AI Leader at EY-Parthenon; Navrina Singh, CEO of Credo AI; and Alexa Vignone, president of technology, media, telco and consumer & business services at Salesforce, where TIME co-chair and owner Marc Benioff is CEO.

TIME100 Roundtable: Ensuring AI For Good — Responsible Innovation at Scale was presented by Pinterest.

At Davos, Business Leaders Seek a Human-Centered AI Future

January 21, 2026, 01:47

Leaders from Dow Chemical Company, EY, and NTT Data Inc. shared their perspectives on the impact of scaling up new technologies like AI during a TIME100 Talks panel discussion in Davos on Jan. 20. 

The panel took place on the sidelines of the World Economic Forum’s annual meeting, which kicked off on Jan. 19 in Davos, drawing around 3,000 high-level participants from business, government, and beyond, in addition to many more observers, journalists, activists, and others.

During the panel, titled “Innovation in a Multipolar Era,” the participants discussed the benefits of integrating AI, and its potential in areas such as health care and education, as well as some of the challenges of integrating the technology at scale within businesses. 

“We…see enormous benefits, whether it’s discovery of new materials, new drugs, or tech-driven productivity,” said Abhijit Dubey, CEO and chief artificial intelligence officer at NTT Data. “But at the same time we really have to watch out for what we’re doing.” 

He added that, unlike the innovations that came before it, AI is the “first technology that will actually be non-human driven.” Not only can this lead to unexpected outcomes, but the technology also requires vast amounts of energy and water, along with the mining of rare earth minerals, which in some cases is leading to tensions over resources.

Another concern is the “paradox of massive abundance at the same time [as] a massive labor market dislocation,” said Dubey, noting it is “something that we really have to watch out for.”

“The pain is not the destination, it is in the transition,” added Debra Bauler, chief information and digital officer at Dow, who explained how the company is approaching its workforce during the AI transition. “We think about the way we work with our team members. We also want to move them from doers of tasks to directors of systems,” she said. “There will be job impacts, but we also think where we’re going, the destination is worth this transition period.”

In any tech transition, when it comes to jobs, “you lose one, you generate one-to-two,” noted Dubey. Protecting those who are negatively impacted can’t be entirely left up to the private sector, he argued. In addition to publicly backed mechanisms like universal basic income, he noted that one funding solution that has been discussed is taxing AI agents the same way people are taxed. “There have to be structural mechanisms that need to be thought through right now, because we can’t do this reactively on the spot,” he said, adding, “There’s no government in the world that’s set up to do this.”

Raj Sharma, global managing partner for growth and innovation at EY, said that in order for AI to usher in an era of what he has called “super-fluid enterprises,” the key ingredients would be trust, tools, and talent. “You have to balance the equation between [the] three to make sure that AI is adopted.” 

TIME100 Talks: Innovation in a Multipolar Era was presented by Philip Morris International.
