
Scarlett Johansson’s AI dispute has echoes of Silicon Valley’s bad old days

“Move fast and break things” is a motto that continues to haunt the technology sector, some 20 years after it was coined by a young Mark Zuckerberg.

Those five words came to symbolize the worst of Silicon Valley: a combination of ruthless ambition and breathtaking arrogance, profit-driven innovation without fear of consequences.

I was reminded of that phrase this week when actress Scarlett Johansson clashed with OpenAI. Ms. Johansson claims that both she and her agent declined invitations for her to be the voice of OpenAI’s new product for ChatGPT, and that when the product was unveiled it sounded just like her anyway. OpenAI denies that it was an intentional imitation.

It’s a classic example of exactly what the creative industries are so worried about: being imitated and eventually replaced by artificial intelligence.

Last week, Sony Music, the world’s largest music publisher, wrote to Google, Microsoft and OpenAI demanding to know whether any of its artists’ songs had been used to develop artificial intelligence systems, saying it had not given permission for them to be used in that way.

There are echoes in all this of the macho Silicon Valley giants of yesteryear, with seeking forgiveness rather than permission as an unofficial business plan.

But the tech companies of 2024 are keen to distance themselves from that reputation.

OpenAI did not emerge from that mold. It was originally created as a non-profit organization that would reinvest any extra profits back into the business.

When it formed a profit-oriented arm in 2019, the company said the for-profit side would be led by the non-profit side, and that a cap would be placed on the returns investors could earn.

Not everyone was happy with this change; it was said to have been a key reason behind original co-founder Elon Musk’s decision to walk away.

When OpenAI CEO Sam Altman was suddenly fired by his own board of directors late last year, one of the theories was that he wanted to move further away from the original mission. We never knew for sure.

But even if OpenAI has become more profit-driven, it still has to shoulder its responsibilities.

In the policymaking world, almost everyone agrees that clear boundaries are needed to keep companies like OpenAI in line before disaster strikes.

So far, the AI giants have largely played along, at least on paper. At the world’s first AI Safety Summit six months ago, a group of tech chiefs signed a voluntary commitment to create responsible, safe products that maximize the benefits of AI technology and minimize its risks.

Those risks, as originally identified by the event’s organizers, were the stuff of nightmares. When I asked back then about the more practical threats posed to people by AI tools that discriminate against them or replace them in their jobs, I was told quite firmly that this meeting was dedicated to discussing only the absolute worst-case scenarios: this was Terminator, Doomsday territory, AI going rogue and destroying humanity.

Six months later, when the summit reconvened, the word “safety” had been removed entirely from the conference title.

Last week, a draft UK government report by a group of 30 independent experts concluded that there is “still no evidence” that AI can generate a biological weapon or carry out a sophisticated cyber attack. The possibility of humans losing control of AI was “very controversial,” it said.

Some people in the field have been saying for quite some time that the most immediate threat from AI tools is that they will replace people’s jobs or fail to recognize skin tones. AI ethicist Dr. Rumman Chowdhury says these are “the real problems.”

The AI Safety Institute declined to say whether it had tested the safety of any of the new AI products launched in recent days, in particular OpenAI’s GPT-4o and Google’s Project Astra, both of which are among the most powerful and advanced generative AI systems available to the public that I have seen so far. Meanwhile, Microsoft has unveiled a new laptop containing AI hardware, the beginning of the physical integration of AI tools into our devices.

The independent report also states that there is currently no reliable way, even among developers, to understand exactly why AI tools generate the results they do, and that the established safety testing practice of red teaming, in which testers deliberately try to get an AI tool to misbehave, has no best-practice guidelines.

At this week’s follow-up summit, co-hosted by the UK and South Korea in Seoul, the companies pledged to shelve a product if it does not meet certain safety thresholds, but those thresholds will not be set until the next meeting in 2025.

Some fear that all these commitments and promises are not enough.

“Voluntary agreements are essentially just a means for companies to tick their own boxes,” says Andrew Strait, associate director of the Ada Lovelace Institute, an independent research organization. “Fundamentally, they are no substitute for the legally binding and enforceable rules that are required to incentivize the responsible development of these technologies.”

OpenAI has just published its own 10-point safety process that it says it is committed to, but one of its senior safety-focused engineers recently resigned, writing on X that his department had been “sailing against the wind” internally.

“In recent years, safety culture and processes have taken a backseat to shiny products,” Jan Leike posted.

Of course, there are other teams at OpenAI that continue to focus on safety and security.

However, there is currently no official, independent oversight of what they are actually doing.

“We have no guarantee that these companies will deliver on their promises,” says Professor Dame Wendy Hall, one of the UK’s leading computer scientists.

“How can we hold them accountable for what they say, like we do with pharmaceutical companies or in other sectors where there is high risk?”

We may also find that these powerful tech leaders become less docile once the going gets tough and voluntary agreements become a little more enforceable.

When the UK government said it wanted the power to pause the rollout of security features by big tech companies if there was a chance they would compromise national security, Apple threatened to remove services from Britain, describing it as an “unprecedented overreach” by lawmakers.

The legislation was passed and, so far, Apple is still here.

The European Union’s AI Act has just become law, and it is both the first and the strictest legislation of its kind. There are also harsh penalties for companies that fail to comply. But it creates more legwork for the users of AI tools than for the AI giants themselves, says Nader Henein, vice-president analyst at Gartner.

“I would say that most (AI developers) overestimate the impact the Act will have on them,” he says.

Any company that uses AI tools will have to categorize and risk-rate them, and the AI firms that supplied those tools will have to hand over enough information for them to be able to do so, he explains.

But this does not mean the AI giants themselves are off the hook.

“We need to move towards legal regulation over time, but we can’t rush it,” says Professor Hall. “Establishing global governance principles that everyone subscribes to is really difficult.”

“We also need to make sure we are protecting everyone and not just the Western world and China.”

Those who attended the AI Summit in Seoul say they found it useful. It was “less flashy” than Bletchley but with more discussion, one attendee said. Interestingly, the final declaration of the event was signed by 27 countries, but not by China, even though it had representatives there in person.

The bottom line, as always, is that regulation and policy move much more slowly than innovation.

Professor Hall believes “the stars are aligning” at a government level. The question is whether the tech giants can be persuaded to wait for them.

BBC InDepth is the new home on the website and app for the best analysis and expertise from our best journalists. Under a distinctive new brand, we will bring you fresh perspectives that challenge assumptions and deep reporting on the biggest issues to help you make sense of a complex world. And we’ll also be showcasing thought-provoking content from BBC Sounds and iPlayer. We’re starting small but thinking big, and we want to know what you think.
