(Bloomberg) --

Generative AI is back in the news — and not for a good reason. First, OpenAI made headlines after the voice it introduced as part of its latest GPT update sounded eerily like Scarlett Johansson. Then, Google’s newly-introduced “AI Overview” feature started returning some questionable results — like that eating rocks might be good for you, or that cheese can help prevent cavities. 

On today’s Big Take, host David Gura speaks to Bloomberg AI reporter Rachel Metz and Bloomberg Opinion columnist Dave Lee to get to the bottom of just what’s been happening in the world of generative AI — and what these latest headlines mean for the way we’re all getting our information now and in the future.

Here is a lightly edited transcript of the conversation:

David Gura: Is it just me or is AI suddenly … everywhere?

Jen Zhu Scott: Right now, both in the US and China ecosystem, the AI space is racing at a breathtaking speed.

David Westin: AI is everywhere. It's not that big, scary thing in the future. AI is here with us.

Gura: You open up Google, and instead of getting a list of websites, there is now often an “AI Overview” – a summary scraped together from a sometimes-strange variety of sources. 

One recently told me that cheese … was great at preventing cavities, citing the Cleveland Clinic, the British Heart Foundation and the noted Vermont cheesemaker, Cabot, as sources.

And it’s not just Google.

Jon Erlichman: Apple wants to power its iPhones with Google’s Gemini generative AI.

Lisa Abramowicz: Alphabet and Meta are offering millions to Hollywood to partner on artificial intelligence

Gura: From Instagram to customer service, even to healthcare, there’s a generative AI bot that seemingly has all the answers, if not necessarily the correct ones.

All of this has led to big questions recently about what’s going into these models.

On today’s episode, what’s behind the recent AI headlines – and what does all of this mean for our ability to opt in – or opt out – of using AI in the future.

I’m David Gura, and this is the Big Take, from Bloomberg News.

Gura: We’re going to get back to those AI Overview search results a little bit later in this episode. But first we’re going to start unpacking all of this by looking at what’s been going on with another company: OpenAI. 

Six months ago, the company behind ChatGPT was in the headlines after a leadership struggle broke out between the company’s founders. Now, OpenAI is back in the news for a whole different reason.

Metz: So a lot's been going on in the last few weeks. Um, sometimes in my head I call it “As the OpenAI Turns.”

Gura: Rachel Metz is an AI reporter with Bloomberg. She says the OpenAI soap opera got a new plotline a few weeks ago, when the company released an update to ChatGPT.

Metz: They introduced their latest flagship AI model. This is GPT-4o.

Gura: OpenAI unveiled human-sounding voices for ChatGPT in September of last year. But the update really only made headlines a few weeks ago, after the company hosted a livestream to show off what it considers to be a big leap forward.

Mira Murati: We'll be showing some live demos today to show the full extent of the capabilities of our new model... 

Gura: It was during the demonstration, when OpenAI’s Mark Chen unveiled GPT-4o’s lifelike voices…

Mark Chen: Hey, ChatGPT. I'm Mark. How are you?

Sky Voice Clip: Oh, Mark. I'm doing great. Thanks for asking. How about you?

Gura: …that a lot of people noticed that this voice sounded remarkably like the voice of the actress Scarlett Johansson.

Sky Voice Clip: Oh, you're doing a live demo right now. That’s awesome! Just take a deep breath. And remember, you're the expert here.

Gura: That perception was furthered when Sam Altman, OpenAI’s reinstated CEO, posted a single word to X after the event. Her – which some took as a nod to the 2013 Spike Jonze film in which Johansson voices the “open operating system” Samantha. 

Metz: And it turns out Scarlett Johansson thought it also sounded a lot like her.

Gura: The thing is, Altman had approached Johansson twice about potentially working with OpenAI. But Johansson turned him down. 

To some watching, it felt like a bunch of Silicon Valley types had ignored the wishes of one of Hollywood’s most famous actresses just so they could make the plotline of a movie a reality. 

Metz: She seemed quite genuinely upset about it and thought it sounded like her even though she had specifically said, I don't want to participate in this project on multiple occasions.

Gura: For anyone feeling anxious about the ubiquity of AI – and about the information these generative models are hoovering up – the dust up struck a nerve. 

Metz: I think there's a few different things that go into making people feel really alarmed by this. One is people really like Scarlett Johansson. 

Gura: Fair enough. 

Metz: She is an iconic actress. Another thing that I think is interesting about her in particular: like yourself, David, you talk for a living. Your voice is valuable. Her voice is valuable as an actor, but it is also iconic. People know her voice, in part because of the movie Her, but also just because she happens to have a voice that is very easily recognizable. With a less recognizable voice, people might not be taking to this issue as much.

Gura: OpenAI says it never intended to use Johansson’s voice as one of its assistants.

Metz: Basically what they said is they started working on the voice feature way back in spring of 2023, and they cast a number of voice actors for five different voices. And they said they were planning on her being a sixth voice. So they're saying, look, it's not that she would be that voice or that this voice is meant to sound like her. This is another person.

Gura: But after the backlash from Johansson and the public, OpenAI removed the voice. Which Rachel says is something of a turning point in terms of our ability to opt out – or at least push back – against these generative models that seem to be moving forward with few guardrails in place.

Metz: A lot of consumers and a lot of artists are saying, wait a minute, that's not what I signed up for when I put this and that on the Internet.

Gura: Just this week, OpenAI introduced a new safety board — led by CEO Sam Altman. The move came after the company disbanded a team that had been created to focus on the long-term threat AI could pose to humanity. But some former insiders are saying that probably won’t be enough – including former OpenAI board member Helen Toner. Toner told the TED AI Show podcast this week that she had concerns about Altman’s commitment to safety based on his past behavior.

Helen Toner on podcast: On multiple occasions, he gave us inaccurate information about the small number of formal safety processes that the company did have in place, meaning that it was basically impossible for the board to know how well those safety processes were working or what might need to change.

Gura: And it’s not just OpenAI that’s been getting some pushback. After the break, we take a look at another sector that’s grappling with how and when to fight developments in generative AI. And we weigh whether any of this will be enough to shake up what some are calling the “raw deal” at the center of AI.

Gura: OpenAI is certainly not the only company racing to release new AI products. There are startups like Anthropic and Elon Musk’s xAI, which just announced this week it had raised $6 billion, giving the company a valuation of $24 billion.

There’s also Meta – which recently rolled out its Meta AI feature to Facebook, Instagram and WhatsApp. And then there’s Alphabet, which just added those AI Overviews to Google.

Dave Lee: So AI Overviews, they announced this at their developer conference.

Gura: Dave Lee is a Bloomberg Opinion columnist. He recently attended the Google developer conference where the company unveiled this latest change to its search engine.

Sundar Pichai at conference: The result is a product that does the work for you. Google search is generative AI at the scale of human curiosity, and it's our most exciting chapter of search yet.

Lee: And basically what it does is it sort of builds on something that's already been on Google. If you search for, you know, uh, who are the Beatles, you'll get like a sort of fact box and it will have pictures and a quote from Wikipedia and all that kind of stuff. And AI Overviews is, is sort of that and then some.

Gura: AI Overviews synthesizes information from a host of different sources into a generated answer similar to something you’d get if you typed a question into, say, OpenAI’s ChatGPT. 

At the bottom, there are some links to where that information came from, if you’re curious. Which is all well and good… for Google. But critics of this approach point out that it deprioritizes going directly to a source. And that means less traffic to those sites.

Lee: If you're the website that's creating the information, why on earth would you continue to do it? If, you know, for all these popular searches, you're not going to get that kickback of traffic, which means ad revenue, which means, you know, continuing to exist. And that's a really, really big fear with AI Overviews.

Gura: Google hasn’t rolled out this feature to every search just yet. But the mere prospect that it will do so poses an existential threat to news publishers. And Dave says it’s not just news publishers who should be worried.

Lee: I also think there's a big problem for Google as well, because, because Google needs this web ecosystem to exist. And I didn't get the sense, and I talked to a Google executive afterwards at their event, I didn't get the sense that they really had quite comprehended how damaging and how quickly damaging it might be.

Gura: This “web ecosystem” can only exist if publishers can stay in business. One way that AI companies have been attempting to solve this puzzle is by cutting publishers into some of their current – and potentially future – profits. Take OpenAI again.

Lee: So what they've been doing is going out to as many publishers as they can, trying to come up with these deals, um, to allow OpenAI or whoever to use that information from those publishers in their large language models.

Gura: And some publishers have agreed to take these deals – including The Atlantic and Vox Media, both of which inked deals on Wednesday.

Gura: What are the terms of the deals? What's the incentive for them to do this?

Lee: I wish I'd seen the terms of the deals. I mean, this is one of the criticisms of these deals is that we don't know a great deal of what, you know, the nitty gritty here of what's happening.

Um, some publishers, yes, have made deals. The most notable one is News Corporation.

Gura: For a lot of money right?

Lee: For a lot of money: $250 million over five years. Now, that doesn't necessarily mean they're getting a nice big check for $250 million, because part of that deal is using OpenAI's technology within News Corp. And what that looks like, we don't know. We don't know whether OpenAI has said, guess what? Using our technology is worth $100 million a year. We don't have those details, which is very frustrating. But, you know, there is a willingness to make friends with OpenAI, because there's a general feeling. And the chairman of Le Monde, the French newspaper, I'm paraphrasing here, but he basically said, look, either we do this with them and we get some money, or they do it anyway with our content, and we have no control over it and we don't benefit and it's going to harm us as a publisher.

Gura: Dave says that he believes publishers are making a big mistake by agreeing to sell their content to OpenAI – even if he understands the economic incentives pushing them to strike deals that could help insulate them against potential revenue shortfalls in the future. And there are some hold outs.

Lee: The big one at the moment is, um, the New York Times. They sued OpenAI for copyright infringement, which is going to be an interesting one to make in court, because their argument is that when you said certain things to ChatGPT, it would recreate bits of Times journalism, and that that was copyright infringement. There won't be a preliminary hearing on that case for a number of months, which, you know, just kind of goes to show, I mean, think how much has changed in AI in just the last two years or so. With every passing day, it feels like this challenge is getting bigger and bigger and bigger.

Gura: There’s also another group, Alden Global Capital, which owns a bunch of titles like the New York Daily News and the Chicago Tribune. They’re using the same sort of logic to sue OpenAI – and adding another element. It has to do with a well-known flaw in a lot of the generative AI tools we’ve been using, which is their propensity to “hallucinate,” or make up answers.

Lee: That's another interesting thing, you know, because these hallucinations can be damaging to these news brands as well, if these machines are crunching two bits of information and saying, well, that came from such and such, and it might not have.

Gura: What does that mean for the news industry if publishers are reluctant to or unable to opt out from what these AI companies are trying to do?

Lee: That's a profound question, right? Publishers have had perilous business models for so long now, and sort of one by one they've been removed by bits of tech, you know, whether it was Craigslist for classifieds or Google just getting a lot of the advertising revenue from all over the place. You've got this situation where I think already, and we've seen this from the reaction to AI Overviews from just regular users, there is a suspicion of these sort of computer-generated bits of information. I saw a great quote, and it's gone around the internet so much now that I have no idea who first said it, which is kind of good because it kind of just belongs to us all now, which is: why would I be bothered to read something you couldn't be bothered to write? And I hope that sort of attitude is going to persist. I hope that people will care about human-made things, whether it's newspapers or movies or books or whatever.

Gura: But for now, at least, there isn’t an easy way to opt out. Here’s Bloomberg’s Rachel Metz again.

Metz: These companies are ingesting tons and tons and tons of data and using that to train their AI systems. And if we want these AI systems to get better, the current prevailing thought is we need more data and more compute to make them better. That is how it’s working right now. I have my own thoughts about whether that will be true in the future, but right now what we’re seeing is that we’re getting better results from more data and more compute. And in that sense, we're all sort of opted in, in various ways, depending on how much of our lives have been lived on the internet and how much of that data these companies are using.

Gura: But just because we’re all effectively opted in, whether we like it or not, doesn’t mean that we don’t have a say in how our data – our written words and our voices – are being used. Even if we’re not all as famous as Scarlett Johansson, Rachel says the fact that OpenAI listened – that it pulled the Sky voice in the face of a public outcry – means there is room to, well, have a voice in how generative AI gets developed and used.

Metz: I don't like to think that anything with technology is inevitable. I like to think that we have a lot of control over what happens. And I think that what we're seeing with some of this recent pushback against this voice that people, and Scarlett Johansson herself, feel sounds like Scarlett Johansson, I feel like that's a really good example of it. So I feel like that's sort of a hopeful sign, right? That we still have agency and we still have control.

Gura: This is The Big Take, from Bloomberg News. I’m David Gura.


©2024 Bloomberg L.P.