
🥸 Labeling isn't working

Plus: OpenAI's speed vs. safety test | Friday, October 04, 2024
 
 
 
Axios Codebook
By Sam Sabin · Oct 04, 2024

Hello, Friday! Welcome back to Codebook.

  • 🏝️ Sam is out today, but her editor (me, Megan) is here in her place. Our newsletter will be dark next week.
  • 📚 Our first pick for our book club is "Sandworm" by Wired reporter Andy Greenberg. Here's an excerpt.

📬 Have thoughts, feedback or scoops to share? codebook@axios.com.

Today's newsletter is 1,445 words, a 5.5-minute read.

 
 
1 big thing: AI's detection gap opens new vulnerabilities
Illustration: Lindsey Bailey/Axios

 

Humans are not great at detecting AI-generated text, images and video right now, and as AI models improve, detection will only get harder.

Why it matters: If we can't separate AI-made content from human-created material, governments and businesses will find themselves increasingly vulnerable to new kinds of attacks — both narrowly targeted operations and broader misinformation offensives.

  • Bad actors succeed when the public believes it's impossible to know what's true and what isn't.

What they're saying: "We are now well past the point where humans can reliably distinguish between synthetically generated text, audio and images," Syracuse University research professor Jason Davis tells Axios.

  • Davis focuses on detecting misinformation in his role leading the Semantic Forensics program, funded by DARPA.
  • "While the capabilities in synthetic video generation are still a few steps behind these other media types, it is moving very quickly and I expect these capabilities to be on par with other media types in a matter of months, rather than years."
  • Even when we are able to detect AI-generated content, Davis says, we do it "after the fact in reactive mode" when the damage is already done.

Case in point: Commercially available tools to detect AI-generated content don't yet work very well, despite what their makers claim.

  • Watermarking technology favored by big tech companies is particularly unsuitable for preventing the spread of misinformation.
  • No one has figured out how to create watermarks that can't be circumvented by determined bad actors.
  • For watermarks to succeed, everyone creating generative AI tools must agree to implement them.
  • Criminals can also use off-the-shelf generative AI tools that don't produce watermarks.

Zoom in: Instead of focusing on detection tools, tech companies have focused on labeling AI content — but labeling is fundamentally flawed, since so much content today is already a collaboration between human and machine.

  • In July, Meta changed the wording on its labeling from "Made with AI" to "AI info" after complaints that its algorithm labeled content as AI-generated when photographers had just made minor edits with retouching tools.
  • "Like others across the industry, we've found that our labels based on these indicators weren't always aligned with people's expectations and didn't always provide enough context," Meta announced in an update to a blog post.

Zoom out: Adobe focuses its labeling efforts on provenance, which means making it clear where a piece of content was generated and what happened to it along the way.

  • In 2019, the company launched the Content Authenticity Initiative, which focuses on free provenance tools like Content Credentials.
  • Adobe calls Content Credentials a "nutrition label" for digital content that includes the date and time the content was created and edited and signals "whether and how AI may have been used."
  • "Content Credentials aren't meant to be a detection tool for whether an image is fake or not," Adobe spokesperson Andrew Cha tells Axios. "It is used as a tool to surface provenance information to consumers so that they can make an informed decision about the trustworthiness of the content."

Between the lines: Lawmakers have introduced policies around the use and disclosure of synthetic material in political ads.

  • For disclosure rules to work, they must "have teeth," Subramaniam Vincent, director of journalism and media at Santa Clara University's Markkula Center for Applied Ethics, tells Axios. And "they need real and timely enforcement."
  • While there are no federal laws around the use of AI in campaign ads, about 20 states have passed their own laws around AI use and disclosure.

Yes, but: While Indiana is one of those states — its law says AI-generated content in campaign materials must be disclosed — that didn't seem to deter Republican Sen. Mike Braun from running a campaign ad depicting his opponent standing at a podium surrounded by supporters holding signs that read "No Gas Stoves!"

  • Politico reports that the ad was digitally altered and that gas stoves were not discussed at the event.
  • Braun did not apologize for the ad, per Axios Indianapolis, but has replaced it with a version that doesn't include the manipulated image. "No campaign is going to be perfect," Braun said.
  • The campaign ad, including the "No Gas Stoves!" image and a label that says the video was digitally altered or artificially generated, is still posted on Braun's Facebook page.
 
 
2. OpenAI speeds ahead as it shifts to for-profit
Illustration: Allie Carl/Axios

OpenAI's developer release on Tuesday of a powerful new speech technology highlights the rapidly changing dynamics within the ChatGPT creator — a shift that some say is putting the release of products above safety concerns.

The big picture: OpenAI is in the midst of a transformation from a nonprofit lab to an increasingly product-focused company seeking to attract and please investors.

Driving the news: OpenAI used a developer event in San Francisco on Tuesday to announce several capabilities, the most provocative of which allows developers to make use of the same real-time speech capabilities recently added to ChatGPT.

  • In showing off the product to the press on Monday, OpenAI chose a demo of the capability paired with Twilio's cloud communications platform.
  • In the demo, OpenAI showed its Realtime API powering an AI agent to make a call on its own to a fictional candy store and order 400 chocolate-covered strawberries.
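
For developers, the Realtime API is consumed over a WebSocket connection that streams JSON events in both directions. Below is a rough, text-only sketch of that flow; the endpoint URL, model name, headers and event types are assumptions based on the launch documentation, and a real voice agent like the one in the demo would also stream audio buffers and bridge to a telephony platform such as Twilio.

```python
# Rough sketch of a text-only exchange with OpenAI's Realtime API.
# The endpoint, model name, headers and event types here are assumptions
# based on the launch documentation; requires the third-party "websockets"
# package (newer releases rename extra_headers to additional_headers).
import asyncio
import json
import os

import websockets

URL = "wss://api.openai.com/v1/realtime?model=gpt-4o-realtime-preview-2024-10-01"
HEADERS = {
    "Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}",
    "OpenAI-Beta": "realtime=v1",
}


async def main() -> None:
    async with websockets.connect(URL, extra_headers=HEADERS) as ws:
        # Request a single text response (audio streaming omitted for brevity).
        await ws.send(json.dumps({
            "type": "response.create",
            "response": {
                "modalities": ["text"],
                "instructions": "Greet the caller and ask how you can help.",
            },
        }))
        # Print streamed text deltas until the server says the response is done.
        async for message in ws:
            event = json.loads(message)
            if event.get("type") == "response.text.delta":
                print(event["delta"], end="", flush=True)
            elif event.get("type") == "response.done":
                break


asyncio.run(main())
```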

Zoom in: The demo highlighted the tantalizing possibilities of AI taking action in the real world, but it also prompted a flurry of questions focused on the potential for havoc if the technology were used with malicious intent.

  • OpenAI executives seemed surprised that press questions focused less on the capability itself and more on what guardrails had been put in place.
  • Asked about safeguards, OpenAI said it wasn't watermarking its AI voice to allow it to be detected and wasn't mandating how people disclose their use of AI.

Yes, but: OpenAI said it would enforce its terms of service, which prevent spam and fraud, and noted it could add new rules as needed.

  • A spokesperson added that the company's rules also "require developers to make it clear to their users that they are interacting with AI, unless it's obvious from the context."

Zoom out: In the 10 months since the firing and rehiring of Sam Altman, debates about speed and safety within the company have grown louder and more frequent. The product teams seem to be winning most battles.

  • The Wall Street Journal reported last week that OpenAI released its GPT-4o model this year despite concerns that it had not received sufficient testing and was too risky to deploy safely.
  • Meanwhile, Fortune reported Tuesday, another fierce debate took place within OpenAI over whether the company's o1 reasoning model (previously code-named Strawberry) was ready for release.

The other side: OpenAI, for its part, rejects the idea that safety is taking a back seat. "We are deeply committed to our mission and are proud to release the most capable and safest models in the industry, as evidenced by o1," an OpenAI spokesperson tells Axios.

  • On 4o, OpenAI said in a statement that it "followed a deliberate and empirical safety process."
  • "GPT-4o was determined safe to deploy under our Preparedness Framework, rated medium on our cautious scale and did not reach high risk levels," OpenAI said.
  • OpenAI tells Axios, "Since its release in May, it has been safely used by hundreds of millions of people, millions of developers, and enterprises worldwide to help in daily life and solve problems ... affirming our confidence in its risk assessment."
 
 
3. Catch up quick
 

@ D.C.

⭐️ The Department of Justice and Microsoft's Digital Crimes Unit have taken action to disrupt the technical infrastructure used by Star Blizzard, a Russian nation-state threat actor. (Bloomberg)

🚗 A cyberattack has shut down all government websites in Wayne County, Michigan, which includes Detroit and is home to approximately 1.75 million residents. (The Record)

🧑‍⚖️ After an X user spreading parody videos of Vice President Kamala Harris sued California to prevent the state from enforcing its new AI law, a federal judge blocked the law, saying the state can't force people to take down election deepfakes. (Politico)

@ Industry

😎 Two Harvard students added facial recognition software PimEyes to Meta's smart Ray-Ban glasses, showing that it's possible to use the glasses to instantly dox someone. The students did not release their code. (404 Media)

@ Hackers and hacks

🔓 Another Ivanti flaw is under attack, per the Cybersecurity and Infrastructure Security Agency. This one allows threat actors to gain remote code execution on vulnerable Endpoint Manager appliances. (TechCrunch)

 
 

A message from Axios HQ

Your comms tool to navigate workplace change
 
 

Successful orgs stay nimble — adapting their operations as business needs evolve.

Successful leaders:

  • Earn buy-in at every level.
  • Ensure teams are informed and supported.

Whether you're shifting goals or restructuring teams, Axios HQ's software will keep comms clear and actionable.

🤝 Book a demo

 
 
4. 1 fun thing
 
Photo: Megan Morrone

 

Twice a week I edit this newsletter and eagerly anticipate some dog content in this section. But it's always cats!

Here is a picture of my dog Gilbert, who passed away of old age last spring. Don't be sad, he had a long and excellent life, fully embodying the spirit of "1 fun thing."

 
 

 

☀️ See y'all on Oct. 15!

Thanks to Megan Morrone for editing and Khalid Adad for copy editing this newsletter.

If you like Axios Codebook, spread the word.

 
 
                                             
