Navigating the rapidly evolving AI landscape is a formidable task. Until AI can manage this itself, here’s a concise roundup of recent developments in machine learning, alongside notable research and experiments not covered independently.
By the way, TechCrunch is set to launch an AI newsletter soon. Stay tuned. Meanwhile, we’re increasing the frequency of our semi-regular AI column, previously published roughly twice a month, to a weekly cadence. So, keep an eye out for more updates.
This week in AI, OpenAI once again captured the headlines (despite Google’s vigorous efforts) with a significant product launch and some internal drama. The company introduced GPT-4o, its most advanced generative model yet, and shortly thereafter, disbanded a team dedicated to developing controls to prevent “superintelligent” AI systems from going rogue.
The disbandment of the team generated widespread attention, predictably. Reporting, including ours, suggests that OpenAI deprioritized the team’s safety research in favor of launching new products like GPT-4o, leading to the resignation of the team’s co-leads, Jan Leike and OpenAI co-founder Ilya Sutskever.
Superintelligent AI remains more theoretical than actual at this stage; it’s unclear when, or whether, the breakthroughs needed to create AI capable of performing any task a human can will arrive. What this week’s coverage does make clear is that OpenAI’s leadership, particularly CEO Sam Altman, has increasingly prioritized products over safeguards.
Altman reportedly “infuriated” Sutskever by rushing the launch of AI-powered features at OpenAI’s first developer conference last November. He’s also said to have criticized Helen Toner, director at Georgetown’s Center for Security and Emerging Technology and a former OpenAI board member, over a paper she co-authored that critiqued OpenAI’s approach to safety, going so far as to attempt to remove her from the board.
Over the past year or so, OpenAI has let its chatbot store become inundated with spam, allegedly scraped data from YouTube in violation of that platform’s terms of service, and voiced ambitions to let its AI generate depictions of pornography and gore. Clearly, safety has taken a backseat at the company, and a growing number of its safety researchers have concluded that their work would be better supported elsewhere.
Here are some other notable AI stories from the past few days:
OpenAI + Reddit: In further OpenAI news, the company reached an agreement with Reddit to use the social site’s data for AI model training. Wall Street welcomed the deal, but Reddit users may not be so pleased.
Google’s AI: Google hosted its annual I/O developer conference this week, unveiling a plethora of AI products. We’ve rounded them up here, including the video-generating Veo, AI-organized results in Google Search, and enhancements to Google’s Gemini chatbot apps.
Anthropic hires Krieger: Mike Krieger, co-founder of Instagram and, more recently, personalized news app Artifact (recently acquired by TechCrunch’s corporate parent Yahoo), is joining Anthropic as the company’s first chief product officer. He’ll oversee both consumer and enterprise initiatives.
AI for kids: Anthropic announced last week that it would begin allowing developers to create kid-focused apps and tools built on its AI models—provided they adhere to certain guidelines. Notably, rivals like Google prohibit their AI from being integrated into apps aimed at younger users.
AI film festival: AI startup Runway held its second-ever AI film festival earlier this month. The takeaway? Some of the most compelling moments in the showcase were driven by human elements, not AI.
More machine learnings
AI safety is a major focus this week with the OpenAI departures, but Google DeepMind is moving forward with a new “Frontier Safety Framework.” Essentially, it’s the organization’s strategy for identifying and hopefully preventing any runaway capabilities—it doesn’t have to be AGI; it could be a malware generator gone awry or something similar.
The framework has three steps:
1. Identify potentially harmful capabilities in a model by simulating its development paths.
2. Regularly evaluate models to detect when they’ve reached known “critical capability levels.”
3. Apply a mitigation plan to prevent exfiltration (whether by an outside actor or by the model itself) or problematic deployment.
There’s more detail here. This may seem like an obvious series of actions, but formalizing them is crucial: an ad-hoc response is how you get rogue AI. A rough sketch of how such checks might be wired up follows below.
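Here is a minimal, hypothetical sketch in Python of how a team might operationalize checks like these: declared “critical capability levels” act as thresholds, periodic evaluations produce scores, and crossing a threshold triggers the mitigation plan. The capability names, scores, and mitigation stub are invented for illustration; they are not DeepMind’s actual criteria or tooling.

```python
# Hypothetical illustration only; not DeepMind's actual framework code.

# Thresholds for tracked capabilities (0 = absent, 1 = fully present).
CRITICAL_CAPABILITY_LEVELS = {
    "autonomous_replication": 0.5,  # e.g. self-exfiltration attempts
    "offensive_cyber": 0.7,         # e.g. generating working malware
}

def evaluate_model(model_id: str) -> dict[str, float]:
    """Stand-in for a battery of capability evaluations, returning a
    score per tracked capability."""
    return {"autonomous_replication": 0.1, "offensive_cyber": 0.2}

def apply_mitigations(model_id: str, tripped: list[str]) -> None:
    """Stand-in for the mitigation plan: restrict deployment, lock down
    weights to prevent exfiltration, escalate for review."""
    print(f"{model_id}: mitigations triggered for {tripped}")

def periodic_safety_check(model_id: str) -> None:
    """Steps 2 and 3 of the framework: evaluate, then mitigate if a
    critical capability level has been reached."""
    scores = evaluate_model(model_id)
    tripped = [cap for cap, score in scores.items()
               if score >= CRITICAL_CAPABILITY_LEVELS[cap]]
    if tripped:
        apply_mitigations(model_id, tripped)

periodic_safety_check("frontier-model-v1")
```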
Cambridge researchers have identified a different risk: the proliferation of chatbots trained on a deceased person’s data to provide a superficial simulacrum of that individual. While the concept might be useful in grief management and other scenarios, it’s fraught with ethical concerns, and the researchers worry the technology is advancing faster than the care being taken with it.
“This area of AI is an ethical minefield,” said lead researcher Katarzyna Nowaczyk-Basińska. “We need to start thinking now about how to mitigate the social and psychological risks of digital immortality, because the technology is already here.” The team outlines numerous scams, potential negative and positive outcomes, and discusses the concept generally (including fake services) in a paper published in Philosophy & Technology. Black Mirror once again predicts the future!
In less eerie applications of AI, physicists at MIT are exploring a useful tool for predicting a physical system’s phase or state, a statistical task that becomes onerous with more complex systems. Training a machine learning model on the right data and grounding it with some known material characteristics of a system results in a much more efficient approach. Just another example of how ML is finding niches even in advanced science.
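As a rough illustration of the general approach (not the MIT group’s actual method), here’s a minimal sketch: train a simple classifier on simulated observables labeled by phase, then use it to predict the phase of new configurations far more cheaply than a full statistical treatment would. The Ising-style toy data and the logistic-regression choice are assumptions for illustration only.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def sample_observables(temperature: float, n: int = 64) -> np.ndarray:
    """Toy stand-in for measured observables (e.g. local magnetizations):
    strongly aligned at low temperature, disordered at high temperature."""
    order = max(0.0, 1.0 - temperature)            # crude order parameter
    return np.sign(order + rng.normal(0.0, 0.5, n))

# Labeled training set: 1 = ordered phase (T < 1), 0 = disordered phase.
temps = rng.uniform(0.2, 1.8, 500)
X = np.array([sample_observables(t) for t in temps])
y = (temps < 1.0).astype(int)

clf = LogisticRegression(max_iter=1000).fit(X, y)

# Predict the phase of new, unlabeled configurations.
print(clf.predict([sample_observables(0.4)]))  # likely [1] (ordered)
print(clf.predict([sample_observables(1.6)]))  # likely [0] (disordered)
```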
At CU Boulder, researchers are discussing how AI can be utilized in disaster management. The technology could be valuable for quickly predicting where resources will be needed, mapping damage, even aiding in training responders. However, there’s understandable hesitation to apply AI in life-and-death scenarios.
Professor Amir Behzadan is trying to advance this field, stating, “Human-centered AI leads to more effective disaster response and recovery practices by promoting collaboration, understanding, and inclusivity among team members, survivors, and stakeholders.” They’re still in the workshop phase, but it’s crucial to deeply consider these issues before attempting to automate aid distribution after a disaster.
Lastly, some intriguing work from Disney Research, which looked into diversifying the output of diffusion image generation models. These models can produce similar results repeatedly for some prompts. Their solution? “Our sampling strategy anneals the conditioning signal by adding scheduled, monotonically decreasing Gaussian noise to the conditioning vector during inference to balance diversity and condition alignment.” I couldn’t have put it better myself.
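For the curious, here’s a minimal, hypothetical sketch of what “annealing the conditioning signal” could look like: Gaussian noise with a monotonically decreasing scale is added to the conditioning vector at each inference step, so early steps explore more broadly while later steps lock onto the condition. The linear schedule and the placeholder denoiser below are assumptions for illustration, not Disney Research’s actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def anneal_conditioning(cond: np.ndarray, step: int, num_steps: int,
                        sigma_max: float = 1.0) -> np.ndarray:
    """Add scheduled, monotonically decreasing Gaussian noise to the
    conditioning vector: heavy perturbation early (more diversity),
    almost none late (better alignment with the condition)."""
    sigma = sigma_max * (1.0 - step / max(num_steps - 1, 1))  # decays to 0
    return cond + rng.normal(0.0, 1.0, cond.shape) * sigma

def denoise_step(x: np.ndarray, cond: np.ndarray) -> np.ndarray:
    """Placeholder for a real diffusion model's conditional denoising step;
    here it just nudges the latent toward the (noisy) condition."""
    return x - 0.05 * (x - cond)

cond = rng.standard_normal(8)  # e.g. a prompt embedding
x = rng.standard_normal(8)     # initial latent noise
num_steps = 50
for t in range(num_steps):
    x = denoise_step(x, anneal_conditioning(cond, t, num_steps))
```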
This article was originally published on TechCrunch. Read the original article.
FAQs
- Why did OpenAI disband its AI safety team? Reporting suggests OpenAI deprioritized the team’s safety research in favor of launching new products, prompting its co-leads to resign; the team was disbanded shortly afterward.
- What is superintelligent AI? Superintelligent AI refers to an artificial intelligence that surpasses human intelligence across all domains, though it remains a theoretical concept at present.
- What are the ethical concerns with AI trained on deceased individuals’ data? Training AI on deceased individuals’ data raises ethical concerns about privacy, consent, and the potential for misuse, requiring careful regulation.
- How can AI aid in disaster management? AI can predict resource needs, map damage, and train responders, improving the efficiency and effectiveness of disaster response efforts.
- What is Google’s Frontier Safety Framework? Google DeepMind’s Frontier Safety Framework is a strategy for identifying potentially harmful capabilities in AI models, regularly evaluating models against “critical capability levels,” and applying mitigations before problematic deployment.