
From ChatGPT to Gemini: how AI is rewriting the internet

Big players, including Microsoft (with Copilot), Google (with Gemini), and OpenAI (with GPT-4 and ChatGPT), are making AI chatbot technology that was previously restricted to test labs more accessible to the general public.

How do these large language model (LLM) programs work? OpenAI’s GPT-3 told us that AI uses “a series of autocomplete-like programs to learn language” and that these programs analyze “the statistical properties of the language” to “make educated guesses based on the words you’ve typed previously.” 

Or, in the words of James Vincent, a human person: “These AI tools are vast autocomplete systems, trained to predict which word follows the next in any given sentence. As such, they have no hard-coded database of ‘facts’ to draw on — just the ability to write plausible-sounding statements. This means they have a tendency to present false information as truth since whether a given sentence sounds plausible does not guarantee its factuality.”
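A minimal sketch of that "autocomplete" idea, using made-up toy data: count which word follows which in a tiny corpus, then predict the statistically likeliest next word. Real LLMs use neural networks over tokens rather than raw word counts, but the core objective Vincent describes, predicting the next word from the previous ones, is the same.

```python
from collections import Counter, defaultdict

# Toy corpus standing in for the vast text LLMs are trained on.
corpus = "the cat sat on the mat and the cat ran".split()

# Tally how often each word follows each other word.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict(word: str) -> str:
    """Return the word most frequently seen after `word` in the corpus."""
    return follows[word].most_common(1)[0][0]

print(predict("the"))  # "cat" follows "the" twice, "mat" only once
```

Note how the model "knows" only statistics, not facts: if the corpus happened to pair words misleadingly, the prediction would be fluent but wrong, which is the hallucination problem in miniature.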

There are many more pieces of the AI landscape coming into play (and plenty of name changes; remember when we were talking about Bing and Bard last year?), and you can watch it all unfold here on The Verge.

  • Emilia David

Today


    Microsoft’s new safety system can catch hallucinations in its customers’ AI apps

    Microsoft logo
    Illustration: The Verge

    Sarah Bird, Microsoft’s chief product officer of responsible AI, tells The Verge in an interview that her team has designed several new safety features that will be easy to use for Azure customers who aren’t hiring groups of red teamers to test the AI services they built. Microsoft says these LLM-powered tools can detect potential vulnerabilities, monitor for hallucinations “that are plausible yet unsupported,” and block malicious prompts in real time for Azure AI customers working with any model hosted on the platform. 

    “We know that customers don’t all have deep expertise in prompt injection attacks or hateful content, so the evaluation system generates the prompts needed to simulate these types of attacks. Customers can then get a score and see the outcomes,” she says. 

  • Emma Roth

    Mar 27


    Amazon has poured $2.75 billion more into AI startup Anthropic.

This comes as part of Amazon’s deal to invest up to $4 billion in Anthropic, which began with an initial $1.25 billion last September. Anthropic is the AI company behind the Claude 3 family of models, which it claims outperforms ChatGPT and Google Gemini.


  • Adobe’s new GenStudio platform is an AI factory for advertisers

    Red artwork of the Adobe brand logo
    Adobe is courting enterprise customers with its new GenStudio platform.
    Illustration by Alex Castro / The Verge

    Adobe has announced a new AI-powered ad creation platform that aims to make it easier to use the company’s generative AI tools for building marketing campaigns. The new GenStudio application was introduced on Tuesday during Adobe’s Summit event, alongside an AI assistant for Adobe Experience and updates around Adobe’s Firefly generative AI model.

    GenStudio effectively serves as a centralized hub for promotional campaigns, providing things like brand kits, copy guidance, and preapproved assets alongside a bunch of generative AI-powered tools that can generate backgrounds and ensure the overall tone remains on-brand. These tools also allow users to quickly create ads for emails and social media platforms like Facebook, Instagram, and LinkedIn. GenStudio will then show users which attributes, generated assets, and campaigns are performing best, which Adobe suggests can then be used to direct AI prompts for other campaigns.

  • BBC will stop using AI in Doctor Who promos.

    The marketing team for the longtime sci-fi series halted its use of generative AI for Doctor Who marketing emails following complaints, reports Deadline.

    The response from the BBC:

    As part of a small trial, marketing teams used generative AI technology to help draft some text for two promotional emails and mobile notifications to highlight Doctor Who programming available on the BBC. 

    We followed all BBC editorial compliance processes and the final text was verified and signed-off by a member of the marketing team before it was sent. We have no plans to do this again to promote Doctor Who.


  • Not even Meta can pay AI talent enough.

The Information reports that Meta has had trouble keeping AI talent and has resorted to hiring researchers without interviews. Meta has been losing researchers to Google’s DeepMind, OpenAI, and Mistral, which was founded by former Meta engineers.

One reason could be pay: Meta reportedly offers AI researchers up to $2 million, less than the $5 million to $10 million reportedly paid by OpenAI.


  • Wes Davis

    Mar 23


    WhatsApp is working on putting Meta AI right in the search bar.

    Instead of opening a Meta AI conversation, users would be able to start typing in the search bar to get answers from the chatbot. WABetaInfo spotted the feature in a new Android WhatsApp beta update (2.23.25.15).


  • Wes Davis

    Mar 23


    AI-generated blues misses a human touch — and a metronome

An image showing a slightly off-kilter grid of happy-looking robot faces with speech bubbles containing music notes.
    Suno will happily sing you a tune.
    Image: The Verge / Shutterstock

    I heard a new song last weekend called “Soul Of The Machine.” It’s a simple, old-timey number in E minor with a standard blues chord progression (musicians in the know would call it a 1-4-5 progression). In it, a voice sings about being a trapped soul with a heart that once beat but is now cold and weak. 

    “Soul Of The Machine” is not a real song at all. Or is it? It’s getting harder to say. Whatever it is, it’s the creation of Suno, an AI tool from a startup of the same name focused on music generation. Rolling Stone said this song’s prompt was “solo acoustic Mississippi Delta blues about a sad AI.” And you know what? I doubt I’d glance askance at it if I heard it in a mix of human-recorded Delta blues tunes. The track is technically impressive, fairly convincing, and not all that good.

  • OpenAI is pitching Sora to Hollywood.

    The AI company is scheduled to meet with a number of studios, talent agencies, and media executives in Los Angeles next week to discuss partnerships, sources familiar with the matter tell Bloomberg. Getting more filmmakers familiar with Sora, OpenAI’s upcoming text-to-video generator, is a major goal of the meetings.

Although Sora isn’t slated for public release until later this year, Bloomberg reports that a few A-list directors and actors have already been given access.


    AI generated video clip of a woman walking in Tokyo
    Sora-generated clip of a woman walking down a street in Tokyo.
    Image: OpenAI
  • Emma Roth

    Mar 22


    Apple’s plans for AI in China could involve Baidu.

    WSJ reports Apple’s held talks with Baidu to power AI technology for iPhones in China. That’s not surprising, given China’s strict rules for AI bots covering their output, data used for training, and storage of user data.

So instead of ChatGPT or Google Gemini (which Apple reportedly discussed using everywhere else), iPhones in China may rely on Baidu’s Ernie chatbot. Samsung similarly partnered with Baidu to bring Ernie to the Galaxy S24 in China earlier this year.


  • Emma Roth

    Mar 19


    Google DeepMind co-founder joins Microsoft as CEO of its new AI division

    A photo showing Mustafa Suleyman during the World Economic Forum 2024
    Image: Getty

    Microsoft has hired Google DeepMind co-founder Mustafa Suleyman. In a post on X, Suleyman announced that he’s joining Microsoft as the CEO of a new team that handles the company’s consumer-facing AI products, including Copilot, Bing, and Edge.

Suleyman will also serve as executive vice president of Microsoft AI and join the company’s senior leadership team, reporting directly to CEO Satya Nadella. Suleyman co-founded the AI lab DeepMind in 2010, and Google acquired it in 2014.

  • OpenAI’s custom chatbots are easier to make than market.

    The Information reports on GPT Store developers disappointed by a lack of customers for their ChatGPT-style products and limited analytics support. One developer claims his role-playing chatbot “could have gotten more traffic by partnering with a small influencer on TikTok” after being featured for two weeks.

    Verge reporter Emilia David had questions about the value of the GPT Store after struggling to “find a use” for chatbots made by other users, and it’s not clear if there are any great answers yet.


  • Adobe Substance 3D’s AI features can turn text into backgrounds and textures

    A screenshot taken of Adobe Substance 3D showing the new Text to Texture beta feature.
    The new “Text to Texture” feature is now available in the Adobe Substance 3D Sampler 4.4 beta.
    Image: Adobe

    While many developers are working out how generative AI could be used to produce entire 3D objects from scratch, Adobe is already using its Firefly AI model to streamline existing 3D workflows. At the Game Developers Conference on Monday, Adobe debuted two new integrations for its Substance 3D design software suite that allow 3D artists to quickly produce creative assets for their projects using text descriptions.

    The first is a “Text to Texture” feature for Substance 3D Sampler that Adobe says can generate “photorealistic or stylized textures” from prompt descriptions, such as scaled skin or woven fabric. These textures can then be applied directly to 3D models, sparing designers from needing to find appropriate reference materials.

  • xAI open sources Grok

    Vector illustration of the Grok logo.
    Image: The Verge

On March 11th, Elon Musk said xAI would open source its AI chatbot Grok, and an open release is now available on GitHub. This will let researchers and developers build on the model and influence how xAI updates Grok in the future as it competes with rival tech from OpenAI, Meta, Google, and others.

    A company blog post explains that this open release includes the “base model weights and network architecture” of the “314 billion parameter Mixture-of-Experts model, Grok-1.” It goes on to say the model is from a checkpoint last October and hasn’t undergone fine-tuning “for any specific application, such as dialogue.”

  • Perplexity is ready to take on Google

Aravind Srinivas.
Photo: Perplexity; illustration by William Joel / The Verge

    It’s hard to have a conversation about AI startups these days without Perplexity coming up. 

Nvidia CEO Jensen Huang professes to use the AI search engine “almost every day,” Shopify CEO Tobi Lütke says it has replaced Google for him, and I’ve heard Mark Zuckerberg is also a user. I’ve been testing Perplexity in place of Google for the past couple of months and have found it better for some searches, like ones with a very specific answer I’m looking for. But I’m not ready to completely switch. 

  • Anthropic just released the smallest and fastest Claude 3 model.

    Claude 3 Haiku can now be accessed through claude.ai, Amazon Bedrock, and Perplexity Labs. It will be available on Google’s Vertex AI soon.

Anthropic previously said that even though Haiku is smaller than Claude 3 Opus and Sonnet, it can still read a dense research paper “in less than three seconds.”


  • Copilot upgrade.

Windows Central points out that a switch to GPT-4 Turbo on the free tier of Microsoft’s AI assistant means Copilot’s searches will be more current and its answers better thought out. GPT-4 Turbo was trained on data through April 2023 and can understand more complex questions.

    However, for those who prefer the older model, Pro accounts can toggle between GPT-4 Turbo and GPT-4.


  • Microsoft’s AI Copilot for Security launches next month with pay-as-you-go pricing

    Illustration of Microsoft Copilot for Security
    Image: Microsoft

Microsoft is launching its Copilot for Security next month, bringing its generative AI chatbot to the cybersecurity space. Copilot for Security is designed to help cybersecurity professionals protect against threats, but it won’t carry a flat monthly charge like Copilot for Microsoft 365. Instead, Microsoft will charge businesses $4 per hour of usage as part of a consumption model when Copilot for Security launches on April 1st.

Powered by OpenAI’s GPT-4 and Microsoft’s own security-specific model, Copilot for Security is essentially a chatbot where cybersecurity workers can get the latest information on security incidents, summaries of threats, and more. Microsoft first started testing the chatbot nearly a year ago, and it has access to the latest information on security threats, drawing on the 78 trillion daily signals Microsoft collects through its threat intelligence gathering.

  • Perplexity brings Yelp data to its chatbot

    Illustration of a line art pixel brain filled with computer commands, files, and folders.
    Cath Virginia / The Verge | Photos from Getty Images

    Perplexity CEO Aravind Srinivas tells The Verge that many people are using chatbots like regular search engines. It makes sense to offer information on things they look for, like restaurants, directly from the source. So it’s integrating Yelp’s maps, reviews, and other details in responses when people ask for restaurant recommendations.

    “Our underlying motivation is that we started off as this tool that appealed to people doing fact checks and research. But research can be on anything, like what if you’re a coffee addict, you’d want to know where the good coffee shops are,” says Srinivas.

  • “If you give people the chance to make an AI girlfriend, they will make an AI girlfriend.”

    I stand by this statement every time I read about AI companion platforms, even if that’s not their stated goal. I talked to Verge EIC Nilay Patel on Decoder about AI dating, its potential pitfalls, and why people still fall in love with chatbots.


  • Now you can make Gemini’s AI responses more precise.

9to5Google points out a recent update to Google’s Gemini chatbot that lets people edit and fine-tune its responses directly in the chat box.

    The Gemini update page explains the addition from March 4th:

    We’re launching a more precise way for you to tune Gemini’s responses. Starting in English in the Gemini web app, just select the portion of text you want to change, give Gemini some instruction, and get an output that’s closer to what you are looking for.


    Screenshot of the new updates to Gemini
    Users can modify Gemini’s response.
    The Verge
  • OpenAI says Elon Musk wanted ‘absolute control’ of the company

    An image of Elon Musk on a background with a repeating pattern of folded dollar bills
    Illustration by Kristen Radtke / The Verge; Getty Images

    OpenAI has responded to Elon Musk’s lawsuit by saying that he at one point wanted “absolute control” of the company by merging it with Tesla.

    In a blog post published on Tuesday, OpenAI said it will move to dismiss “all of Elon’s claims” and offered its own counter-narrative to his account of the company abandoning its original mission as a nonprofit.

  • Microsoft invokes VCRs in motion to dismiss The New York Times’ AI lawsuit

    Microsoft logo
    Illustration: The Verge

    The VCR will always evoke a sense of nostalgia, but Microsoft hopes it can help in court; it’s citing the tech in its attempt to dismiss three claims in The New York Times’ copyright infringement lawsuit against OpenAI and Microsoft.

    The Times sued Microsoft for allegedly copying its stories and using that data to mimic its style, but Microsoft’s lawyers are now arguing that OpenAI’s large language models are just the latest in a long line of technologies that are considered legal despite their potential for copyright abuse. “Despite The Times’s contentions, copyright law is no more an obstacle to large language model than it was to the VCR (or the player piano, copy machine, personal computer, internet or search engine),” reads one passage.

  • ChatGPT can read its answers out loud

    A rendition of OpenAI’s logo, which looks like a stylized whirlpool.
    Illustration: The Verge

OpenAI’s new Read Aloud feature for ChatGPT reads its responses out loud in one of five voice options, which could come in handy when users are on the go. It’s now available on the web version of ChatGPT as well as the iOS and Android ChatGPT apps.

Read Aloud can speak 37 languages and will auto-detect the language of the text it’s reading; the feature is available for both GPT-4 and GPT-3.5. It’s an interesting example of what OpenAI can do with multimodal capabilities (the ability to read and respond through more than one medium), revealed soon after competitor Anthropic added similar features to its AI models.

  • Anthropic says its latest AI bot can beat Gemini and ChatGPT

    Photo illustration of a brain made of data points.
    Image: The Verge

    Anthropic, the AI company started by several former OpenAI employees, says the new Claude 3 family of AI models performs as well as or better than leading models from Google and OpenAI. Unlike earlier versions, Claude 3 is also multimodal, able to understand text and photo inputs.

Anthropic says Claude 3 will answer more questions, understand longer instructions, and be more accurate. Claude 3 can also take in more context, meaning it can process more information at once. There’s Claude 3 Haiku, Claude 3 Sonnet, and Claude 3 Opus, with Opus being the largest and “most intelligent model.” Anthropic says Opus and Sonnet are now available on claude.ai and its API, while Haiku will be released soon. All three models can be deployed for chatbot, autocompletion, and data extraction tasks.

  • Brave brings its AI browser assistant to Android.

    The privacy-focused Brave browser launched its AI assistant, Leo, last year on the desktop, and now it’s available for Android, following other mobile AI-connected browsers like Edge and Arc (only on iOS).

    Leo promises summaries, transcriptions, translations, coding, and more (while acknowledging that LLMs may “hallucinate” erroneous info). As for privacy, Brave claims, “Inputs are always submitted anonymously through a reverse-proxy and are not retained or used for training.”