In a video announcement shared on Instagram, Meta's CEO and founder, Mark Zuckerberg, highlighted the significant existing user base of Meta AI across the company's various platforms. "A billion people are using Meta AI across our apps now, so we made a new standalone Meta AI app for you to check out," he stated, underscoring the rationale behind this dedicated offering.
Zuckerberg positioned the new application as a "personal AI," emphasizing its capability for voice-driven interactions tailored to individual users. Initially, the AI will operate with a basic understanding of user interests. However, the long-term vision includes the potential for users to grant Meta AI access to a wealth of information from their activities across Meta's suite of apps, allowing for increasingly personalized assistance.
Leveraging its social media roots, Meta has integrated a social feed within the app, a feature that lets users discover and engage with AI-generated posts created by others. Meta's chief product officer, Chris Cox, unveiled this aspect at the company's LlamaCon developer conference, an event centered on its open-source Llama AI models. Cox explained the thinking behind the social integration: "We learn from seeing each other do it, so we put this right in the app... You can share your prompts. You can share your art. It’s super fun."
Furthermore, the new Meta AI application supersedes Meta View as the companion app for Ray-Ban Meta smart glasses. This integration promises a fluid conversational experience spanning the smart glasses, the mobile application, and desktop interfaces, reinforcing Meta's commitment to voice as a primary mode of interaction. Cox elaborated on this focus, stating, "We were very focused on the voice experience; the most natural possible interface... like a phone call."
Meta has also introduced an experimental mode designed to make conversations in the AI app feel more human. According to Cox, this mode allows for natural conversational elements such as interruptions, laughter, and genuine dialogue, mimicking the nuances of a phone call. However, he noted that the experimental feature currently lacks web search capabilities, limiting its ability to answer real-time queries on topics like sports updates or current events.
Users will have the option to grant Meta AI permission to learn from their activity on Instagram and Facebook, enabling the assistant to gain deeper insights into their preferences and needs. Cox also pointed out the AI's ability to remember specific details provided by the user, such as "your kids’ names; your wife’s birthday and other things you want to make sure your assistant doesn’t forget," highlighting its potential for personalized long-term assistance.
This launch arrives at a pivotal moment in the AI landscape, with OpenAI's ChatGPT establishing itself as a frontrunner in direct-to-consumer AI assistants through continuous updates and feature enhancements. Meta is positioning its open-source Llama model as a key differentiator, emphasizing the advantages it offers to developers.
The open-source approach, championed by Meta at its LlamaCon event, grants developers the freedom to customize core aspects of the software to fit their specific requirements. This contrasts with OpenAI's closed model, which keeps its underlying mechanisms proprietary. Zuckerberg articulated the value proposition to developers, stating, "Part of the value around open source is that you can mix and match... You have the ability to take the best parts of the intelligence from the different models and produce exactly what you need, which I think is going to be very powerful."
By launching a dedicated AI assistant app and emphasizing the flexibility of its open-source model, Meta is clearly signaling its intent to be a major player in the rapidly evolving field of generative artificial intelligence, directly challenging the current market leader and offering a distinct approach to AI integration.