Meta announced upgrades to its Meta AI assistant at the Meta Connect 2024 developer conference in Menlo Park on Wednesday morning.

The upgraded Meta AI can now respond aloud to user queries across Instagram, Messenger, WhatsApp, and Facebook. Users can choose from a variety of voices, including AI-generated replicas of celebrities such as Awkwafina, Dame Judi Dench, John Cena, Keegan-Michael Key, and Kristen Bell.

While the new voice feature aims to improve user engagement, it is less ambitious than OpenAI’s Advanced Voice Mode for ChatGPT, which is recognized for its expressive, emotive delivery. Meta’s approach instead resembles Google’s Gemini Live: it transcribes spoken questions and reads responses aloud in synthetic voices.

Meta reportedly spent millions of dollars to license these celebrities’ likenesses. Nevertheless, some industry observers are skeptical that the approach will pay off, with many preferring to see the feature in practice before passing judgment.

Beyond voice, Meta AI can now analyze images, allowing users to share photos and receive relevant information about them.

Meta is also testing a translation tool that automatically dubs voices in Instagram Reels, rendering a creator’s speech in another language while synchronizing lip movements to the new audio. The tool is currently being tested on select creators’ videos from Latin America, starting with English and Spanish.

As Meta continues to expand its AI functionalities, the potential effects on user engagement and content creation are yet to be determined.