Meta AI's challenge to ChatGPT, even if anticipated, has a surprising aspect.

If Meta AI avoids the recent, well-publicized missteps of Google Gemini, it will be off to a strong start. Meta knows that embedding the assistant across all of its applications is the easiest way to reach its 3 billion daily users.

Meta AI


You're probably in the middle of a balancing act, waiting to decide which artificial intelligence (AI) chatbot or assistant to start using regularly, a dilemma many people find difficult. If the options already available (OpenAI's ChatGPT, Microsoft's Copilot, Google Gemini, and Perplexity AI's curation of several AI models with its Pro subscription) were not enough, add Meta AI to the list. Mark Zuckerberg's Meta is weaving it into Facebook, Instagram, WhatsApp, and Messenger, and that smooth integration may simplify adoption.

The foundation of Meta AI is Llama 3, the company's most recent AI model. Meta says this is why it is optimistic about how this foundational open-source model will perform against competitors. Meta is releasing two Llama 3 models: one will underpin the customer-facing AI chatbots in Meta's apps.

The other, larger and multimodal versions will soon be available on platforms including AWS, Databricks, Google Cloud, Hugging Face, Kaggle, IBM WatsonX, Microsoft Azure, NVIDIA NIM, and Snowflake, with broad hardware support spanning AMD, AWS, Dell, Intel, NVIDIA, and Qualcomm processors.


The range of capabilities offered by Meta AI is broad, even if they may differ slightly depending on the application or service you use to access it. For instance, the search feature, which should be common across Meta's apps, can direct you to online search results from both Microsoft Bing and Google Search; to the best of my knowledge, Meta AI is the only AI assistant that can do this at present. Copilot and ChatGPT depend on Microsoft Bing alone.

Beyond that, there is a text-to-image generator called Imagine that can create artificial intelligence (AI) graphics as you type a message in WhatsApp; this feature is currently in beta testing in the US only. There is also integration with the Facebook Feed, offering further context or details about something you have just seen in a post.


Meta is clearly projecting a degree of assurance, as evidenced by the benchmark tests it cites, which suggest Meta AI's Llama 3 performs better at the 8B and 70B parameter scales in the instruct-model and pre-trained-model tests when compared with Google's Gemini Pro 1.5 and Claude 3 Sonnet, respectively. OpenAI never disclosed GPT-4's parameter count, and GPT-5, the next iteration, is anticipated soon.

Strong foundations and accuracy are crucial for Meta, which wants to avoid the mistakes Google made with Bard and Gemini. Earlier this year, for example, Gemini mishandled diversity balance in image generation, depicting the American Founding Fathers and Nazis as people of color. Google's Prabhakar Raghavan later provided context and caution, attributing the problems to "tuning" difficulties. Microsoft's Copilot assistant and OpenAI's GPT models have also been accused of providing inaccurate information or too little context.

The naming convention for AI models refers to parameter counts: the 8B model was trained with 8 billion parameters, while 70B is a much larger model with 70 billion. As a general rule, the more parameters a model has, the greater its potential to learn complicated patterns.
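Those parameter counts translate directly into hardware requirements. As a rough illustration (my own back-of-the-envelope arithmetic, not a figure from Meta), storing each parameter as a 16-bit number, a common inference precision, takes 2 bytes, so the weights alone demand tens of gigabytes of memory:

```python
# Back-of-the-envelope estimate of memory needed just to hold model weights.
# Assumes 2 bytes per parameter (16-bit precision), a common choice for
# inference; this is illustrative arithmetic, not Meta's deployment detail.

def weight_memory_gb(num_params: float, bytes_per_param: int = 2) -> float:
    """Approximate weight storage in gigabytes (1 GB = 1e9 bytes)."""
    return num_params * bytes_per_param / 1e9

print(f"8B at 2 bytes/param:  ~{weight_memory_gb(8e9):.0f} GB")   # ~16 GB
print(f"70B at 2 bytes/param: ~{weight_memory_gb(70e9):.0f} GB")  # ~140 GB
```

This is why an 8B model can run on a single consumer GPU while a 70B model typically needs multiple data-center accelerators, and why parameter count is the headline spec in model names.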


Meta says its enhanced post-training techniques have led to a significant decrease in false rejection rates, better alignment, and a greater diversity of answers from the models. "We also saw greatly improved capabilities like reasoning, code generation, and instruction following making Llama 3 more steerable," the company's technical notes read.

According to Meta, the training data set for Llama 3 was seven times bigger than the one used for Llama 2 and contained four times as much code. Meta says all of the data was gathered from "publicly available sources," but does not elaborate.

Llama 3's 8B and 70B models appear to be only the beginning: models as large as 400B parameters are in training and should be released in the coming months for particular use cases.

Meta appears to be increasing availability gradually. Beyond Imagine's beta release in the US, English-language support is rolling out in the US, Australia, Canada, Ghana, Jamaica, Malawi, New Zealand, Nigeria, Pakistan, Singapore, South Africa, Uganda, Zambia, and Zimbabwe.

When HT asked the Meta team about the availability of Meta AI in India, the tech giant declined to give a timeframe for integration into Instagram, Facebook, WhatsApp, and Messenger. "We keep learning from the tests that our users in India take. We test them in a limited capacity and at different stages for the public, much like we do with a lot of our AI features and products. I have nothing else to share with you today."
