Meta Reportedly Partnering With Arm to Bring Advanced AI Capabilities to Smartphones

by rajtamil


Meta and Arm reportedly want to bring AI inference to on-device and edge use cases on smartphones.


Photo Credit: Meta

The vision behind using small language models is to make AI inference faster

Highlights

  • Meta is said to be planning to use smaller language models
  • The partnership aims to support developers in building new experiences
  • Both companies have also partnered to build new SLMs

Meta Connect 2024, the company's developer conference, took place on Wednesday. During the event, the social media giant unveiled several new artificial intelligence (AI) features and wearable devices. Apart from that, Meta also reportedly announced a partnership with the tech giant Arm to build special small language models (SLMs). These AI models are said to power smartphones and other devices and to introduce new ways of using them. The idea is to provide on-device and edge computing options that keep AI inference fast.

Meta and Arm Partner to Build AI Models

According to a CNET report, Meta and Arm are planning to build AI models that can carry out more advanced tasks on devices. For instance, the AI could act as the device's virtual assistant, making a call or taking a photo on the user's behalf. This is not too far-fetched, as AI tools can already perform a range of tasks such as editing images and drafting emails.

However, the main difference is that users currently have to interact with an interface or type specific commands to get AI to do these tasks. At the Meta event, the two companies reportedly highlighted that they want to do away with this and make AI models more intuitive and responsive.


One way to do this would be to run the AI models on-device or to keep the servers very close to the devices. The latter approach, known as edge computing, is already used by research institutions and large enterprises. Ragavan Srinivasan, vice president of product management for generative AI at Meta, told the publication that developing these new AI models is a good way to tap into this opportunity.

For this, the AI models will have to be smaller. While Meta has developed large language models (LLMs) with as many as 90 billion parameters, these are not suited to smaller devices or fast local processing. The Llama 3.2 1B and 3B models are believed to be ideal for this.
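To illustrate why models of this size matter, here is a minimal sketch of running a roughly 1-billion-parameter model with the Hugging Face Transformers pipeline. The model identifier, prompt, and generation settings are assumptions for illustration only and are not details confirmed by Meta or Arm.

```python
# Minimal sketch: loading a ~1B-parameter model with Hugging Face Transformers.
# The model ID, prompt, and settings below are illustrative assumptions, not
# details from Meta's or Arm's announcement.
from transformers import pipeline

# A model with roughly 1 billion parameters needs only a few gigabytes of
# memory, which is what makes local inference on phones and laptops plausible.
generator = pipeline("text-generation", model="meta-llama/Llama-3.2-1B-Instruct")

prompt = "Draft a short text message saying I'll be ten minutes late."
output = generator(prompt, max_new_tokens=60, do_sample=False)
print(output[0]["generated_text"])
```

The same small-model idea carries over to phones, where frameworks optimised for mobile processors would take the place of a desktop Python runtime.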


However, another challenge is that the AI models will also need capabilities beyond simple text generation and computer vision. This is where Arm comes in. As per the report, Meta is working closely with the chip designer to develop processor-optimised AI models that can adapt to the workflows of devices such as smartphones, tablets, and even laptops. No other details about the SLMs have been shared so far.
