META AND AI

 

Meta is taking a big step toward the next generation of artificial intelligence, unveiling V-JEPA 2, a new “world model” designed to help machines understand, predict and act in the natural world.

Unlike the language models that dominate today, this new system focuses on understanding the motion, interaction and logic of the 3D environment – a critical development for applications such as robotics and autonomous vehicles.

The tech giant, which owns the popular social networking apps Facebook and Instagram, said the new open-source AI model V-JEPA 2 can understand, predict and plan in the physical world. Known as world models, such systems draw on the logic of the physical world to build an internal simulation of reality, allowing AI to learn, plan and make decisions in a way that more closely resembles human reasoning.

For example, Meta’s new V-JEPA 2 model can recognize that a ball rolling off a table will fall, or that an object hidden from view has not simply disappeared.

Artificial intelligence is a strategic priority for Meta CEO Mark Zuckerberg as the company steps up its efforts to compete with giants like OpenAI, Microsoft, and Google.

As CNBC reports, Meta is set to invest $14 billion in AI firm Scale AI and hire its CEO, Alexandr Wang, to bolster its AI strategy.

Meta demonstrated the capabilities of its new V-JEPA 2 model in applications such as delivery robots and self-driving vehicles, which require real-time understanding of their environment in order to navigate safely and accurately in the physical world.

Unlike other models that rely on vast amounts of labeled data or video, V-JEPA 2 operates within a simplified “latent” space, where it analyzes and understands how objects move, interact and react, according to Meta.
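Meta has not published a recipe in this article, but the broad idea behind a joint-embedding predictive architecture can be sketched in a few lines of code: rather than reconstructing future video frames pixel by pixel, an encoder maps frames to compact latent vectors, and a predictor learns to anticipate the latent of a future frame from the latent of the current one. The toy PyTorch sketch below is purely illustrative; the network sizes, the random "video" data and the training loop are invented for this example and are not Meta's actual implementation.

```python
import torch
import torch.nn as nn

# Toy encoder: maps a flattened "frame" to a compact latent vector.
class Encoder(nn.Module):
    def __init__(self, frame_dim=768, latent_dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(frame_dim, 256), nn.ReLU(), nn.Linear(256, latent_dim)
        )

    def forward(self, x):
        return self.net(x)

# Toy predictor: given the latent of the current frame, guesses the latent of a future frame.
class Predictor(nn.Module):
    def __init__(self, latent_dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(latent_dim, 256), nn.ReLU(), nn.Linear(256, latent_dim)
        )

    def forward(self, z):
        return self.net(z)

encoder, predictor = Encoder(), Predictor()
optimizer = torch.optim.Adam(
    list(encoder.parameters()) + list(predictor.parameters()), lr=1e-3
)

# Fake, unlabeled "video": batches of (current frame, future frame) pairs.
current = torch.randn(32, 768)
future = torch.randn(32, 768)

for step in range(10):
    z_current = encoder(current)
    with torch.no_grad():                      # target is an embedding, not raw pixels
        z_future_target = encoder(future)
    z_future_pred = predictor(z_current)
    # The prediction error is measured in latent space, not pixel space.
    loss = nn.functional.mse_loss(z_future_pred, z_future_target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

The point the sketch tries to convey is that the prediction error is computed between latent vectors rather than raw pixels, which is what allows this family of models to ignore unpredictable visual detail and concentrate on how a scene evolves.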

“Allowing machines to understand the physical world is very different from allowing them to understand language,” said Yann LeCun, Meta’s chief artificial intelligence scientist.

"A world model is something like an abstract digital twin of reality, which an artificial intelligence can refer to to understand the world and predict the consequences of its actions and therefore be able to plan a course of action to accomplish a given task," he added.
The next big thing in AI?


World models have garnered a lot of attention in the AI research community, as scientists look beyond large language models, like OpenAI’s ChatGPT and Google’s Gemini, to ways of understanding and simulating the natural world.

In September of last year, top AI researcher Fei-Fei Li raised $230 million for a new startup called World Labs, which aims to create what it calls “large world models” that can better understand the structure of the natural world.

Meanwhile, Google’s DeepMind unit is developing its own world model called Genie, which it says can simulate games and 3D environments in real time.

