AI at Meta


Research Services

Menlo Park, California 906,732 followers

Together with the AI community, we’re pushing boundaries through open science to create a more connected world.

About us

Through open science and collaboration with the AI community, we are pushing the boundaries of artificial intelligence to create a more connected world. We can’t advance the progress of AI alone, so we actively engage with the AI research and academic communities. Our goal is to advance AI in Infrastructure, Natural Language Processing, Generative AI, Vision, Human-Computer Interaction and many other areas, enabling the community to build safe and responsible solutions that address some of the world’s greatest challenges.

Website
https://ai.meta.com/
Industry
Research Services
Company size
10,001+ employees
Headquarters
Menlo Park, California
Specialties
research, engineering, development, software development, artificial intelligence, machine learning, machine intelligence, deep learning, computer vision, speech recognition, and natural language processing

Updates


    🎥 Today we’re excited to premiere Meta Movie Gen: the most advanced media foundation models to date. Developed by AI research teams at Meta, Movie Gen delivers state-of-the-art results across a range of capabilities. We’re excited for the potential of this line of research to usher in entirely new possibilities for casual creators and creative professionals alike.

    More details and examples of what Movie Gen can do ➡️ https://go.fb.me/00mlgt
    Movie Gen Research Paper ➡️ https://go.fb.me/zfa8wf

    🛠️ Movie Gen models and capabilities
    • Movie Gen Video: A 30B parameter transformer model that can generate high-quality, high-definition images and videos from a single text prompt.
    • Movie Gen Audio: A 13B parameter transformer model that can take a video input, along with optional text prompts for controllability, and generate high-fidelity audio synced to the video. It can generate ambient sound, instrumental background music and foley sound — delivering state-of-the-art results in audio quality, video-to-audio alignment and text-to-audio alignment.
    • Precise video editing: Using a generated or existing video and accompanying text instructions as input, it can perform localized edits such as adding, removing or replacing elements — or global changes like background or style changes.
    • Personalized videos: Using an image of a person and a text prompt, the model can generate a video with state-of-the-art results on character preservation and natural movement.

    We’re continuing to work closely with creative professionals from across the field to integrate their feedback as we work towards a potential release. We look forward to sharing more on this work and the creative possibilities it will enable in the future.


    Together with Reskilll, we hosted the first official Llama Hackathon in India. The hackathon brought together 270+ developers and 25+ mentors from across industries in Bengaluru. The result? 75 impressive new projects built with Llama in just 30 hours of hacking! Read the full recap, including details on some of the top projects like CurePharmaAI, CivicFix, Evalssment and Aarogya Assist ➡️ https://go.fb.me/0n8xkz


    Join us in San Francisco (or online) this weekend for a Llama Impact Hackathon! Teams will be spending two days building new ideas and solutions with Llama 3.1 + Llama 3.2 vision and on-device models.

    Three challenge tracks for this hackathon:
    1. Expanding Low-Resource Languages
    2. Reducing Barriers for Llama Developers
    3. Navigating Public Services

    Join us and build for a chance to win awards from a $15K prize pool ➡️ https://go.fb.me/vnzbd3


    Today at Meta FAIR we’re announcing three new cutting-edge developments in robotics and touch perception — and a new benchmark for human-robot collaboration to enable future work in this space. Details on all of this new work ➡️ https://go.fb.me/vdcn2b

    1. Meta Sparsh is the first general-purpose encoder for vision-based tactile sensing that works across many tactile sensors and many tasks. It was trained on 460K+ tactile images using self-supervised learning.
    2. Meta Digit 360 is a breakthrough artificial fingertip-based tactile sensor, equipped with 18+ sensing features to deliver detailed touch data with human-level precision and touch-sensing capabilities.
    3. Meta Digit Plexus is a standardized platform for robotic sensor connections and interactions. It provides a hardware-software solution for integrating tactile sensors on a single robot hand, and enables seamless data collection, control and analysis over a single cable.

    To make these advancements more accessible for different applications, we’re partnering with GelSight and Wonik Robotics (원익로보틱스) to develop and commercialize these touch-sensing innovations.

    Looking towards the future, we’re also releasing PARTNR: a benchmark for Planning And Reasoning Tasks in humaN-Robot collaboration. Built on Habitat 3.0, it’s the largest benchmark of its kind for studying and evaluating human-robot collaboration in household activities. We hope that standardizing this work will help accelerate responsible research and innovation in this important field of study.

    The potential impact of expanding capabilities and components like these for the open source community ranges from medical research to supply chain, manufacturing and much more. We’re excited to share this work and push towards a future where AI and robotics can serve the greater good.


    We previously shared our research on Layer Skip, an end-to-end solution for accelerating LLMs from researchers at Meta FAIR. It speeds up generation by executing only a subset of an LLM’s layers to draft tokens, then using the remaining layers for verification and correction. We’re now releasing inference code and fine-tuned checkpoints for this work.

    Model weights on Hugging Face ➡️ https://go.fb.me/6wza9p
    More details ➡️ https://go.fb.me/vv3mta

    We hope that releasing this work will open up new areas of experimentation and innovative new research in optimization and interpretability.
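The draft-then-verify idea behind this kind of early-exit decoding can be sketched with a toy model. Everything below is invented for illustration — simple arithmetic functions stand in for transformer layers, and none of this is Meta's released Layer Skip code:

```python
# Toy sketch of early-exit, draft-then-verify decoding (the Layer Skip idea).
# Simple arithmetic functions stand in for transformer layers; all names and
# "layers" here are hypothetical.

VOCAB = 50  # hypothetical vocabulary size

def run_layers(token, layers):
    """Run a token id through a stack of 'layers' to get the next token id."""
    h = token
    for layer in layers:
        h = layer(h)
    return h % VOCAB  # stand-in for projecting to a vocabulary id

# Hypothetical full model (3 "layers") and its cheap early-exit prefix.
FULL_STACK = [lambda h: h * 3 + 1, lambda h: h + 7, lambda h: h * 2]
EARLY_STACK = FULL_STACK[:2]

def generate(prompt_token, n_tokens, draft_len=3):
    """Draft tokens with the early layers, verify them with the full stack.

    The output always matches plain full-stack decoding; drafting only
    determines how many verification steps can be batched per iteration.
    """
    out = [prompt_token]
    while len(out) <= n_tokens:
        # 1) Cheap draft pass using only the first layers.
        drafts, t = [], out[-1]
        for _ in range(draft_len):
            t = run_layers(t, EARLY_STACK)
            drafts.append(t)
        # 2) Verify drafts with the full stack; stop at the first mismatch.
        t = out[-1]
        for d in drafts:
            full = run_layers(t, FULL_STACK)
            out.append(full)  # the full model's token is always kept
            if full != d:     # mismatch: remaining drafts are invalid
                break
            t = full
    return out[:n_tokens + 1]
```

Because every emitted token comes from the full stack, the output is identical to ordinary decoding; the early-exit drafts only decide how much verification work each iteration can cover.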


    New open source release — Meta Open Materials 2024: a new open source model and dataset to accelerate inorganic materials discovery.

    • Open Materials 2024 models: Deliver results that put them at the top of the MatBench-Discovery leaderboard. They use the EquiformerV2 architecture and come in three sizes: 31M, 86M and 153M parameters. Get the models on Hugging Face ➡️ https://lnkd.in/eMzmfE6W
    • Open Materials 2024 dataset: Contains over 100 million Density Functional Theory calculations focused on structural and compositional diversity — making it one of the largest open datasets of its kind for training these types of models. Get the dataset on Hugging Face ➡️ https://lnkd.in/eMzmfE6W

    We’re happy to share this work openly with the community and excited for how it could enable further research breakthroughs in AI-accelerated materials discovery.


    We want to make it easier for more people to build with Llama — so today we’re releasing new quantized versions of Llama 3.2 1B & 3B that deliver a 2-4x increase in inference speed and, on average, a 56% reduction in model size and a 41% reduction in memory footprint.

    Details on our new quantized Llama 3.2 on-device models ➡️ https://lnkd.in/g7-Evr8H

    While quantized models have existed in the community before, those approaches often traded away performance and accuracy. To solve this, we performed Quantization-Aware Training with LoRA adapters, as opposed to only post-training quantization. As a result, our new models offer a reduced memory footprint, faster on-device inference, accuracy and portability — while maintaining quality and safety for developers to deploy on resource-constrained devices.

    The new models can be downloaded now from Meta and on Hugging Face — and are ready to deploy on even more mobile CPUs thanks to close work with Arm, MediaTek and Qualcomm.
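The size reduction in releases like this comes largely from storing weights in low-bit integers instead of floats. The round trip below is a generic, textbook symmetric int8 quantization sketch — it illustrates the basic mechanism only, not Meta's actual QAT + LoRA recipe:

```python
# Generic symmetric int8 weight quantization: the basic mechanism behind the
# roughly 4x per-weight size reduction (fp32 -> int8) that on-device models
# rely on. Illustrative only; NOT Meta's Quantization-Aware Training recipe.

def quantize_int8(weights):
    """Map float weights to int8 values plus one per-tensor scale."""
    scale = max(abs(w) for w in weights) / 127 or 1.0  # guard against scale == 0
    q = [max(-127, min(127, round(w / scale))) for w in weights]
    return q, scale

def dequantize_int8(q, scale):
    """Recover approximate float weights from int8 values and the scale."""
    return [v * scale for v in q]

w = [0.5, -1.27, 0.03, 1.0]
q, scale = quantize_int8(w)
w_hat = dequantize_int8(q, scale)
# w_hat is close to w; the remaining gap is the quantization error that
# quantization-aware training lets the model adapt to during training.
```

Post-training quantization applies this mapping after training and simply accepts the rounding error; quantization-aware training instead simulates it during training so the model learns weights that survive the round trip, which is the tradeoff the post describes.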

