

📅 2026-03-18 00:28 · AK · Artificial Intelligence · 3 min read · 3,557 words · Score: 83

Ropedia Xperience-10M: A Large-Scale Egocentric Multimodal Dataset for Embodied AI
==================================================================================

### AK

@_akhaliq

Ropedia Xperience-10M is out on Hugging Face

a large-scale egocentric multimodal dataset of human experience for embodied AI, robotics, world models, and spatial intelligence

It contains 10 million experiences (interactions) and 10,000 hours of synchronized first-person recordings with six video streams, audio, stereo depth, camera pose, hand mocap, full-body mocap, IMU, and hierarchical language annotations

dataset: huggingface.co/datasets/roped…


Mar 17, 2026, 4:28 PM View on X

7 Replies

7 Retweets

36 Likes

3,957 Views

One Sentence Summary

Ropedia Xperience-10M, a new large-scale egocentric multimodal dataset, has been released on Hugging Face, offering 10 million experiences for embodied AI, robotics, world models, and spatial intelligence research.

Summary

This tweet announces the release of Ropedia Xperience-10M, a significant new dataset for AI research. It is described as a large-scale egocentric multimodal dataset designed to capture human experience, comprising 10 million interactions and 10,000 hours of synchronized first-person recordings. The dataset includes diverse data streams such as six video feeds, audio, stereo depth, camera pose, hand and full-body motion capture, IMU data, and hierarchical language annotations, making it highly valuable for developing embodied AI, robotics, world models, and spatial intelligence systems.
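The modality list above can be pictured as a per-experience record. The sketch below is a hypothetical Python model of one such record; the field names, file formats, and the `ExperienceRecord` class itself are assumptions for illustration, not the dataset's actual schema.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class ExperienceRecord:
    """Hypothetical shape of one Xperience-10M experience.

    Fields mirror the modalities named in the announcement:
    six video streams, audio, stereo depth, camera pose,
    hand/full-body mocap, IMU, and language annotations.
    """
    experience_id: str
    video_streams: List[str]   # paths to the six synchronized egocentric video feeds
    audio: str                 # path to the audio track
    stereo_depth: str          # path to stereo depth maps
    camera_pose: str           # per-frame camera pose trajectory
    hand_mocap: str            # hand motion-capture data
    body_mocap: str            # full-body motion-capture data
    imu: str                   # IMU sensor readings
    annotations: List[str] = field(default_factory=list)  # hierarchical language annotations

# Illustrative instance with placeholder file names (all hypothetical).
record = ExperienceRecord(
    experience_id="exp_0000001",
    video_streams=[f"video_{i}.mp4" for i in range(6)],
    audio="audio.wav",
    stereo_depth="depth.npz",
    camera_pose="pose.json",
    hand_mocap="hands.npz",
    body_mocap="body.npz",
    imu="imu.csv",
    annotations=["reach for the mug", "make coffee"],
)
```

In practice one would load such records through the Hugging Face `datasets` library rather than construct them by hand; the dataclass simply makes the synchronized-modality structure concrete.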

AI Score

83

Influence Score 13

Published At: 2026-03-18

Language

English

Tags

Ropedia Xperience-10M

Multimodal Dataset

Egocentric AI

Embodied AI

Robotics


View original → Published: 2026-03-18 00:28:13 · Indexed: 2026-03-18 04:00:42
