Google Unveils AppFunctions to Connect AI Agents and Android Apps

InfoQ | Sergio De Simone | 2026-03-30

One Sentence Summary

Google introduces AppFunctions, a Jetpack API enabling Android apps to expose capabilities to AI agents for on-device task execution, alongside a UI automation fallback.

Summary

Google is shifting Android toward an "agent-first" OS with the introduction of AppFunctions and a UI automation platform. AppFunctions is a Jetpack API that allows developers to expose app capabilities as self-describing modules for AI agents like Gemini to invoke locally. This on-device approach prioritizes privacy and speed by minimizing network latency. For apps without native integration, a new UI automation layer acts as a fallback, allowing agents to perform complex tasks—such as ordering food or coordinating rideshares—by interacting with existing app UIs without additional developer effort. Currently in beta on the Galaxy S26, these features are slated for a wider rollout in Android 17.

Main Points

* 1. Introduction of AppFunctions for agent-app synergy. A new Jetpack API that lets Android apps expose functional building blocks to AI agents, allowing them to perform specific tasks within apps seamlessly and programmatically.
* 2. On-device execution for privacy and performance. Unlike cloud-based solutions, AppFunctions runs locally on the device, which reduces network latency and ensures that sensitive user interactions remain private.
* 3. UI automation as a zero-code fallback mechanism. For apps that have not yet integrated the AppFunctions API, Android provides a platform-level automation layer that allows agents to navigate app UIs to complete complex user requests.
* 4. Focus on user control and transparency. The architecture includes mandatory confirmations for sensitive actions and live visibility of agent activities through notifications to ensure the user remains in command.

Metadata

AI Score

91

Website infoq.com

Published At 2026-03-30

Length 364 words (about 2 min)


In a move to transform Android into an "agent-first" OS, Google has introduced new early-beta features to support a task-centric model in which apps provide functional building blocks that users leverage, through AI agents or assistants, to fulfill their goals.

The foundation for this new model is provided by AppFunctions, a Jetpack API that allows developers to expose self-describing capabilities within their apps for seamless integration with AI agents. By running on-device, these interactions offer improved privacy and faster performance by minimizing network latency.

> Mirroring how backend capabilities are declared via MCP cloud servers, AppFunctions provides an on-device solution for Android apps. Much like WebMCP, it executes these functions locally on the device rather than on a server.
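The article does not show the Jetpack API itself, but the "self-describing capability" idea can be sketched abstractly. The following Python model (all names hypothetical, not the real androidx API) illustrates an app registering a function together with a machine-readable description that an on-device agent can discover and invoke locally:

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class AppFunction:
    """A self-describing capability an app exposes to on-device agents."""
    name: str
    description: str   # natural-language text the agent reasons over
    parameters: dict   # simple name -> type schema
    handler: Callable[..., object] = field(repr=False, default=None)

class AppFunctionRegistry:
    """Device-local registry; invocation involves no network round trip."""
    def __init__(self):
        self._functions: dict[str, AppFunction] = {}

    def register(self, fn: AppFunction) -> None:
        self._functions[fn.name] = fn

    def describe_all(self) -> list[dict]:
        # What an agent would inspect when planning a task.
        return [{"name": f.name, "description": f.description,
                 "parameters": f.parameters} for f in self._functions.values()]

    def invoke(self, name: str, **kwargs) -> object:
        return self._functions[name].handler(**kwargs)

# A gallery app exposing a photo-search capability:
registry = AppFunctionRegistry()
registry.register(AppFunction(
    name="gallery.find_photos",
    description="Find photos in the local gallery matching a keyword.",
    parameters={"query": "string"},
    handler=lambda query: [p for p in ["cat_1.jpg", "dog_1.jpg", "cat_2.jpg"]
                           if query in p],
))

print(registry.invoke("gallery.find_photos", query="cat"))
# → ['cat_1.jpg', 'cat_2.jpg']
```

The key property mirrored here is that the description travels with the function, so the agent can decide when to call it without app-specific integration code.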

For example, a user might ask Gemini Assistant to "Show me pictures of my cat from Samsung Gallery". The assistant would interpret the user's request, retrieve the relevant photos, and present them in its own interface. Those images can then persist in context, allowing the user to reference them in follow-up requests, such as editing, sharing, or taking further action.
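The "results persist in context" part of that flow can be sketched as follows; this is an illustrative model of the behavior described, not Gemini's actual implementation:

```python
class AgentSession:
    """Keeps results of prior tool calls so follow-ups can reference them."""
    def __init__(self):
        self.context: list[dict] = []

    def run(self, request: str, results: list[str]) -> list[str]:
        # Record what the user asked for and what came back.
        self.context.append({"request": request, "results": results})
        return results

    def last_results(self) -> list[str]:
        # A follow-up like "share the second one" resolves against these.
        return self.context[-1]["results"] if self.context else []

session = AgentSession()
session.run("Show me pictures of my cat", ["cat_1.jpg", "cat_2.jpg"])

# Follow-up request referencing the earlier results:
to_share = session.last_results()[1]
print(to_share)  # → cat_2.jpg
```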

As not all apps will support AppFunctions, especially in these early stages, Google has also introduced a UI automation platform in Android that provides a fallback when apps aren't integrated. This automation layer makes it possible for users to "place a complex pizza order for their family members with particular tastes, coordinate a multi-stop rideshare with co-workers, or reorder their last grocery purchase" all through the Gemini Assistant without additional developer effort.

> This is the platform doing the heavy lifting, so developers can get agentic reach with zero code. It’s a low-effort way to extend their reach without a major engineering lift right now.
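The dispatch logic implied here — prefer a native app function, fall back to driving the app's existing UI — can be sketched roughly like this (both parameters are illustrative stand-ins, not real Android APIs):

```python
def handle_task(task: str, native_functions: dict, ui_automation):
    """Prefer a native AppFunction; fall back to driving the app's UI.

    `native_functions` maps task names to callables an app has exposed;
    `ui_automation` stands in for the platform's UI-driving layer.
    """
    if task in native_functions:
        return ("native", native_functions[task]())
    # Zero developer effort: the platform navigates the app's existing UI.
    return ("ui_automation", ui_automation(task))

native = {"reorder_groceries": lambda: "order #1042 placed"}
drive_ui = lambda task: f"completed '{task}' via UI automation"

print(handle_task("reorder_groceries", native, drive_ui))
# → ('native', 'order #1042 placed')
print(handle_task("order_pizza", native, drive_ui))
# → ('ui_automation', "completed 'order_pizza' via UI automation")
```

The design trade-off is visible even in this toy version: the native path is precise and fast, while the fallback path trades robustness for coverage of apps that have done no integration work.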

In its announcement, Google emphasized that privacy and user control are central to the design of AppFunctions. All interactions are built for on-device execution with full user visibility through live view and/or notifications, the ability to manually override the agent's behavior, and mandatory confirmation required for sensitive actions such as purchases.
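The confirmation requirement amounts to a gating policy around sensitive actions. A minimal sketch of that policy (hypothetical names; the real mechanism is a platform-level UI prompt, not a callback):

```python
SENSITIVE_ACTIONS = {"purchase", "send_payment"}

def execute_action(action: str, confirm, run):
    """Gate sensitive actions behind an explicit user confirmation.

    `confirm` represents the user-facing prompt; `run` performs the action.
    """
    if action in SENSITIVE_ACTIONS and not confirm(action):
        return "cancelled by user"
    return run(action)

# User declines a purchase: the agent must stop.
result = execute_action("purchase", confirm=lambda a: False,
                        run=lambda a: f"{a} done")
print(result)  # → cancelled by user

# A non-sensitive action runs without any prompt.
print(execute_action("show_photos", confirm=lambda a: False,
                     run=lambda a: f"{a} done"))  # → show_photos done
```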

As noted, AppFunctions and the UI automation platform are still in early beta, currently available on the Galaxy S26 series, with a wider rollout of these features planned for Android 17.


Tags

Android

AppFunctions

AI Agents

Gemini

Jetpack API
