CVPR 2026

GUIDE: A Benchmark for Understanding and Assisting Users in Open-Ended GUI Tasks

1KAIST  ·  2Carnegie Mellon University  ·  3University of Oxford  ·  4Konkuk University  ·  5Google Inc.  ·  6SkillBench
Figure 1. An example of the GUIDE benchmark, which jointly models three tasks: Behavior State Detection, Intent Prediction, and Help Prediction. Together they interpret what the user is doing, what they aim to achieve, and whether (and with what) they may need assistance during open-ended software tasks.

Abstract

Graphical User Interface (GUI) agents have the potential to assist users in interacting with complex software (e.g., PowerPoint, Photoshop). While prior research has primarily focused on automating user actions through clicks and keystrokes, this paradigm overlooks human intention: users value the ability to explore, iterate, and refine their ideas while maintaining agency. To move beyond automation and toward collaboration, GUI agents must understand what users are doing and why. We introduce GUIDE (GUI User Intent Detection Evaluation), a benchmark that evaluates AI models on their ability to perceive user behavior, infer intent, and provide assistance in open-ended GUI tasks. GUIDE consists of 67.5 hours of screen recordings from 120 novice user demonstrations with think-aloud narrations, across 10 software applications. GUIDE defines three tasks, (i) Behavior State Detection, (ii) Intent Prediction, and (iii) Help Prediction, that test a model's ability to recognize behavior states, reason about goals, and decide when and how to help. Evaluations across eight state-of-the-art multimodal models reveal that all models struggle, achieving at most 44.6% and 55.0% accuracy on behavior state detection and help prediction, respectively. However, providing user context significantly improves performance, raising help prediction accuracy by up to 50.2 percentage points, highlighting the critical role of structured user understanding in effective assistance.

Dataset

🗂️ GUIDE collects screen recordings from 54 novice users across 10 widely used applications spanning five categories: Photo Editing (Photoshop, GIMP), Graphic Design (Figma, Canva), Presentation Design (PowerPoint, Google Slides), Video Editing (Premiere Pro, CapCut), and Data Analysis (Google Sheets, Microsoft Excel). Each session captures both screen recordings and think-aloud narrations that surface users' underlying intentions and cognitive states.

👨‍💻 Unlike instructional videos that capture experts' workflows, our dataset captures the authentic challenges and exploratory behaviors that novices exhibit during task completion, serving a crucial role in building collaborative agents.

67.5h
Screen Recordings
120
Demonstrations
10
Applications
40
Open-Ended Tasks

Example Demonstrations

Below are sample screen recording clips from the GUIDE dataset, illustrating novice users performing open-ended tasks across different applications.

Photoshop Photo Editing
Google Slides Presentation Design
Premiere Pro Video Editing

Benchmark Tasks

GUIDE defines a unified three-stage evaluation framework: Understanding → Reasoning → Assisting, progressing from interpreting user behavior to inferring intentions and ultimately providing helpful assistance.

Task 1

Behavior State Detection

Classify a video segment into one of 9 behavior states (e.g., Exploration and Decision-Making; see our taxonomy below).

Task 2

Intent Prediction

Infer the user's immediate, short-term goal from a video segment. Evaluated as a 4-way Multiple-Choice Question.

Task 3

Help Prediction

(3-1) Help Need Prediction: Determine whether the user needs help or not (binary classification).
(3-2) Help Content Prediction: Determine what kind of help is most appropriate if needed (4-way MCQ).

⚠️ Models are provided with only the video snippets (representative screenshots shown below), without the user narrations.
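All three tasks reduce to classification over video segments, so evaluation comes down to label accuracy. A minimal scoring sketch is below; the function and the label encodings are illustrative assumptions, not the official GUIDE evaluation harness.

```python
# The 9-way label space for Task 1 (Behavior State Detection),
# taken from the GUIDE taxonomy described below.
BEHAVIOR_STATES = [
    "Task Understanding and Preparation", "Ideation and Planning",
    "Exploration and Decision-Making", "Performing Actions",
    "Frustration", "Debugging", "Seeking External Help",
    "Waiting and Monitoring", "Assessment",
]

def accuracy(preds, golds):
    """Fraction of segments where the predicted label matches the gold label.

    Works for all three task formats:
    - Task 1: labels from BEHAVIOR_STATES (9-way)
    - Task 2 and 3-2: MCQ option letters "A".."D" (4-way)
    - Task 3-1: booleans (help needed / not needed)
    """
    assert len(preds) == len(golds) and preds, "non-empty, same-length lists"
    return sum(p == g for p, g in zip(preds, golds)) / len(preds)
```

For example, `accuracy(["A", "B", "C", "D"], ["A", "B", "C", "A"])` yields 0.75.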


Our Taxonomy

We propose a taxonomy of user behavior states in GUI-based software tasks, organized into four main phases: Planning, Execution, Problem-Solving, and Evaluation.

Planning
- Task Understanding and Preparation: Focused on logistics, interpreting tasks, gathering assets, and configuring the environment.
- Ideation and Planning: High-level conceptual work, brainstorming ideas, and outlining structure.

Execution
- Exploration and Decision-Making: Experimenting with options to understand effects and decide which to use.
- Performing Actions: Confidently using software with purposeful actions executed with little hesitation.

Problem-Solving
- Frustration: Encountering blockers, showing signs of being stuck, confused, or annoyed.
- Debugging: Actively investigating problem causes, forming and testing hypotheses.
- Seeking External Help: Recognizing knowledge gaps and turning to external resources for guidance.

Evaluation
- Waiting and Monitoring: Passive state, waiting for system-controlled processes to complete.
- Assessment: Intentionally pausing to review and evaluate work quality and accuracy.

Results

We evaluate eight state-of-the-art multimodal LLMs in a zero-shot setting. All models struggled with Behavior State Detection and Help Prediction (peak accuracies of 44.6% and 55.0%, respectively), but performance improved substantially when structured user context (behavior state and intent) was provided, boosting help prediction accuracy by up to 50.2 percentage points.

Model              | (1) Behavior Detection | (2) Intent Prediction | (3-1) Help Need Detection  | (3-2) Help Content Prediction
                   | base   +Prev.          | base   +Behavior      | base   +Behv. +Behv.+Intent| base   +Behv. +Behv.+Intent
Gemini-2.5-Flash   | 36.91  38.19           | 65.40  66.77          | 53.64  76.33  78.07        | 49.53  53.75  78.59
Gemini-2.5-Pro     | 42.44  43.79           | 67.80  70.16          | 69.82  84.73  82.38        | 52.74  57.03  79.69
GPT-4o-mini        | 17.65  17.07           | 60.76  62.19          | 46.05  78.92  82.26        | 31.32  42.86  79.84
GPT-4o             | 36.32  37.24           | 61.19  62.58          | 49.69  87.79  87.91        | 45.95  48.37  79.78
Claude-4.5-Sonnet  | 44.61  45.63           | 71.39  72.62          | 39.49  58.56  59.43        | 55.00  62.17  82.79
Qwen3-VL-8B        | 37.97  38.13           | 62.70  64.03          | 52.83  70.39  77.36        | 46.06  50.63  80.11
InternVideo2.5-8B  | 21.57  27.02           | 43.79  45.13          | 34.36  35.35  35.25        | 23.67  29.15  73.86
InternVL3-8B       | 22.57  24.90           | 46.11  46.97          | 34.94  43.73  46.82        | 27.03  32.20  72.97

Table: Accuracy (%) across all tasks and conditions; the best value per column is bolded in the original table. Context augmentation (+Behavior, +Intent) consistently improves performance, especially for help-related predictions.
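The percentage-point gains quoted in the text can be read directly off the Help Content Prediction columns. A small sketch of the arithmetic, using three representative models from the table above:

```python
# Help Content Prediction accuracy (%) without context ("base" column)
# and with full context ("+Behv.+Intent" column), from the table above.
base = {
    "GPT-4o-mini": 31.32,
    "InternVideo2.5-8B": 23.67,
    "Claude-4.5-Sonnet": 55.00,
}
with_context = {
    "GPT-4o-mini": 79.84,
    "InternVideo2.5-8B": 73.86,
    "Claude-4.5-Sonnet": 82.79,
}

# Percentage-point gain from adding behavior-state + intent context.
gains = {m: round(with_context[m] - base[m], 2) for m in base}
# InternVideo2.5-8B gains 50.19 points, i.e. the ~50.2pp figure.
```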

Online setting accuracy trends
Figure 4. Accuracy trends in the online setting, where models receive progressively more of the video segment (25%, 50%, 75%, 100%). Models show consistent improvement as more context becomes available.

BibTeX

@inproceedings{yang2026guide,
  title     = {GUIDE: A Benchmark for Understanding and Assisting Users in Open-Ended GUI Tasks},
  author    = {Yang, Saelyne and Yu, Jaesang and Peng, Yi-Hao and Lin, Kevin Qinghong and Cho, Jae Won and Song, Yale and Kim, Juho},
  booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  year      = {2026}
}