AI usage is spread across tools, personal accounts, and ad‑hoc workflows. Some people experiment with agents, others stick to chat tools, and many use AI in ways that never get shared. As a result, decisions are based on assumptions, not reality. A short internal survey using this form is the fastest way to get a reliable picture.
Why a form works
It prompts people to describe a specific situation, which gives you usable data.
From a few dozen responses you can extract the following (a tally sketch follows the list):
use cases: what tasks AI is used for (writing, coding, research, support)
tools: which products are actually in use (often different from what you approved)
frequency: daily vs occasional usage
outcomes: time saved, quality changes
failure points: where AI produces wrong or unusable output
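Once responses come in, a small script can tally those dimensions. A minimal sketch, assuming the form export is a CSV named "ai_survey_responses.csv" with hypothetical columns "use_case", "tool", and "frequency" (adjust the names to your actual export):

```python
# Tally the most common answers per dimension from a hypothetical CSV export.
import csv
from collections import Counter

use_cases, tools, frequency = Counter(), Counter(), Counter()

with open("ai_survey_responses.csv", newline="") as f:
    for row in csv.DictReader(f):
        # Normalize free-text answers so "Coding" and "coding " count together.
        use_cases[row["use_case"].strip().lower()] += 1
        tools[row["tool"].strip().lower()] += 1
        frequency[row["frequency"].strip().lower()] += 1

# Print the three most common answers in each dimension.
for label, counter in [("Use cases", use_cases), ("Tools", tools), ("Frequency", frequency)]:
    print(label, counter.most_common(3))
```

Even a rough tally like this is enough to see which tasks and tools dominate.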
Avoid generic questions like "How do you use AI?"; they produce vague answers. Ask instead for the last concrete task: which tool, what input, what came out.
What you get out of it
After collecting responses, you can do three things immediately (a ranking sketch follows the list):
Prioritize: repeated use cases = highest-impact candidates
Standardize: similar tasks solved in different ways → create shared approaches
De-risk: identify where people rely on incorrect outputs or unsafe practices
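The first two points can come straight out of the same tally. A minimal sketch, reusing the hypothetical CSV layout above: it ranks tasks by how often they appear (prioritize) and flags tasks tackled with several different tools (standardize candidates):

```python
# Rank tasks by response count and flag tasks solved with multiple tools.
import csv
from collections import Counter, defaultdict

task_counts = Counter()
tools_by_task = defaultdict(set)

with open("ai_survey_responses.csv", newline="") as f:
    for row in csv.DictReader(f):
        task = row["use_case"].strip().lower()
        task_counts[task] += 1
        tools_by_task[task].add(row["tool"].strip().lower())

for task, count in task_counts.most_common():
    tools = sorted(tools_by_task[task])
    flag = " <- standardize candidate" if len(tools) > 1 else ""
    print(f"{task}: {count} responses, tools: {tools}{flag}")
```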
This turns scattered experimentation into something you can manage: address the real needs and remove the real obstacles.
How to execute
keep it under 5 minutes
do not tie it to performance evaluation
share results back to the team (anonymized, as sketched below, or in 1:1s)
If people trust the process, they will share real examples.
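For sharing results back anonymized, stripping identifying columns before distribution is usually enough. A minimal sketch, assuming the export contains hypothetical "name" and "email" columns:

```python
# Copy the survey export, dropping identifying columns before sharing.
import csv

DROP = {"name", "email"}

with open("ai_survey_responses.csv", newline="") as src, \
     open("ai_survey_anonymized.csv", "w", newline="") as dst:
    reader = csv.DictReader(src)
    kept = [c for c in reader.fieldnames if c not in DROP]
    writer = csv.DictWriter(dst, fieldnames=kept)
    writer.writeheader()
    for row in reader:
        writer.writerow({c: row[c] for c in kept})
```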
Bottom line
AI usage already exists. The problem is visibility. The survey fixes that by giving you:
actual use cases
real tools in use
clear priorities for improvement
It is the quickest way to replace assumptions with data. 🧠