From Action to Situation: A New Workplace Capability Model for the AI Era

Qiongyu Li

Recently, in a training course, the instructor was introducing the STAR framework (Situation, Task, Action, Result). After she finished, she asked me:

“If you had to rank the four elements of STAR by importance, how would you rank them?”

In true ENTP fashion, I wasn’t going to answer obediently. I tossed the question back: “Then what’s your internal ranking?”

She thought for a moment and said: “Usually we look at A → T → R → S. We mainly care about what you did (Action). As for Situation, it often overlaps with Task…”

I really wanted to argue on the spot, but I held back. What bothered me more was the assumption behind the question:

—as if you could rank STAR’s elements in a “universal order” independent of role and the era you’re in.

In the “pre-AI era,” this idea might barely hold: frontline and execution roles did tend to value Action and Result, while management tended to value Situation and Task.

But over the past few years, with large language models and automation tools everywhere, I’ve felt more and more strongly that:

In the age of AI, no matter the role, companies should shift their focus in evaluating talent—from Action capability to Situation capability.

If I had to reorder STAR’s priorities, I’d put S first. My answer, in one line:

In the age of AI, answers keep getting cheaper; context keeps getting more expensive.

Everything that follows is really just unpacking that sentence.


I. Why “Situation”? #

Let’s start with a plain observation: history repeats itself with striking regularity.

Most people aren’t doing “absolute innovation.” For 70–80% of the problems we face, some variant has already shown up in another company, another industry, or even another era.

What determines whether you can solve it is often not “whether an answer exists,” but:

  • How clearly do you see the current situation?
  • Can you pull the key variables out of the chaos in front of you?
  • Can you judge what this resembles—where it “looks like” an old problem, and where it’s genuinely different?
  • Can you explain those things to other people, or to AI?

Often, if you dissect the Situation finely enough and see it clearly enough, the Task, Action, and Result will naturally grow out of the logic.

In the past, this kind of work—reading the landscape, defining the problem, constructing the context—was by default the job of managers and executives. Frontline employees were expected to simply follow direction and get things done.

But now AI is turning “execution” into an always-available resource that gets cheaper by the day—copywriting, coding, proposals, spreadsheets, all of it can be assisted by AI. As a result, people who can see the landscape, define the problem, and build the Situation are commanding a growing premium.

To make this clear, we need to break down what “Situation capability” actually is.


II. What is Situation capability? #

In one sentence:

Situation capability is the ability to see clearly—and articulate—what kind of situation this really is, inside a messy tangle of reality.

It’s not “adding a couple lines of background.” It’s a full bundle of abilities: perception, abstraction, modeling, plus expression that both humans and AI can understand.

If I break it down, I’d split it into five parts.


1. Situation sensing: pulling key variables out of noise #

Some people describe problems like this: “Users keep complaining, the group chat is noisy, sales say it’s hard to sell, the boss isn’t happy…”

There’s a lot of information, but it’s all a running log of “what happened.”

Others instinctively ask: Among all these phenomena, what actually matters? They do a quick filter:

  • What’s just background noise?
  • What are the key variables that move everything else?
  • Are there obvious connections between these variables?

Take “stalled user growth” as an example. Different people react completely differently:

  • Some immediately think: “Run a campaign, buy more traffic, do a referral loop.”
  • Others first ask: did acquisition stall, or did retention drop? Which part of the funnel is stuck? Have there been structural changes in the product, channels, or competitors recently?

The difference isn’t “having ideas.” It’s the precision of how you read the situation. When collaborating with AI, this decides whether you throw in a vague “Help me come up with a growth plan,” or whether you can provide a clear, executable context brief.


2. Turning experience into leverage: from “stories” to “patterns” #

Most people remember past experience as stories: that year we ran a campaign, did these things, and the numbers ended up like that…

That’s fine—but if a story stays a story, the experience doesn’t transfer well.

A higher-level practice is to ask yourself afterwards:

  • Is there any abstractable pattern here?
  • Which moves only worked in that specific context, and which can be “reused” elsewhere?

Over time, you go from a “story collector” to a “pattern designer.” A small “pattern library” grows in your head:

  • When you encounter a situation, you recognize: “This is probably problem #X I’ve seen before.”
  • In similar cases, how did I break it down? What pitfalls did I hit? I can start by trying that playbook.

This is basically the human version of few-shot learning: instead of dumping scattered cases into AI, you first organize them into transferable “few-shot examples” yourself.
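To make that analogy concrete, here is a minimal sketch, in Python, of what “organizing cases into transferable few-shot examples” can look like when briefing an AI. Everything here (the class, the field names, the sample case) is hypothetical illustration, not a prescribed format:

```python
from dataclasses import dataclass

@dataclass
class CasePattern:
    """One past case, reduced from a story to a reusable pattern."""
    situation: str  # the context that made the move work
    judgment: str   # the key call you made
    outcome: str    # what happened, stated as fact

def build_few_shot_prompt(patterns: list[CasePattern], new_problem: str) -> str:
    """Prepend distilled cases to a fresh problem, few-shot style."""
    shots = "\n\n".join(
        f"Situation: {p.situation}\nJudgment: {p.judgment}\nOutcome: {p.outcome}"
        for p in patterns
    )
    return f"{shots}\n\nNew situation: {new_problem}\nJudgment:"

# Illustrative entry; the content of a real library is your own experience.
library = [
    CasePattern(
        situation="Growth stalled; acquisition flat, week-4 retention falling",
        judgment="Fix onboarding before buying more traffic",
        outcome="Retention recovered; paid channels re-opened a quarter later",
    ),
]
print(build_few_shot_prompt(library, "Signups up 30%, activation rate halved"))
```

The distillation step is the human work; once the cases are in this shape, handing them to AI is trivial.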


3. Systems thinking: seeing the system, not a single point #

The real world rarely works like “press a button and you get a result.” More often, you move one point in a system and trigger cascading effects:

  • Raise a KPI and user experience may crash.
  • Cut costs in one place and, once you do the full accounting, profit may actually fall.
  • Optimize your own team’s metric, and another team ends up taking the blame.

Some people make decisions by looking only at the immediate move: “As long as this action is cost-effective by itself, it’s fine.”

Others instinctively draw a system map in their heads:

  • Which stakeholders and loops does this touch?
  • What positive and negative feedback will it create?
  • If we scale this move up N-fold, will it blow up somewhere else?

They aren’t necessarily smarter, but they clearly have more systems sense. When collaborating with AI, that systems sense often shows up as this:

You can tell AI explicitly which constraints are non-negotiable guardrails, which assumptions only hold within a certain range, and where you have room to experiment.

AI tends to do “local optimization.” Systems thinking is about keeping the whole system from breaking.
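One way to make that explicit is to separate the three kinds of information before anything goes into a prompt. The sketch below is one hypothetical shape for such a brief; the structure and field names are my own:

```python
from dataclasses import dataclass, field

@dataclass
class SystemBrief:
    """Hypothetical brief separating what AI may and may not touch."""
    guardrails: list[str] = field(default_factory=list)        # non-negotiable
    assumptions: dict[str, str] = field(default_factory=dict)  # hold only within a range
    experiment_room: list[str] = field(default_factory=list)   # free to optimize

    def render(self) -> str:
        lines = ["Hard constraints (never violate):"]
        lines += [f"- {g}" for g in self.guardrails]
        lines.append("Assumptions (valid only within the stated range):")
        lines += [f"- {k}: {v}" for k, v in self.assumptions.items()]
        lines.append("Free to experiment with:")
        lines += [f"- {e}" for e in self.experiment_room]
        return "\n".join(lines)

brief = SystemBrief(
    guardrails=["No price changes for existing customers"],
    assumptions={"support load": "scales linearly only up to 2x current volume"},
    experiment_room=["onboarding email cadence", "trial length"],
)
print(brief.render())
```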


4. Opportunity and risk awareness: simulate multiple scenarios, not just the next step #

Situation capability shows up a lot in how you think about the future.

Faced with a decision, different levels of thinking roughly look like this:

  1. Level 1: “Do it or not?”
  2. Level 2, a bit more cautious: “What if it doesn’t go smoothly?”
  3. Level 3, one step up: proactively simulate multiple scenarios:
    • If everything goes well, what does it become?
    • With an average outcome, where do we get stuck?
    • In the worst case, what do we lose—and can we afford it?
    • To prepare for the worst case, can we put down a few “safety buffers” now?

This kind of multi-scenario thinking isn’t some lofty strategic tool. It’s mostly a habit: before hitting “execute,” you run a multiple-ending movie in your head.

With this habit, you won’t be satisfied with “one complete plan” from AI. You’ll naturally ask:

  • “Give plans and metrics for optimistic / neutral / pessimistic scenarios.”
  • “If metric A drops below what threshold, trigger the backup plan.”

AI can help you compute “how to proceed in each scenario.” You’re responsible for choosing the scenario axes and deciding where the turning points are.
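As a minimal sketch of that second request: the scenario axis and the switch-over thresholds are the human’s decisions, encoded as data; the metric name and numbers below are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class Scenario:
    name: str         # optimistic / neutral / pessimistic
    plan: str         # what we do in this branch
    threshold: float  # stay in this branch while the metric clears this

# Hypothetical scenario axis: week-4 retention, checked in descending order.
SCENARIOS = [
    Scenario("optimistic", "scale paid channels", 0.40),
    Scenario("neutral", "hold spend, fix onboarding", 0.25),
    Scenario("pessimistic", "pause spend, run churn interviews", 0.0),
]

def pick_plan(week4_retention: float) -> Scenario:
    """Return the first scenario whose threshold the metric still clears."""
    for s in SCENARIOS:
        if week4_retention >= s.threshold:
            return s
    return SCENARIOS[-1]  # defensive fallback

print(pick_plan(0.31).plan)  # -> "hold spend, fix onboarding"
```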


5. Structured communication: putting the situation in your head into words #

Many people think quite well in their heads—but when they write a doc or start talking, it becomes a big blob:

  • Background and viewpoint are mixed together.
  • Constraints aren’t stated clearly.
  • Goals are vague.
  • After listening, others can only guess what you want.

Structured expression is Situation capability made visible. A simple, reliable template is:

  1. Context first: what happened; what facts we have.
  2. Then the core tension: what’s truly hard right now.
  3. Then constraints: hard boundaries on time / budget / policy / staffing.
  4. Finally the goal: which metric you most want to improve this time.

The clearer you can explain it, the faster you align with teammates—and the less likely AI is to go off-track when you hand it the task.

So-called Context Engineering, on the ground, is really just one sentence: explain it a bit more clearly, and you’ll avoid a lot of pitfalls.
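Here is the four-part template as a small, reusable structure, a sketch with hypothetical field names and sample content. Once the brief is data rather than a blob, the same text can go to a teammate or into a prompt:

```python
from dataclasses import dataclass

@dataclass
class ContextBrief:
    """The four-part template as a structure you can reuse and review."""
    context: str            # what happened; the facts we have
    core_tension: str       # what's truly hard right now
    constraints: list[str]  # hard boundaries: time / budget / policy / staffing
    goal: str               # the one metric we most want to move

    def render(self) -> str:
        bounds = "\n".join(f"- {c}" for c in self.constraints)
        return (
            f"Context: {self.context}\n"
            f"Core tension: {self.core_tension}\n"
            f"Constraints:\n{bounds}\n"
            f"Goal: {self.goal}"
        )

# Hypothetical example
brief = ContextBrief(
    context="Signups flat for 8 weeks; activation down from 40% to 22%",
    core_tension="Unclear whether onboarding broke or the traffic mix changed",
    constraints=["No engineering time until next sprint",
                 "Budget capped at current spend"],
    goal="Recover activation to 35% within one quarter",
)
print(brief.render())
```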


Summary: a “formula” for Situation capability #

If I compress all of the above into one line, I’d write:

Situation = Situation sensing × Pattern extraction × Systems thinking × Multi-scenario simulation × Structured communication

The first four determine whether you can see the situation clearly; the last determines whether you can explain it clearly—to people, and to AI.


III. Why shift from Action to Situation? #

Above I described what Situation capability looks like. Now let’s ask it from another angle:

What changed in the age of AI that makes this more important than before?

I’ve felt the shift on three levels: technology, the competitive environment, and organizational roles.


1. Technology: AI is a super-executor; humans need to learn to set the questions #

Let’s admit something upfront: when a task has clear boundaries and a clear goal, AI’s execution capability is indeed far beyond that of most people. Give it clean data and a well-defined task, and it can produce something quite good in a short time.

That’s not the issue. The issue is: most real-world problems are fuzzy.

  • Information is incomplete.
  • Stakeholders are complex.
  • Goals are ambiguous.
  • Constraints are vague.

If you throw that chaos into AI as-is, you’re really just swapping in a “faster tool for blind busywork.”

A more reasonable division of labor is:

  • Humans first slice messy reality into sub-tasks AI can execute.
  • They translate “meaning” into “data structures.”
  • They make implicit assumptions, roles, and constraints explicit in the problem statement.

AI then does the heavy lifting on that basis. From this angle, the essence of human–AI collaboration is:

Humans write the questions; AI solves them. How well you pose the question directly determines the ceiling of what AI can do.
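A sketch of the human half of that division of labor: decompose the fuzzy problem into bounded sub-tasks, each with explicit inputs, output format, and written-down assumptions. The task contents below are illustrative, not a recipe:

```python
from dataclasses import dataclass

@dataclass
class SubTask:
    """A bounded unit of work an AI executor can actually take on."""
    goal: str
    inputs: list[str]       # the data the task needs, made explicit
    output_format: str      # so the result is checkable
    assumptions: list[str]  # implicit context, written down

# The human's job: turn "growth stalled, boss unhappy" into pieces like these.
subtasks = [
    SubTask(
        goal="Segment last quarter's signups by channel and cohort",
        inputs=["signup events export", "channel mapping table"],
        output_format="table: channel x cohort x week-4 retention",
        assumptions=["'active' means at least 3 sessions in week 4"],
    ),
    SubTask(
        goal="Draft churn-interview questions for the weakest cohort",
        inputs=["the segmentation table from the previous task"],
        output_format="numbered question list with a rationale per question",
        assumptions=["interviews are 15 minutes, remote"],
    ),
]
for t in subtasks:
    print(f"- {t.goal} -> {t.output_format}")
```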


2. Competition: execution is increasingly commoditized; insight is increasingly scarce #

The external environment is quietly changing.

On one hand, there are more and more tools—SaaS products, AI suites, automation kits. Many things that once required experience and craft can now be produced with a few clicks, or a vague prompt: PRD templates, campaign plans, email drafts, reporting slides…

When everyone can easily deliver “80-point Action,” the competition naturally shifts:

  • Who sees “what the real problem is” first.
  • Who identifies structural opportunities—and structural risks—earlier.

On the other hand, the pace of change keeps accelerating. Often, before your systems, processes, and rules finish updating, the outside world has already changed several times.

Pure “execution” in this context is like flooring the gas pedal: if the direction is wrong, you just get to the wrong place faster.

The more volatile the environment, the more you need someone to step up a level and judge:

  • What situation are we actually in?
  • Placed into the whole system, are these actions patching things up—or digging out the foundation?

That’s where Situation capability starts commanding a real premium.


3. Organizational roles: it’s no longer “generals read maps, soldiers pull triggers” #

Traditional organizations had a clear division of labor: leadership read the map and set direction; frontline teams executed.

The default rule was: do your part well, and you’re competent.

But after AI enters the workplace, that structure is quietly being rewritten:

  • Many execution tasks that used to require lots of repetitive labor can now be partially taken over by AI.
  • If frontline roles remain “people who wait for instructions,” they’ll be marginalized faster and faster.

One thing happening in organizations is that the responsibility for “reading the map” is moving downward:

  • Frontline teams are the “sensors” of real-world context.
  • Middle management abstracts and translates those fragments.
  • Leadership integrates the abstractions into an overall judgment.

In this structure, a wrong context judgment—once executed at AI speed—can become an amplified mistake: what used to be a small error in a corner can turn into a system-wide bias.

So the capability that used to sit backstage—the quality of situation judgment—now has enormous leverage.


IV. From a company’s perspective: how to hire and develop in the age of AI? #

After “why,” the next question is naturally:

“So what should companies actually do?”

I’d roughly split it into two parts: how to select people, and how to gradually shift existing teams toward Situation-type talent.


1. Selection: from “doers” to “problem definers” #

If I were designing an interview process today, I’d deliberately include a few ambiguous situations.

For example, give a candidate a very common yet very fuzzy problem:

“User growth has stalled recently. How would you solve it?”

Then don’t rush to hear a solution. Watch their first reaction.

Some people will answer without hesitation: “Try a campaign,” “buy traffic,” “do a referral loop,” “we can build a membership program.” You’ll notice they’ve already preset the problem type, and they hardly ask any questions about the situation itself.

Others will pause and start unpacking it: what exactly does “stalled” mean? Is it acquisition that stopped, or retention that dropped? Which part of the funnel is stuck? What did competitors do recently? Have there been structural changes in channel, product, or pricing?

These two types create completely different value for an organization: the former is often tactically diligent; the latter has stronger problem awareness.

You can extend the design further:

When you ask about past project experience, don’t only listen for “what you did (Action).” Deliberately ask what they abstracted that could be reused: if this were a different industry, would your thinking still apply? Which parts were only valid in that specific context? Which parts were more general?

Give a complex business pain point and ask the candidate to list what variables and constraints they need to clarify first, instead of rushing to propose solutions.

Include a simple “AI collaboration demo”: give a scenario and ask the candidate to write a brief to AI on the spot. See how they describe background, goals, and constraints—and after AI outputs a first draft, how they refine, supplement, and correct it.

All of these share one common idea: move attention from “what impressive actions you’ve done” to “how you see the problem, construct the situation, and direct AI to do the work.”


2. Development: from “teaching a trick” to “rebuilding thinking scaffolds” #

After you’ve selected people, there’s an even harder question:

“How do we gradually shift the existing team toward Situation-type talent?”

I’d start with three small things.

(1) Treat framework training as mental gymnastics, not as chanting acronyms

Many companies run training on frameworks like PEST, SWOT, or Porter’s Five Forces, but in practice people stop at “memorizing the acronym.”

I prefer using real business problems as drills: force everyone to reorganize the situation with these frameworks and see whether they arrive at different insights than they had at the time.

(2) Rebalance how time is spent in retrospectives

Many teams run retros using a “background → process → result → lessons” script, but the time is often spent on “what we did.” You can do a small time restructure: require at least half the time to discuss how the internal and external situation changed, what judgments led to the choices, which variables were missed, and which assumptions were later overturned by facts.

Shift the focus from “recounting actions” to “re-modeling the situation.”

(3) Add a bit of structured rigor to everyday communication

You can very simply ask that, for important issues, people write at least four lines:

  1. What’s happening now (facts, not emotions).
  2. Who is impacted (users / colleagues / leaders / partners—what types).
  3. What are the hard constraints (time, budget, policy, resources…).
  4. What we’re not sure about (open questions).

Over time, this naturally becomes part of everyone’s internal thinking scaffolding.

Knowledge management can upgrade as well. Instead of accumulating a pile of docs that focus mostly on “results” and “solutions,” deliberately store situation and solution together:

  • What stage was the business in?
  • What was the external environment like?
  • What was the internal resource situation?
  • What were the key judgments?
  • Which facts later validated or overturned them?

In the long run, you get a “situation–solution” mapping library—not just a “solution collection.”
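As a sketch of what a single entry might hold (the schema is hypothetical), note that the solution is stored last, after the situation that made it work:

```python
from dataclasses import dataclass, asdict
import json

@dataclass
class CaseRecord:
    """One entry in a situation-solution mapping library."""
    business_stage: str       # what stage the business was in
    external_env: str         # market, competitors, regulation at the time
    internal_resources: str   # team, budget, tech we actually had
    key_judgments: list[str]  # the calls made, not just the actions
    later_evidence: str       # facts that validated or overturned them
    solution: str             # deliberately last: context travels first

record = CaseRecord(
    business_stage="post-product-market-fit, pre-scale",
    external_env="two funded competitors copying the core feature",
    internal_resources="six engineers, no dedicated growth team",
    key_judgments=["This is a retention problem, not an acquisition problem"],
    later_evidence="Week-4 retention recovered after the onboarding rework",
    solution="Rebuilt onboarding before spending on channels",
)
print(json.dumps(asdict(record), indent=2))  # store as searchable JSON
```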

Finally, encourage people to use AI as a mirror. Not “how to use AI to write copy,” but how to use AI to find blind spots in their understanding of the situation: write your view of the situation and the goal, give it to AI, then ask: “What information do you think is missing?” “What doesn’t make sense to you?”

If the questions AI raises are valuable, then answering them is an upgrade to the Situation itself.


V. From an individual’s perspective: how can ordinary professionals practice Situation capability? #

From the company perspective back to the personal. If you’re in a typical role, it’s natural to ask:

“What can I do to improve my Situation capability?”

My own view is: start with three concrete practice bundles.


Practice 1: learn to write the Situation clearly (a four-line template) #

Next time you run into a problem, even if you already have a solution in mind, resist the itch to start executing. Give yourself five minutes and write down the situation. You can fill out this four-line template directly:

  1. What’s happening right now? Use facts, not emotions or judgments.
  2. Who will be affected? Users / colleagues / your manager / partners—what types, roughly how wide.
  3. What are the hard constraints? Time, budget, staffing, policy, technical boundaries…
  4. What am I not sure about? Write it as questions—for example: “We actually don’t know whether A or B is the main cause.”

After you’ve written those four lines, then go ahead: feed it to AI, discuss it with someone, or start designing solutions.

You’ll find that this one move alone filters out a lot of “ineffective busywork.”


Practice 2: use AI as a “thinking mirror,” not an “answer machine” #

Many people’s default way of using AI is to start with: “Help me write X,” or “Give me a few options.”

That works, but it’s a bit wasteful. Try a different approach: treat AI as a slightly dumb colleague who never gets annoyed.

A simple loop looks like this:

  1. First, do your best to write out the background, goals, conflicts, and constraints.
  2. Then ask two simple questions: What don’t you understand? If you were me, what information would you want to add?

AI’s answers won’t all be important—but the questions it raises are a direct check of your understanding of the situation. You’ll see clearly: you never stated an important constraint; the goal is vague; two assumptions contradict each other.

As you patch those holes again and again, you’ll sharpen your expression along the way—and your Situation capability will grow.
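A minimal sketch of that loop as a reusable prompt builder. The two questions are taken from the text; the function and the ask_ai stub are hypothetical stand-ins for whichever model client you actually use:

```python
def mirror_prompt(background: str, goal: str, conflict: str, constraints: str) -> str:
    """Turn your own situation write-up into a 'find my blind spots' request."""
    return (
        f"Background: {background}\n"
        f"Goal: {goal}\n"
        f"Conflict: {conflict}\n"
        f"Constraints: {constraints}\n\n"
        "Two questions for you:\n"
        "1. What don't you understand?\n"
        "2. If you were me, what information would you want to add?"
    )

def ask_ai(prompt: str) -> str:
    """Stand-in for whatever model client you use; not implemented here."""
    raise NotImplementedError

prompt = mirror_prompt(
    background="Activation halved after last week's signup-flow change",
    goal="Decide whether to roll back or patch forward by Friday",
    conflict="Rolling back also removes the fraud fix shipped in the same release",
    constraints="One engineer available; no extra release window this week",
)
print(prompt)  # send with ask_ai(prompt), patch the gaps it flags, repeat
```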


Practice 3: deliberately practice finding isomorphisms across contexts #

This one is slightly more advanced—and also the most interesting. Simply put: try to find similar structures between problems across different domains.

For example:

  • Reading a book about biological evolution, ask yourself: do “survival of the fittest” and “niche competition” resemble anything in my industry?
  • Reading a book about urban planning, notice whether flows of traffic, information, and logistics resemble the flows of information and decisions inside my company.

Conversely, when you face a thorny problem, ask yourself:

  • “Where have I seen a similar situation—in a book, in another industry, in a case?”
  • “How did they handle it then?”
  • “What can transfer, and what is completely inapplicable?”

Once you start noticing these cross-context isomorphisms, your understanding of reality won’t be trapped in a small circle. You’ll increasingly get this feeling: oh, this is that kind of situation again—instead of fumbling from scratch every time.

This capability, for now, is still hard to fully teach to AI—and hard to fully replace with AI.


Closing: answers keep getting cheaper; context keeps getting more expensive #

To wrap everything up in one sentence:

In the age of AI, answers have become extremely cheap—you open a chat window and get a plan that “looks decent.” What’s truly expensive is the person who can define the problem clearly, who’s willing to spend the time to see the situation clearly and articulate it well.

For companies, the question isn’t only “Should we use AI to improve efficiency?” It’s also: are we consciously looking for and developing people who are good at constructing Situations?

For individuals, the question isn’t only “Will AI replace me?” It’s: am I deliberately practicing the ability to see and express the situation?

Next time you’re about to throw a “help me write something” prompt at AI, pause. Give yourself five minutes to write the Situation first.

Maybe those five minutes are already the start of something that will become more and more valuable over the next few years.


Postscript #

The original version was written in Shanghai in March 2025. The first draft was a bit too “ENTP” in voice and packed with jargon (isomorphism, mental models, and the like). As concepts like Context Engineering became increasingly popular, I revised it in December 2025: integrating the Context Engineering angle and smoothing the prose to be more conversational and less jargon-heavy.