The Strategy Department

Thought Leadership

What AI Can’t Do (and Why That Matters for Strategy Work) 

By Alicia Darrow and Cass Moore

We use AI. We use it for research, for first drafts, for synthesizing large amounts of information quickly. We are not here to argue against it. 

What we want to be honest about is what it cannot do, because that gap is where the real work lives, and it is not closing as fast as the hype around AI suggests. 

 

What using AI well looks like 

We fed a lead’s corporate website, annual reports, and growth strategies into a chat, then asked where the intersections were with what we offer. What would have taken hours of manual research came back in minutes. That is AI doing what it is designed to do well: pulling from a large volume of information and synthesizing patterns. 

But the reason that output was useful is that we knew enough about our own capabilities, and about what a credible intersection looks like, to evaluate what came back. We were not copy-pasting and sending. We were using the output as a starting point for our own judgment. 

That is the distinction. AI is a capable research assistant and a strong first-draft generator. It is not a strategist. Knowing how to ask the right questions, push back on the outputs, verify sources, and evaluate what is useful requires the very expertise the AI is drawing on. The tool is only as good as the person directing it. 

 

What AI consistently cannot do 

It cannot read the room: it cannot catch the micro-expressions and tone shifts in a workshop that signal a conversation needs to change direction. It cannot know that a client prefers to meet over coffee and will give you more in an informal setting than in a formal presentation. It cannot pick up on the political context in an organization, the history between two people in the room, or the thing that is not being said but is shaping every response. 

It cannot decide what to cut from a proposal when you are three pages over the limit and everything feels essential. It cannot make the judgment call about which win theme will resonate with a specific evaluator based on what is known about their priorities. It cannot run an interview, facilitate a workshop or strategy session, or build the trust that turns a client relationship into a long-term partnership. 

These are not soft things. They are the core of what makes strategy work. 

 

Where we recommend leaning in 

  • Research and synthesis, where the volume of information is large and the question is clear 
  • First drafts, where speed matters and the human review is thorough  
  • Compliance checking on proposals, where the task is systematic and the output can be verified 
  • Identifying gaps in a document against a checklist 
  • Tasks that are structured, specific, and where the output will be evaluated by someone who knows what good looks like 

Where we counsel caution

  • Any output that goes to a client without a thorough human review 
  • Any strategic recommendation generated without the context that only comes from being in the engagement 
  • Any use of AI that skips the step of asking: do I know enough about this topic to recognize when the output is wrong? 

That last question is the most important one. AI produces confident, well-formatted, plausible-sounding content on topics it does not fully understand. The only protection against that is expertise. You cannot outsource the expertise to the tool. 

We use AI as an assistant. It saves time. It makes our work faster. It is not a replacement for the judgment that makes the work worth doing.