Stop outsourcing your thinking to AI
AI should not be making decisions for you (unless you want it to take your job)
👋 Hi, it’s Greg and Taylor. Welcome to our newsletter on everything you wish your CEO told you about how to get ahead.
At least once a week, someone tells me: “I asked GPT, and it said I should do this.”
I’m a huge advocate for AI as a thought partner, but this is not the way to do it. ChatGPT is not Waymo – you shouldn’t outsource your thinking to it. And you certainly shouldn’t say in a meeting that ChatGPT “told you to do something.” It’s the fastest way to get laid off.
There’s a big difference between using AI as a thought partner and asking it for the answer. AI thought partnership isn’t a shortcut – it’s a process to stress-test your argument, consider risks, and improve your recommendation (which is yours, not AI’s).
Today, I’m talking about how to become an expert in using AI this way. I’ll go deep on the tactics that work for me, the decisions you should use it for, and how to build the habit.
Greg
The decisions you should talk through with AI
When I talk about using AI as a thought partner, I’m talking about debating medium-to-high stakes decisions with real consequences. These aren't decisions like, “Where should we hold our team lunch?” or “What should I call this blog post?”
They’re decisions where:
There’s meaningful upside or risk in the decision. For example: It doesn’t really matter whether you schedule a meeting for Wednesday or Friday – it does matter whether you prioritize shareability features or completion features in your product.
You have a sense of the inputs, constraints, and options. If you ask AI, “What should we do with our marketing budget this year?” it will give you a long list of anodyne ideas. If you tell AI, “We have $15,000 to spend in March on marketing activities – here’s the data on what has performed best in the past. I’d like to think about whether we should double down on event marketing and try to optimize it, or allocate budget to influencer marketing, which would be a net new channel for us,” it will be much more helpful.
You have some background data (or can summarize it quickly). AI does best with context. Ideally you can upload some background documents or summarize the key inputs that are influencing your thinking.
Here are a few decisions I discussed with AI in the last week.
Should we do a paid model to drive revenue, or a free model to drive scale?
Should we prioritize features for a small group of high-value customers or the broader user base?
Should we invest in a net new channel (podcasts) or double down where we're already successful (events)?
What the thought partnership conversation looks like
When you ask AI to discuss a decision, you are NOT looking for it to give you the answer. You are looking for a conversation that sharpens your thinking and deepens your conviction. The final recommendation should still be yours, because you’re the one it affects (ChatGPT doesn’t work at your company … yet).
Here’s what a good thought partnership conversation looks like:
Establish the decision you’re trying to make
Share data, inputs, team opinions, etc.
Ask AI to make an argument for each option from each stakeholder’s perspective (more on this below)
Push back on AI’s arguments and add additional nuance/context it may be missing
Introduce new constraints or scenarios to stress-test the leading options (“What if we lost our biggest customer because of this decision?”)
Ask AI to identify the strongest arguments against your preferred option
Summarize what you’ve learned and make a decision (and share it with AI)
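If you run this loop through an LLM API instead of the chat interface, the steps above map naturally onto a growing message list. Here’s a minimal sketch of that structure – the system prompt wording and helper functions are my own illustration, not something prescribed by any particular provider:

```python
# Sketch: the thought-partnership loop as a running message list.
# The system prompt and helper names here are illustrative assumptions.

def start_decision_thread(decision, context):
    """Steps 1-2: establish the decision and share data/inputs."""
    return [
        {"role": "system", "content": (
            "You are a thought partner, not a decision-maker. "
            "Challenge my reasoning; do not hand me a final answer."
        )},
        {"role": "user", "content": f"Decision: {decision}\n\nContext: {context}"},
    ]

def add_turn(messages, user_prompt):
    """Steps 3-7: each push-back, stress test, or summary is one more turn."""
    messages.append({"role": "user", "content": user_prompt})
    return messages

# Example: the paid-vs-free decision from earlier in the post.
thread = start_decision_thread(
    decision="Paid model to drive revenue vs. free model to drive scale",
    context="(summarized pricing, churn, and acquisition data goes here)",
)
add_turn(thread, "Argue each option from the CFO's and head of growth's perspectives.")
add_turn(thread, "What if we lost our biggest customer because of this decision?")
```

The point of keeping everything in one thread is that every stress test in steps 5 and 6 builds on the context you shared in steps 1 and 2, rather than starting from a blank slate.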
How to get AI to stop agreeing with you
AI’s biggest flaw as a thought partner is its lack of conviction. When I first started experimenting with AI for decision-making, I’d lay out a strategic option and ask for feedback. The AI would say something supportive like, “That’s a sound approach that balances short-term gains with long-term positioning.”
Then I’d say, “Actually, what if we did the exact opposite?” And it would respond, “Yes, excellent insight! Taking the opposite approach addresses several potential weaknesses in the original plan.”
I’ve experimented with prompts for getting around this, and here’s my best advice:
1. Create opposing roles, not just opinions. Don't just say “play devil's advocate.” Set up multiple AI personas with conflicting objectives: “For this decision, I want you to simultaneously represent three perspectives: 1) Our head of growth who wants to maximize user acquisition at all costs, 2) Our CFO who’s paranoid about runway, and 3) Our lead engineer who’s concerned about technical debt. Each should make their strongest case without compromising.”
2. Force ranking with explicit tradeoffs. Ask it to force-rank options by making it explicitly choose what to sacrifice: “If we pursue a freemium model, what percentage of our potential market reach would we be giving up compared to a fully free model? And what specific revenue milestones would make that tradeoff worthwhile?” The specificity forces meaningful pushback.
3. Make it defend unpopular positions first. Start your conversation by asking AI to defend the position you're least inclined to believe in, but might secretly be true: “Before we discuss my preferred approach, give me the strongest possible case for why we should actually kill this feature entirely, including evidence I might be overlooking.”
4. Use the ‘conviction metric’ trick. This is my favorite technique. Tell it: “On a scale of 0-10, where 10 is absolute conviction backed by overwhelming evidence, rate your confidence in each recommendation you give me. Don’t hedge – if something is a 3, say it's a 3, and explain why. If it’s a 9, defend that high rating.” This prevents wishy-washy answers.
5. Make it simulate the downsides. Instead of just asking for cons, make it vividly role-play the negative outcomes: “Show me exactly how our quarterly board meeting would go if this initiative fails. Write the specific criticisms our lead investor would make, the metrics they’d point to, and the uncomfortable questions they'd ask me as CEO.”
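Several of these techniques combine well in a single reusable prompt – the opposing personas (#1), the steelman against your preferred option (#3), and the conviction rating (#4). Here’s a rough sketch of a prompt builder; the persona names and exact wording are mine, not a canonical recipe:

```python
def build_challenge_prompt(decision, personas, preferred_option):
    """Compose one prompt combining the anti-agreeableness tactics:
    opposing personas, a steelman against my preferred option, and a
    0-10 conviction rating on every recommendation."""
    persona_lines = "\n".join(
        f"{i}) {name}: {agenda}" for i, (name, agenda) in enumerate(personas, 1)
    )
    return (
        f"Decision: {decision}\n\n"
        "Simultaneously represent these conflicting perspectives. Each should "
        "make its strongest case without compromising:\n"
        f"{persona_lines}\n\n"
        "Then give me the strongest possible case AGAINST my preferred option "
        f"({preferred_option}), including evidence I might be overlooking.\n\n"
        "Rate every recommendation 0-10 for conviction. Don't hedge: if "
        "something is a 3, say it's a 3 and explain why; if it's a 9, defend it."
    )

prompt = build_challenge_prompt(
    decision="Freemium launch vs. fully paid launch",
    personas=[
        ("Head of growth", "wants to maximize user acquisition at all costs"),
        ("CFO", "paranoid about runway"),
        ("Lead engineer", "concerned about technical debt"),
    ],
    preferred_option="freemium",
)
```

Templating the prompt this way means you only have to get the wording right once, then swap in the decision, stakeholders, and your current lean each time.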
My advice
Start with five decisions over the next month. Block your calendar and have a 30-minute conversation to pressure-test each decision with AI. Then reflect on the conversation: Did it change your mind? Strengthen your conviction? Help you mitigate a risk you hadn’t seen?
In my experience, it takes some practice to build this habit and perfect the back-and-forth with AI. But once you do, it will change how you work. We’re all so strapped for time that it’s really rare to have a great human thought partner – so a lot of us are making stressful decisions all by ourselves.
We’re all learning how to use AI as a thought partner in a way that amplifies rather than replaces our value. So my advice will continue to evolve, and I’d be curious what you’re learning too.
Have a great week,
Greg
In case it helps anyone: You can instruct ChatGPT to be less agreeable in the "Customize" settings so that you don't have to explicitly state this every time you want objective feedback on your inputs. For example, my preset instruction is "Be objective with your answers and not unnecessarily agreeable."
Amen. We cannot become meat puppets for silicon strings! I wrote about the danger of humans becoming inverse mechanical Turks here: https://www.whitenoise.email/p/the-inverse-mechanical-turk-meat