AI, Critical Thinking, and the Skill of Effective Prompting: A Personal Lesson
There’s a lot of debate about AI these days. Critics point to concerns about stolen materials used in training datasets, the environmental impact of massive data centers, and fears that AI will replace human workers. I understand these concerns (they’re valid and deserve serious consideration). Nevertheless, as someone with a communication disability who has benefited from decades of technological advancement, I can’t ignore that AI has genuinely increased my capacity to be productive.
But here’s what I’ve learned through recent experience: you still have to be skillful to use AI effectively.
Recently, I found myself trying to understand NDIS pricing arrangements for therapy services. As a self-managed participant, I’d heard that I could only claim up to $232.99 per hour, while the therapist I was considering charged $280. I’d also heard that this limitation didn’t apply if my plan was plan-managed or agency-managed.
I asked an AI assistant to clarify this situation. The AI initially confirmed what I’d heard, explaining that registered providers must stick to NDIS price limits, while non-registered providers could charge whatever they wanted. So far, so good.
Then the AI made what seemed like a logical leap: since self-managed participants can use non-registered providers who can charge any amount, I could claim the full $280 from my NDIS funding with no out-of-pocket gap.
But something didn’t sit right with me. The explanation seemed to contradict what the AI had presented earlier. I kept probing. I asked for simplification. I requested clarification. Eventually, I asked the AI to show me the actual text it was using to support its claims.
That’s when things got interesting.
The AI had been citing a webpage that said self-managed participants were “not tied to the NDIS Price Guide and can pay any price.” From this statement, the AI had concluded that I could claim the full amount back from my NDIS funding. But when I pressed for the exact wording, the AI had to acknowledge something crucial: the source never explicitly stated that NDIS would reimburse amounts above the price guide.
The webpage simply said self-managed participants could pay any price (not that they could claim any price). The AI had made an interpretive leap that wasn’t supported by the actual text.
A mate said to me at the pub the other Friday night: “The risk of AI isn’t so much that AI will replace jobs, but it will expose people who don’t do their job well.” He’s right. Smart people who use AI to augment their work will be the ones who benefit.
But here’s the key: augmenting your work with AI requires skill. You have to:
- Know how to build the right prompts to extract useful results
- Understand how to interrogate AI outputs rather than accepting them blindly
- Recognize when something doesn’t add up and keep pushing for clarification
- Ask for primary sources and verify interpretations
AI isn’t evil, but blind trust in it is dangerous. The issue isn’t whether AI should be used at all. The issue is that you still need intelligence to create with AI: the skills to understand it, work with it, and, most importantly, verify the truth.
As someone who has relied on communication technology my entire adult life (from electronic typewriters in the 1970s to AAC devices today), I’ve witnessed firsthand how technology can be genuinely life-changing. AI is no different. It has the potential to dramatically increase productivity and capability, particularly for people with disabilities like myself.
But technology is only as good as the person using it. Don’t just trust blindly.
In my case, I had to learn to argue with the AI, to demand evidence, to question interpretations, and to recognize when conclusions were based on assumptions rather than facts. These are the same critical thinking skills we’ve always needed (whether we’re evaluating a research paper, a news article, or advice from a colleague).
AI has definitely increased my productivity. It helps me research faster, draft more efficiently, and explore ideas more thoroughly than ever before. But it hasn’t replaced my need for critical thinking, subject matter expertise, or good old-fashioned scepticism when something doesn’t quite add up. My first computer teacher, Mr Andrews, taught me the golden rule of computing: garbage in, garbage out. That principle applies just as much to AI as it did decades ago. If you don’t ask the right questions or critically evaluate the outputs, you’ll get garbage results.
The future belongs not to those who resist AI, nor to those who blindly accept its outputs. The future belongs to those who learn to work skillfully with AI (understanding its strengths, recognizing its limitations, and knowing when to push back and demand better evidence).
That’s a skill worth developing, whether you’re navigating NDIS pricing arrangements or any other complex challenge in life.
—
And yes, of course I used AI to assist in writing this post. But the ideas, the experience, the critical analysis, and the conclusions are all mine. The AI was simply the tool that helped me articulate my thoughts more efficiently.