This morning, I ran an experiment with Generative AI. The results were worrisome enough that I need to share what happened.
Note: The following applies specifically to data analysis, not to generative AI use in general.
From 6 AM to 8 AM, I tested AI capabilities across the models offered by Perplexity AI: Claude, GPT, Sonar, and Grok.
I asked each model for simple statistical analysis, instructing it to act as a statistical expert and to ask for clarification or additional inputs when needed. I provided clear guidelines on the inputs and data.
Initial Observations
All the models responded with remarkable confidence, yet all suffered from the same fundamental problems: they created fake numbers, suggested impossible calculations, and mixed up basic concepts. Even after corrections, they repeated the same errors and offered wrong actionable insights.
The pattern became clear quickly. AI never questioned its own work. It kept giving false suggestions and manufacturing random metrics. Most concerning was its complete failure to learn from mistakes or ask for clarity when needed. Despite clear examples and guidance, the AI plowed ahead with incorrect analyses.
In the spirit of experiment, I persisted. The numbers tell the story:
25+ attempts to guide it led nowhere
3 accidental wins came only from my frustration
0 times did any model ask for clarification
The implications of my experiment are serious. The AI-driven, supposedly "data-based" insights were disastrous, with false confidence masking bad suggestions.
The one bright spot? AI has a sense of humor and actually helped me find a title for this post.
Key Takeaways for DATA analysis:
Scrutinize AI output
Trust human expertise
Verify numbers diligently
Blind trust causes costly mistakes
Watch out for manufactured data & info
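The "verify numbers" takeaway can be made concrete with a quick sanity check: recompute any statistic the AI reports before trusting it. A minimal sketch below, where the raw data and the AI-reported value are both hypothetical:

```python
# Hypothetical raw data and a hypothetical figure an AI model claimed for it.
sales = [120, 95, 140, 130, 110]
ai_reported_mean = 122.0

# Recompute the statistic yourself from the raw data.
actual_mean = sum(sales) / len(sales)

# Flag any mismatch instead of passing the AI's figure downstream.
if abs(actual_mean - ai_reported_mean) > 1e-9:
    print(f"Mismatch: AI said {ai_reported_mean}, data says {actual_mean}")
```

A few seconds of recomputation like this is usually enough to catch a manufactured metric before it reaches a report.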
P.S. Do you use AI tools to give you suggestions?
Need help revealing hidden patterns in your data? Message me today!