Imagine this: A friend asks you a question, and instead of giving them a thoughtful response, you copy and paste an answer from a chatbot. It’s the digital equivalent of shrugging and saying, ‘Figure it out yourself.’ So is that just a time-saver, or a subtle way of saying, ‘Your question isn’t worth my effort’?
Back in the 2010s, a website called Let Me Google That For You (https://letmegooglethat.com/) became a viral sensation for its single, snarky purpose: calling out people who ask easily searchable questions. The site lets you create a custom link that, when clicked, plays an animation of someone typing the question into Google. The message: ‘You could’ve done this yourself.’ It’s funny, sure, but it’s also a slap in the face. In a personal or professional setting, sending that link says, ‘I don’t value your question or our interaction.’
Fast-forward to 2025, and the same folks behind that site have launched Let Me ChatGPT That For You (https://letmegpt.com/). It does exactly what you’d expect: it generates an AI-powered response to a question. But its existence raises a new, pressing question: Is it rude to answer someone with AI-generated text, especially in a professional context?
While it might save time, copying and pasting AI output feels impersonal and dismissive. Developer Alex Martsinovich summed it up in his blog post, It’s Rude to Show AI Output to People (https://distantprovince.by/posts/its-rude-to-show-ai-output-to-people/): ‘Be polite, and don’t send humans AI text.’ His rule for AI etiquette is simple: share AI output only if you have adopted it as your own, or if the recipient has explicitly consented to receiving it. When someone asks you a question, they’re seeking your perspective, not a machine’s. The internet is meant to connect humans, not replace them, and responding with AI output, especially without disclosing it, ignores that purpose.
There’s another problem: AI can be wrong. Models like ChatGPT (https://www.wired.com/tag/chatgpt/) and Claude (https://www.wired.com/tag/anthropic/) are improving, but they still make mistakes, sometimes hilariously so. Sharing AI output without verifying its accuracy can spread misinformation. Worse, if you don’t disclose that it’s AI-generated, you’re implicitly vouching for its correctness. That’s not just unhelpful; it’s irresponsible.
Don’t get me wrong: AI is an incredible tool, but it’s not a replacement for human effort. As a journalist, I use AI to find primary sources, not as a final answer. I dig into articles and studies, and I reach out to experts to verify the information. That’s due diligence. Whether you’re a journalist, a teacher, or an engineer, your profession likely demands the same level of care. AI can be a starting point, but it’s up to you to add value: your expertise, your perspective, your humanity.
So the next time someone asks you a question, think twice before hitting ‘copy’ on that AI response. Is it worth risking the relationship for the sake of convenience? In a world increasingly shaped by AI, how do we balance efficiency with genuine human connection? Let’s discuss in the comments; I’d love to hear your take.