It doesn’t matter if you use AI at work today. Ask yourself:
In the next 5 years, will you use AI more or less at work?
More, right?
Which means everyone needs to know how to use AI responsibly, especially in any task or role that already relies on generative AI.
At the end, we’ll share one recommendation for how to use AI more responsibly at work. (Spoiler alert: be transparent about when and how you use AI.)
Let’s dive in.
Generative AI Is Highly Useful
This blog isn’t about doom and gloom. Nor is it about making fantastical claims about the future. It’s just solid, level-headed advice.
Generative AI can summarize your meetings. It can organize your email inbox and draft replies. It can write that response/post/thing you need.
AI can generate images, video, and voice. It can create presentations and slides. It can handle basic business intelligence.
AI can code, build, create, automate, and more.
If you’re not using it now, you’ll be a few steps behind when you inevitably do use it in the future.
Shortcomings of Generative AI
Remember: nearly every LLM interface warns you with something like, “This model can make mistakes. Double-check it.”
Very roughly, generative AI often tests in a high percentile relative to humans on any given task or knowledge set.
However, ranking in a high percentile isn’t as impressive as it sounds. Models are benchmarked against the general population, not against expert humans. If AI can do Task X better than 98% of humans, the other 2% are probably somewhere around expert level.
Anyone with expert-level knowledge can instantly spot generative AI’s shortcomings… in their own field.
But we have blind spots everywhere else. We see highly competent output on a broad range of subjects and think, “Well, that’s probably right.”
Even though an expert could point out where it falls short.
Generative AI is almost never an expert. Not yet. Right now, it’s just highly competent.
When Generative AI Becomes a Problem
So, we’ve established that generative AI is useful. You should use it.
We’ve also noted that it’s fallible. You shouldn’t trust it to be 100% accurate.
AI becomes a problem only when users don’t use it responsibly. Using it responsibly means following some basic guidance:
- Understand its weaknesses
- Don’t trust it blindly
- Check its work
That’s good guidance. But “check its work” is harder than it sounds. AI output always “looks good” on a quick scan; it appears credible. Only a deeper, more considered review will surface the issues a quick scan misses.
Generative AI becomes a problem when the output is trusted too much… or when it enters a collaborative workflow.
Yep. AI-generated material can also confuse someone on your team.
An AI-generated project plan may include unnecessary or irrelevant steps. AI-generated code may include vulnerabilities or inconsistencies. And so on.
What this means is that you’re introducing non-expert-level material into a workflow. You may know that, but someone else might not. Plus, that material may lack all the context that lives in your brain.
This can cause problems. Colleagues may follow an overcomplicated process, wasting time. They may have to troubleshoot broken code. They may ask you questions about the details of what you sent them, and you might be unable to answer… or your answer might even contradict what the AI-generated material said.
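To make the code risk concrete, here’s a minimal sketch in Python. It’s hypothetical (the get_user functions and the users table are invented for illustration), but it shows the pattern: an AI draft that works in a quick test yet hides a classic SQL injection flaw, next to the version an expert reviewer would ship.

```python
import sqlite3

# The kind of lookup an AI assistant might plausibly draft. It runs
# fine in a demo, but building SQL with an f-string means a malicious
# username like "x' OR '1'='1" can rewrite the whole query.
def get_user_unsafe(conn: sqlite3.Connection, username: str):
    cur = conn.execute(f"SELECT id, email FROM users WHERE name = '{username}'")
    return cur.fetchone()

# What an expert reviewer would ship instead: a parameterized query,
# so user input is always treated as data, never as SQL.
def get_user_safe(conn: sqlite3.Connection, username: str):
    cur = conn.execute("SELECT id, email FROM users WHERE name = ?", (username,))
    return cur.fetchone()
```

Both versions return the same result on friendly input, which is exactly why a quick scan “looks good” and a real review matters.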
How to Use Generative AI Responsibly at Work
So, what can you do? How can you use AI if you can’t trust it completely? How can you save time on tasks, or build something, when the output may have significant issues?
How can you use it responsibly so that you don’t introduce vulnerabilities, unclear expectations, or complications into your work?
Easy.
First, remember that AI isn’t perfect. It can do good, but it can’t do amazing. At least, not yet.
Second, give AI some context. It can’t read your mind. It will guess at what you want, but it might guess wrong. The more context you give it, the more accurate your output will be. For example, instead of “Summarize this report,” try “Summarize this report in five bullets for our board, focusing on the budget variance and the two delayed projects.”
Third, don’t ship anything generated with AI without reviewing it first. Just review it and change what needs to be changed.
Fourth, be extra careful when other people rely on what you generate. Anything inaccurate or confusing could derail their workflow and end up back on your plate.
These are four best practices for working with AI at your job. But there’s one more that we’d like to suggest. It may be controversial, but…
Transparency and Using AI Responsibly at Work
There’s one thing everyone can do to improve their outcomes when using AI. If you do this one thing, it will even allow you to fudge a bit on the previous four steps.
Be transparent about your use of AI.
That’s it. That’s the trick. Just let people know when AI generated the thing you gave them.
Here are three ways you can do it:
Add an AI-Generated tag. However you see fit (maybe with a text note in an email or at the top of the document?), simply add, “This was generated by AI.”
Use this tag when AI generates something and you change nothing—or very little of it.
If you’re handing off something that you didn’t heavily review and edit yourself, just add this disclaimer. Whoever’s reading it may pay closer attention to make sure it all makes sense. They’ll also know to disregard anything that is obviously incorrect.
Add an AI-Assisted tag. Add this disclaimer however you like: “This is AI-assisted.”
Use this tag when AI generates at least a portion of something, and you’ve fully reviewed and edited the output.
Again, this lets people know they may still want to double-check key details. But it also tells them you’ve reviewed the output yourself and expect it to be accurate.
Is Adding an “AI Transparency Tag” Necessary?
Yes? No? You be the judge.
You don’t need to do this for everything AI touches. Low-importance things don’t need disclaimers. If you’d be just as happy trusting a non-expert with the output, skip the tag.
Examples: If you pass off AI meeting notes to someone, they don’t need an explicit disclaimer. If you generate a social media image, you can skip the tag.
However, if you’re working on something that requires expert-level competence or knowledge, add that tag. Same if you’re working with multiple people.
So if you hand over a project plan? Or a detailed report someone will use in their role? Or code?
That’s when the AI tags come in handy.
Sometimes, using AI responsibly comes down to transparency. Think of it as good communication. By adding a tag, you’re telling someone that there may be elements, steps, or conclusions that come from a fallible tool with limited context.
People can do their best work only when they have good information and strong communication.
These tags just allow people to do their best work.
What Else Does Your Company Need to Use AI Well?
We believe credit unions can succeed by using AI efficiently, effectively, and responsibly. Some of what we’ve done:
- Delivered keynote speeches to credit unions about AI
- Published an addendum to our credit union book on AI
- Developed AI readiness assessments and AI policies
- Led vibe-coding sessions with credit union leaders
- Offered individual, executive-level AI coaching sessions
If you’d like to learn more about vibe coding or AI coaching, send us a message at info@cu-2.com.
Or just get an early look at new fintechs that are using AI by joining our Fintech Call Program. In 30-minute quarterly calls, we’ll introduce you to new tech solutions. No pressure.
Get started here: