Just Because You Can...

This article is part of Finding the Words, a newsletter that delivers practical insights on the day’s issues.

I've written a few times in this column about the growth zone — that productive, sometimes uncomfortable stretch of learning between what we already know and what we haven't yet figured out. It's the zone where curiosity runs high, where we build confidence by doing things we haven't fully mastered, and where some of the most meaningful growth of our careers can happen.
 
The growth zone is where I love to be, and it's why I've been a consistent advocate for leaning into learning at every stage of life — for building a growth mindset even in the face of ambiguity, and for helping the leaders and organizations I work with find their own growth edge.
 
In past columns, I've treated the growth zone mainly as a tool for career advancement: a way for emerging leaders and young professionals to keep developing even while navigating ambiguity.
 
While all of that advice still holds, a different perspective has emerged that's worth elevating here, too.
 
Over the last few weeks, I've been leading conversations with clients and colleagues about how to integrate AI into their daily operations. There is, of course, no shortage of ways we can integrate AI, but that doesn't mean we should.
 
For many, AI pushes into the other edge of the growth framework: the danger zone. Maybe it causes tension with your stated values. Maybe it pushes too far beyond your working knowledge of the tools. Maybe it's a threat that simply shouldn't be entertained on any level — at least until the guardrails have improved. Each concern is worthy of consideration. The concerns also serve as a reminder that when it comes to AI, what works for some simply won't work for others.
 
Just because you can doesn't mean you should.
 
This is exactly why it helps to have a framework for how far you'll go in adopting AI, or any new technology: it acts like a safety boom, guiding us back to shore when we drift into dangerous waters.
 
Earlier this week, I led a team of Board members and senior leaders through an exercise to establish their own AI governance framework. As part of the session, I asked the group to consider a range of scenarios for integrating AI into their daily work and operations. In each scenario, I asked them to raise a red flag if the use case fell outside their organization's stated values, a yellow flag if they were unsure, or a green flag if it aligned with those values.
 
It was illuminating and energizing to see this dynamic group of people work through my prompts together and decide for themselves how they would integrate new technology—and how they wouldn't—all through the lens of their values.
 
Just because they can use AI in every use case doesn't mean they should.
 
That simple framework helped ground them and bring them back to a shared growth zone, which will ultimately set the foundation for a shared learning agenda for the organization, too. 
 
There is no shortage of answers for what these tools can do. What's more important is to align on what organizations should do with the new tools at their fingertips.

  • Just because we can replace a significant portion of our work with an AI tool doesn't mean we should.

  • Just because a peer organization can use AI in a particular way doesn't mean we should too.

When teams work together to clarify this critical distinction between what they can do, what they should do, and ultimately what they will do in service of their values, they land on something far more powerful than the bones of their AI policy. They land on a durable framework that is aligned with their values and rightsized for their organization.
 
As I start and end each of these sessions, I remind teams that the question isn't if or when you'll integrate AI; it's on whose terms. Knowing how to use the tools and using them wisely, in line with your values, is far more important at this stage of growth than taking the tools as far as they can go.
 
Bottom line: Leaning into learning will always matter. But it's also OK to name just how far you'll go at each stage of the learning journey. Staying values-driven and being willing to ask, "What are we willing to protect, and how far are we willing to go?" is critical to ensure your team doesn't end up in the danger zone without a life raft to pull you back to safety.
 
Need a partner to help your team align around a shared values-aligned AI vision? I am now offering my AI alignment workshop at a discounted rate for nonprofits if booked before the end of June. Drop me a line if you're interested in learning more.



 