Aligned for Good.
This article is part of Finding the Words, a newsletter that delivers practical insights on the day’s issues.
When I first wrote about generative AI for this column, ChatGPT had just surpassed 1 million users. That was late 2022. Today, that number is 900 million users. And that's just one platform.
Whether we like it or not, AI is everywhere, showing up in nearly every part of our daily lives. Just ask The New York Times' A.J. Jacobs, who attempted to live without artificial intelligence for 48 hours to see how profoundly ingrained it has become in modern life in a very short span of time. (Profoundly ingrained it is.)
While I haven't gone as deep as A.J. did, I have been paying close attention to AI — the headlines and the much harder questions beneath them. The urge to understand AI's impact on the social sector more deeply came last fall, when early findings from our 2026 Insights on Purpose™ research report began to surface.
According to that research, purpose-driven organizations know AI matters, but most feel behind in leveraging it. More than half of nonprofits and foundations say they are "behind the curve" on AI, putting them on the wrong side of a digital divide from the 88% of organizations that, according to McKinsey's 2025 State of AI Report, now use AI formally in at least one business function.
At Mission Partners, our values — people come first, integrity, excellence, courageous leadership, and continuous learning and growth — guide our work every day. So, as it became increasingly clear that AI would reshape our field, we leaned into learning through those values.
We began exploring what ethical and responsible AI integration looks like for a firm like ours, and for the mission-driven organizations we serve. At the outset, our research raised real concerns — about bias, misinformation, data security, and what gets lost when we let technology do too much of our thinking for us.
My concerns about the potential harms of AI haven't changed. But something else has become clear: the question is no longer whether to use AI. It's how — and more importantly, on whose terms.
As I keep learning, I find that most organizational leaders haven't yet done the hard thinking to set those terms, even as AI is being used across their organizations — often informally, inconsistently, and without clear guidance on when it's appropriate, who's accountable, or what happens when something goes wrong.
That troubling disconnect is what led us to develop Aligned for Good™ — our new dedicated practice area for ethical and responsible AI use in strategic communications, which we launched publicly last week.
At the heart of the framework is a distinction we believe every organization needs to internalize: ethical use and responsible use of AI are not the same thing, and both matter.
Ethical use asks: Is this the right thing to do? It's grounded in values — asking whether AI should be used at all, for what purpose, and for whose benefit.
Responsible use asks: Am I using this carefully and correctly? It's practical — protecting data, disclosing AI involvement, managing risk, and ensuring that what gets produced reflects the quality and integrity your organization stands for.
Without such a framework, it becomes increasingly hard to answer: When should we use AI? For what purpose? And for whose benefit?
So, let me ask you: How is your team using AI — and more importantly, on whose terms?
If you're not clear on the answer, it's time to do some human-powered thinking. And it's time to articulate an AI governance strategy that can help you navigate the uncertainty, too.
I've found that the clarity of our strategy makes the uncertainty easier to navigate, even as the tools themselves evolve. We have a clear decision-making framework for how we will and will not use AI in our business, and it allows us to hold each other and the company accountable to that strategy, together.
Because here's what I've come to know for sure:
The work we do at Mission Partners has always been about human impact, and our approach to AI is no different. AI will not replace our thinking, and it need not replace yours. But when used and governed well — with minds open and values intact — we believe it can open new doors to innovative methods of learning, working, collaborating, and advancing mission-driven work in ethical, responsible, and more meaningful ways.
AI is everywhere, and as A.J. Jacobs found, it's harder than you think to escape. So, before AI governs how you work, govern it with a smart and sound policy.
If your organization is exploring these questions and you need help navigating the answers, reach out. You can also learn more about our Aligned for Good framework at alignedforgood.ai.
This post is part of the Finding The Words column, a series published every Wednesday that delivers a dose of communication insights direct to your inbox. If you like what you read, we hope you’ll subscribe to ensure you receive this each week.
