
Corner to Corner by Diane Denish
Diane Denish is a lifelong advocate for children and a former lieutenant governor of New Mexico. Contact her at diane@dianedenish.com.
AI is the newest buzzword in the world of technology, and its use is growing every day. Many of us are curious about what it really is and, more importantly, what it does.

Artificial intelligence is no longer science fiction. It’s in our phones, our emails, our resumes—and now, increasingly, our classrooms and workplaces. While some celebrate AI as a tool of liberation, others fear it as a force of disruption. The truth is, it’s both—and we need to be ready for what that means.
In schools, AI tools like ChatGPT are rewriting the rules of teaching and learning. Students can now get instant help writing essays, solving math problems, or studying for tests. For educators, that’s both a challenge and an opportunity. Some worry these tools will enable cheating or make traditional assessments obsolete. Others see AI as a powerful assistant—helping differentiate instruction, offer feedback faster, and free up teachers to focus on what only humans can do: inspire, mentor, and connect.
But our education system wasn’t designed with AI in mind. Most schools lack clear guidelines on when and how students and teachers can use these tools. Teachers are being asked to navigate a technological shift without training, resources, or institutional support. The result is uneven implementation—and a risk that students in well-resourced schools will benefit, while others are left behind.
In the workplace, AI is reshaping how we hire, collaborate, and make decisions. It can write code, summarize meetings, screen resumes, and even suggest strategic plans. For workers, this raises urgent questions. Which tasks will be automated? What skills are still uniquely human? And how do we prepare for jobs that don’t exist yet?
Employers, too, are in new territory. They face a dual responsibility: harness AI’s potential to improve productivity and creativity, while ensuring its use is fair, transparent, and accountable. That means building guardrails—against bias in AI-generated decisions, against surveillance overreach, and against the quiet erosion of human judgment.
Across both schools and workplaces, the through-line is equity. Who has access to AI tools and who doesn’t? Who’s building these systems—and whose values are baked into them? The digital divide is no longer just about internet access; it’s about who gets to learn with and from the technologies shaping our future.
We can’t afford to treat AI as someone else’s problem or as tomorrow’s issue. The pace of change is fast, but the window for shaping it responsibly is still open. That means investing in digital literacy, updating policies, and having open conversations—between educators and students, managers and employees, technologists and communities.
AI can make us smarter, more efficient, and more connected. But only if we lead with purpose, not just convenience. The question isn’t whether AI belongs in our schools and workplaces. It’s how we ensure it serves us all—not just a few.
If you’ve made it to this point in my column, I want to disclose the following. With the exception of the introductory lines and this closing, the entire column was written by ChatGPT based on one short prompt. (GPT stands for Generative Pre-trained Transformer.)
Does it elicit your fear or your favor?