Engaging Artificial Intelligence with Discipline and Care
Mar 4, 2026

Dear Lynx community,
When I was a child, my grandfather volunteered to manage the irrigation network in a small town in northern New Mexico for many years. The work was rarely static, and new challenges would surface with little warning. Each day required him to assess current conditions and lead with steadiness and clarity. What I learned from watching him serve in this role was this: Strong leadership requires you to uphold standards and principles while focusing on what is within your control.
The pace of technological change, and specifically artificial intelligence (AI), can feel similarly unrelenting. While we cannot control the speed at which AI evolves or the pace of its adoption, we can establish guardrails, develop and uphold reasonable standards, and engage with it cautiously and with discipline.
I recognize the uncertainty many feel about AI and the discomfort that comes with not knowing how our world and lived experience may change. The technology is evolving rapidly, and there is no question it is provoking deep thought and passionate discussion about its ethical, environmental, developmental, social, and political implications. As with any disruptive innovation, our understanding of its benefits and risks is still forming as adoption accelerates.
We have received many questions about whether we should use AI at all, given both its unknown and—in some cases—known risks. In my view, it is critical to recognize that disengagement is not a neutral act. This is a moment when we should choose to engage with these tools to determine the most responsible and effective ways to use them—during a period when implications are being debated and best practice is still being shaped. If we are to prepare graduates for the demands of a modern workforce, we must equip them to navigate these technologies with care and competence. If we believe AI has the potential to shape our lives so profoundly, then we should seek to ensure its use aligns with our expectations for teaching and learning, scholarship, and fulfillment of our mission through our daily work.
Universities like ours must lean in by inquiring, exploring, and debating issues of societal importance and impact, including AI. Our scholarly community is exactly where these actions must happen for the good of society.
I believe we can do this by approaching AI with eyes wide open and disciplined judgment. As a mechanical engineer, I have studied and worked on systems responsible for cooling the computer hardware that powers AI, and I understand the environmental impact associated with each query. A May 2025 MIT Technology Review article addressed these issues. For example:
- A text-generating prompt can require about as much energy as running a microwave for eight to 10 seconds.
- A video-generating prompt can require at least 100 times the energy of a text prompt.
- An active daily AI user (15 text prompts, some image generation, and short video generation) is responsible for emitting at least 500 grams of carbon dioxide, depending on the energy source powering the data center.
As someone committed to understanding the risks and consequences of my actions, in this and other regards, I engage AI only when it provides substantive value. I do not use it to generate trivial photos or suggest recipes based on what I have in my pantry. But I do use it to analyze policy implications or summarize research findings to inform my decision-making.
In many ways, this practice reflects other choices we make throughout our day: walking a little farther to put a soda can in a recycling bin rather than the trash, or debating whether to purchase a product we need from a company whose public positions we may not share. We navigate competing priorities every day based on our informed understanding of risks and consequences. The same restraint must guide how we integrate AI tools across our institution.
While employees retain control over whether and how they integrate AI into their classrooms or workflows, that autonomy must be exercised thoughtfully. As noted in the systemwide message, decisions about course design, instructional methods, research practices, and learner expectations remain with faculty. The same principle applies to staff in academic and administrative units. It is my hope that no matter the decision you reach, your efforts are guided by what is in the best interest of our learners.
CU Denver’s engagement with ChatGPT EDU builds on a structured, multi-year exploration of the technology, which included convening three working groups in early 2025 to evaluate use cases and develop policy priorities. I encourage you to review their full set of recommendations.
We are continuing to develop ways to support responsible engagement with AI among employees and learners as we seek to use these tools in our studies and work. For example, the provost recently appointed three AI fellows, Farnoush Banaei-Kashani, Soumia Bardhan, and Cameron Blevins, to implement these recommendations in consultation with faculty shared governance. The fellows will also be responsible for developing a faculty AI community of practice. This work will include creating forums for sharing teaching and research experiences, developing resource centers and syllabus guidance, establishing disclosure and approval processes for AI tools, and providing clear student guidelines.
We are also hiring an AI coordinator to assist with strategy, policy, and training for learners and employees. This role will ensure campuswide AI integration aligns with institutional priorities and supports learner success.
Higher education has always operated at the frontier of possibility. Generative AI tools can inform our inquiry and decision-making, but they are not the decision-maker. If you choose to engage with these tools, I ask each of you to approach AI use thoughtfully and to make informed decisions, exercise good judgment, and consider its impact.
For additional information, please refer to the University of Colorado OpenAI Initiative FAQ as well as the CU Denver | Anschutz OpenAI Initiative FAQ.
