Saturday, May 24, 2025

Navigating the Gray: The Complex Journey Toward Ethical AI

The conversation around AI ethics is becoming more vital—and more complex—by the day. Engineers, who often see the world in clear-cut terms—right or wrong, good or bad—find it particularly challenging to navigate the gray areas of ethical decision-making in artificial intelligence. Unlike technical standards that are often straightforward, ethical considerations involve nuanced judgments that can feel vague and ambiguous.

This was a key takeaway from a recent session on the Future of Standards and Ethical AI at the AI World Government conference, held in Alexandria, Virginia, with experts participating both in person and virtually. The overarching theme? Ethics in AI touches every corner of government and industry alike, with many voices echoing similar concerns and insights.

Beth-Ann Schuelke-Leech, an engineering management professor from the University of Windsor, put it plainly: “Engineers often think of ethics as a fuzzy concept that no one has really nailed down.” She explained that many engineers struggle with what it truly means to be ethical, especially when they’re told to follow certain rules but aren’t given clear guidance on what that entails. With her background spanning engineering and social science, Schuelke-Leech offers a unique perspective—she’s involved in AI projects within a mechanical engineering context but understands the social implications deeply.

She described engineering projects as having clear goals, features, and constraints—like budget and timelines. Standards and regulations become part of those constraints. “If I know I have to comply with them, I do,” she said. “But being told something is ‘good’—that’s a different story. I might or might not follow that.”

Schuelke-Leech also chairs a committee within the IEEE focused on the social implications of technology standards. She highlighted an important point: many standards around interoperability and best practices are voluntary—they don’t have legal teeth but are followed because they make systems work better or align industry efforts. Yet, whether these standards help or hinder engineers depends on their goals and circumstances.

The messiness of AI ethics was candidly acknowledged by Sara Jordan, senior counsel at the Future of Privacy Forum. She emphasized that ethics is “messy and difficult,” heavily dependent on context, with a multitude of theories and frameworks that can overwhelm practitioners. “Practicing ethical AI will require consistent, rigorous thinking tailored to specific situations,” she said.

Both Schuelke-Leech and Jordan agree that ethics isn’t just an end goal; it’s a process, something to be actively practiced rather than a checkbox to tick. However, there’s a gap: engineers often feel excluded from ethical debates because these conversations involve unfamiliar terminology, such as “ontological,” which can be intimidating. As a result, their involvement in shaping ethical standards is limited. Schuelke-Leech pointed out that if managers simply tell engineers to “figure it out,” they will, but it’s crucial to support engineers with clear guidance and shared responsibility. “We need social scientists and engineers to work together and not give up on this,” she emphasized.

The conversation about embedding ethics into AI development is also gaining ground in military and government settings. At the US Naval War College in Newport, Rhode Island, future military leaders are increasingly being trained to understand ethical issues surrounding AI. Ross Coffey, a professor of National Security Affairs, noted that as students engage with these challenges, their ethical literacy grows—highlighting the importance of early education in responsible AI use.

Carole Smith from Carnegie Mellon University, who has been involved in integrating ethics into AI since 2015, stressed the importance of “demystifying” AI for users. She explained that people often overestimate what autonomous systems can do, like Tesla’s Autopilot, which is designed to assist but not replace human judgment. Clear communication about what these systems can and cannot do helps users trust them appropriately and prevents dangerous assumptions.

Another pressing issue is AI literacy among the upcoming workforce. Taka Ariga, the first chief data scientist at the US Government Accountability Office, pointed out that many data scientists are not trained in ethics. “Accountable AI is a noble goal,” he said, “but not everyone truly understands what that entails or feels responsible for it.” Building a workforce that’s both technically skilled and ethically aware is essential for responsible AI deployment.

International cooperation also looms large in these discussions. Panelists debated whether principles of ethical AI can be standardized across nations. While full alignment may be challenging, there’s consensus that some common ground—such as what AI should not be allowed to do—is necessary. Smith highlighted the European Union’s leadership in establishing enforceable ethical standards, setting an example for others to follow.

Ross Coffey underscored the importance of international interoperability, especially in military contexts. “We need to find common ground with allies on what AI can and cannot do,” he said. Unfortunately, these conversations are still in their early stages, and there’s a need for more open dialogue, possibly even through existing treaties or agreements.

As federal agencies develop their own AI principles and frameworks, the challenge remains: how to keep these diverse efforts consistent and meaningful. Smith is optimistic that, over the next year or two, the landscape will begin to coalesce into a more unified approach.

Ultimately, navigating AI ethics is a complex but essential journey—one that requires collaboration across disciplines, sectors, and nations. It’s about creating systems that are not only technologically advanced but also ethically responsible, transparent, and aligned with societal values. The conversations are ongoing, but the momentum is clear: ethics must be an integral part of AI’s future.
