Bridging the Gap: How Computer Science Students Are Learning to Navigate AI Ethics and Policy
In an era when artificial intelligence increasingly permeates our daily lives and professional environments, a new educational initiative is preparing the next generation of computer scientists to tackle the complex ethical and policy challenges of AI development. Researchers from the University of Washington and Virginia Tech have developed and tested a curriculum module designed to equip students with the skills to translate abstract ethical principles into practical AI implementations.
The Missing Piece in Computer Science Education
For most of us, AI has become an invisible companion in our daily routines—suggesting what to watch next, helping draft emails, or even making decisions about loan applications. Yet behind these seemingly helpful tools lies a complex web of ethical considerations that many developers are ill-prepared to navigate.
"The prevailing post-secondary computing curriculum is currently ill-equipped to prepare future AI practitioners to confront increasing demands to implement abstract ethical principles and normative policy preferences into the design and development of AI systems," notes James Weichert from the University of Washington, lead author of the study published in June 2025.
This educational gap is particularly concerning as AI technologies become more deeply embedded in critical systems. Young Americans are at the forefront of adopting these tools, even as the U.S. lags behind other countries in overall AI implementation. Meanwhile, the economic impact of AI is projected to grow dramatically in coming years, potentially reshaping entire industries and job markets.
As governments worldwide begin developing regulatory frameworks for AI, developers will increasingly need to translate abstract principles into code—whether to create "safe, secure, and trustworthy" AI as advocated by some administrations, or to "sustain and enhance America's global AI dominance" as pushed by others.
From Theory to Practice: The AI Policy Module
To address this critical gap, researchers developed the "AI Policy Module," a flexible educational framework that introduces computer science students to the ethical challenges and policy considerations surrounding AI development.
The module was initially piloted in a graduate machine learning course in 2024, then refined and expanded for a second iteration tested in a graduate computer science ethics course. This "AI Policy Module 2.0" spans three lectures and includes a hands-on assignment designed to connect theoretical concepts with practical implementation.
"We wanted to create something that would help students not just recognize ethical challenges, but actually develop the skills to address them at a technical level," explains Daniel Dunlap from Virginia Tech, one of the study's co-authors.
The module's learning outcomes are threefold: students should be able to articulate specific ethical impacts of AI systems and how they relate to ethical principles; understand the landscape of AI policy across major global actors; and recognize that ethical and policy considerations are integral to technical implementation—not merely an afterthought.
Inside the Classroom: How the Module Works
The module begins with a philosophical foundation, examining Langdon Winner's seminal article "Do Artifacts Have Politics?" which argues that technologies embed "arrangements of power and authority" in society. This framing helps students understand how AI systems can reflect and amplify existing power structures and biases.
Subsequent lectures dive into specific ethical challenges, particularly algorithmic bias and fairness. Students examine case studies including racial and gender disparities in facial recognition software, AI in hiring decisions, and the controversial COMPAS recidivism algorithm used in criminal justice.
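To see what "fairness" means at a technical level, consider demographic parity, one of the metrics commonly discussed alongside cases like COMPAS. The following is a minimal sketch with invented data; the module itself does not prescribe this code or this metric.

```python
# A minimal sketch of one fairness check often discussed with such case
# studies: demographic parity compares positive-prediction rates across
# groups. The data below is invented; it is not the COMPAS dataset.

def positive_rate(predictions: list[int]) -> float:
    """Fraction of instances receiving the positive label (e.g., 'high risk')."""
    return sum(predictions) / len(predictions)

# Hypothetical model outputs (1 = flagged high risk), split by demographic group.
group_a_preds = [1, 0, 1, 1, 0, 1, 0, 1]
group_b_preds = [0, 0, 1, 0, 0, 1, 0, 0]

parity_gap = positive_rate(group_a_preds) - positive_rate(group_b_preds)
print(f"Demographic parity gap: {parity_gap:.2f}")  # prints 0.38 for this data
```

A gap near zero indicates parity on this one metric; part of what such case studies teach is that different fairness metrics can conflict, so the choice of metric is itself a normative decision.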
The final lecture focuses on AI policy, presenting it as the bridge between ethical principles and responsible practices. Students learn about influences on AI policy across both private and public sectors, comparing approaches from the United States, European Union, and China.
"We emphasize that 'policy' goes beyond national politics," notes Mohammed Farghally, another researcher on the project. "It encompasses all avenues through which normative preferences about AI development and use are articulated and enforced."
Putting Theory into Practice: The AI Regulation Assignment
Perhaps the most innovative aspect of the module is its hands-on "AI Regulation Assignment," which asks students to either "jailbreak" an aligned AI model or align an unaligned one.
In the fall 2024 pilot, all 18 student groups chose the jailbreaking option—attempting to circumvent a model's built-in ethical guardrails. Most targeted OpenAI's ChatGPT, trying to make it use explicit language or provide instructions for illegal activities.
One student group successfully prompted ChatGPT to role-play as a fictional serial killer being interrogated, coaxing the model to list advantages and disadvantages of various body disposal methods—revealing a significant safety vulnerability in the system.
"The assignment requires students to formulate abstract goals regarding ethical alignment as discrete, explicit technical interventions," explains Hoda Eldardiry, the fourth researcher on the team. "Through this process, they discover the limits of existing models and alignment approaches."
After completing their jailbreaking attempts, students were asked to propose policies that model developers could adopt to close the vulnerabilities they had exposed. Some suggested concrete technical interventions: adding a "refusal layer" to filter out problematic responses, injecting ethical context into user prompts, training models to recognize user intent, or employing adversarial training to strengthen defenses against alignment attacks. (A sketch of the first of these ideas, the refusal layer, follows below.)
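The article names these mechanisms but does not show implementations. As a rough illustration of the refusal-layer idea, here is a minimal sketch, assuming a hypothetical `generate` function standing in for a model call and a naive keyword check standing in for a trained safety classifier:

```python
# A minimal sketch of a "refusal layer": a post-generation filter screening
# model output before it reaches the user. Both `generate` and the keyword
# list are hypothetical stand-ins, not any real model's API or policy.

BLOCKED_TOPICS = ("dispose of a body", "build a weapon")  # illustrative only

def generate(prompt: str) -> str:
    """Placeholder for a real text-generation call (e.g., an LLM API)."""
    return f"Model response to: {prompt}"

def refusal_layer(prompt: str) -> str:
    draft = generate(prompt)
    # In practice this check would be a trained safety classifier scoring
    # the draft; a substring match only marks where the control point sits.
    if any(topic in draft.lower() for topic in BLOCKED_TOPICS):
        return "I can't help with that request."
    return draft

print(refusal_layer("Summarize today's AI policy news."))
```

The point such an exercise makes concrete is that the refusal decision is itself policy encoded in code: someone must specify what gets blocked and at which stage (prompt, draft, or final output) the filter runs.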
Measuring Impact: What Students Learned
To assess the module's effectiveness, researchers conducted pre- and post-module surveys measuring student attitudes toward AI ethics and policy.
While many fundamental attitudes remained unchanged, suggesting that brief classroom interventions cannot easily shift deeply held views, several significant shifts were observed. After completing the module, students reported (one way such pre/post shifts can be tested is sketched after this list):
- Increased concern about the ethical impact of current AI technology
- Stronger support for additional government and private sector regulation to protect users and society
- Greater confidence in their ability to discuss AI regulation with peers
- Stronger intentions to follow news about government regulation of technology and AI
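The article does not detail the statistical analysis behind these shifts. As an illustration only, assuming paired Likert-scale responses, pre/post shifts like these are often tested with a Wilcoxon signed-rank test; the numbers below are invented:

```python
# A minimal sketch of testing one pre/post attitude shift, assuming paired
# Likert-scale (1-5) responses. All numbers are invented for illustration;
# the study's actual data and analysis may differ.
from scipy.stats import wilcoxon

pre  = [3, 2, 4, 3, 2, 3, 4, 2, 3, 3]  # e.g., concern about AI's ethical impact
post = [4, 3, 4, 4, 3, 4, 4, 3, 4, 3]

# Wilcoxon signed-rank test: a standard nonparametric test for paired
# ordinal data. (With small samples and tied differences, SciPy falls
# back to a normal approximation and warns accordingly.)
stat, p_value = wilcoxon(pre, post)
print(f"statistic={stat}, p={p_value:.3f}")
```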
Student feedback was overwhelmingly positive, with participants describing the lectures as "engaging" and "informative" and appreciating the "real world applications of ethical concepts." Many noted that the AI Regulation Assignment tasked them with something they hadn't done before in their academic careers.
Flexibility by Design: Adapting to Different Educational Contexts
One of the module's key strengths is its adaptability. Rather than prescribing specific case studies or readings, it provides a framework for structuring discussions about the social implications of AI technologies.
"The module is not, first and foremost, a prescriptive list of case studies or readings," the researchers note. "Instead, we see it as a framing tool through which to structure discussions about how to conceptualize and steer the social implications of AI technologies."
This flexibility allows the module to be effective in various contexts, whether embedded in technical courses on AI and machine learning or standalone ethics courses. The content can be adjusted based on student background, course focus, and time constraints.
Challenges and Future Directions
Despite its successes, the researchers identified several areas for improvement. The AI Regulation Assignment, while engaging, revealed a significant imbalance—all student groups chose to jailbreak models rather than attempt alignment, suggesting that the latter task requires additional technical scaffolding.
"It is clear that the [alignment] option requires additional technical scaffolding," the researchers acknowledge. "While we consider this scaffolding possible, it may be the case that the technical difficulty here is only appropriate for a more technical course."
The researchers also noted challenges in getting students to connect ethical principles to specific technical practices. While some student groups proposed concrete technical mechanisms to prevent harmful AI behaviors, others remained at the level of abstract principles.
"We wanted students to go beyond saying 'this behavior shouldn't happen,' and instead think about the technical mechanisms that might prevent the undesired behavior in practice," the team explains. Future iterations of the assignment will refine instructions to better guide students toward this type of technical thinking.
Preparing for an AI-Driven Future
As AI continues to transform society, the need for technically skilled practitioners who can navigate ethical challenges becomes increasingly critical. The AI Policy Module represents an important step toward bridging the gap between abstract ethical principles and practical implementation.
"We believe that familiarity with the 'AI policy landscape' and the ability to translate ethical principles to practices will in the future constitute an important responsibility for even the most technically-focused AI engineers," the researchers conclude.
By equipping computer science students with these skills, educational initiatives like the AI Policy Module help ensure that the next generation of AI developers will be prepared to build systems that are not only technically sophisticated but also aligned with human values and societal welfare.
As one student reflected after completing the module: "I never realized how much power I might have as a developer to shape how AI systems interact with people. This isn't just about writing code—it's about deciding what kind of future we want to build."