From Principles to Practice: The Messy Reality of Developing AI Policies

By Roshan Bharwaney & Roshan P. Melwani

Published June 5, 2025

Keywords: Responsible AI, Policy Development, Regulatory Alignment, AI Governance, Policy Iteration

When Microsoft unveiled its Responsible AI Standard (RAIS) in June 2022, the 30-page framework looked comprehensive: six core principles, with dozens of structured goals and requirements for accountability. The ink was barely dry before Microsoft had to iterate. By late 2024, the RAIS was expanded to cover generative-AI red-teaming and content-safety checks, detailed in its inaugural Responsible AI Transparency Report. In January 2025, Microsoft updated RAIS yet again to align with the EU AI Act and its compliance deadlines.

This rapid sequence of updates illustrates a broader truth: AI policy is shipping in perpetual beta. Technology, public expectations, and regulations are evolving faster than any one-off rulebook. Yet without published and effective guidelines, organisations sit exposed to challenges such as safety failures, legal liabilities, and reputational fallout.

This article explores why crafting AI policies is messier than it looks, and how organisations can keep them credible and coherent as AI capabilities and impacts magnify over time.

Why is Crafting AI Policy so Challenging?
Developing clear and responsible policies for AI use is crucial for corporations and public organisations alike. As AI can make biased or unpredictable decisions at scale, effective policies help ensure AI deployment aligns with an organisation’s mission and values. Ideally, policies guide decisions, provide operational clarity, and establish a principled foundation for AI adoption. However, crafting policies is complicated for several reasons.

Firstly, AI technologies evolve rapidly. New models, capabilities, and risks can emerge with little or no advance notice. Policies can quickly become outdated, leaving policymakers and regulators chasing a moving target.

Secondly, definitions of ethical and responsible AI use vary across cultures, industries, and stakeholders. Translating values such as fairness, transparency, and accountability into actionable policy involves complex trade-offs and ambiguity. Often these trade-offs aren’t apparent until teams are deep in complex technical work or navigating unprecedented scenarios where existing guidelines fall short.

Thirdly, challenges around explainability, accountability, and fairness further complicate policy development. AI models, particularly advanced large language models and generative AI, often remain partially opaque. In other words, people can’t always explain why a model produced a given output, which makes it unclear who is responsible when issues arise: the developer, the vendor, the user, or the deploying organisation.

Finally, for organisations operating internationally, AI policies must account for diverse legal environments and cultural norms, making a one-size-fits-all approach nearly impossible.

Steps for Crafting Effective AI Policies

1. Ensure AI policy development is cross-functional
AI policy development shouldn’t be left to technology or legal teams alone. Cross-functional collaboration helps ensure policies are ethical, practical, and aligned with real-world needs. Organisations can rely on ethicists to bring moral reasoning perspectives, legal experts for compliance and risk, engineers and product teams for technical feasibility, leaders to tie policy into organisational strategy, and end-user advocates or community representatives to ensure real-world fairness.

Organisations can develop their own policies with in-house experts familiar with AI regulations, data protection laws, and governance. Where internal expertise is thin, or where the organisation operates in high-risk or highly regulated sectors (e.g., healthcare, finance, or law enforcement), it should consider engaging outside consultants or specialists.

2. Determine the high-level values the organisation aims to uphold
Fairness, transparency, accountability, privacy, and safety are the values most commonly prioritised to guide an organisation’s development and use of AI. Yet because these values can pull in different directions, organisations should surface tensions and trade-offs early. For example, articulate where rapid innovation competes with privacy expectations. Balancing these values requires thoughtful consideration of stakeholder interests, regulations, and business objectives. The outcomes of these discussions form the foundation of coherent policy, like a moral compass for all AI-related decisions.

3. Convert values and risks into operational rules
After identifying the values and drafting policies, assess risks such as bias, security, explainability, and compliance. Then translate the values, policies, and risks into clear governance objectives that the organisation can adopt and implement. For example, “fairness” becomes “no disparate impact across protected demographic groups.” Develop rules and technical standards aligned with each value and risk, including data-handling guidelines, model-development protocols (if relevant to your organisation), evaluation metrics (e.g., fairness audits, robustness tests), and oversight protocols. Agree on actionable guardrails while tracking known exceptions, as individuals and teams trying to “do the right thing” may get caught between competing logics and evolving laws and risks.
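To make this concrete, here is a minimal sketch of how a rule like “no disparate impact across protected demographic groups” could become an automated check, using the widely cited four-fifths rule as the test. The field names, sample data, and 0.8 threshold are illustrative assumptions, not requirements drawn from any particular standard.

```python
# Minimal sketch: operationalising "no disparate impact across protected
# demographic groups" as an automated fairness check. The group/outcome
# field names, sample data, and 0.8 threshold are illustrative assumptions.
from collections import defaultdict

def disparate_impact_ratio(records, group_key="group", outcome_key="approved"):
    """Return (min selection rate / max selection rate, selection rates per group)."""
    totals = defaultdict(int)
    positives = defaultdict(int)
    for row in records:
        totals[row[group_key]] += 1
        positives[row[group_key]] += int(bool(row[outcome_key]))
    rates = {g: positives[g] / totals[g] for g in totals}
    max_rate = max(rates.values())
    ratio = min(rates.values()) / max_rate if max_rate else 0.0
    return ratio, rates

# Hypothetical model decisions, used only to show the check in action.
decisions = [
    {"group": "A", "approved": True},
    {"group": "A", "approved": True},
    {"group": "A", "approved": False},
    {"group": "B", "approved": True},
    {"group": "B", "approved": False},
    {"group": "B", "approved": False},
]

ratio, rates = disparate_impact_ratio(decisions)
print(f"Selection rates by group: {rates}")
print(f"Disparate impact ratio: {ratio:.2f} (review if below 0.80)")
```

A check like this covers only one slice of “fairness”; the broader point is that each operational rule should have a measurable test attached to it.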

4. Commit to continuous improvement
Responsible AI policy isn’t static: it must evolve with new models, applications, and challenges.
A key way to embed adaptability is through regular ‘retrospectives’ or ‘post-mortems’. These structured sessions let organisations reflect on how their actions, rules, and values held up in real-world scenarios, and they are especially valuable during periods of ambiguity, pressure, or high stakes. Retrospectives should:

  • Surface ethical trade-offs and decision-making dynamics when values conflicted.
  • Identify policy gaps or unclear guidance that hindered confident or consistent action.
  • Prompt updates to policies and operational rules based on real-world experience.
  • Transparently document decision reasoning for shared learning and future alignment.
  • Facilitate collective learning, especially in fast-moving or decentralised teams.

By embracing continuous improvement through a habit of retrospection, organisations can navigate complexity without paralysis.

Treating Policy as a Product
Developing AI policies is inherently messy, but the messiness shouldn’t hold us back. While the rapid pace of technology means we’re often playing catch-up, this also offers organisations an opportunity to lead proactively, guided by purpose and responsibility.

Effective AI governance demands ongoing revision and adaptation. Organisations must close the loop between principle and practice by ensuring values remain traceable through concrete, iterative steps—cataloguing open issues, conducting regular retrospectives, and encouraging transparent dialogue. Treating policy like a product, with continuous improvement cycles, ensures your organisation stays aligned with technological advances and public expectations.
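As a closing illustration, here is a hedged sketch of what a “policy as product” register might look like in code: each operational rule carries an owner, a version, open issues, and a review date, so retrospectives have something concrete to inspect and update. The schema and the example entry are assumptions made for illustration, not a prescribed format.

```python
# Minimal sketch of a "policy as product" register: each entry ties a value
# to an operational rule, an owner, a version, and a review cadence.
# Field names and the example entry are illustrative assumptions.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class PolicyRule:
    rule_id: str
    value: str        # high-level value the rule operationalises
    rule: str         # actionable guardrail
    owner: str        # accountable role or team
    version: str
    next_review: date
    open_issues: list = field(default_factory=list)  # known exceptions or gaps

register = [
    PolicyRule(
        rule_id="FAIR-001",
        value="Fairness",
        rule="No disparate impact across protected demographic groups",
        owner="Responsible AI working group",
        version="1.2",
        next_review=date(2025, 9, 1),
        open_issues=["Proxy variables in legacy training data"],
    ),
]

# Surface anything due for review or carrying open issues, e.g. ahead of
# a scheduled retrospective.
for entry in register:
    if entry.next_review <= date.today() or entry.open_issues:
        print(entry.rule_id, entry.version, entry.open_issues)
```

Keeping rules in a versioned, queryable form like this is one way to make the “continuous improvement cycles” above routine rather than ad hoc.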

The responsibility is in every organisation’s hands to ensure AI benefits everyone. If we rise to the challenge of creating effective AI policies and iterating them when needed, we can shape a future where AI enriches society, enhances fairness, and promotes equity.
