Author: Kristina Podnar, a digital policy expert, speaker and author of The Power of Digital Policy.
When OpenAI released ChatGPT in November 2022, businesses across industries rushed to respond. Some blocked it outright. Others quietly encouraged employees to experiment with it. A few attempted both at once, issuing strict policies while employees continued testing the technology anyway.
Within weeks, companies were facing questions they hadn’t yet answered:
- Can employees use generative AI for work?
- What data can be entered into these systems?
- Who is responsible if the outputs are wrong?
- And what happens when employees start building workflows around tools the organization hasn’t officially approved?
None of these questions are really about artificial intelligence. They’re about governance inside the organization. And they illustrate something important: digital policy was never about eliminating risk. It’s about helping organizations navigate it. Every meaningful digital technology introduces both risk and opportunity. Trying to remove one without affecting the other simply isn’t realistic. For organizations deploying AI, data platforms, and automated systems, the real challenge is learning how to govern both at the same time.
Risk and opportunity arrive together
Most conversations about digital policy start with the harms: misinformation, algorithmic bias, privacy violations, and the societal impact of AI. Those concerns are real and deserve serious attention. But focusing only on risk obscures something equally important: the same technologies that introduce risk for organizations and the people they serve also create entirely new opportunities.
Artificial intelligence is a good example. Amazon once experimented with an AI recruiting tool that ended up disadvantaging female candidates because it learned patterns from historically male-dominated hiring data. The company ultimately scrapped the system. It was a governance failure. But the same class of technology is also accelerating drug discovery, improving medical diagnostics, and helping researchers analyze complex datasets at a scale that was previously impossible.
The point isn’t that the risks don’t matter. It’s that risk and opportunity arrive together, and digital policy exists to manage that tension—not pretend it can eliminate it.
Where governance tends to break down
Most enterprise digital governance efforts don’t fail because the principles are wrong. They fail because principles never make it into operations. Many organizations publish thoughtful frameworks built around values like fairness, transparency, accountability, and safety. Those ideas matter. They establish direction and signal intent. But principles don’t govern systems. Processes do.
When something goes wrong with a digital system inside an organization, the questions that matter aren’t philosophical. They’re operational:
- Who can retrain the model?
- Who approves new datasets?
- Who reviews outputs when something doesn’t look right?
- Who carries responsibility when the system fails?
If those answers aren’t clear, the framework isn’t actually governing anything. It’s just vocabulary.
Policy can’t keep up unless governance becomes operational
Digital systems evolve quickly. Most policy frameworks don’t. Traditional regulatory models were built for industries like pharmaceuticals, aviation, or energy—sectors where development cycles span years. Digital systems operate on cycles measured in weeks. Machine learning models get retrained. Data pipelines evolve as new sources are integrated. Cloud infrastructure changes constantly.
During the rise of social media platforms, companies like Facebook and Twitter (now X) scaled globally long before governance structures caught up with the societal impact of their systems. The problem wasn’t the absence of policy. It was policy that lagged behind the technology it was supposed to govern. To work in environments that move this quickly, digital policy has to focus less on static rules and more on building governance capacity inside the organization.
That usually means a few practical things:
- Operational accountability — every system should have a clearly identified owner responsible for its lifecycle.
- Defined decision rights — policies should clarify who can introduce data, retrain models, or override automated outputs.
- Built-in monitoring — systems should include mechanisms for detecting unexpected outcomes and escalating issues quickly.
- Intervention authority — someone must have the ability to pause or shut down a system when risks exceed acceptable thresholds.
- Continuous policy review — governance frameworks should evolve alongside the technologies they oversee.
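To make these elements concrete, here is one way the first four might be encoded as "governance as code" — a minimal, hypothetical sketch in Python, not a prescribed implementation. The class and field names (`GovernedSystem`, `decision_rights`, `risk_threshold`) are invented for illustration; real organizations would map these to their own registries and tooling.

```python
from dataclasses import dataclass, field

@dataclass
class GovernedSystem:
    """Hypothetical operational governance record for one digital system."""
    name: str
    owner: str                      # operational accountability: a named owner
    decision_rights: dict = field(default_factory=dict)  # action -> allowed roles
    risk_threshold: float = 0.1     # acceptable incident/error rate
    paused: bool = False
    escalations: list = field(default_factory=list)

    def may(self, role: str, action: str) -> bool:
        """Defined decision rights: only explicitly listed roles may act."""
        return role in self.decision_rights.get(action, [])

    def report_metric(self, incident_rate: float) -> None:
        """Built-in monitoring: escalate and pause when risk exceeds threshold."""
        if incident_rate > self.risk_threshold:
            self.escalations.append(incident_rate)
            self.pause()

    def pause(self) -> None:
        """Intervention authority: halt the system when risks are unacceptable."""
        self.paused = True

# Usage: a model with an owner, explicit decision rights, and a kill switch.
model = GovernedSystem(
    name="resume-screener",
    owner="ml-platform-team",
    decision_rights={
        "retrain": ["ml-engineer"],
        "add_dataset": ["data-steward"],
        "override_output": ["reviewer", "owner"],
    },
    risk_threshold=0.05,
)

assert model.may("ml-engineer", "retrain")
assert not model.may("intern", "retrain")

model.report_metric(incident_rate=0.12)  # exceeds threshold, triggers auto-pause
assert model.paused
```

The point of the sketch is not the code itself but the shape of the questions it forces: every system gets a named owner, every sensitive action gets an explicit list of who may take it, and the pause path exists before it is needed.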
None of this eliminates risk. But it does something far more useful: it creates the operational guardrails that allow organizations to protect people while still leaving room for experimentation, adaptation, and progress. Because in the digital world, uncertainty isn’t a temporary condition. It’s the operating environment. And the real job of digital policy is helping organizations navigate that environment wisely.