
How to find every unauthorized AI tool in your district, assess the risk, and build a governance system that prevents shadow AI from coming back.
Shadow AI is any artificial intelligence tool being used in your district without official approval, vetting, or oversight. It is the AI equivalent of shadow IT — but with higher stakes, because the tools often process student data.
Shadow AI in schools takes many forms, from consumer chatbots used for lesson planning and grading to student-facing tutoring and writing tools adopted classroom by classroom.
The problem is not that educators are using AI. The problem is that they are using AI tools that have not been vetted for student data privacy, that may be training on the data entered, and that the district has no visibility into. If one of these tools has a data breach, the district is responsible — even if no one in administration knew the tool was being used.
Shadow AI creates four categories of risk for school districts:
Every AI tool that processes student data without a data processing agreement is a potential FERPA violation. Tools used by students under 13 without proper consent may violate COPPA. Many states now have additional student data privacy laws with their own requirements. Shadow AI tools almost never have the agreements in place.
Consumer AI tools are not designed for student data. They may store data indefinitely, use it for model training, share it with third parties, or lack the security controls required for educational records. When staff enter student names, grades, behavioral data, or IEP information into these tools, that data leaves the district’s control entirely.
If an unauthorized AI tool experiences a data breach that exposes student information, the district bears the liability — not the teacher who used the tool. Districts cannot credibly claim they had no knowledge of AI use when the tools are freely accessible and widely adopted by staff.
When AI tools are adopted informally, some students get the benefit of AI-enhanced instruction while others do not. Without governance, AI adoption becomes dependent on individual teacher initiative rather than a coherent district strategy, creating inconsistent experiences across schools and classrooms.
A shadow AI audit is not a one-time project. It is the first step in building ongoing AI governance. The framework below is designed for districts of any size and can be completed in 4–8 weeks.
1. Find every AI tool in use across the district
2. Assess each tool's risk level and data practices
3. Approve, restrict, or block each tool
4. Prevent new shadow AI from accumulating
The goal of discovery is simple: find every AI tool being used by staff and students across the district. Most districts are surprised by what they find. Use multiple discovery methods to get a complete picture:
Review firewall, proxy, and DNS logs for traffic to known AI domains such as chat.openai.com, claude.ai, gemini.google.com, and perplexity.ai.
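As a starting point, a short script can tally DNS-log queries against a list of known AI domains. This is a sketch: the domain list is illustrative rather than exhaustive, and the assumed log format (one CSV row per query with a `domain` column) will need adjusting to match your firewall or proxy export.

```python
import csv
from collections import Counter

# Illustrative list of common AI service domains -- extend with your own findings.
AI_DOMAINS = {
    "chat.openai.com", "chatgpt.com", "claude.ai", "gemini.google.com",
    "perplexity.ai", "character.ai", "poe.com",
}

def count_ai_hits(log_path: str) -> Counter:
    """Count queries to known AI domains in a CSV DNS log.

    Assumes one row per query with the queried hostname in a 'domain'
    column (adjust DictReader usage to match your export format).
    """
    hits = Counter()
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):
            domain = row["domain"].lower().rstrip(".")
            # Match exact domains and any of their subdomains.
            if any(domain == d or domain.endswith("." + d) for d in AI_DOMAINS):
                hits[domain] += 1
    return hits
```

Running this monthly against exported logs gives a rough count of which AI services are actually being reached from district networks, and by how much.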
If you use MDM or endpoint management, review installed applications and browser extensions on managed devices for AI tools.
Many AI tools are accessed via browser extensions that bypass web filters.
Send an anonymous survey asking staff about AI tool use, and frame it as supportive, not punitive.
For secondary students, a brief anonymous survey can reveal which AI tools they use for schoolwork and how they use them.
The discovery phase must be framed as a governance initiative, not a disciplinary investigation. If staff fear punishment, they will not disclose the tools they are using. The messaging should be: “We know AI is being used and we want to make it safe and supported. Help us understand what tools are valuable so we can approve them properly.”
Once you have a list of AI tools in use, classify each one by risk level. The classification should be based on what data the tool processes and what it does with that data.
| Risk Level | Data Access | Examples | Action Required |
|---|---|---|---|
| Critical | Processes student PII, education records, IEP data, or behavioral data | AI tools used for IEP writing, grading, progress monitoring, behavioral tracking | Immediate review. Block until DPA is in place and FERPA compliance verified |
| High | Student-facing tools where students create accounts or enter personal information | AI tutoring tools, writing assistants, coding assistants used by students | Block or restrict student access until COPPA/FERPA compliance verified and DPA executed |
| Medium | Staff-only tools where no student data is entered | AI used for lesson planning with no student names, professional development tools | Review data practices, ensure no student data is being entered, add to approved list if compliant |
| Low | No student data, no school data, general productivity use | AI image generators for presentations (no student images), grammar tools for staff communication | Add to approved list with usage guidelines |
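The tiering logic in the table above can be expressed as a simple decision function, highest-risk condition first. This is a sketch: the profile fields are illustrative names, not a standard schema, and a real review still requires human judgment about each tool's actual data practices.

```python
from dataclasses import dataclass

@dataclass
class AIToolProfile:
    """What the audit tells us about a tool (field names are illustrative)."""
    processes_student_pii: bool       # names, education records, IEP/behavioral data
    student_facing: bool              # students create accounts or enter personal info
    staff_only_no_student_data: bool  # staff use only, no student data entered

def classify_risk(tool: AIToolProfile) -> str:
    """Apply the audit table's risk tiers, checking the highest tier first."""
    if tool.processes_student_pii:
        return "critical"  # block until DPA in place and FERPA compliance verified
    if tool.student_facing:
        return "high"      # restrict student access until COPPA/FERPA verified
    if tool.staff_only_no_student_data:
        return "medium"    # review data practices, then add to approved list
    return "low"           # general productivity use; approve with guidelines
```

Encoding the tiers this way keeps classification consistent across reviewers; the hard part remains getting accurate answers to the three questions for each tool.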
For each tool identified in the audit, make one of three decisions:
The tool meets your data privacy requirements, a DPA is in place (or not needed), and the educational value justifies the data practices. Add it to your approved tools list and communicate approval to staff. Define how it should be used and any restrictions.
The tool has value but needs guardrails. Common restrictions include: staff-only use (students cannot access), use only without entering student data, or use only for specific purposes. Document the restrictions clearly and communicate them.
The tool does not meet your data privacy requirements and the vendor is unwilling or unable to provide a DPA or modify data practices. Block access at the network level if possible. Communicate the decision to staff with the reason and offer approved alternatives.
For every tool you block, offer an approved alternative that meets the same need. If you block ChatGPT without providing a governed AI option, staff will find workarounds. Governance without enablement creates more shadow AI, not less.
The audit is not the end. Without ongoing governance, shadow AI will accumulate again within months. Build these three systems:
Create a simple, fast process for educators to request new AI tools. The number one reason shadow AI grows is that the approval process is too slow, too complex, or nonexistent. Set a target of 5 business days for an initial decision on a new tool request. Publish the process and make it easy to find.
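Publishing a deadline only works if it is computed consistently. A minimal sketch of the 5-business-day SLA calculation (weekends only; a real version would also skip district holidays):

```python
from datetime import date, timedelta

def decision_due(request_date: date, sla_days: int = 5) -> date:
    """Return the decision deadline: sla_days business days after the request.

    Skips Saturdays and Sundays only -- add district holiday dates as needed.
    """
    d = request_date
    remaining = sla_days
    while remaining > 0:
        d += timedelta(days=1)
        if d.weekday() < 5:  # Monday=0 .. Friday=4
            remaining -= 1
    return d
```

For example, a request submitted on a Monday gets an initial-decision deadline of the following Monday.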
Set up ongoing technical monitoring using the same methods from Phase 1 (network logs, device management). Review AI domain traffic monthly. Flag new AI tools that appear and route them through the approval process. A governance platform like Beni can automate this monitoring.
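The monthly review can be partly automated by diffing the domains observed in traffic against the approved and blocked lists. A sketch, assuming `observed`, `approved`, and `blocked` are sets of domains produced by your log scan and tool registry:

```python
def triage_observed_domains(observed: set, approved: set, blocked: set) -> dict:
    """Sort AI domains seen in this month's traffic into review buckets.

    Anything observed that is neither approved nor blocked is new shadow AI
    and should be routed through the tool request/approval process.
    """
    return {
        "needs_review": sorted(observed - approved - blocked),
        "policy_violation": sorted(observed & blocked),  # blocked but still seen
        "sanctioned": sorted(observed & approved),
    }
```

The `policy_violation` bucket is worth watching separately: blocked tools that still appear in traffic usually indicate a filtering gap (such as a browser extension or personal device) rather than deliberate defiance.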
Build a culture where educators understand that the approval process protects students, not blocks innovation. Share the approved tools list widely. Celebrate when teachers request tools through the proper channel. Train staff on why data privacy matters and what the consequences of unauthorized use can be.
Beni’s governance platform automates the entire shadow AI lifecycle. It provides real-time visibility into what AI tools are being used across the district, automated enforcement of your approval decisions at the classroom level, and a streamlined tool request workflow that makes governance fast enough that educators don’t need workarounds. Apply to become a Founding Partner or schedule a demo.
Beni gives districts real-time visibility into AI tool usage and automated governance that prevents shadow AI from accumulating.
Apply to Learn More