
Shadow AI Audit Guide for K-12 Districts

How to find every unauthorized AI tool in your district, assess the risk, and build a governance system that prevents shadow AI from coming back.

What Is Shadow AI?

Shadow AI is any artificial intelligence tool being used in your district without official approval, vetting, or oversight. It is the AI equivalent of shadow IT — but with higher stakes, because the tools often process student data.

Shadow AI in schools takes many forms: teachers using ChatGPT for lesson planning without district knowledge, students using AI writing tools that have never been reviewed for data privacy compliance, and administrators running communication or scheduling through AI tools with no data processing agreement in place.

The real risk

The problem is not that educators are using AI. The problem is that they are using AI tools that have not been vetted for student data privacy, that may be training on the data entered, and that the district has no visibility into. If one of these tools has a data breach, the district is responsible — even if no one in administration knew the tool was being used.

Why Shadow AI Matters for Schools

  • 5–10x: more AI tools in use than formally approved
  • 82%: of educators have used AI tools
  • 0: data processing agreements in place for most shadow AI tools

Shadow AI creates four categories of risk for school districts:

1. Compliance risk

Every AI tool that processes student data without a data processing agreement is a potential FERPA violation. Tools used by students under 13 without proper consent may violate COPPA. Many states now have additional student data privacy laws with their own requirements. Shadow AI tools almost never have the agreements in place.

2. Data security risk

Consumer AI tools are not designed for student data. They may store data indefinitely, use it for model training, share it with third parties, or lack the security controls required for educational records. When staff enter student names, grades, behavioral data, or IEP information into these tools, that data leaves the district’s control entirely.

3. Liability risk

If an unauthorized AI tool experiences a data breach that exposes student information, the district bears the liability — not the teacher who used the tool. Districts cannot credibly claim they had no knowledge of AI use when the tools are freely accessible and widely adopted by staff.

4. Equity risk

When AI tools are adopted informally, some students get the benefit of AI-enhanced instruction while others do not. Without governance, AI adoption becomes dependent on individual teacher initiative rather than a coherent district strategy, creating inconsistent experiences across schools and classrooms.

The 4-Phase Audit Framework

A shadow AI audit is not a one-time project. It is the first step in building ongoing AI governance. The framework below is designed for districts of any size and can be completed in 4–8 weeks.

  1. Discover: find every AI tool in use across the district.
  2. Classify: assess each tool's risk level and data practices.
  3. Decide: approve, restrict, or block each tool.
  4. Govern: prevent new shadow AI from accumulating.

Phase 1: Discovery

The goal of discovery is simple: find every AI tool being used by staff and students across the district. Most districts are surprised by what they find. Use multiple discovery methods to get a complete picture:

Technical Discovery

Network & DNS Logs

Review firewall, proxy, and DNS logs for traffic to known AI domains:

  • openai.com, chat.openai.com
  • gemini.google.com, bard.google.com
  • claude.ai, anthropic.com
  • copilot.microsoft.com
  • perplexity.ai, you.com
  • quillbot.com, grammarly.com/ai
  • Common AI image generators
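As a rough sketch of what automated log review can look like, the snippet below tallies hits against the domain watchlist above. It assumes your firewall or DNS filter can export logs as CSV with a `domain` column; the export format, and the watchlist itself, will vary by district and should be adapted to your environment.

```python
import csv
from collections import Counter

# Watchlist of known AI domains (from the list above); extend as new tools appear.
AI_DOMAINS = {
    "openai.com", "chat.openai.com",
    "gemini.google.com", "bard.google.com",
    "claude.ai", "anthropic.com",
    "copilot.microsoft.com",
    "perplexity.ai", "you.com",
    "quillbot.com",
}

def matches_watchlist(domain: str) -> bool:
    """True if the queried domain is a watched domain or one of its subdomains."""
    domain = domain.lower().rstrip(".")
    return any(domain == d or domain.endswith("." + d) for d in AI_DOMAINS)

def tally_ai_traffic(log_path: str) -> Counter:
    """Count DNS/proxy queries per AI domain in a CSV log with a 'domain' column."""
    hits = Counter()
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):
            if matches_watchlist(row["domain"]):
                hits[row["domain"].lower().rstrip(".")] += 1
    return hits
```

Run the tally over a month of logs and sort by count: the top entries are the tools staff and students are actually reaching for, which is exactly the list Phase 2 needs.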

Device Management Data

If you use MDM or endpoint management, review:

  • Browser extension installations
  • Application installations
  • Browser history patterns (aggregate)
  • Chrome/Edge extension audit

Many AI tools are accessed via browser extensions that bypass web filters.

Human Discovery

Staff Survey

Send an anonymous survey asking staff about AI tool use. Frame it as supportive, not punitive:

  • Which AI tools have you used for work?
  • What tasks do you use them for?
  • Do you enter student information?
  • What tools would you like approved?

Student Survey

For secondary students, a brief anonymous survey can reveal:

  • Which AI tools students use for schoolwork
  • Whether they created accounts with school emails
  • Which tools they perceive as “allowed”
  • AI tools recommended by teachers
Approach matters

The discovery phase must be framed as a governance initiative, not a disciplinary investigation. If staff fear punishment, they will not disclose the tools they are using. The messaging should be: “We know AI is being used and we want to make it safe and supported. Help us understand what tools are valuable so we can approve them properly.”

Phase 2: Classification

Once you have a list of AI tools in use, classify each one by risk level. The classification should be based on what data the tool processes and what it does with that data.

Critical
  Data access: Processes student PII, education records, IEP data, or behavioral data
  Examples: AI tools used for IEP writing, grading, progress monitoring, behavioral tracking
  Action required: Immediate review. Block until a DPA is in place and FERPA compliance is verified.

High
  Data access: Student-facing tools where students create accounts or enter personal information
  Examples: AI tutoring tools, writing assistants, coding assistants used by students
  Action required: Block or restrict student access until COPPA/FERPA compliance is verified and a DPA is executed.

Medium
  Data access: Staff-only tools where no student data is entered
  Examples: AI used for lesson planning with no student names, professional development tools
  Action required: Review data practices, ensure no student data is being entered, and add to the approved list if compliant.

Low
  Data access: No student data, no school data, general productivity use
  Examples: AI image generators for presentations (no student images), grammar tools for staff communication
  Action required: Add to the approved list with usage guidelines.

Classification questions for each tool

  1. Does this tool process any student personally identifiable information (PII)?
  2. Do students interact with this tool directly?
  3. Does the tool store the data entered into it?
  4. Does the vendor use input data for model training?
  5. Is there a signed data processing agreement with the vendor?
  6. Does the tool comply with COPPA (if used by students under 13)?
  7. Does the tool meet your state’s student data privacy requirements?
  8. Can the district request deletion of all data if the tool is discontinued?
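The eight questions above map fairly mechanically onto the four risk tiers. As an illustrative sketch (the field names and thresholds are ours, not a standard; adjust the rules to your state's requirements), a classification helper might look like this:

```python
from dataclasses import dataclass

@dataclass
class ToolReview:
    """Boolean answers to the eight classification questions for one tool."""
    processes_student_pii: bool      # Q1
    student_facing: bool             # Q2
    stores_input_data: bool          # Q3
    trains_on_input: bool            # Q4
    has_signed_dpa: bool             # Q5
    coppa_compliant: bool            # Q6 (relevant when students under 13 use it)
    meets_state_requirements: bool   # Q7
    deletion_on_request: bool        # Q8

def risk_level(r: ToolReview) -> str:
    """Map review answers onto the Critical/High/Medium/Low tiers."""
    if r.processes_student_pii and not r.has_signed_dpa:
        return "Critical"   # student PII with no DPA: block pending review
    if r.student_facing and not (r.coppa_compliant and r.meets_state_requirements):
        return "High"       # students use it directly, compliance unverified
    if r.stores_input_data or r.trains_on_input:
        return "Medium"     # staff-only, but entered data leaves district control
    return "Low"
```

Encoding the rubric this way keeps classification consistent across reviewers and makes the spreadsheet of audited tools easy to regenerate when a vendor's data practices change.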

Phase 3: Decision

For each tool identified in the audit, make one of three decisions:

Approve

The tool meets your data privacy requirements, a DPA is in place (or not needed), and the educational value justifies the data practices. Add it to your approved tools list and communicate approval to staff. Define how it should be used and any restrictions.

Restrict

The tool has value but needs guardrails. Common restrictions include: staff-only use (students cannot access), use only without entering student data, or use only for specific purposes. Document the restrictions clearly and communicate them.

Block

The tool does not meet your data privacy requirements and the vendor is unwilling or unable to provide a DPA or modify data practices. Block access at the network level if possible. Communicate the decision to staff with the reason and offer approved alternatives.

Critical principle

For every tool you block, offer an approved alternative that meets the same need. If you block ChatGPT without providing a governed AI option, staff will find workarounds. Governance without enablement creates more shadow AI, not less.

Phase 4: Ongoing Governance

The audit is not the end. Without ongoing governance, shadow AI will accumulate again within months. Build these three systems:

1. Tool request and approval process

Create a simple, fast process for educators to request new AI tools. The number one reason shadow AI grows is that the approval process is too slow, too complex, or nonexistent. Set a target of 5 business days for an initial decision on a new tool request. Publish the process and make it easy to find.

2. Continuous monitoring

Set up ongoing technical monitoring using the same methods from Phase 1 (network logs, device management). Review AI domain traffic monthly. Flag new AI tools that appear and route them through the approval process. A governance platform like Beni can automate this monitoring.
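The monthly review reduces to a set difference: domains observed in this cycle's logs minus domains that already have a governance decision. A minimal sketch, assuming you keep the decided-tools inventory as a JSON list of domains (a hypothetical file format, not a standard):

```python
import json

def load_inventory(path: str) -> set[str]:
    """Load the inventory of domains that already have an approve/restrict/block decision."""
    try:
        with open(path) as f:
            return set(json.load(f))
    except FileNotFoundError:
        return set()  # first run: nothing decided yet

def flag_new_tools(observed: set[str], inventory_path: str) -> set[str]:
    """Return domains seen this cycle that have no governance decision yet."""
    return observed - load_inventory(inventory_path)
```

Anything this returns goes straight into the tool request workflow from system 1, so a new tool surfaces as an approval task within one review cycle instead of quietly accumulating for a year.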

3. Culture and communication

Build a culture where educators understand that the approval process protects students, not blocks innovation. Share the approved tools list widely. Celebrate when teachers request tools through the proper channel. Train staff on why data privacy matters and what the consequences of unauthorized use can be.

Shadow AI Audit Checklist

  • Pull firewall, proxy, and DNS logs for traffic to known AI domains
  • Review MDM data for AI applications and browser extensions
  • Survey staff (and secondary students) anonymously about AI tool use
  • Classify every discovered tool as Critical, High, Medium, or Low risk
  • Decide for each tool: approve, restrict, or block
  • Offer an approved alternative for every blocked tool
  • Publish a tool request process with a 5-business-day decision target
  • Review AI domain traffic monthly and route new tools through approval

How Beni eliminates shadow AI

Beni’s governance platform automates the entire shadow AI lifecycle. It provides real-time visibility into what AI tools are being used across the district, automated enforcement of your approval decisions at the classroom level, and a streamlined tool request workflow that makes governance fast enough that educators don’t need workarounds. Apply to become a Founding Partner or schedule a demo.


Frequently Asked Questions

What is shadow AI in schools?

Shadow AI refers to AI tools being used in a school district without official approval, vetting, or oversight. This includes teachers using ChatGPT for lesson planning without district knowledge, students using AI writing tools that haven't been reviewed for data privacy compliance, and administrators using AI tools for communication or scheduling without a data processing agreement in place.

How common is shadow AI in school districts?

Extremely common. Research consistently shows that the majority of educators have used AI tools, but most districts have approved very few through a formal process. In most districts, the number of AI tools actually in use is 5–10x higher than the number that have been formally vetted.

What are the risks of shadow AI?

Shadow AI tools may process student data without proper data processing agreements, violating FERPA. They may collect data from students under 13 without consent, violating COPPA. They may use student data for model training or advertising without disclosure. And if an unauthorized tool experiences a data breach, the district is still responsible.

How do you conduct a shadow AI audit?

A shadow AI audit involves four phases: Discovery (use network logs, surveys, and device management data to identify all AI tools in use), Classification (categorize each tool by risk level based on what data it accesses), Decision (approve, restrict, or block each tool), and Governance (implement ongoing monitoring to prevent new shadow AI). The full process typically takes 4–8 weeks.

How can districts prevent shadow AI from returning?

Prevention requires three things: a fast tool approval process so educators don't bypass governance out of frustration, ongoing monitoring through network-level visibility or a governance platform, and a culture shift where educators understand that the approval process exists to protect students, not to block innovation.

See What AI Tools Are Really Being Used

Beni gives districts real-time visibility into AI tool usage and automated governance that prevents shadow AI from accumulating.

Apply to Learn More