AI Ethics, Literacy, Policy & Security Part 1: Essential Knowledge for Administrative Decisions
Audience | K–12 Administrators |
Presenter Name(s) | David Hatami, Ed.D. |
Presenter Bio(s) | Dr. David Hatami is the Founder and Managing Director of EduPolicy.ai, an organization dedicated to helping educational institutions navigate the responsible integration of AI. EduPolicy.ai specializes in developing clear, adaptable, and ethically grounded AI policies that balance innovation with academic integrity. With deep expertise in AI literacy, AI ethics, AI policy development, and institutional governance, EduPolicy.ai collaborates with administrators, faculty, and students across K–12 and higher education to create comprehensive frameworks that ensure consistency and transparency across all disciplines. Dr. Hatami was also published by Harvard Business Impact in April 2025. https://www.hbsp.harvard.edu/inspiring-minds/guidelines-effective-ai-policy
Drawing on extensive experience in higher education administration, teaching, and faculty and student engagement, Dr. Hatami brings a holistic approach to educational innovation. His career spans diverse educational settings, including traditional universities, community colleges, and proprietary institutions, where he has led large-scale development initiatives in faculty management, online pedagogy, and institutional policy. Beyond consulting, he is a sought-after keynote speaker and workshop leader, advocating for forward-thinking AI policy frameworks that align with both ethical imperatives and institutional needs. His passion for cross-sector collaboration is also evident in his forthcoming book, Rethinking Approaches to AI Policy & AI Ethics Creation in K-12 & Higher Education. |
Description | This is part one of a two-part series that aims to equip Massachusetts K–12 school and district leaders to evaluate, govern, procure, and communicate about AI in ways that protect students, support staff, and meet community and School Committee expectations.
Part one will build foundational leadership competence across AI literacy, ethics, policy, and security for immediate decision-making.
Part two will convert those foundations into durable systems for governance, privacy, procurement, instructional integrity, and continuous improvement.
Session 1: What to Tell Staff (and Families) Now
Staff are already using AI tools informally, parents are asking questions about AI in schools, and administrators need to respond immediately with credible, consistent messaging that demonstrates leadership competence while avoiding panic or unrealistic promises.
Administrators will be able to articulate a concise district stance that frames AI appropriately; explain how literacy, ethics, policy, and security translate into leader duties; and communicate age-appropriate, multilingual messages across elementary, middle, and high school communities in alignment with central-office partners (Teaching & Learning, Technology/Data, Legal, HR, Communications).
Session 2: Crisis Ready: When AI Goes Wrong in Your Building
AI incidents will occur – students sharing personal information with ChatGPT, AI generating inappropriate content, vendor data breaches – and administrators must respond within hours with appropriate communication, documentation, and corrective action while maintaining community trust.
Administrators will be able to recognize common AI failure modes, apply tiered responses with defined timelines, and communicate with families and staff transparently, including the use of a parent-notification threshold when student PII may have been exposed or exposure cannot be ruled out within a reasonable time frame.
Session 3: School Committee Approval: Getting AI Policy Passed
School Committees require formal policies for AI use, but members have varying technology comfort levels and different risk concerns, requiring administrators to present complex topics in ways that enable informed voting and community confidence.
Administrators will be able to present a clear, comprehensible policy to the School Committee, map responsibilities across building leadership and central office, publish an adoption timeline, and ensure public-records and retention requirements are integrated into communications and documentation.
Session 4: The Vendor Meeting: Questions That Matter
Vendors are aggressively marketing AI tools to schools with varying claims about capabilities, privacy protection, and educational value, requiring administrators to evaluate proposals within budget constraints while ensuring student safety and educational appropriateness.
Administrators will be able to assess vendor claims, require transparent privacy and deletion commitments, ensure accessibility and equity considerations are met, and authorize limited pilots with defined KPIs and exit ramps.
Session 5: How to Implement AI in K–12 Successfully: A Plan That Actually Works
Having policies and approved tools means nothing without sustainable implementation systems that work within existing staff capacity, budget constraints, and competing priorities while maintaining educational quality and student safety.
Administrators will be able to publish a realistic implementation plan, set light-touch monitoring routines, communicate disclosure expectations for AI-assisted work, and track progress using a concise dashboard without adding undue burden. |
Synchronous / Asynchronous | Synchronous |
Location | Live via Zoom; Sessions will also be recorded |
Dates & Times | Wednesday, October 15; Tuesday, October 21; and Wednesdays, October 29, November 5 & November 12; 9:30–11:30 a.m. |
PDPs | 10 PDPs |
Credit | n/a |
Cost | $200 ACCEPT members/$240 non-members for the entire series; $50 ACCEPT members/$60 non-members per individual session. Team rates available. |
Registration Deadline | October 8, 2025 |