Preventing Poor AI Investments that May Endanger Student Data
Ken Shelton, an Educational Strategist and Instructional Designer, shares valuable insights into what school districts need to consider before deciding on an AI tool. One of the key points Shelton emphasizes is the importance of having basic digital literacy to understand how AI works.
Shelton suggests that districts should use an AI platform to solve a defined problem, not hunt for problems to solve with the platform. He also advises being proactive in piloting, testing, and refining AI tools so that unwelcome outcomes are caught early.
On the subject of data privacy, Shelton cautions that there should be no expectation of privacy when connecting to the internet. He recommends collecting only the data an AI tool needs to function and avoiding any student data gathering beyond that. Using anonymized or aggregated data whenever possible can further reduce privacy risks.
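As an illustration of what data minimization could look like in practice, the sketch below strips direct identifiers from a student record and replaces the student ID with a keyed hash before anything is shared with an AI tool. The field names, the `ALLOWED_FIELDS` list, and the `DISTRICT_SECRET` key are assumptions made for this example, not any particular vendor's requirements.

```python
import hashlib
import hmac

# Secret key held by the district, never shared with the vendor (assumed setup).
DISTRICT_SECRET = b"replace-with-a-key-from-a-secure-vault"

# Fields a hypothetical AI tutoring tool actually needs to function.
ALLOWED_FIELDS = {"grade_level", "course", "question_text"}


def pseudonymize_id(student_id: str) -> str:
    """Replace a student ID with a keyed hash so records can be linked
    across sessions without exposing the real identifier."""
    return hmac.new(DISTRICT_SECRET, student_id.encode(), hashlib.sha256).hexdigest()


def minimize_record(record: dict) -> dict:
    """Keep only the fields the tool needs and pseudonymize the student ID."""
    minimized = {k: v for k, v in record.items() if k in ALLOWED_FIELDS}
    minimized["student_ref"] = pseudonymize_id(record["student_id"])
    return minimized


# Example: name, email, and birthdate never leave the district's systems.
raw = {
    "student_id": "123456",
    "name": "Jane Doe",
    "email": "jane@example.org",
    "birthdate": "2010-04-02",
    "grade_level": 8,
    "course": "Algebra I",
    "question_text": "How do I factor x^2 - 9?",
}
print(minimize_record(raw))
```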
Establishing clear transparency and consent policies is also crucial: districts should explain to students, parents, and staff what data is collected, why, how it is used, and who can access it. Policies should also cover data ownership, retention, and secure deletion to maintain accountability and trust.
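To make the retention and deletion point concrete, here is a minimal sketch of how a district might encode a retention schedule and check whether a record is due for secure deletion. The data categories and retention periods are hypothetical, chosen only to illustrate the idea.

```python
from datetime import date, timedelta
from typing import Optional

# Hypothetical retention policy, expressed in days per data category.
RETENTION_DAYS = {
    "chat_transcripts": 180,
    "usage_analytics": 365,
}


def is_expired(category: str, created_on: date, today: Optional[date] = None) -> bool:
    """Return True when a record has outlived its retention period."""
    today = today or date.today()
    return today - created_on > timedelta(days=RETENTION_DAYS[category])


# Example: a chat transcript from early in the year is now due for secure deletion.
print(is_expired("chat_transcripts", date(2024, 1, 15), today=date(2024, 9, 1)))  # True
```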
Shelton questions the difference between responsible use and digital citizenship in schools. He compares the acquisition of platforms to a hammer looking for a nail, suggesting that companies' marketing may over-promise. He warns against the pitfalls of implementing AI tools without a defined process.
To mitigate risks, Shelton advises using purpose-built platforms with built-in privacy protections and compliance with state and federal standards. Districts should select AI vendors that meet rigorous security standards and have been vetted or pre-approved for educational use.
Maintaining human oversight and continuous evaluation is another crucial aspect. AI outputs should be monitored by educators to prevent misuse or biased outcomes. Privacy practices and AI impact should be regularly assessed and refined based on evidence and stakeholder feedback.
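One way to picture that oversight in code: the sketch below logs every AI response for later audit and holds flagged responses for an educator instead of showing them to the student. The flag patterns, the `student_ref` field, and the in-memory audit log are illustrative assumptions, not a recommendation of any specific moderation approach.

```python
import re
from datetime import datetime

# Illustrative keyword patterns a district might flag; real oversight would
# combine automated checks with routine educator review of sampled outputs.
FLAG_PATTERNS = [
    re.compile(r"\b(social security|home address|phone number)\b", re.IGNORECASE),
]

audit_log = []  # in practice this would be a secured data store, not an in-memory list


def review_output(student_ref: str, ai_response: str) -> str:
    """Log every AI response for later audit and hold flagged ones for an educator."""
    flagged = any(p.search(ai_response) for p in FLAG_PATTERNS)
    audit_log.append({
        "time": datetime.now().isoformat(),
        "student_ref": student_ref,
        "flagged": flagged,
        "response": ai_response,
    })
    return "Held for educator review." if flagged else ai_response


# Example: a response asking for personal data is intercepted before delivery.
print(review_output("a1b2c3", "Please share your home address so I can help."))
```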
In conclusion, districts should focus on data minimization, robust encryption and access controls, clear consent and transparency, educator training and collaboration, using secure, compliant AI platforms, and ongoing human oversight to safeguard student data and avoid loss or misuse during AI tool implementation in schools.
A recent example of the importance of these practices is the case of LAUSD, which invested $6 million in the development of an AI chatbot named "Ed" by AllHere Education. However, AllHere Education subsequently shut down, forcing LAUSD to abandon the chatbot. The question now is what will happen to the data entered into the chatbot by students and teachers. Continuous monitoring of AI tools is crucial to ensure they deliver on their promise without unintended harm.
- An Educational Strategist and Instructional Designer, Ken Shelton, emphasizes the necessity of digital literacy to comprehend AI workings.
- Shelton recommends utilizing AI platforms to address problems rather than seeking problems to address with the platform.
- He advocates piloting, testing, and refining AI tools to keep pace with potential adverse consequences, ensuring data collection is limited to the bare minimum required for AI functionality.
- Shelton questions the distinction between responsible use and digital citizenship in schools, and warns against implementing AI tools without a defined process.
- To mitigate risks, Shelton suggests using secure, compliant AI platforms that meet stringent security standards, have been vetted or pre-approved for educational use, and maintain continuous human oversight.
- Continuous monitoring of AI tools is essential to minimize unintended harm, as exemplified by LAUSD's AI chatbot "Ed," which was abandoned after its vendor shut down, raising questions about the fate of the data entered by students and teachers.