Abstract
Learning engineering has the potential to impact society broadly. As we introduce novel AI-enabled technologies into learning environments, we must consider both the qualities of a technology that make it trustworthy (e.g., accuracy, reliability) and the qualities of the implementation context (e.g., permissions, involvement) that affect the trust of learners, educators, and administrators in this technology. In this chapter, we consider a broad cross-section of learning technologies and the social and ethical implications of adopting these technologies in higher education. Following a review of values in value-based approaches to psychometrics, we consider how specific formal and informal learning technologies manifest in various learning environments inside and outside of higher education, and the ethical affordances of these systems. We then expand on how these ethical affordances should inform assessments of technology trustworthiness, and on the need for applications of current trust frameworks to broaden their level of analysis beyond traditional evaluations of technology performance (e.g., accuracy, reliability) toward sociotechnical system-level considerations (e.g., social and organizational impacts).
| Original language | English (US) |
| --- | --- |
| Title of host publication | Putting AI in the Critical Loop |
| Subtitle of host publication | Assured Trust and Autonomy in Human-Machine Teams |
| Publisher | Elsevier |
| Pages | 127-165 |
| Number of pages | 39 |
| ISBN (Electronic) | 9780443159886 |
| ISBN (Print) | 9780443159879 |
| DOIs | |
| State | Published - Jan 1 2024 |
Keywords
- Education technology
- Learning engineering
- Sociotechnical systems
- Trust
ASJC Scopus subject areas
- General Computer Science