Saw a Slashdot headline "Intel Calls Its AI That Detects Student Emotions a Teaching Tool, Others Call it 'Morally Reprehensible'"

Um, …Yes.

People think the questions of AI ethics arise from dealing with the consequences of what Turing and all those cats invented: Computers. And they are right, to a point. But it is more about dealing with the consequences of what the Sumerians invented: Bureaucracy.

Teachers infer the emotional state of students all the time, so what is the problem if a computer program does it? If it is okay for a human to do it, then it is okay for the computer to do it, too—as long as the computer does it competently and fairly. Right?

Wrong.

When a teacher looks a student in the eye and infers his/her emotional state, that is an invasion of the student’s privacy. And, as an isolated act, it would be wrong. But it is not an isolated act.

  • The student understands s/he is in a school, and has often chosen to be there.
  • The teacher is looking into the student’s eyes (or otherwise observing, maybe asking questions of the student), and the student can see that.
  • When the teacher is looking into one student’s eyes, s/he can’t also drill into another student’s eyes.
  • And the student can look back into the teacher’s eyes, and the teacher can see that.
  • The student gets to infer, too.

All this is part of a larger human relationship. And that is valuable. (No, not all human relationships are good, but they are pretty much all we got.)

Estimating a student’s emotions is just a small part of what is going on between a teacher and a student. In isolation, staring at someone is an aggressive act, one where the “student” might be justified in hitting back, but would more wisely turn away from this crazy person, cross the street, and put some distance between them.

Just because a human does thing X, even as a key part of some wonderful accomplishment, doesn’t mean thing X is itself somehow good or even remotely acceptable in other circumstances.

The fundamental problem with Intel’s innovation has little to do with whether they did a good job of implementing this isolated skill. It is more the fact that they implemented it as an isolated feature that can then be deployed in myriad ways. Ways the subject (“student”) may have no knowledge of, ways that can drive decisions about the subject that s/he has no ability to influence or appeal.

When used completely as intended, as part of online “Zoom” learning, this innovation might not be all bad, but the student is no longer in a student/teacher relationship of the sort we think we understand. No, this rapidly evolving online tool that Intel apparently wants a piece of is redefining what is going on. And God knows what “data-driven” innovations will be added next, particularly in the hands of for-profit institutions.

The real problem here is what horrors are possible in the maw of a faceless and unanswerable bureaucracy. The fact that the bureaucracy has a shiny new toy is important, but don’t focus so much on the toy that you ignore who is using it, and how it will be used as part of a much larger system.

I have seen the movie Schindler’s List only once; it was in first run, a zillion years ago. I do remember it was in black and white, and I happened to see it on a glorious big screen. But I don’t remember a lot of other details. Yes, there were nasty Nazis, terrible dilemmas, sadness and heroism, but it’s all kind of blurry. Except one chilling shot etched in my mind: a typewriter, typing up names. The film hit me between the eyes with that.

Bureaucracies have always kept lists of names; the difference is the typewriter made it more efficient. Every time bureaucracy is handed a more efficient tool, we are at risk of bureaucracy doing what it does at a more industrial scale. As impressive and powerful as the typewriter is, seeing it in the movie also reminded me that IBM sold more powerful information processing equipment to the Nazis. To help them run their bureaucracies more efficiently.

Some of the new computer programs we like to call “AI” are really impressive information processing tools. They are tools that allow bureaucracies not just to possess and sift through vast amounts of data, but to make subtle decisions about that data without bothering with the pesky bureaucrat step anymore. No human bureaucrat means bureaucracies can now make complex decisions at scale, at great speed, and therefore make many more decisions than ever before.

Sure, individuals can deploy this technology in bad ways, but bad teachers can be bad in completely old-fashioned ways; individuals don’t need such a fancy tool to do individual harm. But bureaucracies need to operate at scale.

“AI” ethics isn’t about the “AI”; it is mostly about bureaucracies—government and commercial, public and secret—and the ethics of what they deploy and how. Bureaucracies have never been good at ethics.

But right now they are all transfixed by this shiny new toy, looking for what data they already have sitting around, what new data they might collect, and how they could plug it all together to do things they could never do before.

Read Kafka’s The Castle, then let your imagination roam…

-kb

©2022 Kent Borg

P.S. Comments are broken and have been for some time. Sorry.

