Though technology, and in particular Artificial Intelligence, is generally helpful and often dramatically extends our abilities, it can also constrain us in subtle but morally important ways.
I was returning to Minneapolis today from a long weekend in south-western Wisconsin and decided to take the scenic route along the Mississippi instead of the quicker (but no more direct) interstate highway. So naturally i turned on my navigation system and selected the river route to my final destination. But no sooner had i started off than the navigatrix told me she had found a faster route and that i should hit “accept” to take it. I didn’t want to, but she insisted a half-dozen times until i was far enough along that said faster route had mercifully become slower. Moreover, every time i stopped, for gas or to take a picture, she would reroute me as if i had lost my way and select a new, fastest route. So i had to continually go back and reselect my original, more beautiful, route.
My issue is not with the software’s abilities — i have no (well, few) complaints there, it’s basically magical — but with my options for deciding how that magic is carried out. The software designers decided that most people, most of the time, would want the fastest route, so they made that the default behavior. Now we are all slightly pressured into taking the fastest route, that is, into not enjoying the slow, scenic one. The software — technically, the layout of the user interface — has made a decision for me, one that i did not want it to make. And even though i can probably turn the “always select the fastest route” setting off somewhere, the developer’s choice of default settings nevertheless tilts me towards a certain kind of behavior.
In effect, though i was able to achieve what i wanted (to get home), i couldn’t easily determine how that goal was to be pursued. Both — choosing one’s ends and choosing one’s means — are essential to exercising freedom. Good AI should not only help me get home, but should make it easy for me to tell it how best to get there.