October 22, 2016
Current recommendation algorithms merely suggest things we might want; they should instead help us choose the one thing that is best.
The technology powering our recommendation engines today falls into two main categories: “similar items” and “others also bought”. The first is what Netflix uses: it analyses all the movies i have already watched and looks for similar ones. The second algorithm is what you will find on Amazon: it is simply a list of things that were bought by other people who also bought the item i am currently looking at. The first type of AI analyses the intrinsic properties of items, whereas the second looks to external and relational elements. Both algorithms nevertheless produce the same output and involve the same human behavior: they propose a list of things i might want and ask me to choose.
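The two approaches can be sketched in a few lines. This is a toy illustration only: the catalogues, feature sets, and shopping baskets below are invented, and real systems use far richer signals than these.

```python
from collections import Counter

# --- "Similar items": compare items by their intrinsic properties ---
# Toy feature sets; in practice these would be genres, tags, embeddings, etc.
features = {
    "Alien":        {"sci-fi", "horror", "space"},
    "Blade Runner": {"sci-fi", "noir", "dystopia"},
    "The Thing":    {"horror", "sci-fi", "isolation"},
}

def jaccard(a, b):
    """Overlap of two feature sets, from 0.0 (disjoint) to 1.0 (identical)."""
    return len(a & b) / len(a | b)

def similar_items(item, catalog):
    """Rank every other item by feature overlap with `item`."""
    return sorted(
        (other for other in catalog if other != item),
        key=lambda other: jaccard(catalog[item], catalog[other]),
        reverse=True,
    )

# --- "Others also bought": compare items by who bought them together ---
baskets = [
    {"toothbrush", "toothpaste"},
    {"toothbrush", "floss", "toothpaste"},
    {"toothbrush", "shampoo"},
]

def also_bought(item, baskets):
    """Count how often other items share a basket with `item`."""
    counts = Counter()
    for basket in baskets:
        if item in basket:
            counts.update(basket - {item})
    return [other for other, _ in counts.most_common()]

print(similar_items("Alien", features))    # "The Thing" ranks above "Blade Runner"
print(also_bought("toothbrush", baskets))  # "toothpaste" ranks first
```

Notice that both functions end the same way: with a ranked list handed back to the user, who must still do the choosing.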
That only gets me halfway there. I really want the AI to tell me which of those items i should choose. Moreover, it should not select the item i currently want the most, but rather that magical item i might not even know i want, though in retrospect i will agree it was even better than what i had initially wanted.
Note that such an algorithm might not require any new inputs: it can still use the intrinsic and relational data about items and people to home in on the right choice. But it does require a different process, and a different set of assumptions about human nature: instead of the algorithm presuming that i am an opaque “bundle of wants”, it must construct an intelligible model of my “better wants”. To do so will require a dynamic model that can determine what i am likely to pick now as well as what i am likely to “like having picked”.
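The distinction can be made concrete. In the sketch below, every score is invented for illustration; a real system would have to learn the second column from delayed feedback (later ratings, re-watches, regret signals) rather than from clicks alone.

```python
# Each item carries two predicted scores:
#   want_now   - how appealing it looks at the moment of choice
#   glad_later - how satisfied i am predicted to be in retrospect
items = {
    # item:                (want_now, glad_later) - all values illustrative
    "comfort rewatch":     (0.9, 0.5),
    "acclaimed drama":     (0.4, 0.9),
    "random blockbuster":  (0.6, 0.6),
}

def pick_for_now(items):
    """What current engines optimize: immediate appeal."""
    return max(items, key=lambda k: items[k][0])

def pick_for_later(items):
    """The proposed alternative: what i will 'like having picked'."""
    return max(items, key=lambda k: items[k][1])

print(pick_for_now(items))    # the comfort rewatch wins on immediate appeal
print(pick_for_later(items))  # the acclaimed drama wins in retrospect
```

The same inputs, a different objective: the second function returns one item rather than a ranked list, and it returns the item the first function would have passed over.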
Such assumptions and their ensuing models play out in how the humble recommendation engine influences the quality of our lives. Our current algorithms notoriously encourage us to stay within our comfort zones, and they force us to choose (often essentially at random) between broadly indistinguishable options. This is a shallow kind of freedom. Do you want the blue one or the red one? We should look instead for a deeper kind of freedom, one that might indeed ask the AI to choose for us, but which in return allows us to determine the desired outcome. We are training that AI to better serve us.
Grocery aisle yogurt: BobMical via photopin cc
October 16, 2016
Though technology, and in particular Artificial Intelligence, is generally helpful and often dramatically extends our abilities, it can also constrain us in subtle but morally important ways.
I was returning to Minneapolis today from a long weekend in south-western Wisconsin and decided to take the scenic route along the Mississippi instead of the quicker (but no more direct) interstate highway. So naturally i turned on my navigation system and selected the river-route to my final destination. But no sooner had i started off than the navigatrix told me she had found a faster route and that i should hit “accept” to take it. I didn’t want to, but she insisted a half-dozen times until i was far enough along that said faster route had mercifully become slower. Moreover, every time i stopped, for gas or to take a picture, she would reroute me as if i had lost my way and select a new, fastest route. So i had to continually go back and reselect my original, more beautiful, route.
My issue is not with the software’s abilities — i have no (well, few) complaints there, it’s basically magical — but with my options for deciding how that magic is carried out. The software designers decided that most people, most of the time, would want the fastest route, so they made that the default behavior. Now, we are all slightly pressured into taking the fastest route, that is, into not enjoying the slow, scenic one. The software, technically the layout of the user interface, has made a decision for me, one that i did not want it to make. And even though i can probably turn the “always select the fastest route” setting off somewhere, the developer’s choice of the default settings nevertheless tilts me towards a certain kind of behavior.
In effect, though i was able to achieve what i wanted (to get home), i couldn’t easily determine how that goal was to be pursued. Both of these are essential to exercising freedom. Good AI should not only help me get home, but should make it easy for me to tell it how best to get there.
October 9, 2016
Interpreting a Beethoven sonata well is not an easy task. In reading Alfred Brendel’s essays on the topic in Music, Sense and Nonsense, I learned something about the process which struck me: it is the same exacting process as interpreting a religious text — one that requires access to the earliest manuscripts, an understanding of the expressive abilities of the notations of that time, an imaginative ability to extract the author’s intentions in order to correct mistakes and fill in blanks, and the intelligence to translate the work into something a contemporary audience will acknowledge is beautiful and eternal. Beethoven himself could write down the wrong note, forget to rework a hand in light of changes in the other; his editors could introduce mistakes or add their own misguided remarks; a semiquaver or sforzando might long have been interpreted too narrowly or against the grain; and a long-standing, bland classical performance might be due to reading the mere absence of accentuation as a prescription. Up to this point, the musician of the twentieth century is but following in the footsteps of the earlier century’s exegetical theologians. Where Brendel’s art exceeds today’s theologies is in his humble recognition that the value of an interpretation lies in how deeply it respects the original text, though only in order to extract from it those eternal truths and beauties which today will compel the audience’s assent.
October 2, 2016
When my father gave me a few new plants last year as I moved into my new apartment, he told me to water them in proportion to their leaf surface, since it is the leaves that breathe, and hence exhale water. I’ve stuck to that bit of advice and it has served me, or rather the plants, well.
One of the plants, however, grew quickly and I soon needed to trim it. At first I simply cut off entire branches that stuck out too far, or trimmed their tops if they had grown too tall. But the branches never grew back, leaving gaping bare sections, and each trimmed top grew two new sprouts, weighing the branch down even further until it broke under its own weight. So I attempted a new strategy: I began to cut the branches off mid-way. Now the plant looks less bare, and instead of doubling the tips, new leaves grow along the full length of the shorter branches.
The traditional moral of this story might be that in practical matters, since we cannot understand every element of complex systems, we must resort to rules-of-thumb, and tinker at the edges. However, I wish to suggest another conclusion: in horticulture as in ethics — or politics, or economics — we must not define rules or laws, however approximate; instead, we should develop principles, which need only be adequate and exact, not approximate and true. And since such principles are not hazy laws, which must be applied with art and guesswork, they can be followed exactly, and steadily improved through experimentation.