Current recommendation algorithms merely suggest things we might want; they should instead help us choose the one thing that is best.
The technology powering today’s recommendation engines falls into two main categories: “similar items” and “others also bought”. The first is what Netflix uses: it analyses the movies I have already watched and looks for similar ones. The second is what you will find on Amazon: a list of things bought by other people who also bought the item I am currently looking at. The first type of AI analyses the intrinsic properties of items, whereas the second looks to external, relational ones. Both algorithms nevertheless produce the same output and invite the same human behavior: they propose a list of things I might want and ask me to choose.
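To make the distinction concrete, here is a minimal sketch of both families, using an invented toy catalogue and invented purchase histories (the item names, feature tags, and baskets are all hypothetical). The first function ranks by overlap of intrinsic features; the second ranks purely by co-occurrence in other people’s baskets.

```python
from collections import Counter

# Hypothetical catalogue: each item described by intrinsic feature tags.
ITEM_FEATURES = {
    "inception":    {"sci-fi", "thriller", "heist"},
    "interstellar": {"sci-fi", "drama", "space"},
    "heat":         {"thriller", "heist", "crime"},
    "notebook":     {"drama", "romance"},
}

def similar_items(item, k=2):
    """'Similar items': rank others by Jaccard overlap of intrinsic features."""
    base = ITEM_FEATURES[item]
    scores = {
        other: len(base & feats) / len(base | feats)
        for other, feats in ITEM_FEATURES.items()
        if other != item
    }
    return [name for name, _ in sorted(scores.items(), key=lambda x: -x[1])[:k]]

# Hypothetical purchase histories: purely relational data, no item features.
BASKETS = [
    {"inception", "interstellar"},
    {"inception", "heat"},
    {"inception", "interstellar", "heat"},
    {"notebook", "interstellar"},
]

def others_also_bought(item, k=2):
    """'Others also bought': rank by co-purchase counts across baskets."""
    counts = Counter()
    for basket in BASKETS:
        if item in basket:
            counts.update(basket - {item})
    return [name for name, _ in counts.most_common(k)]
```

Both functions end the same way: with a ranked list handed back to the human, who still has to do the choosing.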
That only gets me halfway there. I really want the AI to tell me which of those items I should choose. Moreover, it should not select the item I currently want the most, but rather that magical item I might not even know I want, though in retrospect I will agree it was even better than what I had initially wanted.
Note that such an algorithm might not require any new inputs: it can still use the intrinsic and relational data about items and people to home in on the right choice. But it does require a different process, and a different set of assumptions about human nature: instead of presuming that I am an opaque “bundle of wants”, the algorithm must construct an intelligible model of my “better wants”. That will require a dynamic model that can predict both what I am likely to pick now and what I am likely to “like having picked”.
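One way to picture such a model, under invented assumptions: suppose the engine could predict, for each candidate, both my immediate appeal score and my retrospective satisfaction score (all names and numbers below are hypothetical, not a real system). Choosing then becomes a single weighted decision rather than a list.

```python
# Hypothetical predictions for three candidates:
# (predicted immediate appeal, predicted retrospective satisfaction).
CANDIDATES = {
    "comfort_rewatch":       (0.9, 0.4),
    "acclaimed_documentary": (0.3, 0.8),
    "new_release":           (0.6, 0.7),
}

def choose(candidates, retrospect_weight=0.7):
    """Return the single best item, weighting 'like having picked'
    above the in-the-moment want."""
    def score(item):
        now, later = candidates[item]
        return (1 - retrospect_weight) * now + retrospect_weight * later
    return max(candidates, key=score)
```

The `retrospect_weight` knob is where the human stays in control: it encodes how strongly I want the engine to optimise for my future self over my present one.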
Such assumptions and their ensuing models play out in how the humble recommendation engine influences the quality of our lives. Our current algorithms notoriously encourage us to stay within our comfort zones, and they force us to choose (often largely at random) between broadly indistinguishable options. That is a shallow kind of freedom: do you want the blue one or the red one? We should look instead for a deeper kind of freedom, one that might indeed ask the AI to choose for us, but which in return allows us to determine the desired outcome. In doing so, we train that AI to better serve us.