Dogs in the park have their rituals, as do their masters.
Nothing is unnatural or artificial. There is a first- and a second-order nature; the former is what is given to us (oceans, trees, animals), the latter is what we have made of the first. As such, second-order nature is always subjective: it is defined in relation to us, the authors, who once were just first-order natural beings.
Current recommendation algorithms merely suggest things we might want; they should instead help us choose the one thing that is best.
The technology powering our recommendation engines today falls into two main categories: “similar items” and “others also bought”. The first is what Netflix uses: it analyses all the movies I have already watched and looks for similar ones. The second is what you will find on Amazon: it is simply a list of things that were bought by other people who also bought the item I am currently looking at. The first type of AI analyses the intrinsic properties of items, whereas the second looks to external, relational elements. Both algorithms nevertheless produce the same output and involve the same human behavior: they propose a list of things I might want and ask me to choose.
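The two styles can be caricatured in a few lines of code. This is only a toy sketch of the contrast described above: the item names, feature vectors, and shopping baskets are all invented for illustration, and real engines are vastly more elaborate.

```python
# Toy contrast between the two recommendation styles.
# All items, feature vectors, and baskets are invented examples.

from collections import Counter
from math import sqrt

# "Similar items" (Netflix-style): compare intrinsic feature vectors.
def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (sqrt(sum(x * x for x in a)) * sqrt(sum(x * x for x in b)))

features = {
    "Movie A": [1, 0, 1],  # e.g. genre flags: drama, comedy, thriller
    "Movie B": [1, 0, 1],
    "Movie C": [0, 1, 0],
}

def similar_item(item, catalogue):
    # Recommend the catalogue item whose features most resemble this one's.
    others = {k: cosine(catalogue[item], v)
              for k, v in catalogue.items() if k != item}
    return max(others, key=others.get)

# "Others also bought" (Amazon-style): count co-occurrences in baskets.
baskets = [["kettle", "teapot"],
           ["kettle", "teapot", "mug"],
           ["mug", "bowl"]]

def also_bought(item, baskets):
    # Recommend the item most often bought alongside this one.
    counts = Counter(other
                     for basket in baskets if item in basket
                     for other in basket if other != item)
    return counts.most_common(1)[0][0]
```

Note that the first function never looks at anyone's behavior, and the second never looks inside an item; yet both end by handing back a ranked list and asking the human to choose.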
That only gets me halfway there. I really want the AI to tell me which of those items I should choose. Moreover, it should not select the item I currently want the most, but rather that magical item I might not even know I want, though in retrospect I will agree it was even better than what I had initially wanted.
Note that such an algorithm might not require any new inputs: it can still use the intrinsic and relational data about items and people to home in on the right choice. But it does require a different process, and a different set of assumptions about human nature: instead of presuming that I am an opaque “bundle of wants”, the algorithm must construct an intelligible model of my “better wants”. To do so will require a dynamic model that can determine both what I am likely to pick now and what I am likely to “like having picked”.
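The shift from "pick now" to "like having picked" can be made concrete with a deliberately minimal sketch. The item names and both sets of scores are invented; the point is only that the two recommenders optimize different quantities over the same data.

```python
# A toy model of the "better wants" idea: the same items carry two
# predictions — immediate appeal versus retrospective satisfaction.
# All names and numbers are invented for illustration.

items = {
    # item: (predicted immediate appeal, predicted satisfaction in retrospect)
    "comfort-zone pick": (0.9, 0.5),
    "familiar variant":  (0.8, 0.6),
    "stretch pick":      (0.6, 0.9),
}

def current_recommender(items):
    # Today's engines: maximise what I am likely to pick now.
    return max(items, key=lambda k: items[k][0])

def better_wants_recommender(items):
    # The proposed engine: maximise what I am likely to be glad I picked.
    return max(items, key=lambda k: items[k][1])
```

The hard part, of course, is not the second `max` but producing the second column of numbers: a model of what I will, in retrospect, agree was the better choice.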
Such assumptions and their ensuing models play out in how the humble recommendation engine influences the quality of our lives. Our current algorithms notoriously encourage us to stay within our comfort zones, and they force us to choose (often more or less at random) between broadly indistinguishable options. This is a shallow kind of freedom. Do you want the blue one or the red one? We should look instead for a deeper kind of freedom, one that might indeed ask the AI to choose for us, but which in return allows us to determine the desired outcome. We are training that AI to better serve us.
Though technology, and in particular Artificial Intelligence, is generally helpful and often dramatically extends our abilities, it can also constrain us in subtle but morally important ways.
I was returning to Minneapolis today from a long weekend in south-western Wisconsin and decided to take the scenic route along the Mississippi instead of the quicker (but no more direct) interstate highway. So naturally I turned on my navigation system and selected the river route to my final destination. But no sooner had I started off than the navigatrix told me she had found a faster route and that I should hit “accept” to take it. I didn’t want to, but she insisted a half-dozen times until I was far enough along that said faster route had mercifully become slower. Moreover, every time I stopped, for gas or to take a picture, she would reroute me as if I had lost my way and select a new, fastest route. So I had to continually go back and reselect my original, more beautiful, route.
My issue is not with the software’s abilities — I have no (well, few) complaints there; it’s basically magical — but with my options for deciding how that magic is carried out. The software designers decided that most people, most of the time, would want the fastest route, so they made that the default behavior. Now we are all slightly pressured into taking the fastest route, that is, into not enjoying the slow, scenic one. The software, or more precisely the layout of its user interface, has made a decision for me, one that I did not want it to make. And even though I can probably turn the “always select the fastest route” setting off somewhere, the developer’s choice of default settings nevertheless tilts me towards a certain kind of behavior.
In effect, though I was able to achieve what I wanted (to get home), I couldn’t easily determine how that goal was to be pursued. Both of these are essential to exercising freedom. Good AI should not only help me get home, but should also make it easy for me to tell it how best to get there.
Interpreting a Beethoven sonata well is not an easy task. In reading Alfred Brendel’s essays on the topic in Music, Sense and Nonsense, I learned something about the process which struck me: it is the same exacting process as interpreting a religious text — one that requires access to the earliest manuscripts, an understanding of the expressive abilities of the notations of that time, an imaginative ability to extract the author’s intentions in order to correct mistakes and fill in blanks, and the intelligence to translate the work into something a contemporary audience will acknowledge is beautiful and eternal. Beethoven himself could write down the wrong note, forget to rework a hand in light of changes in the other; his editors could introduce mistakes or add their own misguided remarks; a semiquaver or sforzando might long have been interpreted too narrowly or against the grain; and a long-standing, bland classical performance might be due to reading the mere absence of accentuation as a prescription. Up to this point, the musician of the twentieth century is but following in the footsteps of the earlier century’s exegetical theologians. Where Brendel’s art exceeds today’s theologies is in his humble recognition that the value of an interpretation lies in how deeply it respects the original text, though only in order to extract from it those eternal truths and beauties which today will compel the audience’s assent.
When my father gave me a few new plants last year as I moved into my new apartment, he told me to water them in proportion to their leaf surface, since it is the leaves that breathe, and hence exhale water. I’ve stuck to that bit of advice and it has served me, or rather the plants, well.
One of the plants, however, grew quickly and I soon needed to trim it. At first I simply cut off entire branches that stuck out too far, or trimmed their tops if they had grown too tall. But the branches never grew back, leaving gaping bare sections, and each trimmed top grew two new sprouts, weighing the branch down even further until it broke under its own weight. So I attempted a new strategy: I began to cut the branches off midway. Now the plant looks less bare, and instead of doubling the tips, new leaves grow along the full length of the shorter branches.
The traditional moral of this story might be that in practical matters, since we cannot understand every element of complex systems, we must resort to rules of thumb and tinker at the edges. However, I wish to suggest another conclusion: in horticulture as in ethics — or politics, or economics — we must not define rules or laws, however approximate; instead, we should develop principles, which need only be adequate and exact, not approximate and true. And since such principles are not hazy laws, which must be applied with art and guesswork, they can be followed exactly, and steadily improved through experimentation.
A small parallel struck me the other night as I watched Brian DePalma’s movie Mission: Impossible. At the very end, a tethered helicopter comes crashing through the Chunnel towards the hero (Tom Cruise), who is pinned against the rear window of a stationary train. The helicopter gradually slows, as do its blades, which finally come to a halt a mere inch away from the hero’s jugular. DePalma leaves no doubt in the viewer’s mind that this happy turn of events was sheer luck.
This reminded me of a scene towards the middle of the movie when Tom Cruise and Jean Reno were storming the CIA headquarters, and had just knocked out two security guards. Jean Reno’s character is about to slit their throats with his knife when Tom Cruise grabs his hand to stop him: the hero will allow no innocent bystanders to be killed.
There can be little doubt that DePalma has a clear message here: the good guy was saved at the end through luck because he had earlier spared the guards from an almost identical fate. It is because he was good that he became lucky. (If you still harbor doubts, I’ll point out that the very dead pilot of the helicopter crashing towards Tom Cruise was Jean Reno.) Is this not the doctrine that God favors the Just, with Luck merely substituted for God?
I believe many Hollywood action movies display this same feature. One need only point out how much better the good guys are at dodging bullets than their luckless opponents. But perhaps the best proof lies in Hollywood’s own self-conscious satire. Do you remember the (first) bathroom shooting scene in Quentin Tarantino’s Pulp Fiction? John Travolta and Samuel L. Jackson have just finished chewing out, and then shooting to death, a gang of petty two-timing drug dealers, when one last member emerges from the bathroom and empties his gun at them, missing both entirely. They shoot the unlucky man dead, but then fall into an argument as to whether it was Luck or God that had saved them.
Hollywood no longer believes in God, but it remains beholden to the hope that goodness leads to salvation, and since nothing on our moral scene has yet appeared that might fill God’s commanding role, our storytelling has resorted to that old standby, incomprehensible luck.
In what follows I would like to suggest that the practice of yoga bears a strong resemblance to the modern idea of a scientific laboratory setting, and hence that certain aspects of yoga might be relevant to a broader set of moral practices.
The idea of a laboratory in the natural sciences encompasses a number of features: (1) a laboratory is a dedicated space that is (2) insulated as much as possible from outside influences; in a laboratory, (3) various experiments are performed and reproduced, usually by (4) a team of senior and junior researchers; (5) the purpose of such a laboratory is to produce new knowledge in a specific scientific discipline; hence a laboratory is also (6) generally equipped with reliable instruments specific to the discipline, which enable the scientists to easily construct and measure new experiments.
If we follow Patañjali’s classic exposition of yoga, we can locate, at least in nuce, all of the above six features of a laboratory. One important difference should be noted from the outset. Whereas the natural sciences are a theoretical endeavor looking for truth, yoga is a practical endeavor, which does not produce true statements but good practices. This one translation will inform how each of the above features is reinterpreted when applied to yoga.
Our idea of religion originates in the axial age when philosophers and prophets discovered that we can interpret the world from within our own souls: it is a subjective concept; and we are now in thrall to its addictive promise of a single truth which will illuminate the meaning of the world to each of us. We must give up this religion and accept that the promise was false. Then we can return to understanding the world little by little, accepting that there will always be much that others must explain to us, and even more that none of us can yet fathom. Hopefully the violent symptoms of our addiction will then subside.