Thanks to the in-ear headphone comparison I took up a couple of months ago, an interesting aspect of the human experience came to the forefront – the constant quest for objectivity in our decision making. Since AI is a recurring theme in the news lately, my mind immediately jumped to the conclusion that an artificial intelligence burdened with the same task would make quick work of it. At least, that’s what we’re led to believe. But my question is: is that truly desirable?
In the case of the audio test, my job as a reviewer was to objectively tell you readers which headphone is better. Given the myriad personal preferences users have (bass response, aesthetics, brand, etc.), this wasn’t an easy task. With the limited resources at hand, I had to tread the middle ground to come up with the most objective answer possible. An AI would probably have settled for “insufficient data”.
My objective mind balked at the idea because we were completely ignoring the big implicit “for now” in that equation. This is almost like saying I’ll put all my gold in that bank in the inside gully of a nondescript area because, hey, chances are robbers haven’t noticed the bank yet.
I know the analogy doesn’t fit perfectly, but you get the point: for an advanced AI, this is just a nanosecond run through its risk assessment subroutine. It would either have told you how to diversify your risk by storing only low-level passwords across multiple providers, or admitted that it doesn’t have sufficient data to make an assessment. We humans, on the other hand, used the quick shorthand of “close enough”. A member of the group was quick to point out, “What would you rather use? A system that’s already been breached multiple times, or a system that hasn’t been breached yet?” Hmmm… makes a little bit of sense, doesn’t it?
Perhaps, for now, using 1Password is indeed the correct option, even though it may not objectively be the best one. If this question were thrown at an AI, it would have returned a different answer, but this thought exercise brings to mind an old mathematician joke that I’ve slightly modified below.
An AI and a human (who happens to be an engineer) are sitting at a table drinking when a very beautiful woman walks in and sits down at the bar. Both the AI and the human start arguing over who gets to approach her. A salarian bouncer walks in and lays down one condition: “You may both approach her, but every five minutes you can only cover half the distance between where you are and where she is, then after five minutes half of the distance that remains, then five minutes later half of that distance, and so on. The series is infinite.”
The AI starts working on the equation, blows a chip when it figures out it’ll never reach her, and gives up. The human, on the other hand, accepts the challenge, declaring, “Ah, well, I figure I can get close enough for all practical purposes in no time.”
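The engineer’s instinct checks out mathematically, by the way: covering half the remaining distance each step is a geometric series, so after n steps you’ve covered d·(1 − 1/2ⁿ) of the original distance d. A quick sketch (the 10-metre starting distance is my own made-up figure for illustration) shows how fast “close enough” arrives:

```python
# Zeno-style approach: cover half the remaining distance every step.
# The 10-metre starting distance is an assumed figure, purely illustrative.
distance = 10.0
remaining = distance

for step in range(1, 8):
    remaining /= 2  # each step halves whatever distance is left
    covered = distance - remaining
    print(f"step {step}: {covered:.4f} m covered, {remaining:.4f} m to go")
```

After just seven steps, less than eight centimetres remain – close enough for all practical purposes, and well before the series “finishes”.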
Now, I’m not saying algorithms (or AI) are entirely bad at approximations. All I’m saying is that we humans are slightly better at things like operating on sentiment, feelings, and instinct – quick shortcuts that let us get by when working with limited time or limited information.
For now, I think I could live with this “close enough”.
How about you?
Let me know in the comments section below.