Digit Geek

Can AI ever understand the nuances of subjectivity?

Humans are quite good at navigating the middle ground between subjectivity and objectivity to arrive at the most objective answer possible. But can AI?

Thanks to the in-ear headphone comparison I took up a couple of months ago, an interesting aspect of the human experience came to the forefront – the constant quest for objectivity in our decision making. Since AI is a recurring theme in the news lately, my mind immediately jumped to the conclusion that an artificial intelligence burdened with the same task would make quick work of it. At least that’s what is commonly believed. But my question is: is that truly desirable?

In the case of the audio test, my job as a reviewer was to objectively tell you readers which headphone is better. Given the myriad personal preferences users have (bass, aesthetics, brand, etc.), this wasn’t an easy task. With the limited resources I had at hand, I had to tread the middle ground to come up with the most objective answer possible. An AI would probably have settled for “insufficient data”.

Take another example: some time ago, yet another vulnerability was detected in LastPass (an uber-popular password manager). It gets breached so often that we might as well all have www.haveibeenpwned.com as our default homepage, but that’s a different story. Anyway, while discussing the development in a tech-enthusiast Telegram group I’m part of, 1Password was being touted as a “better” option. Worth a try, sure, but my question was: is 1Password objectively better at security, or has it simply not attracted the attention of hackers just yet? The security pages of both services offered up the usual spiel: AES-256 encryption, PBKDF2 key derivation, local storage of the hash key and the works. But what brought down LastPass in this latest breach was literally two lines of JavaScript that exploited a vulnerability in its Chrome extension. No one in the group could objectively say whether one’s security was better than the other’s. Strangely enough, the quick shortcuts we humans use for decision-making led to a consensus within the group: “Even if 1Password is safer just because it hasn’t attracted the attention of truly skilled hackers, isn’t that still a good enough reason to use it?”
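To make that spiel a little more concrete: PBKDF2 is simply a way of stretching a master password into an encryption key, so that the key never needs to leave your machine. Here’s a minimal sketch in Python – the password, salt handling and iteration count are my own illustrative assumptions, not either vendor’s actual settings:

```python
import hashlib
import os

# Hypothetical illustration of PBKDF2 key derivation, the scheme both
# services advertise. Parameters below are assumptions for the sketch.
master_password = b"correct horse battery staple"
salt = os.urandom(16)      # random per-user salt, stored alongside the vault
iterations = 600_000       # illustrative work factor; higher = slower to brute-force

# Derive a 32-byte key, the size needed for AES-256.
key = hashlib.pbkdf2_hmac("sha256", master_password, salt, iterations, dklen=32)
print(len(key))  # 32
```

The point of the iteration count is that each guess an attacker makes costs the same 600,000 hash rounds – which is also why the actual LastPass breach route (a flaw in the browser extension) sidestepped all of this math entirely.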


My objective mind balked at the idea because we were completely ignoring the big implicit “for now” in that equation. This is almost like saying I’ll put all my gold in that bank in the inside gully of a nondescript area, because hey, chances are robbers haven’t noticed the bank yet.

I know the analogy doesn’t fit perfectly, but you get the point: for an advanced AI this is just a nanosecond’s run through its risk-assessment subroutine. It would either have told you how to diversify your risk by storing only low-value passwords across multiple providers, or told you that it doesn’t have sufficient data to make an assessment. We humans, on the other hand, used the quick shorthand of “close enough”. A member of the group was quick to point out, “What would you rather use? A system that’s already been breached multiple times, or a system that hasn’t been breached yet?” Hmmm… makes a little bit of sense, doesn’t it?

Perhaps, for now, using 1Password is indeed the correct option, even though it may not be the objectively best one. If this question were thrown at an AI, it would have returned a different answer, but this thought exercise brings to mind an old mathematician joke that I’ve slightly modified below.

An AI and a human (who happens to be an engineer) are sitting at a table drinking when a very beautiful woman walks in and sits down at the bar. The AI and the human start arguing over who gets to approach her. A salarian bouncer walks in and lays down one condition: “You may both approach her, but every five minutes you can only cover half the distance between where you are and where she is; then, after five minutes, half of the distance that remains; five minutes later, half of that distance; and so on. The series is infinite.”

The AI starts working on the equation, blows a chip upon figuring out it’ll never reach her, and gives up. The human, on the other hand, accepts the challenge, declaring, “Ah, well, I figure I can get close enough for all practical purposes in no time.”
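The engineer’s logic is easy to check. Each step halves the remaining gap, so after n steps the distance left is d/2ⁿ – never zero, but shrinking fast. A quick sketch (the 10-metre starting gap and 1 cm “close enough” threshold are assumed for illustration):

```python
# Each five-minute step halves the remaining distance.
# The gap never reaches zero, but it drops below 1 cm very quickly.
distance = 10.0  # metres; assumed starting gap
steps = 0
while distance > 0.01:  # within 1 cm counts as "close enough"
    distance /= 2
    steps += 1
print(steps, distance)  # 10 steps, ~0.0098 m
```

Ten steps at five minutes each is under an hour – close enough for all practical purposes, exactly as the engineer figured.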

Now, I’m not saying algorithms (or AI) are entirely bad at approximations. All I’m saying is that we humans are slightly better at operating on sentiment, feelings, and instinct – quick shortcuts that enable us to get by when working with limited time or limited information.

For now, I think I could live with this “close enough”.

How about you?

Let me know in the comments section below

Siddharth Parwatay


Vertically challenged geek. Interested in things like evolution, psychology, pc gaming, history, web culture... amongst other things.