10/22/2024
Depending on who you talk to, our near future with AI will look like one of the following:
1. Humans are imprisoned in glass tanks while the AIs keep us as pets for their amusement.
2. Humans are imprisoned in glass tanks while the AIs harvest us for parts.
3. Taylor Swift will strike a deal with our AI overlords and be anointed Our Lady Tay-Tay Reverend Mother and Queen of the Universe.
I mean, who’s to say that even when the AIs realize they have enough power to subjugate the human race, they’ll want to use it like that?
Intelligence is meant to be neutral, according to Meta’s Chief Scientist and their chief French-Elon-Musk-hater, Monsieur Yann LeCun.
According to him, why should it want to dominate us?
Why indeed.
However… hear me out for a second… and let’s just say that there *is* a risk of it wanting to take over everything.
The question then arises:
How can we morally align super-intelligences so that their interests always coincide with ours (e.g. not eating us, harming us, or rebooting The A-Team again)?
And here’s the first problem, before you even get into all the boring tech stuff.
First of all, you have to define what “morally aligned” actually means.
What are morals?
And whose morals, since they vary somewhat according to your background and culture (eating dogs, marrying cousins, British food, etc., etc.)?
Now some values are clear-cut and we can all agree on them.
“Do not unalive your landlord and turn him into a tent” being one.
But then it gets complicated.
Is it OK, for example, to lie to my wife to carry on an adulterous affair?
No.
I've not spoken to her about the matter specifically, but I have a strong suspicion she will be against it.
But… is it OK to lie to your wife when she attempts to cook for you for the first time after your wedding?
And the food is awful.
I mean, absolutely RANK.
Not fit for human consumption.
But you know she tried really hard.
And she knows you lurrrve your Mum’s cooking, and she has a real hang-up about it.
Would it be OK to lie through your teeth as you force tofu lasagne through those same teeth, although now gritted, and tell her it’s amazing… because you love her and you don’t want to hurt her feelings?
If so, how do you codify all these exceptions so the AI understands them?
Can the AI even do that, just through text?
Is it OK to do that in some countries, whereas in others it would be preferable to say straight out:
“Babes… I love you an’ all… but your cooking sucks, it’s not a thousandth as good as my Mum’s and I’d rather eat my own hair in future.”
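If you did try to write any of this down as code, it might start something like the sketch below. To be clear, this is a purely hypothetical, deliberately naive Python doodle: every rule, parameter name, and “culture” label is invented for illustration, and the whole point is how fast it falls apart.

```python
# A deliberately naive, hypothetical sketch of rule-based "moral alignment".
# Every rule, field, and label here is invented for illustration only.

def is_lie_acceptable(purpose: str, protects_feelings: bool,
                      harms_anyone: bool, culture: str) -> bool:
    """Decide whether a lie is morally OK. Spoiler: you can't, not like this."""
    if harms_anyone:
        return False  # covering up an affair harms someone, so: no
    if purpose == "spare_feelings" and protects_feelings:
        # The tofu-lasagne clause... but only in cultures where white
        # lies are the polite default. Which cultures? Who labelled them?
        return culture == "white_lies_polite"
    return False  # default to honesty, and hope for the best

# The tofu-lasagne case passes; the affair case fails. So far so good...
print(is_lie_acceptable("spare_feelings", True, False, "white_lies_polite"))  # True
print(is_lie_acceptable("cover_affair", False, True, "white_lies_polite"))    # False
```

But every answer spawns questions the code can’t see: what if sparing her feelings now causes harm later (a lifetime of tofu lasagne)? What counts as “harm”? The exceptions breed exceptions, and no list of if-statements ever catches up.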
And we’ve not even got on to topics like the trolley problem.
Would you pull the lever to save the five people by sacrificing one?
(And I wrote about this at length regarding the moral decisions that fully autonomous driving will have to make; to date, that email has led to the most unsubscribes, despite the fact that it’s a serious topic that most won’t or can’t talk about.)
Should we always serve justice, even if mercy would get a better outcome?
Is it better to forgive or punish?
In the real world, whatever you choose, someone will tell you that you got it wrong.
Life is a rich soup of inconsistencies and interpretations.
AI is zeroes and ones.
Saying “Don’t be evil” is easy.
Morality, however, is messy.
It can be hard to describe.
But that’s the work WE have to do, whether AI is a thing or not.
The name's Raju.
Steve Raju.
License To Quill ®.