One more time, and then I have to give up.
Pythagorean theorem. a^2 + b^2 = c^2. That’s the math, and it always applies to the real world of plane geometry.
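The formula above is easy to sanity-check in code. Here is a minimal sketch (the function name and the sample triangles are my own, purely illustrative):

```python
import math

def is_right_triangle(a: float, b: float, c: float) -> bool:
    """Return True when legs a, b and hypotenuse c satisfy a^2 + b^2 = c^2."""
    # math.isclose guards against floating-point rounding error.
    return math.isclose(a**2 + b**2, c**2)

print(is_right_triangle(3, 4, 5))    # True: the classic 3-4-5 triple
print(is_right_triangle(5, 12, 13))  # True
print(is_right_triangle(2, 3, 4))    # False: 4 + 9 != 16
```

The point stands either way: the relation holds for every right triangle in plane geometry, with no degrees of "mostly holds."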
Moral absolute. When A deprives B of X, A has committed a wrong. It does not depend on the need of A, or the surplus of B, or the value of X. It is wrong, and it is wrong because it is not right, and that is the only other option. It always applies in the real world, and always will. To suggest otherwise is to suggest that opinion has more value than fact.
Is there such a thing as degrees of crime? Indeed. There are lesser crimes that involve theft, and greater crimes that involve theft. Do any of those degrees make the theft less wrong? Absolutely not; that which is wrong cannot be right. There are no degrees here, only wrong or right. It is, and always will be, completely binary.
Is there such a thing as mercy? Why, sure. You can have mercy for someone who has done a wrong thing. Does the mercy mitigate the wrong? No. Maybe it mitigates the crime. But wrong and right, right and wrong, will always be just those two things. To complicate them is to deny the nature of reason. And for this reason, AI will be difficult to ever accomplish, because people think wrong and right are a sliding scale, and they are not and never will be. They are a coin, and you can only be on one side of it or the other.
Maybe I’m dense, but I don’t see the connection between AI and moral absolutes, unless we reach a point where the AI can be considered a “being” (as in human being). Otherwise, it’s a machine, and machines don’t own things, therefore depriving a machine of a thing isn’t theft (otherwise my car would have a few harsh words for me). Likewise, if a non-being “steals” from me, it’s not theft. If a wolf kills someone’s livestock, the wolf is not immoral. It may need to be eliminated to protect the livestock, but it’s not a moral agent, and what it does can’t be called “wrong”: undesirable, certainly, but not wrong.
Now it’s possible an AI could lack a sense of morality (just as some psychotic humans do). Maybe we need to discuss sane vs insane AI?
The more I think on this topic, the more I see a similarity between humans developing AI and God creating humanity. We set the AI forth and … well, we know how the latter turned out, don’t we?
Given that we’re not omniscient, perhaps best not.
That link is part of another discussion that you probably haven’t seen. The point is that we will not be able to create an AI as we understand it, because we would have to be able to convince a logically thinking machine that there can be such a thing as wrong and not-so-wrong under the same circumstances. For instance, that human life is valuable but that abortion is acceptable. This, as I see it, is the obstacle to AI.
Ah, I get it.
I’d offer to you that if an AI were to pass the Turing Test (a human communicating with it couldn’t tell it wasn’t human), there’d have to be a certain randomness and illogic to it, so a purely “logical thinking machine” wouldn’t be an AI. Humans aren’t purely, or even primarily, logical beings. More often than not emotion overrules logic, which explains a great many problems in our society but certainly makes it more interesting.
So your hypothetical AI would have to be able to say “Logic says I should do A, but I’m gonna do B instead”. I know I’ve done that many times in my life, mostly where women and/or alcohol were involved.
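That “logic says A, but I’m gonna do B” behavior could be caricatured in a few lines. This is purely my own toy sketch, not anyone’s actual AI design; the names and the 30% impulse rate are invented for illustration:

```python
import random

def choose(logical_option, impulsive_option, impulse_rate=0.3, rng=None):
    """Usually return the 'logical' choice, but sometimes override it at random,
    mimicking the human tendency to let impulse overrule logic."""
    rng = rng or random.Random()
    return impulsive_option if rng.random() < impulse_rate else logical_option

# With a fixed seed the run is reproducible:
rng = random.Random(42)
picks = [choose("A", "B", impulse_rate=0.3, rng=rng) for _ in range(10)]
print(picks)
```

Of course, bolting a random number generator onto a rule engine only fakes illogic; whether that is enough to pass a Turing Test is exactly the open question.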
We agree.
What we have here are two problems. One is programming an AI to absolute standards. The other is programming an AI to standards a human could survivably interact with. I harbor doubts about either being achieved any time soon.
Yet another problem, as I see it at least, is the propensity for people to advertise their latest creation as “AI” when the reality is, at best, only a fraction of the performance such a thing would actually express. The Alexa (or Siri) chorus I suggested certainly seems doable, but I don’t know how close that could come to actual AI. Not very, I suspect, which is probably good enough for most applications.
It’s been fun, Og (if more than a little aggravating for you, I suspect). Thanks.
No, you can’t aggravate me. I come pre-aggravated. I think what you’re doing is important, and it’s worth discussing, so long as there’s more light than heat.
Thing is, what really is AI? In “I.G.Y.” Donald Fagen sings about “just machines to make big decisions, programmed by fellows with compassion and vision,” and this is sort of what some people have come to expect AI to be: a kind of human-created “God.” But again, because of the human propensity for moving the goalposts of morality to suit today’s mores, we will never get there; or we will, as I posited earlier, create Artificial Insanity.
The fallacy of the human mind is that we can create God. We might be able to supplement God, but we cannot create Him.
Kind of like metaphysics. That is a big bunch of hokum.
Nuts