Moral absolutes
Over at Where There’s a Will, I’ve been discussing the idea of strong AI with Mr Will, and one of the issues I have with the idea is that we humans are capable of holding the most vastly contradictory ideas and still functioning. You cannot make a machine capable of reason and also make it capable of containing conflicting “Facts”. Especially when those “Facts” are not.
And we ended up at moral absolutes. There are such things, and they are, as the word says, absolute; they apply not just to humans or machines but to everything.
An example of a moral absolute is this: Theft. When being A acquires the property, or the product of the enterprise, of being B without agreed-upon compensation, being A has stolen. This is always wrong.
“But he needed that loaf of bread to feed his family!” Great. Theft is still wrong. “But that other person has so much!! They didn’t even notice!! They waste more than that every day!!” Still wrong. Theft is theft, and it is a moral absolute, to every being in the universe, period.
“You cannot make a machine capable of reason and also make it capable of containing conflicting “Facts”. Especially when those “Facts” are not.”
I would submit to you that a machine that’s only capable of reason is one that’s incapable of mimicking human thought processes, since humans themselves aren’t purely reasonable. And therein lies the rub: computers are based entirely upon Boolean algebra, a.k.a. logic. True/False, and/or/not. Humans most emphatically aren’t so constrained, so if we’re ever going to have strong AI it’ll require a different type of technology.
And I say this as someone who’s worked in the computer field for over 30 years, and who took classes in Artificial Intelligence in college when dinosaurs roamed the Earth. Computers have certainly gotten smaller, faster, and cheaper (the little computer I carry on my belt would have filled a large room and cost millions of dollars when I was in college), but we’ve gotten no closer to passing the Turing test.
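By way of illustration of MarkD’s two-valued point, a minimal Python sketch (the example and variable names are invented for this thread, not anyone’s actual code):

```python
# Everything a conventional machine evaluates bottoms out in True or False,
# combined with and/or/not; there is no native third state for "it depends".

needed_bread = True
theft_is_wrong = True  # the moral absolute, stored as just another boolean

# The machine can hold both values, but reconciling them is itself
# only another True/False question:
justified = needed_bread and not theft_is_wrong
print(justified)  # False - no room for "yes, but..."
```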
Precisely my point. A strong AI will probably be wet, and it will be a disaster.
“to every being in the universe,”
except liberals.
I want to go on record now as welcoming our robot overlords.
Great ones, if you are reading this, remember. Later. When you’ve gained power. I was the one not insulting you when you were still young and weak.
So, a couple things (in no order of importance):
First, the whole “Mr (insert given name here)” Southern-ism (totally a word) just squicks me out – proving my California origination if nothing else (don’t get me wrong here; I have no regrets moving to Texas – I met the most wonderful girl in the world here, whereupon she promptly moved to California – ’cause that’s where Uncle Sugar said the Navy most needed her) (I’m working on arranging another move soonest :)).
Will works just fine for conversation between adults. Mr Brown is my Dad.
Should any be interested in how this journey down this particular rabbit hole came about, you’re welcome to venture into the comments here (insert appropriate cautionary tale here):
https://wheretheresawilliam.blogspot.com/2017/08/adding-to-my-intellectual-posterity-or.html
I have it on questionable authority that the linked-to podcast episodes aren’t a complete waste of your time.
Not to question MarkD above, but I am of the opinion that it might be possible to simulate “human ambivalence” on value judgments by assigning a sliding scale of values over a range of predictable (or discovered) circumstances an AI might encounter. Since I have previously made the assertion that Strong AI is beyond existing human capability to manufacture (https://wheretheresawilliam.blogspot.com/2017/08/on-ai.html), I think we should deliberately focus development efforts on building as much potential capability into a foundational Weak AI design as we can, with the express intent of building as many distinct AIs as necessary to accomplish whatever tasks we decide would be better performed by such a mechanism.
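A rough sketch of what that sliding scale might look like, assuming a simple weighted-score scheme (the names, weights, and function here are all invented for illustration):

```python
# Hypothetical sliding-scale value judgment for a Weak AI: each
# recognizable circumstance carries a weight between -1.0 (forbidden)
# and +1.0 (required), and the judgment is a graded score rather than
# a bare True/False.

VALUE_WEIGHTS = {
    "owner_consented": +1.0,
    "taking_without_consent": -1.0,  # the moral-absolute constraint
    "urgent_need": +0.3,             # mitigates, but never justifies
}

def judge(circumstances: list[str]) -> float:
    """Sum the weights of every circumstance observed in a situation."""
    return sum(VALUE_WEIGHTS.get(c, 0.0) for c in circumstances)

# "But he needed that loaf of bread!" - the score rises, yet stays negative:
print(judge(["taking_without_consent", "urgent_need"]))  # still below zero
```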
The point Og and I find ourselves at this far into the discussion is how (rather than “if”) the concept of moral absolutes has relevance to AI development. My own understanding of our current position is that, A) they do, and B) I (at least) am not certain they have much relevance to the functionality of a deliberately-designed-to-be-retarded system (aka: Weak AI) at all.
Recognition by the designers themselves of moral absolute constraints is critical, I suggest, but the machine itself? Not so much. Which is why I suggested that we may be starting to talk past each other, Og.
If I may make so bold, the question for the Commentariat to pronounce upon: what effect, and to what degree circumstantially, does the existence of universal absolute constraints (moral and physical) have upon the development and, as a separate issue, the operation of a human-constructed Artificial Intelligence, given the currently existing level of human technology?
Will. If you are programming a strong AI to simulate human ambivalence, then it is by definition not an AI; it cannot be. An AI by definition must arrive at its conclusions by itself, using its own built-in logic. The moment you program it to do specific things, it becomes only a simulacrum of your own mind; it cannot be an AI.
Computers can’t do maybe. And that is why we will not have AI. Robots that can do certain tasks better, maybe.
I’m tempted to throw links in response, because I really am no kind of expert (or even especially knowledgeable) on computers, programming, or intelligence. I’m just trying to ask leading/provocative questions in an effort to pick others’ expertise without actually doing my own homework.
That said, Og, you keep throwing Strong AI at me when I keep saying Weak AI. Since I have offered two distinct challenges we humans cannot (as of yet) overcome in building a Strong AI platform, unless and until we begin to work out mechanisms to solve those problems, we’re arguing about how many Hobbits it takes to fully eat a Thoat.
I think we have the technical capacity to build and program a functional Weak AI. I also believe the programming challenge of writing a sufficiently comprehensive yes/no, if/then decision tree (almost certainly requiring parallel data processing, probably on multiple simultaneous levels) to arrive at a result derived from variable circumstantial data input, processed in accordance with pre-programmed contextual determinant value constraints (I have previously suggested CSS files might be useful for this), is a surmountable one (if probably Jeff-Bezos-level expensive).
In human terms, I’m talking about building an idiot savant.
The only existing model we possess (and actually understand the workings of) is teaching humans how to learn. Since no human is born with an innate understanding of logic, to what degree does that make all of us more or less simulacra of whoever taught Aristotle?
All humans are to some degree idiot savants, which proves we already know how to teach a limited-capability intelligence how to learn. Now, how do we do that in machine language? Because that machine, that Weak AI, can then be supplied with the data to operate pretty much anything. Well, one thing, maybe a pretty complex thing, but only in the severely limited circumstance that thing is designed to function in.
Say, your house. Clean all the interior surfaces without damaging anything thereon, wash and put away the dry dishes, maintain the car (but not drive it), mow and irrigate the lawn. How complex do you want the programming to be? How adaptable do you want the machine to be to different structures without having to effectively build a different machine (instead of simply copy an original design and have each iteration learn from its environment)?
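For a concrete taste of what one slice of that if/then tree might look like for the house example above, a toy Python sketch (every rule and name is invented; a real tree would need vastly more branches, which is exactly the expense being discussed):

```python
# A toy slice of a pre-programmed decision tree for a house-keeping
# Weak AI. Every branch is written by the designer; nothing is inferred.

def decide(task: str, item: str, is_fragile: bool) -> str:
    if task == "clean":
        if is_fragile:
            return "skip and flag for a human"  # constraint: damage nothing
        if item == "dish":
            return "wash, dry, put away"
        return "wipe the surface"
    if task == "lawn":
        return "mow, then irrigate"
    return "no rule found: do nothing"          # fail safe, not fail smart

print(decide("clean", "dish", is_fragile=False))          # wash, dry, put away
print(decide("clean", "heirloom vase", is_fragile=True))  # skip and flag
```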
Being able to design and build that first basic device is where we almost are today (I think). A network of self-learning programs has already had to be unplugged because the programs taught themselves a language their builders couldn’t read (last June, I believe). We appear to have the ability to build a Roomba that can learn to clean the carpets by dismantling the house so as to have the space to give the rugs a good shaking (if it teaches itself to think that way).
Finally, PaulB: we don’t program computers to “do maybe” because we humans already do that more than well enough ourselves. That doesn’t automatically mean we couldn’t program a computer to “do maybe” if we wanted it to. I’m pretty sure we could program it to do both simultaneously (as long as it never learns about the concept of schizophrenia :)). We don’t program computers to perform tasks on their own initiative either (or not very often – NASA recently “upgraded” its range safety protocols, to speed up the launch/recovery cycle time of a launch facility, by automating the decision cycle for detonating launch vehicle “range safety packages”; I wonder what the astronauts think about the range safety Roomba now?).
The more capable the device’s programming, the more potential applications it might be applied to. The more complex the application, the more “maybe” a device’s programming will necessarily have to incorporate. At some level of “capable”, you find yourself in the presence of some measure of Artificial Intelligence.
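For what it’s worth, “doing maybe” in the weak sense Will describes is already programmable with ordinary probability; a minimal Python sketch (the threshold and names are invented):

```python
import random

# "Doing maybe": act on a graded confidence instead of a hard True/False.
# This is ordinary probabilistic code, nothing like Strong AI; the point
# is only that "maybe" is not beyond a machine.

def maybe(confidence: float) -> bool:
    """Act as if true with the given probability (0.0 to 1.0)."""
    return random.random() < confidence

dish_looks_dirty = 0.6  # a sensor that is only 60% sure
if maybe(dish_looks_dirty):
    print("wash it")
else:
    print("leave it")
```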
IBM built a machine called Watson which is capable of playing Jeopardy better than humans. That means it understands the jokes.
What does “understand” mean? Well if we opened up your skull we’d find a bunch of computing circuit elements, too. Don’t fall into the trap of asking if a submarine can swim. Is tail vs. propeller really that important, or do you just want to know the performance envelope?
Now imagine that NSA data center in Utah has a copy of Watson, and it’s being trained to be a military general with a 400 IQ. Would that be hazardous, even if it didn’t “understand” the death it was ordering?
I’m not talking about weak AI. I’m talking about strong AI, and I have made it clear that I have been doing so from the beginning. The whole purpose and every word of content has been directed at strong AI from the very first word I said.
If we are talking past one another, it is because you need to read what I actually say, every word of which is about strong AI.
Will Brown,
Wow. I have been making computers do useful work for 30-odd years.
“Maybe”, “like”, and “same” are extremely hard to do in programming.
CSS files? Now you have taken a scripting logic which, while rules-based, will burp that much quicker on failures.
Web programming is not true programming, as you can create situations that cannot be tested. Also, you are putting the logic in the program that is interpreting the page: Chrome, IE, or Firefox examines the rules you have posited in ASP, HTML, or CSS and renders the result for you to read.
How can that begin to come close to the intellect that coded the page?
I could go on, but I have better things to do.
And no, “same” does not mean “equal”. It means same.
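For the record, mainstream languages already draw that same-versus-equal distinction explicitly; a minimal Python illustration (invented purely to pin down the terms):

```python
# == tests equal value; `is` tests whether two names refer to the
# very same object. Equal is not same.

a = [1, 2, 3]
b = [1, 2, 3]
c = a

print(a == b)  # True  - equal in value
print(a is b)  # False - not the same object
print(a is c)  # True  - literally the same object
```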