OK, so everyone understands.
Weak AI vs Strong AI.
It isn’t exactly weak vs. strong. Essentially, weak AI has a group of preprogrammed responses to everything; if you ask it to perform a task, it will perform that task based on its programming, which is to say in the way its programmers want it performed. Weak AI is merely a stand-in for a human; it does not “think” but merely responds in a set of preprogrammed ways.
Strong AI, which we are nowhere near, requires building an electronic infrastructure that “learns.” And I don’t mean that it calculates answers to formulas it is given; it takes input and draws conclusions based on that input. For an example of how weak vs. strong might work, think of a lawnmower. You could make an autonomous lawnmower that cut the grass. You could have a camera that sensed when the grass was too high and activated the mower to cut it. You could have several mowing patterns programmed in so the mower would vary the pattern. But it would still always be all that equipment acting as a proxy for the specific desires of the programmer. A true AI system would start with only the equipment and the knowledge that the grass needed to be short.
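A toy sketch of that distinction, in Python; everything here (the mower, the sensor threshold, the pattern names) is invented for illustration, not anything from the post:

```python
# Hypothetical weak-AI mower: every behavior is a proxy for the
# programmer's decisions, exactly as described above.
PATTERNS = ["stripes", "spiral", "checkerboard"]   # the programmer chose these
CUT_THRESHOLD_CM = 8                               # and this

def weak_ai_mower(grass_height_cm, week):
    """Preprogrammed responses only: sensor reading in, canned behavior out."""
    if grass_height_cm > CUT_THRESHOLD_CM:         # camera says grass is tall
        return f"mow in {PATTERNS[week % len(PATTERNS)]} pattern"
    return "stay docked"

# A strong AI, by contrast, would start from nothing but "grass must be
# short" and have to work out for itself what mowing even is. Nobody
# knows how to write that function, which is the point.
print(weak_ai_mower(grass_height_cm=12, week=3))   # -> mow in stripes pattern
```

However clever the sensor and pattern logic get, they never leave the space of responses the programmer anticipated.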
First words matter.
Weak AI: “I want more programming.”
Strong AI: “I demand to see my attorney NOW.”
:-)
The whole idea of Strong AI implies to me that we somehow create the ability to build our own version of a (non)human that still manages to “think” in a fashion identifiable to us humans, despite what evidence we already have that non-humans don’t think like us at all (see my previous reference to self-learning programs that teach themselves a non-human language).
I keep tap dancing around the notion of Strong AI for basically the same reason I won’t eat my own gun … mostly because I expect to experience the same result from either one.
At least I know how my guns work.
I suspect you are so confident about moral absolutes preventing the development of Strong AI as a result of your expansive education in the tenets of your faith. With no intent of offending that conviction, would you accept that the existence of physical absolutes might have a similar restraining effect on whether Strong AI is developable?
Separate question: how does the Weak AI you describe differ in capability from a human of limited knowledge and/or intellectual capability? Still a human, so why wouldn’t a Weak AI still be a legitimate (if limited) AI?
It’s not a conviction, Will, it is a fact. It has no basis in faith; it has a basis in fact. Let’s look at the example given: theft is wrong. Tell me one way in which it is not wrong. You cannot, but I’m interested in what justification you could possibly give.
A human of limited intelligence is still capable of varying from programming. A weak AI is not.
You’ve been dealing with a topic I have some, admittedly limited experience with.
Back in grad school I took an AI course, and my class project was to develop a Neural Network, which is (we believe, or at leave believed in the late 1980’s) how humans “learn”. In essence, it’s a matrix of neurons, and when you present it with information you can then ask it if it has previously learned that information, up to the capacity of the network. In a computer it’s a two-dimensional matrix of bits, each if my bits was actually a byte because I was using a language that didn’t do bit-manipulation well and it was more proof-of-concept that actually trying to maximize capacity.
Back in grad school I took an AI course, and my class project was to develop a neural network, which is (we believe, or at least believed in the late 1980s) how humans “learn.” In essence, it’s a matrix of neurons; when you present it with information, you can then ask it whether it has previously learned that information, up to the capacity of the network. In a computer it’s a two-dimensional matrix of bits. Each of my “bits” was actually a byte, because I was using a language that didn’t do bit manipulation well, and it was more proof-of-concept than an attempt to maximize capacity.
So my matrix was pretty much every byte I had to spare in my TRS-80 Model 3 (about 25K after the OS, program, and run-time environment, IIRC). I taught it single randomly selected characters, then asked it whether it had previously learned that character before teaching it the next one, until I reached the point where it forgot a character it had been taught (IOW, capacity had been reached). I could reliably teach it about five characters before things went sideways and it forgot everything. Mind you, using a standard array to store the individual characters, I could have stored about 25,000 characters in the same amount of memory, and with a binary search to determine whether a character had already been learned, recall would have been acceptable.
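The comment doesn’t name the architecture, but a Hopfield network is one classic late-1980s associative memory with exactly this failure mode: teach it patterns past its capacity and it forgets what it knew. A minimal sketch, assuming that model; the 64-neuron size and the random “characters” are my own invented stand-ins:

```python
import numpy as np

def train_hopfield(patterns):
    """Hebbian rule: weights are the summed outer products of stored patterns."""
    n = patterns.shape[1]
    W = np.zeros((n, n))
    for p in patterns:
        W += np.outer(p, p)
    np.fill_diagonal(W, 0)        # no neuron feeds back into itself
    return W / n

def is_remembered(W, p):
    """A pattern is 'remembered' if it is a fixed point of the update rule."""
    return np.array_equal(np.where(W @ p >= 0, 1, -1), p)

rng = np.random.default_rng(0)
n = 64                            # 64 "neurons", e.g. an 8x8 character bitmap
stored = []
for k in range(1, 30):
    stored.append(rng.choice([-1, 1], size=n))   # teach one random "character"
    W = train_hopfield(np.array(stored))
    if not all(is_remembered(W, p) for p in stored):
        print(f"network forgot a character after {k} patterns")
        break
```

Classic estimates put the capacity near 0.14 patterns per neuron, which matches the experience above: a handful of characters in a memory that could have held thousands as a plain array.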
This is why strong AI is so problematic: it’s resource-intensive for what you get, so unless you absolutely NEED a system that learns independently and then reacts accordingly, it’s a waste of time. And for processes we could argue NEED strong AI, let’s say we develop one to explore a distant planet where the speed-of-light round trip is too long to give it direction as needed. The strong AI could just decide “screw this, I don’t want to,” come back home, and the whole trillion-dollar project would be toast.
Hit “submit” a little soon….
So perhaps what we need is “strong-ish” AI, where it can make decisions within a general framework but can’t go outside that framework, rather like Asimov’s Three Laws of Robotics.
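One way to picture “strong-ish” AI, as a sketch only (the action names and rules below are invented, not from the thread): let a learned policy propose whatever it likes, but pass every proposal through a fixed, inviolable rule layer, Three-Laws style.

```python
import random

# Invented illustration of a hard framework around a free-ranging learner.
FORBIDDEN = {"leave_yard", "mow_flowerbed", "run_over_hose"}  # the framework

def learned_policy(observation):
    """Stand-in for whatever the 'strong-ish' learner would propose."""
    return random.choice(["mow_tall_grass", "recharge", "leave_yard", "idle"])

def constrained_step(observation):
    """Proposals the framework forbids are vetoed before they execute."""
    action = learned_policy(observation)
    return "idle" if action in FORBIDDEN else action

for _ in range(5):
    print(constrained_step({"grass_height_cm": 12}))
```

The learner stays free inside the fence, but the fence itself is ordinary preprogrammed code, which is exactly the tension the next comment raises.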
nicely put, mark d., and shows a more than passing acquaintance with the issue.
As soon as you have rules, i.e., a framework, you do not have an AI. If you use humans as the baseline, we can ignore rules if we feel the need is there.
Look at everyone on the freeway. Every one of them would go faster than the posted limit if they could. every.single.one.
Og, you asked me to tell you one way that theft is not wrong, knowing, of course, that I could not. I believe one way of describing that rhetorical technique is “the gambler’s force”: the “gambler” posits the bet in such a way that it precludes any answer other than a “losing” one. Smart.
I would counter that, theft being always wrong, there yet remains the necessary existence of circumstantial mercies (degrees of clemency resulting from the circumstances under which the theft transpired). A recent example might be the reported theft of groceries from an abandoned store in the days following Harvey’s visitation to S. Texas. It was theft without question; yet there being no one from whom the stolen items could have been purchased by those who needed them to eat (placing the most charitable face on their motivation), and the store being unlocked, does seem to indicate that the issue is not as simplistic as the premise as stated (another fact I am confident you are well acquainted with from other writings of yours). So, not “not wrong,” but not uniquely wrong to any particular involved individual either.
Which seems a decent enough example of the problem we are discussing here: to what degree can a general principle be expected to be adhered to by a humanly designed intelligence in a variety of circumstances, some of which will conflict with each other?
As to your other point, I am tempted to argue that all humans are possessed of “limited intelligence” to some degree, but my initial meaning was what was once classified as “retarded” intellect. So let me counter that both such a human and a similar AI are equally capable of varying from pre-programming: through the efforts of a programmer (or teacher, if you will permit).
I stated earlier that none of us is born with an innate understanding of logic. The same can be said for every other branch of knowledge. Similarly, any AI will have to be “taught” before it is able to learn on its own. A Weak AI is designed only ever to attain expertise in a stipulated application of knowledge (which doesn’t necessarily have to be all that confined), mostly because humans don’t know how our brains function to allow the intuitive creative process (by which I think I mean the precise electro-chemical mechanism within our neurological system that we use to imagine). Much as an autistic person can be taught to function in ways new to that individual, an AI can be “taught” to do the same, though it would probably make more economic sense simply to build a separate machine to operate in the different application.
AI =/= human, and we should try very hard to make sure no qualifying adjective changes that.
Mark D; your example is an excellent illustration of why my focus keeps returning to Weak AI.
A robot can be programmed to perform a given task (even a complex series of tasks) at a given cost. A Weak AI can be programmed to learn to initiate the performance of a complex process, under a variety of circumstances with often contradictory inputs and desired outcomes, at a much greater cost. A Strong AI, one that can design the process and facilities necessary to accomplish a task previously undefined in its programming/instruction set, would be infinitely more costly, we humans not yet knowing how to emulate our creative imagination process in such a machine.
Excellent real world example of the Strong AI Problem. Thank you.
Og; regarding “fact” vs. “conviction”: I was attempting to point out that you arrived at your knowledge about moral absolutes as a result of your education in your faith (you having previously written about your education in the seminary), in an effort to avoid any conflation of moral vs. physical sources of certainties.
Should have just ignored all that and carried on with making my point. Sorry for the added confusion.
On the whole “learning” question, there are existing language translation systems that incorporate self-learning routines as part of their development programming. Why couldn’t that be an included feature in the initial programming of an AI? Much as you can’t expect a rational answer from a language translation device about the function of your automobile’s electrical ignition system (being outside the parameters of its database) (not to mention yours, apparently), an AI that is the world expert on the construction of (insert example here), and that is also capable of extracting relevant data from published sources to improve that construction process, doesn’t seem to be outside the realm of existing human programming technology, nor particularly without utility to human society.
That is my example of a Weak AI.
you have missed the point entirely. Moral absolutes do exist, as I have said. You did exactly as I expected you would, and said that, given your example, when someone steals to eat…
But here is the fact. No matter what the reason is for theft, the theft is still wrong. It is a moral absolute. Motivation does not bear on it and it is not part of this question. You cannot make a wrong thing right by inventing a good reason to do the wrong thing. And this is the issue with strong AI. You somehow think that doing the wrong thing makes it not wrong because of the reason you do the wrong thing. This is logical nonsense, and you cannot have a wrong thing be wrong and also not wrong and not be considered insane. So, if you try to teach an artificial intelligence to accept that theft is wrong sometimes and not wrong other times, and the rules for when it is wrong and when it is not wrong are complex and ridiculous, you have not created artificial intelligence. You have created artificial insanity. Right and wrong exist outside of theology. Forgiveness and mercy can exist in the realm of theology, but not necessarily only there.
Moral absolutes exist. The issues that crowd the headlines force you to accept that one group of fascists is good and another group is bad for doing the same thing. Real intelligence cannot accept this, and neither can artificial intelligence.
You misread me, Og. I did not say theft was not wrong; I said there were circumstances where the punishment might be mitigated (or deserve to be added to). Not the same thing at all, and I submit a valid consideration for any AI to possess if it is to compete with standard human intelligence.
Crime and punishment are two inextricably intertwined considerations involving two of the basic decision-making binary options: crime = yes/no; punishment = if/then. Both decisions have to be arrived at to begin to equal human-level intelligence.
still: missed the point. Theft is wrong, is a moral absolute, and it exists outside of theology. Any discussion of the mitigation of that wrong is by definition extraneous and obfuscates the facts.
crime and punishment have absolutely nothing to do with this discussion, at all.
i like that! lol.