Here’s the thing.

All the previous posts have been about the motion of the robot and the math it takes to achieve that motion. And most of that motion is anchored solidly to the ground. The machine tools and robots and such. I mean, some of them work on moving objects, like robots painting cars or robots welding, but in reality, it’s all bolted down.

Let’s take a look at positions for a moment. You have a position in space, and so do I. We can look at our positions several ways, most commonly by distance and direction. That really only works if there’s a road between us, and generally there is not. We can sort of imagine it, though, as an airplane flight.

Once the airplane takes off, it is no longer connected to the planet. We have to give its location as a group of numbers. Imagining for a moment that the earth is flat, we can give that group of numbers as latitude, longitude, and height above ground. (Actual pilots have different ways of dealing with this; I’m just trying to simplify.) Those three numbers would correspond to x, y, and z. If you know the origin, the destination, the location of the plane, and its speed, it’s simple enough to calculate the time since departure and the time until arrival.
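For the dots-on-a-map version, the arithmetic really is that simple. Here’s a minimal Python sketch; the route, altitude, and speed are made-up numbers, treating the flat-earth positions as plain (x, y, z) in kilometers:

```python
import math

def flight_times(origin, dest, plane, speed_kmh):
    """Time since departure and time until arrival, in hours, treating
    the flat-earth (x, y, z) positions as kilometers and assuming the
    plane flies the straight line at a constant speed."""
    def dist(p, q):
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))
    return dist(origin, plane) / speed_kmh, dist(plane, dest) / speed_kmh

# A plane 300 km into a 900 km route, cruising 10 km up at 600 km/h:
elapsed, remaining = flight_times((0, 0, 0), (900, 0, 10), (300, 0, 10), 600.0)
```

Distance over speed, twice. That’s the whole trick, as long as everything stays a dot.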

But this all treats the plane, the departure, and the arrival as dots. They aren’t; they have mass and occupy space. So there are three other important dimensions: they’re important to the passengers of the plane and damned important to the pilot. Pilots call them “Roll, Pitch, and Yaw”, and they correspond to the robot/machine tool A, B, and C.

The plane has its own six-axis coordinates. The departure and destination places do as well. So do all of the people on the plane. So do all the parts of the people on the plane; as you move around, your left foot has its own location in relation to your right, and it becomes fractally complex very quickly.

Now, I don’t know anything about football, but I know a powerful lot about motion. If you watch a football game, you will regularly see a man running, throwing a football at another man running, who catches it.

Think about this for a moment. It is of course a skill learned with practice, but consider the sheer computing power required: the thrower has to judge the distance he has to throw, compensate for the speed he is moving, control his muscles to increase or decrease that speed, watch for and evaluate obstacles, and actually throw the ball, to another player who must also watch for and avoid obstacles and intercept the ball, which has its own speed, direction, and location, being simultaneously slowed by air friction and sped up by gravity in minute increments. And yes, there are incomplete passes, but the fact that anyone ever catches a ball is downright astounding. And NFL players do it without ever thinking about it.

So now, do you still fear robots will be taking over soon? No, none of the things I have discussed are impossible to do with a robot, given resources, time, and money, but what would the point be?

Now, you’d think having such complexity would require incredible processing power. And you’d be right! But the simple fact is, the algorithms are really all the same, they just move in ways that are foreign to almost everyone.

The simple fact is, G code programming in machine tools, and TP programming in robots, is the simplest way to do the job. That’s why it’s still done that way. In the 80’s, FANUC developed a language called KAREL, named after Karel Čapek, the man who coined the term “Robot”. It was a compiled language much like Pascal, and was very powerful. It lasted a couple of years, and mostly died. It was loved by code geeks, but not by customers who might need to change the code, because it meant paying a wad of cash to a programmer to make even the simplest change. KAREL took several weeks to learn and several months to be good at, even for a programmer accomplished in other languages.

So TPP was written (by FANUC; other robots use other, similar languages) so you could teach someone the basics in a couple of days. I know, I’ve gotten TPP programming into people’s heads in under three days; a sharp guy can pick most of it up in an afternoon. And it’s powerful. Now, there are a few things that KAREL is still useful for, mostly doing difficult communications via serial and Ethernet to pass data back and forth to other equipment, but even that functionality is being superseded, because as TPP (and other, similar languages) becomes ever more powerful, the need for compiled code gets smaller and smaller.

And even that is nothing compared to the power of “basic” robot programming to deal with frames and user coordinate systems. A robot sitting on the floor in a factory has its own “Frame”, and its gripper has its own “Frame”, and it has a mathematical image of the “Frames” of the conveyors, and the regrip stands, and the machines, and even the fixtures in the machines. And if those positions change, it is only necessary to reteach the frame, not the whole program. That’s pretty sophisticated, but it’s just the beginning. Now take the frame that is attached to the workpiece, and put it in motion.
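The frame idea is easy to sketch in plain math. Here’s a toy Python version, nothing FANUC-specific, with made-up numbers and a flattened 2-D frame: the taught points live in the fixture’s frame, so when the fixture gets bumped, you re-teach only the frame and every point comes along for free.

```python
import math

def frame_to_world(frame, point):
    """Transform a point taught in a user frame into world coordinates.
    frame = (x, y, theta): the frame's origin and rotation (radians)
    in the world; point = (px, py) in frame coordinates."""
    x, y, th = frame
    px, py = point
    return (x + px * math.cos(th) - py * math.sin(th),
            y + px * math.sin(th) + py * math.cos(th))

# A toy "program": three points taught relative to the fixture's frame.
program = [(0.0, 0.0), (100.0, 0.0), (100.0, 50.0)]

# The fixture gets bumped 10 mm over and rotated 2 degrees: re-teach
# only the frame, and every taught point follows automatically.
old_frame = (500.0, 200.0, 0.0)
new_frame = (510.0, 195.0, math.radians(2.0))

old_path = [frame_to_world(old_frame, p) for p in program]
new_path = [frame_to_world(new_frame, p) for p in program]
```

The program itself never changes; only the one transform at the front does.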

Yes, the robot has a program that it uses to paint the part, in this case an axle, and the axle is moving, and the program modifies itself in real time to accommodate the motion of the part, and always paints the part in the same way. This is actually really simple to do: they teach the program while the part is stopped, then just tell the robot how fast the part is going and when it enters the robot’s envelope, and it just does it. Pretty neat, huh? Easy peasy. Now let’s put the robots on rails and move them too.
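A cartoon of that line-tracking math, in Python. This is my own simplification, assuming the conveyor runs straight down the +X axis at a constant speed; a real controller reads an encoder on the conveyor rather than a clock, but the idea is the same:

```python
def tracked_target(taught_point, conveyor_speed, t_enter, t_now):
    """Shift a point taught on a stationary part by however far the
    conveyor has carried the part since it entered the work envelope.
    Assumes the conveyor runs along +X at a constant speed (mm/s)."""
    x, y, z = taught_point
    return (x + conveyor_speed * (t_now - t_enter), y, z)

# A point taught at x=1000 mm while the axle was parked; two seconds
# after the part entered the envelope at 50 mm/s, aim 100 mm downstream:
target = tracked_target((1000.0, 0.0, 500.0), 50.0, t_enter=0.0, t_now=2.0)
```

Every taught point gets the same running offset, so the whole program slides along with the part.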

Yes, those robots, and there are at least six working on the car at any point, are following a car down a paint line. And they are painting it as it moves. And as they move. And several of the robots at any given time are being operated by the same control, so one CPU might be calculating the motion of a six-axis robot, the three-axis door-opening robots, the rails that the robots move along, and the relationship between them all and the moving car. All the while monitoring the temperature, barometric pressure, and humidity in the paint booth and modifying the paint process in real time to optimize the amount of paint used. And the paint is electrostatically applied, so there is almost no Fordite to be had anymore. And the robots can paint each car its own color; there is no changeover of the assembly line to switch to a new color. And this is still not really even difficult to do: if I spend three or four days with anyone who can understand X, Y, and Z, they can do this. To have a firm and complete understanding of the geometry is rare, but it is also unnecessary. You teach the points, the robot does the work.

Robots.

The end point of this discussion is the robot, and how it moves. To understand this you have to have a comprehension of Cartesian coordinates. This is a triad.

This is one given away as a teaching tool by FANUC. You can see the X, Y, and Z axes, and you can also see the A, B, and C rotary axes.

It is a common trap to think that each axis is a line, and the triad can add to that confusion. Looking at just the X axis, think of it not as a line of motion but as a direction. Anything anywhere moving parallel to the X axis is moving in an X direction. The triad shows the positive direction, but if you pick up a jack, you will find it’s almost the same piece.

The jack has six “directions”, corresponding to the six sides of a cube, and those six directions are X, Y, and Z positive, and X, Y, and Z negative.

What the triad does represent is the origin. The X, Y, and Z axes all originate in the same spot, and the triad helps you to visualize that. Think of the corner of a box representing the origin, while the edges of the box represent X, Y, and Z.

But a robot doesn’t have straight axes, or any mechanical way to move in a straight line. Witness:

The robot has six axes, and each axis is a rotary and not a linear axis. So it can’t move in a straight line. Right? Wrong. Complex algorithms allow the robot to move according to a Cartesian coordinate system. Imagine that the triad above was at the center of the base of the robot. The robot can easily and accurately move in a straight line, moving all the axes to do so. It can move in X, Y, and Z as easily as jogging a machine tool, and it can do so within very nearly machining tolerances.
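Here’s the trick in miniature, for a two-joint planar arm rather than a real six-axis robot (the link lengths and points are made up): interpolate the target along a straight Cartesian line, and solve the joint angles fresh at every step. The joints swing through arcs, but the tip walks the line.

```python
import math

L1, L2 = 400.0, 300.0   # link lengths (mm) of a made-up 2-joint arm

def ik(x, y):
    """Inverse kinematics: joint angles putting the tip at (x, y)."""
    c2 = (x * x + y * y - L1 * L1 - L2 * L2) / (2 * L1 * L2)
    th2 = math.acos(max(-1.0, min(1.0, c2)))                 # elbow
    th1 = math.atan2(y, x) - math.atan2(L2 * math.sin(th2),
                                        L1 + L2 * math.cos(th2))
    return th1, th2

def fk(th1, th2):
    """Forward kinematics: where the tip actually lands."""
    return (L1 * math.cos(th1) + L2 * math.cos(th1 + th2),
            L1 * math.sin(th1) + L2 * math.sin(th1 + th2))

# A "linear" move: step the target along a straight Cartesian line,
# re-solving both joint angles at every step.
start, end, steps = (500.0, 100.0), (500.0, 300.0), 10
path = [ik(start[0] + i / steps * (end[0] - start[0]),
           start[1] + i / steps * (end[1] - start[1]))
        for i in range(steps + 1)]
```

A real controller does this for six rotary joints at a kilohertz or so, but it is the same idea: Cartesian interpolation outside, joint-space solutions inside.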

This is where it gets weird. The robot also has some kind of end effector: a gripper, maybe, or a welding torch. That gripper might be double-ended; it might even have three grippers, all at 120 degrees to each other. Jogging this gripper into position using the basic “triad” at the base of the robot gets you back to Etch A Sketch-type moves again.

Except.

Except the robot has the math in it to put a “virtual” triad in the center of every gripper, so that you can switch from the robot-base triad to the gripper triad and move the robot based on the orientation of the gripper or welding torch.
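That switch is just a change of basis. Here’s a minimal 2-D Python sketch, assuming (for simplicity) the gripper is only rotated about the base Z axis: a jog step expressed in the tool’s own X and Y gets rotated into base coordinates.

```python
import math

def jog_in_tool_frame(tool_yaw_deg, step):
    """Turn a jog step given in the tool's own X-Y directions into a
    base-frame move. tool_yaw_deg is the gripper's rotation about the
    base Z axis; step is (dx, dy) along the tool's X and Y."""
    th = math.radians(tool_yaw_deg)
    dx, dy = step
    return (dx * math.cos(th) - dy * math.sin(th),
            dx * math.sin(th) + dy * math.cos(th))

# With the gripper turned 90 degrees, "jog along the tool's X" becomes
# a move along the base frame's Y:
move = jog_in_tool_frame(90.0, (10.0, 0.0))
```

The full 3-D version uses a rotation matrix for all three of A, B, and C, but the principle is identical: same joystick motion, different triad.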

Now let’s think about what the robot is working on. It’s almost impossible to have the robot’s base triad lined up perfectly with the table of a machine, or the conveyor bringing parts into the cell, so there are OTHER virtual triads that you can place on literally every component in the system. And you can use vision to change those triads (called “Frames”) in real time. And you can manipulate those frames either by using sensory apparatus in the system, or by simply calculating and moving numbers into the frame.

A classic example of this is a process called “Through-Arc Seam Tracking”, or “TAST”. Through-arc tracking monitors the current draw of the welding arc as the robot weaves along a weld: as it “wobbles” the torch back and forth, if the current on one side of the wobble is bigger than the current on the other side, it modifies the path of the weld in real time to make sure the weld metal is deposited in exactly the correct location. And this is only a hint at what the programming is capable of doing. More on that later.
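In caricature, the correction logic looks something like this Python sketch. The gain and the amp readings are my own toy numbers, not FANUC’s actual algorithm; the idea is just that shorter wire stickout draws more current, so the higher-current side of the weave is the side the torch has drifted toward.

```python
def tast_correction(left_amps, right_amps, gain=0.05):
    """Toy through-arc seam-tracking rule: the side of the weave that
    reads higher current is the side the torch has drifted toward, so
    steer the other way. Returns a lateral offset in mm, with positive
    meaning "shift right"."""
    return gain * (left_amps - right_amps)

# Torch drifted left: the left side of the weave reads 220 A, the
# right 200 A, so the path gets shifted right for the next cycle.
offset = tast_correction(left_amps=220.0, right_amps=200.0)
```

Run once per weave cycle, small corrections like this keep the torch centered on a seam that was never quite where the program thought it was.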
