From automation to autonomy: how the new era of robots will revolutionize manufacturing

Blog 09.25.2018

In discussions of automated production lines and autonomous robots, “automation” and “autonomy” are often conflated. Though similar, the concepts are distinct in industrial manufacturing:

  • Automation is a set of human-defined functions performed by a robot or piece of equipment;
  • Autonomy is a state in which a robot or piece of equipment operates independently, without explicit instructions from a human.

Automation is made feasible by creating static environments in which robots are designed to perform the same activity 24/7 for months or years. Automation benefits manufacturing environments that are highly controlled and that produce large volumes of an identical product, with a classic example being the automated work cells used to produce automobiles. Perhaps the epitome of automation is the fully automated “lights out” factory, where robots operate without human intervention.

Autonomy, on the other hand, requires that the robot react and change its actions without a human directing those changes. Autonomy has incredible potential for manufacturing in high-mix, low-volume environments, or those that produce made-to-order products in a batch size of one. But to achieve this degree of flexibility on the fly, a robot must be capable of acting in unfamiliar situations without explicit instructions. In effect, the robots must be able to program—and reprogram—themselves.

The limitations of static programs

The traditional method of programming an industrial robot is to manually generate a set of positions for the robot to move through using a teach pendant. The robot then executes this static program. This method is time-consuming and results in downtime whenever the robot needs to be reprogrammed.
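
To make the contrast concrete, here is a minimal sketch of what such a static program amounts to: a fixed list of taught positions replayed verbatim. The Robot interface and its move_joints method are hypothetical stand-ins for a vendor-specific API, not any particular controller.

    # Minimal sketch of a "static" robot program: a fixed list of taught
    # positions replayed verbatim. The robot object and move_joints() are
    # hypothetical placeholders for a vendor-specific API.
    from dataclasses import dataclass
    from typing import List

    @dataclass
    class JointTarget:
        joints: List[float]   # joint angles in degrees
        speed: float          # fraction of maximum speed

    # Positions recorded with a teach pendant; the robot knows nothing else.
    TAUGHT_PROGRAM = [
        JointTarget([0.0, -90.0, 90.0, 0.0, 90.0, 0.0], speed=0.5),
        JointTarget([10.0, -75.0, 80.0, 0.0, 85.0, 0.0], speed=0.2),
        JointTarget([10.0, -60.0, 70.0, 0.0, 80.0, 0.0], speed=0.2),
    ]

    def run_static_program(robot) -> None:
        """Replay the taught positions; nothing here can react to change."""
        for target in TAUGHT_PROGRAM:
            robot.move_joints(target.joints, target.speed)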

Alternatively, many robot manufacturers and third-party software providers offer offline programming tools. Although these tools can reduce robot downtime and overall programming time, the resulting program is still static.

For a static program to control an industrial robot successfully, the environment in which the robot operates must be engineered to be as structured as possible. This approach works well for large batch automation in large factories, but it is expensive and any changes to the engineered environment require robot reprogramming. The statically programmed robot can’t compensate for the changes or even perceive them in the first place.

So while this kind of automation can be useful in well-controlled environments, managing a highly variable manufacturing process requires something different.

The path toward autonomous robots

Advances in modern robotics and AI have the potential to address these challenges for the manufacturing industry. Specifically, the following approaches are key to moving towards autonomy in manufacturing:

Using sensor data smartly

When the robot can sense its environment, it can handle less structured and less predictable situations. The most commonly used sensors to help guide robot motions are force/torque and vision sensors.

Force/torque sensors give the robot the ability to “feel” the environment and are often used for robotic surface treatment processes such as polishing or sanding. The sensors allow the robot to follow a surface while applying the desired force without relying on a highly accurate model of the workpiece. Vision sensors, on the other hand, allow the robot to “see” the environment. Most commonly, a 2D or 3D position is extracted from a camera image to control a pick-and-place motion.
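
As a rough illustration of the force-control idea, the sketch below regulates contact force while creeping along a surface. The robot interface, gains, and frame conventions are assumptions for the example; a production controller would run such a loop at a high rate inside the robot controller rather than in application code.

    # Sketch of force-controlled surface following, assuming a hypothetical
    # robot interface that reports the measured normal force and accepts
    # small incremental Cartesian moves in the tool frame.
    DESIRED_FORCE_N = 10.0      # target contact force in newtons
    GAIN = 0.0005               # metres of correction per newton of error
    STEP_ALONG_SURFACE = 0.001  # metres advanced along the surface per cycle

    def follow_surface(robot, num_steps: int = 1000) -> None:
        """Advance along the surface while regulating the contact force."""
        for _ in range(num_steps):
            force_error = DESIRED_FORCE_N - robot.measured_normal_force()
            # Positive error means we are pressing too lightly: move the tool
            # further toward the surface (negative z in this tool frame).
            correction = GAIN * force_error
            robot.move_relative(dx=STEP_ALONG_SURFACE, dy=0.0, dz=-correction)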

Sensor data offers opportunities to automate more complex processes. Recent developments in machine learning, notably convolutional neural nets, enable the extraction of more advanced features from sensor data to guide less structured tasks. For example, a Finnish recycling robot uses neural nets to identify different types of garbage and sort them accordingly.
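
The pattern behind such systems can be sketched as follows: a convolutional network turns each camera frame into a sorting decision. The category list, model choice, and weights file are illustrative assumptions for the sketch, not details of the robot mentioned above.

    # Illustrative sketch of using a convolutional neural net to turn camera
    # frames into sorting decisions. The categories and the trained weights
    # file ("sorting_model.pt") are hypothetical.
    import torch
    import torchvision

    CATEGORIES = ["metal", "wood", "plastic", "reject"]   # example classes

    model = torchvision.models.resnet18(num_classes=len(CATEGORIES))
    model.load_state_dict(torch.load("sorting_model.pt"))  # assumed weights
    model.eval()

    def classify_frame(frame: torch.Tensor) -> str:
        """frame: a normalized (3, 224, 224) image tensor from the camera."""
        with torch.no_grad():
            logits = model(frame.unsqueeze(0))
        return CATEGORIES[int(logits.argmax(dim=1))]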

Real-world information from sensors can also be used to create or update digital models, making them more robust and accurate. For example, 3D point clouds can be used to detect and analyze geometric deviations in machined or printed parts, or thermal sensors can be used to validate and update thermal models during welding.
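
For instance, a deviation check against a 3D scan can be as simple as the following sketch, which assumes the scanned and nominal point clouds are already aligned and stored as NumPy arrays.

    # Sketch of detecting geometric deviation from a 3D scan. Both arrays are
    # (N, 3) XYZ coordinates, assumed already registered to the same frame.
    import numpy as np

    def max_deviation(scanned: np.ndarray, nominal: np.ndarray) -> float:
        """Largest distance from any scanned point to the nearest nominal point."""
        # Brute-force nearest-neighbour search; a k-d tree would be used in practice.
        diffs = scanned[:, None, :] - nominal[None, :, :]
        distances = np.linalg.norm(diffs, axis=2).min(axis=1)
        return float(distances.max())

    # A part could be flagged for rework or path adjustment if the result
    # exceeds the process tolerance.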

Programming through digital geometry

Some offline programming tools allow users to generate robot toolpaths directly from digital CAD data. This is a great idea, but most such tools still require significant human input during the authoring process and manual fine-tuning during deployment, making them too expensive in labor hours for small batch automation. For example, when programming a robot to polish a complex geometric surface, offline simulation is an invaluable tool, but the engineering work required makes this approach infeasible for small batch processes such as 3D printing.

Luckily, the technological frameworks to overcome these programming limitations are already available. First, for a specific robotic process, the trial-and-error aspects of offline programming can be largely automated by leveraging modern techniques in optimization, AI, and computing. Second, sensor data can be used to eliminate the need for manual fine-tuning: a 3D scan can be used to adjust the geometric models before a trajectory is planned, compensating for the deviation between the real and the digital part, while a force/torque or vision sensor can actively correct for any remaining deviations in real time.
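
A minimal sketch of that second idea, assuming the scan has already been compared to the CAD model: each CAD-derived waypoint is shifted by the locally measured deviation before the toolpath is executed. The helper names in the usage comment are hypothetical.

    # Sketch of adapting a CAD-derived toolpath with scan data before execution.
    import numpy as np

    def adjust_toolpath(nominal_path: np.ndarray, deviation: np.ndarray) -> np.ndarray:
        """Shift each CAD-derived waypoint by the locally measured deviation.

        nominal_path: (N, 3) XYZ waypoints generated from the CAD surface.
        deviation:    (N, 3) offset of the scanned surface at each waypoint.
        """
        return nominal_path + deviation

    # Hypothetical usage:
    # nominal_path = plan_toolpath_from_cad("part.step")       # from digital geometry
    # deviation    = compare_scan_to_cad(scan, "part.step")    # from the 3D scanner
    # robot.execute(adjust_toolpath(nominal_path, deviation))  # adapted per part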

When the robot is programmed directly from digital geometry using these techniques, product variability can be handled autonomously, making processes like robotic polishing of unique 3D printed parts economically feasible.

Capturing process knowledge

When traditional robot programmers automate a process by generating a static sequence of robot positions, all process information is lost. The program only represents a geometric toolpath, not the complex interactive process between the robot and a workpiece, so for processes that are sensitive to variations, it’s difficult to develop a program that performs consistently.

For example, when a robot is programmed to weld two thin pieces of steel together, the way heat is transferred away strongly influences how the weld behaves. Every workpiece is clamped slightly differently and the resulting change in contact surface can change the thermal properties significantly. As such, a toolpath that generates a perfect weld on one workpiece might result in a big hole in another.

Typically, for a robot to adapt successfully to any changes in a tool or workpiece (be it geometry, hardness, or, in the case of welding, thermal conductivity), these scenarios must be explicitly programmed. Sensor feedback loops can be programmed to handle some of these changes, but without a proper simulation model, the robot is at the mercy of the programmer’s ability to come up with a good enough heuristic.

However, when robot programs are generated directly from up-to-date digital models by accurately simulating the expected result, these programs are inherently more robust.

So, in our welding example above, by monitoring the part with a thermal camera we can update our thermal model, predict how the part will behave, and adjust the torch velocity or intensity accordingly to ensure the desired quality.
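
As a hedged sketch of that feedback idea, the torch travel speed below is nudged based on the difference between the thermal-camera reading and the temperature the model expects. The interfaces, target temperature, and gain are illustrative assumptions, not a validated weld controller.

    # Sketch of thermal-camera feedback during welding: run hotter than the
    # model predicts -> speed up (less heat per unit length); run colder ->
    # slow down. Values are illustrative only.
    TARGET_TEMP_C = 1500.0   # desired melt-pool temperature (illustrative)
    SPEED_GAIN = 0.0001      # m/s of speed change per degree of error
    MIN_SPEED = 0.001        # m/s, keep the torch moving

    def adjust_torch_speed(current_speed: float, measured_temp: float) -> float:
        """Nudge the torch travel speed based on the thermal-camera reading."""
        temp_error = measured_temp - TARGET_TEMP_C
        return max(MIN_SPEED, current_speed + SPEED_GAIN * temp_error)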

Learning from experience

As the complexity of sensor data, models, and processes increases, building accurate models and controllers from first principles becomes exponentially more complicated. Recent developments in reinforcement learning show promising results in bypassing this complexity and having the robot “learn” its own model. By executing millions of motions without prior knowledge of the process, the robot learns the task from its successes and failures.
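
The core loop behind this kind of learning can be sketched with tabular Q-learning, one of the simplest reinforcement-learning algorithms. The env object stands in for a real or simulated robot task and is hypothetical; deep-learning variants replace the table with a neural network.

    # Skeleton of a reinforcement-learning loop: try actions, observe reward
    # (success or failure), and gradually improve the policy. The env object
    # is a hypothetical stand-in for a real or simulated robot task.
    import random
    from collections import defaultdict

    q_table = defaultdict(float)            # learned value of each (state, action) pair
    ALPHA, GAMMA, EPSILON = 0.1, 0.95, 0.1  # learning rate, discount, exploration rate

    def choose_action(state, actions):
        if random.random() < EPSILON:                           # explore
            return random.choice(actions)
        return max(actions, key=lambda a: q_table[(state, a)])  # exploit

    def train(env, episodes: int = 100_000) -> None:
        for _ in range(episodes):
            state, done = env.reset(), False
            while not done:
                action = choose_action(state, env.actions)
                next_state, reward, done = env.step(action)
                best_next = max(q_table[(next_state, a)] for a in env.actions)
                # Move the estimate toward the observed reward plus future value.
                q_table[(state, action)] += ALPHA * (
                    reward + GAMMA * best_next - q_table[(state, action)]
                )
                state = next_state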

This approach can be applied at different levels of the program and, in extreme cases, extends to end-to-end learning of the entire task (e.g., from sensors to actuators, or from pixels to torques). For example, University of California, Berkeley’s Sensorimotor Deep Learning research group taught a robot to screw the lid on a bottle by learning a model that maps camera pixels directly to robot joint torques.

The biggest hurdle in robotic reinforcement learning today is that it’s impractical for a physical robot to fail a million times before it becomes usable. Therefore, researchers are trying to use simulated robots to do the learning and then transfer those models onto physical robots to do the actual work. This approach is called transfer learning and recently allowed OpenAI to teach a robot hand how to rotate a block.

Reinforcement learning is a powerful tool for controlling more complex processes and, although still in its early days, is expected to slowly make its way into industrial applications.

And beyond…

These technologies will allow industrial manufacturing robots to move beyond simply executing static programs and make the transition from automation to autonomy. Because these future robots will be able to handle less structured environments and more variability in processes and products, they will operate in ever more integrated environments where data can be shared efficiently across the whole factory and organization.

Robots will receive production orders on the fly, generate the necessary programs, and execute them autonomously. Their execution plans, along with their real-time progress, will be shared with a factory-level planner to ensure deadlines are met and to optimize the use of all interdependent equipment on and across factory floors. And as plans and execution details are captured and shared, robots can learn not only from their own experience but also from each other’s, improving over time.

A digitally connected factory, where all processes use data smartly, capture process knowledge, feed it to dependent systems, and continuously improve from it, is the future of production. That seamless connection between factory workflows – from design to fabrication, post-processing, inspection, and assembly – is the vision that Oqton is working toward.

Learn more about Oqton FactoryOS >