Improve productivity with data-driven manufacturing
New materials with unique properties that can be used for 3D printing are always being developed, but figuring out how to print with these materials can be complex.
Often an expert operator must rely on manual trial and error to find the settings that reliably and efficiently print a new material. These settings include the print speed and the amount of material the printer deposits.
MIT researchers have now used artificial intelligence (AI) to streamline this procedure, developing a machine learning (ML) system that uses computer vision to monitor the manufacturing process and correct errors in real time.
The researchers used simulations to teach a neural network how to adjust printing parameters to minimize errors, then applied this controller to a real 3D printer. Their system printed objects more accurately than any other 3D printing controller they compared it to.
The work avoids the prohibitive process of printing thousands or millions of real objects to train the neural network. And it could make it easier for engineers to incorporate new materials into their prints, which could help develop objects with particular electrical or chemical properties. It could also help technicians adjust the printing process on the fly if material or environmental conditions change unexpectedly.
“This project is truly the first demonstration of building a manufacturing system that uses machine learning to learn a complex control policy,” says senior author Wojciech Matusik, professor of electrical engineering and computer science at MIT, who leads the Computational Design and Fabrication Group (CDFG) within the Computer Science and Artificial Intelligence Laboratory (CSAIL). “If you have smarter manufacturing machines, they can adapt in real time to the changing workplace environment, to improve yields or system accuracy. You can get more out of the machine.”
The co-lead authors of the research are Mike Foshey, mechanical engineer and project leader at CDFG, and Michal Piovarci, postdoctoral fellow at the Institute of Science and Technology in Austria. MIT co-authors include Jie Xu, a graduate student in electrical engineering and computer science, and Timothy Erps, a former CDFG technical associate.
Determining the ideal parameters for a digital manufacturing process can be one of the most expensive parts of the process because so much trial and error is required. And once a technician finds a combination that works well, those settings are only ideal for one specific situation. There is little data on how the material will perform in other environments, on different hardware, or when a new batch exhibits different properties.
Building an ML system to control the process posed its own challenges. First, the researchers had to measure what was happening on the printer in real time.
To do this, they developed a machine-vision system using two cameras aimed at the nozzle of the 3D printer. The system illuminates the material as it is deposited and, based on how much light passes through it, calculates the material’s thickness.
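The article does not give the exact optical model, but a common way to turn measured light transmission into a thickness estimate is Beer–Lambert-style attenuation. The sketch below illustrates the idea; the function name and the calibration coefficient `attenuation_coeff` are illustrative assumptions, not the researchers’ actual pipeline.

```python
import numpy as np

def estimate_thickness(transmitted, incident, attenuation_coeff):
    """Estimate deposited-material thickness from light transmission.

    Assumes Beer-Lambert-style attenuation, I = I0 * exp(-mu * t),
    so t = -ln(I / I0) / mu. The coefficient `attenuation_coeff` (mu)
    would be calibrated per material; all names here are illustrative.
    """
    # Clip to avoid log(0) on fully opaque pixels and ratios > 1 from sensor noise.
    ratio = np.clip(transmitted / incident, 1e-9, 1.0)
    return -np.log(ratio) / attenuation_coeff
```

In practice the two camera views would be combined into per-pixel transmission maps, and the coefficient calibrated by printing strips of known thickness.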
“You can think of the vision system as a pair of eyes observing the process in real time,” Foshey explains.
The controller would then process the images it receives from the vision system and, based on any detected errors, adjust the feed rate and direction of the printer.
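The learned controller’s actual policy is a neural network, but the feedback loop it implements can be sketched with a much simpler proportional rule: deposit less material where the measured layer is too thick, more where it is too thin. Everything below, including the gain value, is a hypothetical illustration of closed-loop correction, not the paper’s controller.

```python
def adjust_feed_rate(current_rate, measured_thickness, target_thickness, gain=0.5):
    """Proportional correction of the extrusion feed rate.

    If the vision system reports the layer is thicker than the target,
    reduce the feed rate; if thinner, increase it. `gain` sets how
    aggressively the controller reacts (illustrative value).
    """
    error = measured_thickness - target_thickness
    return current_rate * (1.0 - gain * error / target_thickness)
```

A learned policy replaces this fixed rule with a network that maps the camera images directly to feed-rate and path adjustments, which lets it handle effects a hand-tuned gain cannot, such as material spreading after deposition.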
But training a neural network-based controller to understand this manufacturing process is data-intensive and would require making millions of prints. So the researchers built a simulator instead.
To train their controller, they used a process known as reinforcement learning, in which the model learns through trial and error guided by a reward. The model was tasked with selecting the printing parameters that would create a certain object in a simulated environment. Given the expected output, the model was rewarded when its chosen parameters minimized the error between its print and that output.
In this case, an “error” means that the model either dispensed too much material, placing it in areas that should have been left open, or didn’t dispense enough, leaving open spots that should be filled. As the model performed more simulated prints, it updated its control policy to maximize the reward, becoming more and more accurate.
However, the real world is messier than a simulation. In practice, conditions usually change due to slight variations or noise in the printing process. The researchers therefore created a digital model that approximates the noise of the 3D printer. They used this model to add noise to the simulation, which led to more realistic results.
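One simple way to realize the idea above, injecting printer-like noise into the simulator so the learned policy does not overfit to idealized physics, is to perturb each simulated deposition with random variation. The multiplicative Gaussian form and the magnitude below are assumptions for illustration; the researchers fit their noise model to measurements of the actual printer.

```python
import numpy as np

rng = np.random.default_rng(0)

def noisy_deposition(ideal_width, sigma=0.05):
    """Perturb a simulated bead width with zero-mean multiplicative noise.

    Approximates run-to-run variability of a real printer (illustrative
    noise model; sigma would be fit to measurements of the hardware).
    """
    return ideal_width * (1.0 + rng.normal(0.0, sigma))
```

Training the controller on these noisy rollouts forces it to learn corrections that remain valid under the variability of the physical machine, which is what allowed the policy to transfer without fine-tuning.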
“The interesting thing we found is that, by implementing this noise model, we were able to transfer the control policy that was purely trained in simulation onto hardware with no physical experimentation,” Foshey says. “We didn’t need to do any fine-tuning on the actual equipment afterwards.”
When they tested the controller, it printed objects more accurately than any other control method they evaluated. It worked particularly well on infill, the printed interior of an object. Some other controllers deposited so much material that the printed object bulged, but the researchers’ controller adjusted the print path to keep the object level.
Their control policy can even learn how materials spread after being deposited and adjust settings accordingly.
“We were also able to design control policies capable of controlling different types of materials on the fly. So if you had a manufacturing process in the field and you wanted to change the material, you wouldn’t have to revalidate the manufacturing process. You could just load the new material and the controller would automatically adjust,” says Foshey.
Now that they have shown the effectiveness of this technique for 3D printing, the researchers want to develop controllers for other manufacturing processes. They would also like to see how the approach can be modified for scenarios with multiple layers of material, or multiple materials printed at once. In addition, their approach assumed that each material had a fixed viscosity (“syrupiness”), but a future iteration could use AI to recognize and adjust for viscosity in real time.
Other co-authors of this work include Vahid Babaei, who leads the Artificial Intelligence Aided Design and Manufacturing group at the Max Planck Institute; Piotr Didyk, associate professor at the University of Lugano in Switzerland; Szymon Rusinkiewicz, David M. Siegel ’83 Professor of Computer Science at Princeton University; and Bernd Bickel, professor at the Institute of Science and Technology in Austria.
The work was supported, in part, by the FWF Lise-Meitner Program, a European Research Council Starting Grant, and the US National Science Foundation.