Created in partnership with Allied Vision.
Frequent cleaning of high-traffic areas such as hospitals, airports, and public transit systems is crucial to curtailing the spread of COVID-19. A wave of new vision-guided cleaning robots has been deployed to sanitize these locations without putting front-line workers, such as janitorial staff, in harm’s way, killing germs while minimizing human contact.
Lidar, a laser-based remote sensing method, or machine vision lenses and cameras allow these robots to navigate their environment, dodge obstacles, and ensure that all required surfaces have been cleaned. Many such robots sanitize surfaces using ultraviolet (UV) radiation, which destroys the DNA or RNA of viruses when operated at a sufficient optical power.1 At the onset of the COVID-19 outbreak, China deployed thousands of UV-based cleaning robots, and this technology has since spread to other countries. In San Diego, CA, a hospital is disinfecting 30 COVID-19 patient rooms and breakrooms every day using a fleet of robots that can completely sanitize a room in 12 minutes; the same task would ordinarily take human employees 90 minutes to perform.2
UV-C radiation, which covers wavelengths from 100–280 nm, has been utilized to disinfect surfaces, air, and water for decades. Research published through the American Chemical Society found that 99.9% of aerosolized coronaviruses similar to COVID-19 were inactivated when directly exposed to a UV-C lamp.3 These lamps damage the outer proteins of the virus, rendering it inactive.
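The effectiveness of UV-C disinfection is typically characterized by the delivered dose, which is simply irradiance multiplied by exposure time. The sketch below illustrates that arithmetic; the function name and the numbers in the usage note are illustrative assumptions, not values from the research cited above, and real dose requirements vary by pathogen, lamp, and distance.

```python
def exposure_time_s(target_dose_mj_cm2: float, irradiance_mw_cm2: float) -> float:
    """Seconds of exposure needed to deliver a UV-C dose.

    Uses the basic relationship: dose (mJ/cm^2) = irradiance (mW/cm^2) x time (s).
    """
    if irradiance_mw_cm2 <= 0:
        raise ValueError("irradiance must be positive")
    return target_dose_mj_cm2 / irradiance_mw_cm2


# Hypothetical example: a 10 mJ/cm^2 target dose at 0.5 mW/cm^2 on the surface
# requires 20 seconds of exposure.
print(exposure_time_s(10.0, 0.5))
```

This linear model is why robots that position a lamp closer to a surface, or dwell longer, can reach the same dose as a stronger source.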
Vision-guided cleaning robots sanitize rooms without the need for any human operators. This is important because UV-C radiation can damage the skin and eyes of those exposed to it. Motion sensors can ensure that the UV sources are turned off if a person comes too close to the robot.
Cleaning robots typically navigate through their environment using either Lidar or 3D machine vision. Incorporating multiple lenses and cameras allows robots to generate 3D images of their environment and accurately gauge distances (Figure 1). The location, optical specifications, and recorded images of each imaging assembly are used to determine depth through triangulation algorithms.
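For a calibrated, rectified stereo pair, the triangulation described above reduces to a single formula: depth Z = f · B / d, where f is the focal length in pixels, B is the baseline between the two cameras, and d is the disparity (the pixel shift of the same feature between images). A minimal sketch, with the focal length, baseline, and disparity values in the example chosen purely for illustration:

```python
def depth_from_disparity(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Depth of a point from a rectified stereo pair via triangulation.

    Z = f * B / d: larger disparities mean closer objects.
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive (point must be seen by both cameras)")
    return focal_px * baseline_m / disparity_px


# Hypothetical rig: 800 px focal length, 10 cm baseline.
# A feature shifted 16 px between the two images lies 5 m away.
print(depth_from_disparity(800, 0.10, 16))
```

Production systems compute disparity densely across the whole image (e.g. with block matching) to build the depth maps robots use for obstacle avoidance, but every pixel's depth comes from this same relationship.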
Using multiple lenses and cameras can quickly make a system large and heavy, so compact solutions are critical for practical robots. Small M12 imaging lenses and compact cameras are ideal for reducing bulk while maintaining performance. Ruggedization to protect lenses from shock, vibration, and humidity also helps sustain long-term performance.
In addition to cleaning, vision-guided robots are improving safety in other areas, including the restaurant industry, which has suffered significantly during the COVID-19 outbreak and resulting shutdowns. Robots can minimize human-to-human interaction in restaurant settings, particularly fast food, where they can perform repetitive tasks such as food preparation and serving. However, robots offer far fewer potential benefits for fine dining, as the subtle details critical to each dish, and the personal connection with customers, often require a human touch.4
The vision-guided cleaning robots deployed to combat COVID-19 likely will not disappear once this crisis has ended. The added safety and efficiency they offer will continue to be beneficial to hospitals and other high-traffic spaces, so do not be surprised if you spot more robotic coworkers in the future.