Light Drawing Robot
10/22 - 12/22
Created with the great pleasure of working alongside Alexander Yu, Adam Rashid, and Ryan Huang.
Skills utilized:
Programming (Python and C++)
ROS
CAD (Fusion 360)
Metal laser cutting and bending
Microcontroller use and custom electronics
For our final project in the Introduction to Robotics class at Berkeley (EECS C106A, Fall 2022), my team and I modified our lab's Sawyer robot arm to draw with light. We achieved this with the long-exposure camera technique, programming the robot to extract contour lines from an image, create motion plans, and follow each trajectory while actuating the end-effector LEDs appropriately. The result is a glowing outline of the salient features of the image.
Light trail photography is a popular genre but is, for the most part, limited to trails with very simple geometries: it is very difficult for a human to sense and plan complicated trajectories for drawing light trails in 3D space. Our project aims to bridge the gap between artistic vision and light trail photography by using the precise, programmable movements of a robot to draw light contours in 3D space.
Improvements
One goal we didn't reach during the semester was using the LED strip to fill in every pixel of an image, not just the most prominent contours. We would do this by sweeping the end effector across the image and effectively painting every pixel it covers. This is something we hope to achieve in the near future. Stay tuned…
Design and Manufacturing
In order to make our solution work, we needed to figure out the following:
Contour Extraction
Path Planning
Actuation of Light Source
Contour Extraction
For the sensing criterion of our project, the robot must accept an image as input and detect its salient edge features. Our preprocessing software had to be robust to variation across images. To achieve this, we first scaled each image to a fixed maximum height. We then used adaptive thresholding to extract contours: with a global threshold, areas in sunlight would dominate and areas in shadow would be lost, whereas an adaptive threshold, sampled from neighboring pixels, picks up edge features in both bright and dark regions. Finally, we filtered the features by size so that small, noisy features were excluded.
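To make this pipeline concrete, here is a minimal sketch of the extraction step using OpenCV. The function name and parameter values (maximum height, threshold block size, area cutoff) are illustrative assumptions, not our exact code:

    import cv2

    MAX_HEIGHT = 480  # assumed cap on working resolution

    def extract_contours(image_path, min_area=50.0):
        """Scale an image to a fixed height, adaptively threshold it,
        and return only contours large enough to be real features."""
        img = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
        scale = MAX_HEIGHT / img.shape[0]
        img = cv2.resize(img, None, fx=scale, fy=scale)
        # Each pixel is thresholded against a local neighborhood mean,
        # so edges survive in both sunlit and shadowed regions.
        binary = cv2.adaptiveThreshold(
            img, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
            cv2.THRESH_BINARY_INV, blockSize=11, C=2)
        contours, _ = cv2.findContours(
            binary, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)
        # Drop small, noisy features.
        return [c for c in contours if cv2.contourArea(c) >= min_area]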
Planning
For the planning criterion, the robot must take the feature list and create a path plan that draws the given contours. The path must be smooth, efficient, and closely resemble the features of the input image. To achieve this, we downsampled our feature points to reduce the path-planning resolution; this smooths out features during drawing at the expense of slightly softer corners. We then converted each feature into a list of poses that ROS's MoveIt can consume in its waypoint path-planning API. Given the planned path from MoveIt, we ran it through a retiming API to adjust the arm's velocity so the motion is both smooth and efficient. Additionally, for each contour we precomputed the RGB light values we wanted the end effector to emit while that contour was drawn, as sketched below.
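The sketch below shows how this step might look with the moveit_commander Python API; the downsampling stride, Cartesian step size, and velocity scaling factor are illustrative assumptions:

    import moveit_commander

    def plan_contour(group, robot, poses, stride=3):
        """Downsample a contour's poses, plan a Cartesian waypoint path,
        then retime it for smooth, efficient motion.
        `group` is a MoveGroupCommander, `robot` a RobotCommander."""
        waypoints = poses[::stride]  # downsampling softens corners slightly
        # MoveIt plans a path through the waypoints; `fraction` reports
        # how much of the contour was successfully planned.
        plan, fraction = group.compute_cartesian_path(
            waypoints, eef_step=0.01, jump_threshold=0.0)
        # Retiming adjusts velocities along the fixed geometric path.
        plan = group.retime_trajectory(
            robot.get_current_state(), plan, velocity_scaling_factor=0.5)
        return plan, fraction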
Actuation
For the actuation criterion, the robot must follow the planned paths and draw an appropriately colored contour for each feature. To do this, we kept a list of Poses for each contour feature, then planned and executed each contour individually. Between contours, we turned the LEDs off and planned a transit path from the last Pose of one contour to the first Pose of the next. This let us toggle and move the LED end effector as desired.
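Put together, the drawing loop might look like the following sketch; `plan_contour` is the planner sketched above, and `led` is the hypothetical serial driver sketched in the Manufacturing section below:

    def draw_image(group, robot, contours, led):
        """Draw each (color, poses) contour with the LEDs on, transiting
        dark between contours. Helper names here are illustrative."""
        for color, poses in contours:
            led.off()
            # Move (lights out) to the start of the next contour.
            group.set_pose_target(poses[0])
            group.go(wait=True)
            plan, _ = plan_contour(group, robot, poses)
            led.set_color(*color)  # precomputed RGB for this contour
            group.execute(plan, wait=True)
        led.off()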
Manufacturing
The hardware consists of an array of RGB LEDs mounted on a steel bracket designed to mate with the Sawyer arm. The 1/4” steel bracket was laser cut and then bent to spec. The LEDs are driven by an ESP32 microcontroller, allowing each LED to be addressed individually by its index. The firmware communicates with the robot over serial, parsing string commands sent whenever an LED should change state.
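As a sketch of the robot side of that serial link, a small pyserial driver could look like the following; the "index,r,g,b" wire format is an assumption for illustration, not our firmware's actual protocol:

    import serial

    class LedStrip:
        """Hypothetical host-side LED driver. Sends 'index,r,g,b' lines
        over serial; index -1 addresses the whole strip in this sketch."""
        def __init__(self, port="/dev/ttyUSB0", baud=115200):
            self.conn = serial.Serial(port, baud, timeout=1)

        def set_color(self, r, g, b, index=-1):
            self.conn.write(f"{index},{r},{g},{b}\n".encode())

        def off(self):
            self.set_color(0, 0, 0)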