This project seeks to develop an application of swarm robotics by creating a swarm of robots that can autonomously arrange themselves to replicate drawings made by a user. The proposed system leverages swarm intelligence to translate artistic expressions into a visually stunning and collaborative robotic display.
In the recent evolution of robotics, a fascinating trend has emerged: embracing art. This convergence led to innovative projects like ARTbot, a swarm robotics project. Here, multiple robots collaborate to create artwork guided by human input.
Swarm robotics involves coordinating multiple robots, often called agents, to achieve common goals through decentralized control and local interactions. Inspired by collective behaviors observed in natural swarms such as ants and bees, swarm robotics aims to harness the power of distributed systems for efficient task completion.
Each robot in a swarm typically operates autonomously, relying on simple rules and local communication to achieve complex behaviors at the collective level. This approach offers scalability, robustness, and adaptability advantages, making swarm robotics suitable for applications ranging from search and rescue missions to environmental monitoring.
In this project, a canvas window is created using OpenCV. In this canvas window, the user can draw any shape they want. After drawing, right-clicking the mouse plots a set of points along the drawn shape. The swarm algorithm then takes over, and the bots arrange themselves to trace the art in real time.
ROS2 (Iron Irwini)
Python3
OpenCV
C++
ROS 2 (Robot Operating System 2) is the next-generation open-source robotics middleware framework that builds upon the success of the original ROS. It introduces significant improvements, including support for multiple programming languages, a more robust communication layer based on the Data Distribution Service (DDS) standard, enhanced real-time performance, built-in security features, better support for multi-robot and distributed systems, Quality of Service (QoS) policies for reliable communication, and improved lifecycle management for nodes. ROS 2 continues to describe robot models with the Unified Robot Description Format (URDF) and provides tools for interoperating with existing ROS packages, enabling a smoother transition for developers.
Nodes: A node is an executable that uses ROS 2 to communicate with other nodes. A ROS 2 node can be a Publisher or a Subscriber. A Publisher sends messages of a given message type to a particular topic. The Subscriber, on the other hand, subscribes to that topic and receives the messages published to it.
Messages: the ROS 2 data types exchanged when publishing to or subscribing from a topic.
Topics: Nodes can publish messages to a topic and subscribe to a topic to receive messages.
Client libraries needed for this project:
rclpy: Python client library
rclcpp: C++ client library
Python is a high-level, general-purpose programming language. Its design philosophy emphasizes code readability with the use of significant indentation. It supports multiple programming paradigms, including structured, object-oriented, and functional programming.
OpenCV, or Open Source Computer Vision Library, is a popular open-source software library for computer vision and image processing tasks. Originally developed by Intel, OpenCV provides a wide range of functionality for tasks such as object detection, recognition, tracking, and segmentation. It offers support for various programming languages, including C++, Python, and Java, making it accessible to developers across different platforms. OpenCV incorporates advanced algorithms and techniques from computer vision, allowing users to perform complex image analysis tasks efficiently. Its extensive documentation and immense community support make it a valuable tool for both beginners and experienced researchers in the field of computer vision. By leveraging OpenCV, students can explore applications in robotics, augmented reality, medical imaging, and more, gaining practical skills in image processing and computer vision.
Swarm robotics is an approach to coordinate multiple robots as a single system to achieve a particular goal. It is inspired by the emergent behavior observed in social insects.
In this project, we have devised a novel approach inspired by bird flocking behaviors to achieve coordinated movement and shape formation in a swarm of robotic agents. Our custom implementation draws upon principles of swarm intelligence and integrates them with specific functionalities tailored to our task. The algorithm operates within a simulation environment where only the current position and orientation of the bots are sensed in real time.
Our algorithm involves a swarm of robotic agents, each equipped to navigate and coordinate with its peers autonomously. The swarm is divided into two groups: target-seeking bots and integrating bots. The target-seeking bots are assigned specific targets or coordinates to move towards, simulating the behavior of birds homing in on particular locations within a flock. Meanwhile, the integrating bots dynamically position themselves between pairs of "parent bots," facilitating cohesion and formation maintenance within the swarm.
Each "child bot" is assigned specific "parent bots," between which it positions itself. This arrangement ensures that the "child bots" contribute to the overall shape formation by maintaining optimal spacing and alignment within the swarm. Through decentralized communication and coordination mechanisms inspired by bird flocking behaviors, the swarm collectively maneuvers to achieve the desired shape outlined by the provided coordinates.
The "child bots" are crucial in ensuring smooth transitions and spatial organization within the swarm, enhancing its ability to represent complex shapes accurately. As the swarm progresses towards shaping the desired form, the algorithm continuously adapts and optimizes the distribution and movement of bots based on environmental feedback and performance metrics. This real-time adjustment mechanism enables the swarm to overcome obstacles, maximize resource utilization, and refine shape accuracy during the execution phase.
The basic idea behind the BOID algorithm is to model individual agents ("boids") within a group, each exhibiting three primary behaviors: separation, alignment, and cohesion. These behaviors let the boids stay grouped while avoiding collisions and aligning their movement direction with nearby neighbors.
In the context of ROS (Robot Operating System), the BOID algorithm can be implemented to synchronize the movement of multiple robots or agents within a simulated environment. Here's a brief overview of how the BOID algorithm can be applied in ROS:
Separation: Each robot calculates a repulsive force based on the proximity of its neighbors. This ensures that robots maintain a safe distance from each other to avoid collisions. In ROS, this can be implemented using sensor data (e.g., lidar or proximity sensors) to detect nearby robots and adjust the robot's velocity accordingly.
Alignment: Robots adjust their velocity to match the average velocity of nearby neighbors. This helps in synchronizing the movement direction of the robots within the group. This can be achieved in ROS by calculating the average velocity of neighboring robots and adjusting the robot's velocity using ROS messages or services.
Cohesion: Robots move towards the center of mass of nearby neighbors to maintain cohesion within the group. This encourages robots to stay together as a cohesive unit. In ROS, this can be implemented by calculating the centroid of nearby robots and adjusting the robot's velocity to move toward the centroid.
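The three rules above can be sketched in a few lines of plain Python. This is a generic boids update, not the ARTbot source; the weights, radii, and the flat (x, y, vx, vy) state layout are illustrative assumptions.

```python
import math

# Neighborhood radii and rule weights (assumed values for illustration).
NEIGHBOR_RADIUS = 50.0
SEPARATION_RADIUS = 15.0
W_SEP, W_ALI, W_COH = 1.5, 1.0, 1.0

def boid_step(boids, index, dt=0.1):
    """Return the updated (x, y, vx, vy) state for boids[index]."""
    x, y, vx, vy = boids[index]
    sep_x = sep_y = ali_x = ali_y = coh_x = coh_y = 0.0
    count = 0
    for j, (ox, oy, ovx, ovy) in enumerate(boids):
        if j == index:
            continue
        dx, dy = ox - x, oy - y
        dist = math.hypot(dx, dy)
        if dist < NEIGHBOR_RADIUS:
            count += 1
            ali_x += ovx; ali_y += ovy        # alignment: neighbors' velocity
            coh_x += ox;  coh_y += oy         # cohesion: neighbors' positions
            if 0 < dist < SEPARATION_RADIUS:  # separation: push directly away
                sep_x -= dx / dist
                sep_y -= dy / dist
    if count:
        ali_x = ali_x / count - vx            # steer toward average velocity
        ali_y = ali_y / count - vy
        coh_x = coh_x / count - x             # steer toward center of mass
        coh_y = coh_y / count - y
    vx += dt * (W_SEP * sep_x + W_ALI * ali_x + W_COH * coh_x)
    vy += dt * (W_SEP * sep_y + W_ALI * ali_y + W_COH * coh_y)
    return (x + vx * dt, y + vy * dt, vx, vy)
```

In a ROS 2 setting, each bot would run this update in its own node, reading neighbor states from subscribed pose topics and publishing the resulting velocity command.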
Overall, the BOID algorithm offers a versatile framework for synchronizing the movement of multiple robots in ROS simulations, enabling the development of more realistic and lifelike swarm robotics behaviors.
The Artbot Dynamics system introduces a custom-made turtlebot equipped with a holonomic drive, built on the ROS2 platform. Unlike traditional differential drive systems, which rely on two independently driven wheels for movement, the Artbot's holonomic drive incorporates omnidirectional wheels, granting it far greater maneuverability and control. With omnidirectional movement capabilities, the Artbot can navigate in any direction without complex steering mechanisms. This precision enables the Artbot to execute intricate maneuvers quickly, including translations, rotations, and lateral movements, while maintaining its orientation.
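The benefit of the omnidirectional layout can be made concrete with its inverse kinematics: any body velocity (vx, vy, omega) maps directly to wheel speeds, with no steering involved. The three-wheel, 120-degree geometry and the radius R below are assumptions for illustration; the report does not specify the Artbot's exact wheel arrangement.

```python
import math

# Roll directions of the three omnidirectional wheels (assumed layout).
WHEEL_ANGLES = [math.radians(a) for a in (90, 210, 330)]
R = 0.1  # distance from robot center to each wheel, meters (assumed)

def wheel_speeds(vx, vy, omega):
    """Map a body-frame velocity (m/s, m/s, rad/s) to three wheel rim speeds."""
    return [-math.sin(t) * vx + math.cos(t) * vy + R * omega
            for t in WHEEL_ANGLES]
```

A pure rotation spins all three wheels equally, while a pure translation produces wheel speeds that sum to zero, which is what lets the base translate without changing heading.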
The algorithm for drawing lines on the canvas in the ARTbot simulation can broadly be broken into two parts, each contributing to the interactive drawing process and subsequent robot navigation.
In the first part of the algorithm, the canvas refreshes every millisecond, providing a real-time interactive drawing experience for the user. When the user left-clicks and drags the mouse, a red line is drawn on the canvas between the previous mouse coordinates and the new coordinates obtained after the time period (in this case, one millisecond). This process continues as the user moves the mouse, creating a continuous line on the canvas. Although the drawn line appears as a smooth curve to the human eye, it is in fact a series of small line segments drawn in rapid succession. This real-time drawing mechanism allows users to intuitively sketch paths for the robots to follow.
In the second part of the algorithm, the user signals the completion of their drawing by right-clicking the mouse. At this point, all the points defining the drawn path are stored in a single list or array. The total length of this path is then calculated by summing the distances between consecutive points. This total length is divided by the number of robots present (30 in this case) to determine the desired segment length for navigation. Subsequently, the algorithm identifies the endpoints of each segment along the drawn path and stores them in another list, known as the target list. These target points represent waypoints for the robot to navigate through and are crucial for guiding the robot along the user-defined path.
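The resampling step described above can be sketched as follows. The function name and the exact treatment of the path's endpoints are illustrative assumptions rather than the ARTbot source:

```python
import math

def resample_path(points, num_bots):
    """Return num_bots waypoints spaced evenly by arc length along a polyline."""
    seg_lens = [math.dist(a, b) for a, b in zip(points, points[1:])]
    total = sum(seg_lens)               # total length of the drawn path
    step = total / num_bots             # desired spacing between targets
    targets, walked, goal = [points[0]], 0.0, step
    for (x0, y0), (x1, y1), L in zip(points, points[1:], seg_lens):
        if L == 0:
            continue                    # skip duplicate mouse samples
        # Emit every target whose arc-length position falls on this segment.
        while goal <= walked + L + 1e-9 and len(targets) < num_bots:
            t = (goal - walked) / L
            targets.append((x0 + t * (x1 - x0), y0 + t * (y1 - y0)))
            goal += step
        walked += L
    return targets
```

For 30 bots, calling `resample_path(drawn_points, 30)` yields 30 evenly spaced waypoints along the user's sketch, matching the target-list construction the report describes.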
Once the target list is populated with segment endpoints, these points are displayed on the canvas as blue dots, with numbering provided beside each dot for clarity. This visual representation allows users to verify the accuracy of the waypoints and make adjustments if necessary. Additionally, displaying the target points on the canvas enhances the user's understanding of the robot's intended navigation path.
Finally, after the target points are displayed and verified, the canvas is reset by clearing the list containing the drawn path. This ensures that subsequent drawings start from a clean slate and facilitates the creation of new navigation paths without interference from previous drawings.
The project utilizes a lightweight, customized turtlesim simulator, integrating fundamental ROS 2 concepts and packages into a bespoke artbotsim framework. Upon receiving a pattern from the canvas, the bots gracefully orient themselves to mirror the shape with precision and finesse.
The simulation draws inspiration from the synchronized movements of flocks of birds, schools of fish, and swarms of insects, which motivated the development of algorithms for coordinated movement and pattern formation.
The simulation aspect of the project offers a fertile ground for exploration and innovation across various domains. One critical focus lies in optimizing scalability and performance, which is vital for efficiently handling increasing bot numbers and intricate patterns. This entails meticulous parameter fine-tuning, harnessing parallel processing techniques, and potentially integrating distributed computing methods to meet the demands of evolving simulation scenarios.
The "ARTbot Canvas" script, implemented using ROS2 and OpenCV, serves as the central control node for the ARTbot simulation. It allows users to draw paths on the canvas using the mouse. Upon left-clicking and dragging, the script updates the display with the drawn path in real time. Right-clicking divides the path into segments, which are collected as waypoints for the ARTbot.
The collected points are then displayed on the canvas, and after a brief delay, they are published as messages to a ROS2 topic named "target." Each message contains the coordinates of a subset of the collected points, which the ARTbot then uses for navigation. Afterward, the canvas is reset, and a blank canvas appears. It creates a publisher to send messages containing the target coordinates to the ARTbot robot. The published messages are then sent to the swarm algorithm, where these coordinates are taken as input for the bots to trace the art. This script provides a user-friendly interface for drawing paths and generating waypoints for the ARTbot robot.
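The report notes that each published message carries a subset of the collected points. The partitioning itself is a small list-chunking step; a sketch is below (the function name and chunk size are assumptions, and the actual publishing to the "target" topic via rclpy is not shown):

```python
def chunk_points(points, chunk_size):
    """Split the waypoint list into consecutive subsets of at most chunk_size."""
    return [points[i:i + chunk_size]
            for i in range(0, len(points), chunk_size)]
```

Each resulting sublist would then be serialized into one ROS2 message and published in turn on the "target" topic.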
In the ARTbot simulation environment, a ROS2 launch file is employed to start the canvas and the swarm algorithm and to spawn multiple bots, denoted 'artist1', 'artist2', and so forth, based on their spawn order. Bots with names containing odd numbers are designated as "parent bots" and tasked with navigating to predetermined target coordinates. Each bot's current location data is tracked using a custom 'Pose' message, encompassing x and y coordinates, z orientation, linear and angular acceleration, and the bot's name.
Path planning is crucial for "parent bots" to calculate the optimal angular and linear velocities required to reach their assigned target coordinates. Holonomic drive dynamics are employed for dynamic movement, coupled with a PID control system algorithm ensuring precise navigation. Upon nearing the target coordinates within a specific error range, the bot halts, indicating successful navigation.
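One control cycle of that navigation loop can be sketched as follows: a PID controller on the positional error produces a holonomic (vx, vy) command, and the bot halts once it is within a stopping tolerance of its target. The gains, tolerance, and function names are illustrative assumptions, not values from the ARTbot code.

```python
import math

KP, KI, KD = 1.2, 0.0, 0.1   # PID gains (assumed)
TOLERANCE = 0.05             # stopping distance from the target (assumed)

def pid_step(pos, target, prev_err, integ, dt=0.05):
    """Return ((vx, vy), new_prev_err, new_integ) for one control cycle."""
    ex, ey = target[0] - pos[0], target[1] - pos[1]
    if math.hypot(ex, ey) < TOLERANCE:
        return (0.0, 0.0), (ex, ey), integ       # within tolerance: halt
    ix, iy = integ[0] + ex * dt, integ[1] + ey * dt    # integral term
    dx, dy = (ex - prev_err[0]) / dt, (ey - prev_err[1]) / dt  # derivative
    vx = KP * ex + KI * ix + KD * dx
    vy = KP * ey + KI * iy + KD * dy
    return (vx, vy), (ex, ey), (ix, iy)
```

Because the drive is holonomic, the x and y errors can be controlled independently; a differential-drive bot would instead need to couple heading and forward velocity.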
"Child bots" strategically position themselves between the "parent bots" by determining the midpoint of their respective targets. They utilize the same drive and control system as the "parent bots" to navigate toward these calculated midpoint targets. This collaborative swarm movement aims to replicate the artwork created on the canvas within the ARTbot simulation. Real-time coordination between the canvas and the swarm allows for seamless integration of bot movements with the drawn art. The bot targets are reset accordingly whenever a new artwork is input through the canvas.
The script also utilizes ROS2 (Robot Operating System 2) for communication with other nodes and components of the ARTbot system. It creates a subscriber that receives messages containing the target coordinates. The different bots work in parallel by taking advantage of ROS2's ability to run nodes concurrently, so if 'N' bots are part of the swarm, the overall runtime 'T' is reduced to roughly 'T/N', ensuring faster processing.
Through a series of extensive experiments, the Flock algorithm demonstrated its remarkable prowess. The bots aligned themselves with precision and finesse, maintaining optimal spacing while faithfully replicating intricate patterns from the canvas. The ARTbot also ventured into various environments, navigating different layouts and obstacles and adapting to each using its decentralized communication and coordination mechanisms. The results were impressive, with the ARTbot delivering a performance that exceeded expectations. Moreover, its use of open-source platforms and affordable hardware components renders it practical and accessible, a beacon of opportunity for aspiring researchers and educators alike in the dynamic realm of robotics.
- We're looking to scale the number of swarm bots from 30 to as many as possible, aiming for 50, 100, or even 200 bots. Additionally, we're keen on transitioning the system to operate in a 3D simulation environment. This expansion presents exciting challenges and opportunities for enhancing the capabilities of our swarm robotics system.
- Our next step is hardware implementation if the 3D simulation proves successful. We'll address minor bugs and ensure the system is robust for real-world deployment. This transition from simulation to hardware represents a significant milestone in our project, allowing us to test the system's performance in physical environments.
- The Flock Formation algorithm can be improved by allowing the "child bots" to sense the closest "parent bots" rather than being assigned specific "parent bots" at the start. Further improvements can be made to the holonomic drive and PID control system, such as adding a collision detection system and increasing the number of bots that can be operated simultaneously.
- Refining the Boid algorithm will enhance functionality and be a valuable educational resource for aspiring ROS programmers, offering them an intuitive entry point into bot synchronization and navigation. By optimizing its codebase, we can provide a seamless pathway for newcomers to grasp fundamental Turtlesim properties while mastering the intricacies of bot coordination. The Boid algorithm can catalyze collective growth and innovation in robotics education and research through iterative refinement.
All the simulation files and source codes can be found here.
As executive members of IEEE NITK, we are incredibly grateful for the opportunity to learn and work on this project under the prestigious name of the IEEE NITK Student Chapter. We want to extend our heartfelt thanks to IEEE for providing us with the funds to complete this project successfully.
Report prepared on March 22, 2024, 12:06 a.m. by:
Report reviewed and approved by Shivani Chanda [Diode] on March 22, 2024, 5:17 p.m.