Sitemap

A list of all the posts and pages found on the site. For you robots out there, there is an XML version available for digesting as well.

Pages

Posts

Using the Viam RDK with the Mini Pupper Robot

15 minute read

Published:

Having used ROS for many years now, I’ve always been curious how other programming middlewares compare, whether by offering a new or different paradigm for structuring robotic software, hardware, and so on. However, there don’t seem to be many alternatives. The biggest I could find was YARP, which I had known about for some time but never tried (maybe another blog post in the future!). The second closest was LCM, which isn’t being maintained anymore. Others seem like small projects tied to their respective goals. I did, however, stumble upon one that caught my eye from the startup Viam, simply called the Robot Development Kit (RDK).

Run a Custom Ubuntu OS for Mini Pupper

2 minute read

Published:

After playing with the Mini Pupper for a while, I noticed there isn’t a straightforward guide to configuring an Ubuntu ARM desktop image for it. The following should help users set up an Ubuntu MATE installation for use with a Mini Pupper.

Giving a TurtleBot3 a Namespace for Multi-Robot Experiments

5 minute read

Published:

As I was working on my ICRA paper, I noticed that ROBOTIS doesn’t provide a guide on how to run multiple TurtleBot3 robots together. Running them on the same network is especially risky because they all use the same topic and node names, which can interfere with each robot’s operation. To run multiple TurtleBots on the same network, you need to give each robot a unique namespace. The following guide shows you how to do this for the TurtleBot3.
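The core idea from the full post can be sketched as a ROS 2 Python launch file that pushes a unique namespace onto a robot's bringup. This is a minimal, hypothetical sketch: the namespace `tb3_0` and the bringup launch file path are placeholders, and the post itself covers the TurtleBot3-specific details.

```python
# Sketch: wrap a robot's nodes in a unique namespace so two robots on the
# same network do not collide on topic or node names. The namespace and
# launch file path below are placeholders.
from launch import LaunchDescription
from launch.actions import GroupAction, IncludeLaunchDescription
from launch.launch_description_sources import PythonLaunchDescriptionSource
from launch_ros.actions import PushRosNamespace


def generate_launch_description():
    tb3_bringup = GroupAction([
        # Everything started inside this group is prefixed with /tb3_0,
        # so a second robot launched under /tb3_1 stays isolated.
        PushRosNamespace('tb3_0'),
        IncludeLaunchDescription(
            PythonLaunchDescriptionSource(
                '/path/to/turtlebot3_bringup/launch/robot.launch.py'
            )
        ),
    ])
    return LaunchDescription([tb3_bringup])
```

Launching this file once per robot, each with a different namespace string, is the standard ROS 2 pattern for multi-robot isolation.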

Setting Up the TurtleBot3 with ROS 2 on Ubuntu Server IoT 18.04

8 minute read

Published:

One of the cool things about ROS 2 is that the ROS Master is finally gone. The new DDS approach allows for interesting ways of controlling multiple autonomous agents without having to rely on a centralized ROS Master running the show. For this guide, I’ll show you how to set up ROS 2 on a TurtleBot3 Burger or Waffle using Ubuntu Server IoT 18.04. Why use this instead of Ubuntu MATE 18.04 like the TurtleBot guide suggests? Well, I’ve run into a lot of problems during the installation, and it’s just kind of bloated. I don’t need Firefox, VLC, Thunderbird, LibreOffice, etc. All I need is a bash shell, because I’m going to be writing most of the code for it on a different computer anyway. So let’s start!

Spawning Robots in Gazebo with ROS 2

7 minute read

Published:

Now that ROS 2 has done away with the old way of launching nodes (i.e., using XML .launch files), the process has become more streamlined and versatile than ever thanks to Python. The new launch system can, however, be confusing the first time you use it, and I’ll probably do a deep dive on it in the future. For this blog post, I want to touch on something that got lost in the transition from the old approach to the new one: spawning robots into Gazebo.
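The shape of the new approach can be sketched with the `spawn_entity.py` helper that ships with the `gazebo_ros` package. This is a minimal, hedged sketch, not the post's exact solution: the entity name, model path, and pose are placeholders.

```python
# Sketch: spawn a robot into a running Gazebo instance from a ROS 2
# Python launch file using gazebo_ros's spawn_entity.py helper.
# Entity name and model file path are placeholders.
from launch import LaunchDescription
from launch_ros.actions import Node


def generate_launch_description():
    spawn_robot = Node(
        package='gazebo_ros',
        executable='spawn_entity.py',
        arguments=[
            '-entity', 'my_robot',           # unique name inside Gazebo
            '-file', '/path/to/model.sdf',   # SDF (or URDF) model to spawn
            '-x', '0.0', '-y', '0.0', '-z', '0.1',
        ],
        output='screen',
    )
    return LaunchDescription([spawn_robot])
```

The same node can be included multiple times with different entity names and poses to spawn several robots into one simulation.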

ROS 2 with VSCode and macOS

2 minute read

Published:

VSCode is one of the most powerful code editors I have tried in a long time. While I know its use of Electron as a framework has irked some people, I rather enjoy the customizability afforded by the JavaScript/HTML/CSS backend. However, not all Electron text editors are made the same. Atom, a competitor to VSCode (well… maybe not anymore), relies on Electron and a very similar way of doing things, but Microsoft had the advantage of adding some of that special IntelliSense code to VSCode. VSCode is far superior when it comes to coding in C++ because Atom really can’t compete. So when I program in ROS on both my Linux and macOS rigs, I tend to use VSCode for ease of use. There are, however, always a couple of issues when first setting up the system, which I will address in this blog post.

Portfolio

Publications

Exploiting POSS–sorbitol interactions: issues of reinforcement of isotactic polypropylene spun fibers

Published in Macromolecules, 2012

Abstract: This study investigates the issues involving reinforcement of isotactic polypropylene (iPP) spun fibers by molecular adducts originating from the synergistic interactions of polyhedral oligomeric silsesquioxane (POSS) containing silanol functionalities (silanol–POSS) and di(benzylidene)sorbitol (DBS). The molecular adducts of silanol–POSS and DBS were low viscosity liquids at fiber spinning temperature, turned into cylindrical domains during fiber spinning, and remained as nanoparticles in the fibers. The fibers were analyzed by differential scanning calorimetry, wide-angle X-ray diffraction, scanning electron microscopy, and transmission electron microscopy. It was observed that iPP compounds with 2–5 wt % silanol–POSS and 1 wt % DBS could be spun into fibers with close to 40% reduction in diameter compared to unfilled iPP. These fibers offered 60–80% increase in tensile modulus, 50–60% increase in tensile strength, and 100% increase in yield strength compared to unfilled iPP. The silanol–POSS particles were found to be of cylindrical shape with approximately 100 nm in diameter and 200–300 nm in length. The improvements in mechanical properties were correlated with iPP crystallinity and orientation factor.

Recommended citation: Roy, S., Lee, B. J., Kakish, Z. M., & Jana, S. C. (2012). Exploiting POSS–sorbitol interactions: issues of reinforcement of isotactic polypropylene spun fibers. Macromolecules, 45(5), 2420-2433.
Download Paper | Download Slides

Adaptive synergy control of a dexterous artificial hand to rotate objects in multiple orientations via EMG facial recognition

Published in IEEE International Conference on Robotics and Automation (ICRA), 2014

Abstract: An adaptive synergy controller is presented which allows a dexterous artificial hand to unscrew and screw an object using facial expressions derived from electromyogram (EMG) signals. In preliminary experiments, the finger joint motions of nine human test subjects were recorded as they unscrewed a bottle cap in multiple orientations of their hands with respect to the object. These data were used to develop a set of adaptive sinusoidal joint synergies to approximate the orientation-dependent human motions, which were then implemented on a dexterous robotic manipulator via the proposed adaptive synergy controller. The controller is driven through a noninvasive interface which allows a single input to drive the bioinspired human motions using facial expressions. The adaptive synergy controller was evaluated by four able-bodied subjects who were able to unscrew and screw an instrumented object using the artificial hand in two orientations with a 100% success rate.

Recommended citation: B. A. Kent, Z. M. Kakish, N. Karnati and E. D. Engeberg, "Adaptive synergy control of a dexterous artificial hand to rotate objects in multiple orientations via EMG facial recognition," 2014 IEEE International Conference on Robotics and Automation (ICRA), Hong Kong, China, 2014, pp. 6719-6725, doi: 10.1109/ICRA.2014.6907851.
Download Paper | Download Slides

Mean-Field Stabilization of Markov Chain Models for Robotic Swarms: Computational Approaches and Experimental Results

Published in IEEE Robotics and Automation Letters (RA-L), 2018

Abstract: In this letter, we present two computational approaches for synthesizing decentralized density-feedback laws that asymptotically stabilize a strictly positive target equilibrium distribution of a swarm of agents among a set of states. The agents’ states evolve according to a continuous-time Markov chain on a bidirected graph, and the density-feedback laws are designed to prevent the agents from switching between states at equilibrium. First, we use classical linear matrix inequality (LMI)-based tools to synthesize linear feedback laws that (locally) exponentially stabilize the desired equilibrium distribution of the corresponding mean-field model. Since these feedback laws violate positivity constraints on the control inputs, we construct rational feedback laws that respect these constraints and have the same stabilizing properties as the original feedback laws. Next, we present a sum-of-squares (SOS)-based approach to constructing polynomial feedback laws that globally stabilize an equilibrium distribution and also satisfy the positivity constraints. We validate the effectiveness of these control laws through numerical simulations with different agent populations and graph sizes and through multirobot experiments on spatial redistribution among four regions.

Recommended citation: V. Deshmukh, K. Elamvazhuthi, S. Biswal, Z. Kakish and S. Berman, "Mean-Field Stabilization of Markov Chain Models for Robotic Swarms: Computational Approaches and Experimental Results," in IEEE Robotics and Automation Letters, vol. 3, no. 3, pp. 1985-1992, July 2018, doi: 10.1109/LRA.2018.2792696.
Download Paper | Download Slides

Open-source AI assistant for cooperative multi-agent systems for lunar prospecting missions

Published in European Conference for Aeronautics and Space Sciences (EUCASS), 2019

Abstract: Mission planning for space prospecting missions is a complex undertaking that involves several phases, variables, and systems. MARMOT, Multi-Agent Resource Mission Operations Tools, is an extensible, open-source tool that allows operators to expand and improve current functionalities and automation capabilities of mission planning systems for extra-terrestrial operation by aiming to reduce the exhaustive mental load accumulated by mission operators. This paper presents the first iteration of the tool that provides homogeneous and heterogeneous multi-agent global trajectory planning and optimization suggestions for different tasks and goals in a Lunar environment. The resulting tool leverages collaborative action plans and optimization strategies for enhancing in-situ resource exploitation and discovery on the Moon.

Recommended citation: Kakish, Zahi M., et al. "Open-source AI assistant for cooperative multi-agent systems for lunar prospecting missions." 8th European Conference for Aeronautics and Space Sciences (EUCASS). 2019.
Download Paper | Download Slides

Information Correlated Lévy Walk Exploration and Distributed Mapping Using a Swarm of Robots

Published in IEEE Transactions on Robotics (T-RO), 2020

Abstract: In this article, we present a novel distributed method for constructing an occupancy grid map of an unknown environment using a swarm of robots with global localization capabilities and limited interrobot communication. The robots explore the domain by performing Lévy walks in which their headings are defined by maximizing the mutual information between the robot’s estimate of its environment in the form of an occupancy grid map and the distance measurements that it is likely to obtain when it moves in that direction. Each robot is equipped with laser range sensors, and it builds its occupancy grid map by repeatedly combining its own distance measurements with map information that is broadcast by neighboring robots. Using results on average consensus over time-varying graph topologies, we prove that all robots’ maps will eventually converge to the actual map of the environment. In addition, we demonstrate that a technique based on topological data analysis, developed in our previous work for generating topological maps, can be readily extended for adaptive thresholding of occupancy grid maps. We validate the effectiveness of our distributed exploration and mapping strategy through a series of two-dimensional simulations and multirobot experiments.

Recommended citation: R. K. Ramachandran, Z. Kakish and S. Berman, "Information Correlated Lévy Walk Exploration and Distributed Mapping Using a Swarm of Robots," in IEEE Transactions on Robotics, vol. 36, no. 5, pp. 1422-1441, Oct. 2020, doi: 10.1109/TRO.2020.2991612.
Download Paper | Download Slides

Towards decentralized human-swarm interaction by means of sequential hand gesture recognition

Published in arXiv, 2021

Abstract: In this work, we present preliminary work on a novel method for Human-Swarm Interaction (HSI) that can be used to change the macroscopic behavior of a swarm of robots with decentralized sensing and control. By integrating a small yet capable hand gesture recognition convolutional neural network (CNN) with the next-generation Robot Operating System (ROS 2), which enables decentralized implementation of robot software for multi-robot applications, we demonstrate the feasibility of programming a swarm of robots to recognize and respond to a sequence of hand gestures that correspond to different types of swarm behaviors. We test our approach using a sequence of gestures that modifies the target inter-robot distance in a group of three Turtlebot3 Burger robots in order to prevent robot collisions with obstacles. The approach is validated in three different Gazebo simulation environments and in a physical testbed that reproduces one of the simulated environments.

Recommended citation: Kakish, Z., Vedartham, S., & Berman, S. (2021). Towards decentralized human-swarm interaction by means of sequential hand gesture recognition. arXiv preprint arXiv:2102.02439.
Download Paper | Download Slides

Controllability and Stabilization for Herding a Robotic Swarm Using a Leader: A Mean-Field Approach

Published in IEEE Transactions on Robotics (T-RO), 2021

Abstract: In this article, we introduce a model and a control approach for herding a swarm of “follower” agents to a target distribution among a set of states using a single “leader” agent. The follower agents evolve on a finite state space that is represented by a graph and transition between states according to a continuous-time Markov chain (CTMC), whose transition rates are determined by the location of the leader agent. The control problem is to define a sequence of states for the leader agent that steers the probability density of the forward equation of the Markov chain. For the case when the followers are possibly interacting, we prove local approximate controllability of the system about equilibrium probability distributions. For the case when the followers are noninteracting, we design two switching control laws for the leader that drive the swarm of follower agents asymptotically to a target probability distribution that is positive for all states. The first strategy is open-loop in nature, and the switching times of the leader are independent of the follower distribution. The second strategy is of feedback type, and the switching times of the leader are functions of the follower density in the leader’s current state. We validate our control approach through numerical simulations with varied numbers of follower agents that evolve on graphs of different sizes, through a 3-D multirobot simulation in which a quadrotor is used to control the spatial distribution of eight ground robots over four regions, and through a physical experiment in which a swarm of ten robots is herded by a virtual leader over four regions.

Recommended citation: K. Elamvazhuthi, Z. Kakish, A. Shirsat and S. Berman, "Controllability and Stabilization for Herding a Robotic Swarm Using a Leader: A Mean-Field Approach," in IEEE Transactions on Robotics, vol. 37, no. 2, pp. 418-432, April 2021, doi: 10.1109/TRO.2020.3031237.
Download Paper | Download Slides

Using Reinforcement Learning to Herd a Robotic Swarm to a Target Distribution

Published in Distributed Autonomous Robotic Systems (DARS), 2022

Abstract: In this paper, we present a reinforcement learning approach to designing a control policy for a “leader” agent that herds a swarm of “follower” agents, via repulsive interactions, as quickly as possible to a target probability distribution over a strongly connected graph. The leader control policy is a function of the swarm distribution, which evolves over time according to a mean-field model in the form of an ordinary difference equation. The dependence of the policy on agent populations at each graph vertex, rather than on individual agent activity, simplifies the observations required by the leader and enables the control strategy to scale with the number of agents. Two Temporal-Difference learning algorithms, SARSA and Q-Learning, are used to generate the leader control policy based on the follower agent distribution and the leader’s location on the graph. A simulation environment corresponding to a grid graph with 4 vertices was used to train and validate the control policies for follower agent populations ranging from 10 to 1000. Finally, the control policies trained on 100 simulated agents were used to successfully redistribute a physical swarm of 10 small robots to a target distribution among 4 spatial regions.

Recommended citation: Kakish, Z., Elamvazhuthi, K., Berman, S. (2022). Using Reinforcement Learning to Herd a Robotic Swarm to a Target Distribution. In: Matsuno, F., Azuma, Si., Yamamoto, M. (eds) Distributed Autonomous Robotic Systems. DARS 2021. Springer Proceedings in Advanced Robotics, vol 22. Springer, Cham. https://doi.org/10.1007/978-3-030-92790-5_31
Download Paper | Download Slides

Effectiveness of Warm-Start PPO for Guidance with Highly Constrained Nonlinear Fixed-Wing Dynamics

Published in American Control Conference (ACC), 2023

Abstract: Reinforcement learning (RL) may enable fixed-wing unmanned aerial vehicle (UAV) guidance to achieve more agile and complex objectives than typical methods. However, RL has thus far struggled to achieve even minimal success on this problem; fixed-wing flight with RL-based guidance has only been demonstrated in the literature with reduced state and/or action spaces. In order to achieve full 6-DOF RL-based guidance, this study begins training with imitation learning from classical guidance, a method known as warm-starting (WS), before further training using Proximal Policy Optimization (PPO). We show that warm-starting is critical to successful RL performance on this problem. PPO alone achieved a 2% success rate in our experiments. Warm-starting alone achieved 32% success. Warm-starting plus PPO achieved 57% success over all policies, with 40% of policies achieving 94% success.

Recommended citation: C. T. Coletti, K. A. Williams, H. C. Lehman, Z. M. Kakish, D. Whitten and J. J. Parish, "Effectiveness of Warm-Start PPO for Guidance with Highly Constrained Nonlinear Fixed-Wing Dynamics," 2023 American Control Conference (ACC), San Diego, CA, USA, 2023, pp. 3288-3295, doi: 10.23919/ACC55779.2023.10156267.
Download Paper | Download Slides

Machine learning at the edge to improve in-field safeguards inspections

Published in Annals of Nuclear Energy, 2024

Abstract: Artificial intelligence (AI) and machine learning (ML) are near-ubiquitous in day-to-day life, from cars with automated driver-assistance to recommender systems, generative content platforms, and large language chatbots. Implementing AI as a tool for international safeguards could significantly decrease the burden on safeguards inspectors and nuclear facility operators. The use of AI would allow inspectors to complete their in-field activities quicker, while identifying patterns and anomalies and freeing inspectors to focus on the uniquely human component of inspections. Sandia National Laboratories has spent the past two and a half years developing on-device machine learning to develop both a digital and robotic assistant. This combined platform, which we term inspecta, has numerous on-device machine learning capabilities that have been demonstrated at the laboratory scale. This work describes early successes implementing AI/ML capabilities to reduce the burden of tedious inspector tasks such as seal examination, information recall, note taking, and more.

Recommended citation: Shoman, N., Williams, K., Balsara, B., Ramakrishnan, A., Kakish, Z., Coram, J., ... & Smartt, H. (2024). Machine learning at the edge to improve in-field safeguards inspections. Annals of Nuclear Energy, 200, 110398.
Download Paper | Download Slides

Heterogeneous Policy Networks for Composite Robot Team Communication and Coordination

Published in IEEE Transactions on Robotics (T-RO), 2024

Abstract: High-performing human–human teams learn intelligent and efficient communication and coordination strategies to maximize their joint utility. These teams implicitly understand the different roles of heterogeneous team members and adapt their communication protocols accordingly. Multiagent reinforcement learning (MARL) has attempted to develop computational methods for synthesizing such joint coordination–communication strategies, but emulating heterogeneous communication patterns across agents with different state, action, and observation spaces has remained a challenge. Without properly modeling agent heterogeneity, as in prior MARL work that leverages homogeneous graph networks, communication becomes less helpful and can even deteriorate the team’s performance. In the past, we proposed heterogeneous policy networks (HetNet) to learn efficient and diverse communication models for coordinating cooperative heterogeneous teams. In this extended work, we extend HetNet to support scaling heterogeneous robot teams. Building on heterogeneous graph-attention networks, we show that HetNet not only facilitates learning heterogeneous collaborative policies, but also enables end-to-end training for learning highly efficient binarized messaging. Our empirical evaluation shows that HetNet sets a new state-of-the-art in learning coordination and communication strategies for heterogeneous multiagent teams by achieving a 5.84% to 707.65% performance improvement over the next-best baseline across multiple domains while simultaneously achieving a 200× reduction in the required communication bandwidth.

Recommended citation: E. Seraj et al., "Heterogeneous Policy Networks for Composite Robot Team Communication and Coordination," in IEEE Transactions on Robotics, vol. 40, pp. 3833-3849, 2024, doi: 10.1109/TRO.2024.3431829.
Download Paper | Download Slides

CrazySim: A Software-in-the-Loop Simulator for the Crazyflie Nano Quadrotor

Published in IEEE International Conference on Robotics and Automation (ICRA), 2024

Abstract: In this work we develop a software-in-the-loop simulator platform for Crazyflie nano quadrotor drone fleets. One of the challenges in maintaining a large fleet of drones is ensuring that the fleet performs its task as expected without collision, and this becomes more challenging as the number of drones scales, possibly into the hundreds. Software-in-the-loop simulation is an important component in verifying that drone fleets operate correctly and can significantly reduce development time. The simulator interface that we develop runs an instance of the Crazyflie flight stack firmware for each individual drone on a commercial desktop machine, along with a sensor and communication plugin on Gazebo Sim. The plugin transmits simulated sensor information to the firmware along with a socket link interface to run external scripts that would be run on a ground station during hardware deployment. The plugin simulates a radio communication delay between the drones and the ground station to test offboard control algorithms and high-level fleet commands. To validate the proposed simulator, we provide a case study of decentralized model predictive control (MPC) that is run on a ground station to command a fleet of sixteen drones to follow a specified trajectory. We first run the controller on the simulator interface to verify performance and robustness of the algorithm before deployment to a Crazyflie hardware experiment in the Georgia Tech Robotarium.

Recommended citation: Llanes, C., Kakish, Z., Williams, K., & Coogan, S. (2024). CrazySim: A Software-in-the-Loop Simulator for the Crazyflie Nano Quadrotor. In IEEE International Conference on Robotics and Automation (ICRA).
Download Paper | Download Slides

Talks

Teaching
