

Self-organizing systems contain a large number of entities whose local-level interactions lead to collective emergent behavior at the global level. Our team seeks to better understand such systems, and is driven by five grand challenges in the science of complexity:


Quantifying and classifying diverse complex systems: This first challenge pertains to finding common modeling threads that connect different complex systems. For example, are vehicles on a roadway equivalent to a column of ants? Can a swarm of drones be modeled as a thermodynamic system?


Macrostate identification, estimation, and tracking: The second grand challenge relates to determining a reduced-order representation of large-scale complex systems. For example, do we need to know the state of each vehicle on a roadway to determine the macroscopic behaviors of traffic flow, such as phantom traffic jams?


Top-down control and inference in complex multi-agent systems: Another grand challenge is, given the observed macroscopic scale behavior of a system, what can we infer about the local interactions between agents?


Bottom-up synthesis of emergent behaviors: In this fourth grand challenge, we seek to determine the expected global behavior of a complex system, given only the local-level interactions between agents. For example, if we know how AI agents or search and rescue robots cooperate, can we predict their emergent team behavior?


Zones of influence: The fifth grand challenge is to determine when and where certain agents wield outsize influence on the macroscopic-scale behaviors of a complex multi-agent system. For example, are there 'special' zones of influence on roadways where a vehicle can have a significant impact on traffic congestion?

Scale-Dependent Observability of Emergent Dynamics: Application to Traffic Flow with Connected Vehicles

A single traffic flow model for 'any' spatiotemporal scale

Senior Investigator: Kshitij Jerath

Junior Investigator: Zhaohui (Brandon) Yang

Sponsor: National Science Foundation

Traffic flow modeling is typically performed at distinct scales: microscopic (modeling individual vehicles), mesoscopic (modeling clusters of vehicles), and macroscopic (modeling traffic as a fluid flow). Creating a single modeling framework for traffic flow can not only simplify analysis and reduce computational effort, but also provide an opportunity to select the modeling scale that best fits a desired goal, such as predicting congestion, platooning connected autonomous vehicles, or determining fuel-efficient driving behaviors. Our physics-inspired work borrows from the field of statistical mechanics and renormalization group theory. We have shown that, using just two vehicle-interaction and traffic flow parameters, we can systematically transform models from one scale to another while maintaining fidelity across multiple (but not arbitrarily large) spatial and temporal scales.
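The flavor of a real-space scale transformation can be conveyed with a minimal coarse-graining sketch. This toy (not the project's actual renormalization procedure, and ignoring the two traffic parameters mentioned above) simply averages out per-vehicle detail to obtain a mesoscopic density field:

```python
import numpy as np

def coarse_grain(density, block=2):
    """One illustrative real-space coarse-graining step: average adjacent
    road cells in blocks of length `block`. Block-averaging plays the role
    of integrating out microscopic (per-vehicle) degrees of freedom,
    leaving a density field at the next-coarser scale."""
    n = len(density) - len(density) % block        # trim to a multiple of block
    blocks = np.asarray(density[:n], dtype=float).reshape(-1, block)
    return blocks.mean(axis=1)                     # mesoscopic cell densities

# Microscopic snapshot: 1 = road cell occupied by a vehicle, 0 = empty.
micro = [1, 1, 0, 0, 1, 0, 1, 1]
meso = coarse_grain(micro, block=2)                # densities of 4 coarser cells
```

Repeated application of such a step moves the description from individual vehicles toward fluid-like macroscopic traffic, while conserving the mean density.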

Renormalization group theory applied to traffic flow. Each plot represents a traffic flow simulation starting with the same initial conditions, but where the dynamic model is systematically rescaled using statistical physics.

Trust Network Emergence Amongst Resource-Constrained Human-Agent Teams

Enabling AI agent teams to care and be 'socially'-aware

Senior investigators: Kshitij Jerath, Paul Robinette, and Reza Ahmadzadeh

Junior investigators: Alden Daniels, Akshay Kolli, Hossein Haeri, Zahra Rezaei Khavas, Yasin Findik, Hamid Osooli, Alok Malik, Monish Kotturu, Huy Huynh, Kalvin McCallum, Nathan Uhunsere, Mike Fisher, Ashwin Nair

Sponsor: DEVCOM Army Research Lab (ARL) via STRONG (Strengthening Teamwork for Robust Operations in Novel Groups) Collaborative Research Alliance (CRA)

Teams succeed because of the network of relationships they possess, and the emergent behaviors this network facilitates. These emergent behaviors arise from three constructs: (a) multiple agents in the team capable of taking actions, (b) interactions between the agents, and (c) the emergence of global-scale patterns due to those interactions. Our research strategy tackles each of these constructs as a separate task in the context of a search and rescue mission. Search and rescue operations are often severely resource-constrained in terms of time, energy, and information organization. Operating in such resource-constrained scenarios can impair the ability of human-agent teams to tackle complex problems, resulting in sub-optimal outcomes. In this project, we have been studying this problem from three angles: how team network structures affect performance in search and rescue, how multiple agents can learn together, and how humans trust the search-and-rescue agents. Our work has shown that resource-constrained teams prefer structures that are more self-oriented. Similarly, we found that specific network structures can guide learning agents toward more prosocial behaviors.
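One simple way network structure can steer learning toward prosocial behavior is through reward sharing over the team graph. The sketch below is an assumption on our part (the project's actual learning mechanism may differ): each agent's effective reward mixes its own reward with its neighbors' average, so a dense 'communitarian' network couples incentives while an edgeless 'survivalist' network leaves them purely individual:

```python
import numpy as np

def shared_reward(raw_reward, adjacency, w=0.5):
    """Illustrative reward-shaping over a team network (hypothetical rule,
    not the project's trained model). Agents with neighbors blend in the
    neighborhood-average reward with weight w; isolated agents keep their
    own reward unchanged."""
    A = np.asarray(adjacency, dtype=float)
    r = np.asarray(raw_reward, dtype=float)
    deg = A.sum(axis=1)
    # Neighborhood average where neighbors exist; fall back to own reward.
    neigh_avg = np.where(deg > 0, A @ r / np.maximum(deg, 1), r)
    return (1 - w) * r + w * neigh_avg
```

Under this rule, a fully connected two-agent team splits a rescue reward evenly, giving both agents an incentive to assist, whereas disconnected agents optimize only for themselves.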

Team network structures can have significant impact on what is learned by the collective. Our agents are able to learn behaviors that are representative of social structures.

Agents in teams with communitarian network structures (bottom left) learn to assist each other, while survivalist agents do not (top left).

Automated Discovery of Data Validity for Safety-Critical Feedback Control in a Population of Connected Vehicles

Databases can be forgetful - and that is a good thing!

Senior investigators: Kshitij Jerath, Cindy Chen

Junior investigators: Hossein Haeri, Lorina Sinanaj, Niket Kathiriya, Rinith Pakala, Eric Fan, Usha Sravani Ganta

Sponsor: National Science Foundation via Cyber Physical Systems (CPS) program

When does data expire? From ignoring years-old Yelp reviews to bypassing Waze driving directions that lead into construction zones, our society's cyber-mediated actions depend on trust in the validity of data stored, aggregated, and shared by remote databases that are updated in feedback with our decisions. This work is motivated by a cyber-physical transportation application: fleets of connected and autonomous vehicles (CAVs) driving on potentially icy roads, where safety-critical road friction information is shared via a wireless data link to a central database that mediates data averaging. If there is no more snow, does your connected vehicle need to drive slow? Our implementation-focused approach has developed novel algorithms that enable systematic forgetting of previously collected data, ensuring that our cyber systems base their decisions only on data that is valid in the current context. We have demonstrated that significant quantities of data stored in repositories can be successfully forgotten and abstracted without losing validity for decision making, using both real-time database implementations and stream machine learning techniques. Our work has made novel inroads by proposing concepts such as adaptive granulation of data in databases, stream learning that prioritizes model stability, and near-optimal algorithmic forgetting of expired data.
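The Allan variance referenced in the figure below has a standard non-overlapping estimator, sketched here as a guide to intuition (how it is wired into our granulation pipeline is not shown; the granule-selection rule below is a simplified assumption):

```python
import numpy as np

def allan_variance(x, m):
    """Non-overlapping Allan variance of series x at averaging window m:
    sigma^2(m) = 0.5 * mean( (ybar_{k+1} - ybar_k)^2 ),
    where ybar_k are means of consecutive blocks of length m. A low value
    at window m suggests samples within a granule of size m can be safely
    replaced by their average (and the raw samples forgotten)."""
    x = np.asarray(x, dtype=float)
    n = len(x) // m
    ybar = x[:n * m].reshape(n, m).mean(axis=1)    # block (granule) averages
    return 0.5 * np.mean(np.diff(ybar) ** 2)
```

Sweeping m and choosing the window that minimizes the Allan variance gives a data-driven granule size: each granule is stored as a single aggregate, and the expired raw samples are forgotten.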

Data stored in database granules can be aggregated and forgotten systematically using Allan variance techniques. We keep only those data that are relevant for current decision-making operations. Note the order of magnitude reduction in stored data.

Database implementation of data forgetting coupled with model predictive control for individual vehicles operating in Simulink. Vehicles send friction data to the database (top left), which forgets data and keeps only relevant information (bottom left). This is returned as a response to queries from other vehicles (right), completing the cyber physical feedback loop.

Traffic congestion mitigation using connected vehicles

Automated vehicles vs. Phantom traffic jams

Senior investigator: Kshitij Jerath

Junior investigator: Taehooie Kim

The formation of self-organized vehicle clusters, or phantom traffic jams, is known to occur when vehicular density exceeds a certain threshold, known as the critical density. In the absence of an external cause, one of the few ways of alleviating congestion is by changing the system internally, i.e., by modifying driver behavior. This project examines how connected autonomous vehicles in traffic flow can be leveraged to serve this purpose. We have analyzed congestion-aware CACC algorithms that respond to existing downstream congestion on the roadway, and examined the impact of such CACC-enabled vehicles on congestion for various penetration rates in a traffic system where such vehicles are randomly interspersed. Equally importantly, we answer a question that has remained open: where are the most impactful locations to disseminate information to connected vehicles in order to change traffic flow outcomes? We have developed the notion of zones of influence of connected vehicles, as well as null and event horizons, to move our understanding forward.
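To make "congestion-aware" concrete, here is a toy car-following law built on the well-known Intelligent Driver Model (IDM). This is our illustrative stand-in, not the project's actual CACC algorithm; the density-based speed reduction rule and all gains are assumptions:

```python
import math

def congestion_aware_accel(v, gap, v_lead, downstream_density,
                           v0=30.0, T=1.5, a=1.0, b=2.0, s0=2.0,
                           rho_crit=0.06, alpha=0.5):
    """Illustrative congestion-aware car-following sketch (IDM-based).
    A connected vehicle lowers its desired speed when the broadcast
    downstream density exceeds a critical value rho_crit (veh/m),
    smoothing its approach to the jam instead of braking hard at it."""
    # Assumed rule: shrink target speed in proportion to excess density.
    excess = max(0.0, downstream_density - rho_crit)
    v_des = v0 / (1.0 + alpha * excess / rho_crit)
    # Standard IDM desired-gap term and acceleration.
    s_star = s0 + v * T + v * (v - v_lead) / (2.0 * math.sqrt(a * b))
    return a * (1.0 - (v / v_des) ** 4 - (s_star / gap) ** 2)
```

With no downstream congestion the vehicle accelerates freely toward v0; when V2V data reports a downstream jam, the same vehicle in the same local state accelerates less, which is the mechanism by which interspersed CACC vehicles can damp phantom jams upstream.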

The concept of zones of influence of Connected Vehicles (CVs) and event horizons in freeway traffic. The four regions demarcate the zones where CVs have different impacts on traffic flow (i.e., the macrostate). Our results show that zones of influence can span several kilometers near bottlenecks, providing significant opportunities to modify traffic flow.

Long-Term Underwater Autonomy for Surveillance and Manipulation

Enabling robot cooperation when communication is hard

Senior investigators: Holly Yanco, Reza Ahmadzadeh, Kshitij Jerath, Maru Cabrera, Adam Norton, Paul Robinette

Junior investigators: Kshitij Srivastava, Anveshak Rathore, Brendan Donoghue, Ernie Pellegrino, Ponita Ty, Rachel Major, Sal Sicari

Sponsor: Office of Naval Research (ONR)

Intelligent underwater robots (both autonomous underwater vehicles, AUVs, and remotely operated vehicles, ROVs) often need to operate in low-bandwidth and highly complex environments performing surveillance, inspection, and maintenance tasks. These tasks rely on effective robot perception and manipulation capabilities, both with and without a human diver present in the operating area. Accomplishing these tasks requires the development of long-term autonomy technologies. Within this context, teams composed of several humans and autonomous vehicles can have a multiplicative effect on the performance of complex missions compared to those carried out by a single individual. However, resource constraints such as low bandwidth and poor visibility can significantly limit team performance and operational success. Our work seeks to resolve these issues in low-bandwidth operating environments by studying: (a) how can we succinctly represent real-time sensory information collected by the team over long timescales (temporal data aggregation), (b) how can we generate concise control actions for the team that are applicable across diverse timescales (supervisory control), and (c) how can we facilitate these two processes by creating effective reduced-order models of the team (macrostate modeling)?

Event and data aggregation is a key challenge in low-bandwidth environments. This issue also arises in the supervisory control problem, where supervisors need to communicate plans of action with underwater individuals (humans or robots).

Influence on robot collectives inspired by thermodynamics, entropy, and impedance control

Human-guided swarms

Senior investigator: Kshitij Jerath

Junior investigators: Mitchell Scott, Spencer Barclay, Hossein Haeri, Daniel Kusmaul

As the potential for societal integration of multi-agent robotic systems increases, so does the need to manage the collective behaviors of such systems. Agent-agent interactions in a swarm of small unmanned aerial systems (sUAS) lead to the emergence of collective behaviors that enable effective coverage and exploration across large spatial extents. However, the same inherent collective behaviors can occasionally limit the ability of the sUAS swarm to focus on specific objects of interest during coverage or exploration missions. Our work has focused on creating macroscopic models and fine-tuned, intuitive interfaces that allow a human supervisor to influence or guide an sUAS swarm with dynamic levels of incursion on the decentralized control afforded by these systems. With the objective of creating more predictable behaviors, this approach can enable full utilization of swarm capabilities while retaining an ongoing macroscopic level of control over the swarm. We demonstrated this capability through experiments in a virtual reality environment by using an impedance control-inspired method to guide 16 drones through a canyon.
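The impedance control-inspired idea can be sketched as a virtual spring-damper force that a human supervisor attaches to the swarm. This is a minimal illustration under assumed gains, not our tuned controller: each drone feels a nudge toward a human-placed attractor, which superposes with (rather than overrides) the swarm's own decentralized flocking accelerations:

```python
import numpy as np

def impedance_nudge(pos, vel, attractor, k=0.8, b=0.4):
    """Illustrative impedance-style human nudge (gains k, b are assumed).
    Returns the virtual force F = k * (attractor - pos) - b * vel felt by
    one drone; summing this with the drone's decentralized flocking terms
    biases the swarm toward the attractor without centralizing control."""
    pos = np.asarray(pos, dtype=float)
    vel = np.asarray(vel, dtype=float)
    target = np.asarray(attractor, dtype=float)
    return k * (target - pos) - b * vel

# A drone at rest at the origin is nudged toward an attractor at (10, 0);
# the damping term -b*vel keeps the resulting motion from overshooting.
f = impedance_nudge([0.0, 0.0], [0.0, 0.0], [10.0, 0.0])
```

Raising k strengthens the supervisor's incursion on decentralized control; lowering it hands autonomy back to the swarm, which is the "dynamic level of incursion" dial described above.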

Human-in-virtual reality-loop: A human can provide nudges to a swarm and guide it towards desired objectives.

Virtual reality framework for performing macroscopic-level control of the swarm. The objective here is to guide a swarm through a canyon.

DECISIVE: Development and Execution of Comprehensive and Integrated Subterranean Intelligent Vehicle Evaluations

Testing drones with designed “crashes”

Senior investigators: Holly Yanco, Reza Ahmadzadeh, Kshitij Jerath, Adam Norton, Paul Robinette, Jay Weitzen, Thanuka Wickramarathne

Junior investigators: Edwin Meriaux, Gregg Willcox, Minseop Choi, Ryan Donald, Brendan Donoghue, Christian Dumas, Peter Gavriel, Alden Giedraitis, Brendan Hertel, Jack Houle, Nathan Letteri, Zahra Rezaei Khavas, Rakshith Singh, Naye Yoni

Sponsor: U.S. Army Combat Capabilities Development Command Soldier Center

Modern small unmanned aerial systems (sUAS) platforms are being designed for and used in a wide variety of operating environments and application scenarios, such as search and rescue. Within this wide scope of applications, it is intuitive to hypothesize that the performance of different sUAS platforms will vary depending on the use case being tested. The goal of this project is to evaluate several sUAS platforms and develop the ability to determine the 'best' sUAS for a specific mission or use case - a capability that may prove immensely helpful in the deployment of such platforms in indoor and subterranean (subT) environments. Our work created four tests that form a practical evaluation methodology for sUAS platform performance in two general areas: navigation and collision tolerance. The navigation tests cover a spectrum of cases such as wall-following, linear path traversal, corner navigation, door navigation, and aperture navigation. These individual tests capture critical maneuvers a sUAS needs to be able to conduct in various subT missions. For the collision tolerance tests, we created categorical and numerical metrics (inspired by vehicle collision research) in which the drone collides with different obstacles at different incidence angles. Our test results show that in some collision tolerance conditions it is easy to distinguish better-performing drones, while in others the distinction is much more nuanced.
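The numerical collision metrics mentioned in the figure below (maximum delta-v and acceleration severity) can be computed directly from an accelerometer trace. The sketch here uses generic definitions from vehicle collision analysis; the exact formulas and thresholds in our evaluation may differ:

```python
import numpy as np

def collision_metrics(accel, dt):
    """Illustrative collision-tolerance metrics from a 1-D accelerometer
    trace sampled at interval dt (definitions assumed, generic to vehicle
    collision analysis):
      - max delta-v: largest magnitude of the velocity change accumulated
        through the impact (integral of acceleration),
      - peak |a|: maximum acceleration magnitude, a severity proxy."""
    accel = np.asarray(accel, dtype=float)
    delta_v = np.cumsum(accel) * dt      # running velocity change (m/s)
    return np.abs(delta_v).max(), np.abs(accel).max()

# A short impact: two samples of -10 m/s^2 deceleration at 100 Hz.
max_dv, peak_a = collision_metrics([0.0, -10.0, -10.0, 0.0], dt=0.01)
```

A lower delta-v and peak acceleration for a given impact speed and incidence angle indicate a more collision-tolerant airframe, which is how the numerical metrics separate drone platforms.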

Underground testing space at the NERVE center. We used ultra-wide band localization to obtain navigation and collision tolerance metrics for several drone platforms.

Our vehicle collision research-inspired work uses acceleration severity and maximum delta-v metrics to study drone performance.

Individualized Adaptations to Calibrate Multi-Human Multi-Agent Team Trust

Infusing trustworthiness in robots

Senior investigators: Paul Robinette, Kshitij Jerath, Reza Ahmadzadeh

Junior investigator: Russ Perkins

Sponsor: DEVCOM Army Research Lab (ARL) via STRONG (Strengthening Teamwork for Robust Operations in Novel Groups) Collaborative Research Alliance (CRA)

An individual human or autonomous agent will trust teammates to perform tasks based on prior experience with those agents, situational factors, their own propensity to trust, and the characteristics of the agents themselves. In many scenarios, teammates may not have significant prior experience with each other, so a trust decision will be based on the small subset of each teammate's abilities observed so far. Agents currently lack capabilities that would enable a human to calibrate over- or under-trust in them. In our work, we designed strategies for an agent to convince its teammates that it should or should not be trusted, so as to become an effective teammate, including: (a) agent adaptation: changing itself to align with human trust requirements; and (b) human preparation: helping humans learn the actual capabilities of the agents to better calibrate their trust in these agents.

Computational HAT model of status sensitivity to facilitate team trust and performance under suboptimal conditions

Forming first impressions of robots

Senior investigators: Kshitij Jerath, Paul Robinette, Reza Ahmadzadeh

Junior investigators: Hamid Osooli, Mike Fisher, Nathan Uhunsere

Sponsor: DEVCOM Army Research Lab (ARL) via STRONG (Strengthening Teamwork for Robust Operations in Novel Groups) Collaborative Research Alliance (CRA) via University of Delaware

Social status is a critical factor that fundamentally shapes how humans interact and impacts our trust and cooperation. In military contexts, rank and competency play pivotal roles and shape team interactions. Yet, it remains unclear how non-human agents are integrated as teammates and how they alter emergent team states and processes. Team performance varies across different environments, suggesting that context is also critical in shaping emergent team states and processes. This project examines how robotic systems can be designed to better assist in the study of human interactions within the context of impression formation of robots.

Development of a Calibration System for Stereophotogrammetry to Enable Large-Scale Measurement and Monitoring

Using lasers to measure drone proximity

Senior investigators: Alessandro Sabato, Christopher Niezrecki, Yan Luo, Kshitij Jerath

Junior investigators: Michael Buckley, Zachary Seguin, Fabio Bottalico, Austin Mackey

Sponsor: National Science Foundation via Major Research Instrumentation program

Existing stereophotogrammetry measurements rely on a camera pair calibrated using a cumbersome procedure that requires the cameras' relative positions to be fixed in space during measurements. We have developed a sensing system based on a one-of-a-kind suite of integrated sensors that can determine the 3D-DIC extrinsic calibration parameters in real time, thereby eliminating the need for calibration scale bars, as well as the requirement that the cameras remain fixed once set up. This transformative new approach to computer-vision measurement can be used for long-term, full-field structural health monitoring and assessment of physical parameters such as displacement, deformation, strain, and structural dynamics in a variety of engineering and geographical science domains. The system not only streamlines the calibration procedure, but also enables stereophotogrammetry measurements to be made from moving platforms, such as unmanned vehicles or drones.