
Overview

Autonodyne is a Boston-based AI software company. We got our start in aviation but have since branched out into the land and sea domains, using our software to move one, a few, or many dissimilar vehicles autonomously across air, land, and sea.

 

Autonomy is sometimes a difficult thing to define. There is a long-term model in which human involvement may eventually be eliminated entirely; for the foreseeable future, however, Autonodyne believes a supervisory human role will be essential. By continuing to develop new autonomous behaviors for uncrewed vehicles, we are fostering the age of true autonomy and AI, providing “additive autonomy” and sophisticated “Reasoning-On-The-Edge” (ROTE™) to enable Unmanned Vehicle (UV) products and services.

We subscribe to the school of thought described in ‘Our Robots, Ourselves’: humans and machines work together, trading control and shifting levels of automation to suit the situation at hand. At certain times and places the vehicle is highly autonomous; at others, more human involvement is needed.

Software to Use on One or Many Unmanned Vehicles (UVs)

Our portfolio of work includes control capabilities for a broad spectrum of vehicles, and we have developed autonomous behaviors that permit operating groups of vehicles (swarms or hives) in collective formations and maneuvers.

Sometimes operators want to pilot different types of vehicles simultaneously during a mission or need the vehicles to be able to interact as a team. Allowing dissimilar vehicles to work together is a cornerstone of our autonomous control technologies.

Land UVs (UGVs)

Air UVs (UAVs)

Sea UVs (USVs)

Image: dissimilar vehicles operating together across multiple domains

We began with air vehicles (the air domain) but have since applied our technology to the land domain (unmanned ground vehicles, or UGVs) and to the sea domain, both on the surface (unmanned surface vehicles, or USVs) and underwater (unmanned underwater vehicles, or UUVs).

This image shows a single operator on a dock only a few hundred meters away from our Boston office simultaneously controlling a UGV driving on the dock, a tethered UUV operating below the surface, and a UAV taking off and landing on a homemade USV "aircraft carrier".

Common teamings involve UAVs and USVs cueing objects of interest for other UV teammates to check out.

Human-System Interface

Human-system interface describes how humans and machines work together. It combines software, a user interface, and autonomy to create a team of humans and robots that is more capable together than either is alone.

It includes planning, conducting, and analyzing a mission (i.e., mission planning, mission execution, and mission debrief). We tailor the interface to the scenario you find yourself in: we have used voice control for UxS since 2019, have experimented with augmented and virtual reality since 2018, and support gesture control and all forms of tactile controllers, so you have flexibility and options.

If the human-machine team is performing a complex operation, or if large numbers of robots are involved, an advanced and powerful interface is required. What human can think for 20, 100, or 1,000 robots?

This is where we come in: we combine the right user interface with advanced autonomy software. In our opinion, one without the other is an incomplete solution.

Vehicle Control & Management

As we progress from one human controller responsible for a single vehicle (“human-in-the-loop”) to a human monitoring multiple missions performed by multiple vehicles (“human-on/over-the-loop”) to, perhaps eventually, no humans involved (“What’s-a-loop?”), Autonodyne is striving to create modern interfaces designed with the user and the mission in mind.

 

We are working not to build engineering interfaces for engineers but to find that perfect design that blends simplicity and power. We aim to give the human operator/supervisor/monitor the right level of situational awareness and the ability to effect changes when needed. From 100% manual input (e.g., joystick and throttle control) and optional voice control, to approving a suggested course of action, to simply conveying “commander intent”, the Autonodyne control stations are that powerful form of functional artwork.

A CBX and ruggedized tablet running Autonodyne software

Networked, Collaborative, Autonomous (NCA) Systems

An infographic about the Spectrum of Control

Networked, Collaborative, Autonomous (NCA) systems describe a capability that needs little to no human involvement.

 

These systems largely think for themselves to conduct highly complex operations. Here we apply machine learning techniques to create software “AI agents” trained to sense their environment, build a shared “world view”, ingest human or commander intent, and decide on their own how to collaboratively accomplish the mission.
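As a rough illustration of that cycle, the sketch below (Python, with entirely hypothetical names, and not Autonodyne's software or APIs) shows the shape of one agent tick: sense, merge observations into a shared world view, ingest commander intent, and choose an action.

# Minimal sketch (not Autonodyne's software) of the per-agent loop described
# above: sense, merge into a shared world view, read commander intent, decide.
# Every name here is hypothetical and stands in for a trained policy.

class Agent:
    def __init__(self, agent_id):
        self.agent_id = agent_id

    def sense(self):
        # Placeholder perception: report one contact with a position.
        return {f"contact-{self.agent_id}": (self.agent_id * 10.0, 5.0)}

    def decide(self, world_view, intent):
        # Stand-in policy: investigate any contact inside the commander's area,
        # otherwise keep searching. A learned policy would replace this logic.
        x_min, y_min, x_max, y_max = intent["area"]
        for contact, (x, y) in world_view.items():
            if x_min <= x <= x_max and y_min <= y <= y_max:
                return f"agent {self.agent_id}: investigate {contact}"
        return f"agent {self.agent_id}: continue search pattern"


intent = {"objective": "surveil area", "area": (0.0, 0.0, 100.0, 100.0)}
agents = [Agent(i) for i in range(3)]
world_view = {}

# One tick of the collaborative loop: all agents sense, the world view is
# shared, and each agent chooses its own action with no direct human control.
for agent in agents:
    world_view.update(agent.sense())
for agent in agents:
    print(agent.decide(world_view, intent))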

In a recent year-long DoD program, we trained a group of 30-50 of these NCA agents (each vehicle running an identical copy of the AI agent on board) to fly into a heavily defended anti-access/area-denial (A2/AD) environment and conduct military operations. Humans monitored the mission but did not directly control any of it.

Many Behaviors Increase Capabilities

Autonodyne’s library of software behaviors permits your vehicle or team of vehicles to perform a variety of mission-specific maneuvers. 

Commanding a vehicle with a set of behaviors lets the humans in the human-machine team make better-informed decisions, expand their reach and access, and increase safety and productivity, while the vehicles focus on what they do best.
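Purely as an illustration (Python, hypothetical names, not Autonodyne's behavior library or its API), commanding a vehicle with a set of named behaviors from a library might look like this:

# Hypothetical illustration of a behavior library: named maneuvers are
# registered once, then queued against a vehicle for execution.

BEHAVIORS = {}

def behavior(name):
    """Register a maneuver under a human-readable name."""
    def register(func):
        BEHAVIORS[name] = func
        return func
    return register

@behavior("orbit")
def orbit(vehicle, radius_m=100):
    return f"{vehicle} orbiting at {radius_m} m radius"

@behavior("follow-road")
def follow_road(vehicle):
    return f"{vehicle} following the road network"

def command(vehicle, behavior_names):
    """Queue a set of behaviors for one vehicle and report what it will do."""
    return [BEHAVIORS[name](vehicle) for name in behavior_names]

print(command("UAV-1", ["orbit"]))
print(command("UGV-2", ["follow-road"]))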

 

For a complete inventory of our behaviors, see Autonomy Behaviors.
