
Overview

Autonodyne is a Boston-based AI software company. We got our start in aviation but have since expanded across the air, sea, and land domains, using our software to move one, a few, or many dissimilar vehicles autonomously.

Autonomy is sometimes a difficult thing to define. In the long term, autonomy may eventually eliminate all human involvement; for the foreseeable future, however, Autonodyne believes a supervisory human role will remain essential. By continuing to develop new autonomous behaviors for uncrewed vehicles, we are fostering the age of true autonomy and AI, providing “additive autonomy” and sophisticated “Reasoning-On-The-Edge” (ROTE™) to enable Unmanned Vehicle (UV) products and services.

We subscribe to the school of thought described in ‘Our Robots, Ourselves’: the human and machine work together, trading control and shifting levels of automation to suit the situation at hand. At certain times and places the vehicle operates with a high degree of autonomy, while at others more human involvement is needed.

Software to Use on One or Many Unmanned Vehicles (UVs)

Our portfolio of work includes control capabilities for a broad spectrum of vehicles, and we have developed autonomous behaviors that permit operating groups of vehicles (swarms or hives) in collective formations and maneuvers.

Sometimes operators want to pilot different types of vehicles simultaneously during a mission or need the vehicles to be able to interact as a team. Allowing dissimilar vehicles to work together is a cornerstone of our autonomous control technologies.


Land UVs


Air UVs


Sea UVs

Connections and Communication

Autonodyne’s software engines are link-agnostic, minimizing transmission and communication problems. The software enables Line-of-Sight (LOS) or Beyond-Visual-Line-of-Sight (BVLOS) data communication to any combination of UVs.

When we don’t have communication connectivity with a vehicle, our on-board processing uses our “Reasoning-On-The-Edge” (ROTE™) capability. This allows the mission to continue and the vehicle to report back to the human supervisor, providing situational awareness when/if connectivity is reestablished.

Autonodyne already supports a wide range of datalink options and message-set standards, providing flexibility and adaptability to existing vehicle configurations.

Control Input Options


Sometimes it makes sense to use alternative control input methods. Our software is designed to support a variety of human-machine interface devices, from the traditional mouse and keyboard to commonplace game controllers; we have also previously supported mixed- and augmented-reality devices, voice control, and gesture control. Please ask our sales team to help you with your selection: sales@autonodyne.com.

Many Behaviors Increase Capabilities

Autonodyne’s growing library of software behaviors permits your vehicle or team of vehicles to perform a variety of mission-specific maneuvers.

Commanding a vehicle with a set of behaviors allows the humans in the human-machine team to make better-informed decisions, expand their reach and access, and increase safety and productivity, while permitting the vehicles to focus on what they do best. For a complete inventory of our behaviors, see Behaviors.
