
Autonomy Behaviors

Behaviors Magnify Functionality

Autonodyne continues to build a comprehensive library of autonomous behaviors, or skills, that can be used to perform missions. These behaviors control a wide range of autonomous, semi-autonomous, and highly automated vehicles (air, sea, and land). Each behavior below can be individually and manually selected to populate a “task list” or “mission plan” for one or more vehicles. The name of the behavior is the lexicon we use both to command the behavior and to indicate to the human operator/supervisor what the vehicle or vehicles are doing. Aside from designating waypoints and points of interest, the operator does not need to do anything else to run the mission; even designating Fly Over That waypoints becomes unnecessary when the autonomous path-planning algorithm is also running.

We are now moving to a point where the software, through semantic reasoning or "sense-making," is starting to stitch these behaviors together automatically. The software works to determine optimal courses of action (COAs) and, based on those COAs, suggests a sequence of behaviors. The operator can do nothing and let the plan run its course, manually approve it, or reject/modify it. We are building a capability where these software “agents” create a much more capable human-machine team.
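As a rough, purely illustrative sketch of this idea (Python; the BehaviorTask and CourseOfAction classes and the example parameters are invented here, not Autonodyne's interfaces), a suggested course of action can be modeled as a sequence of named behaviors that the operator approves, rejects, or modifies:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class BehaviorTask:
    """One entry in a vehicle's task list, e.g. 'Fly Over That' or 'Loiter'."""
    name: str                      # lexicon name shown to the operator
    params: dict = field(default_factory=dict)

@dataclass
class CourseOfAction:
    """A machine-suggested sequence of behaviors awaiting operator review."""
    tasks: List[BehaviorTask]
    status: str = "SUGGESTED"      # SUGGESTED -> APPROVED / REJECTED / MODIFIED

    def approve(self):
        self.status = "APPROVED"

    def modify(self, index: int, task: BehaviorTask):
        self.tasks[index] = task
        self.status = "MODIFIED"

# Example: the reasoning layer proposes a simple ISR sequence;
# the operator approves it (or could reject or modify it instead).
coa = CourseOfAction(tasks=[
    BehaviorTask("Fly Over That", {"waypoint": (42.36, -71.06, 400)}),
    BehaviorTask("Point/Stare At That", {"poi": (42.37, -71.05)}),
    BehaviorTask("Return-To-Base", {"recovery": "PAD_1"}),
])
coa.approve()
print(coa.status, [t.name for t in coa.tasks])
```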

For more information regarding Autonomy Behaviors, please click here (posted 26 October 2019).

Autonodyne puts the human in a supervisory role commanding high-level behaviors. We use rich sensor data and algorithmically enhanced models of the environment to move across the spectrum of automation, moment by moment, driving in and out of clouds of autonomy and risk.

We make extensive use of agent technology in appropriate places such as calculating and suggesting re-routing options, looking up and suggesting procedures, drawing upon a database of past events to offer situationally appropriate suggestions, and off-loading high human workload/highly deterministic tasks.

[Image: AI software identifying items in Times Square]

Autonodyne is stitching together artificial intelligence capability with the goal that it can one day serve as a trusted agent for humans. When we get there, this will relieve many of the cognitive and physical burdens on humans and radically expand the art of the possible.


This is more than just a force multiplier – it has the power and potential to unlock so much more.

[Image: our list of behaviors]

Vehicle Control Behaviors

Fly Over That

Exists Now

Waypoint Navigation – this command instructs the vehicle to proceed directly to a designated waypoint.

Loiter (Hold Over There)

Exists Now

Commands a vehicle to perform a loiter pattern (circular, figure-8, racetrack, etc.) at a designated location with default or designated parameters (e.g., altitude, leg length, direction of turn).

Hover

Exists Now

Vertical Takeoff/Land (VTOL) air vehicles hover in-place with options to define the hover height and hover duration.

Freeze

Exists Now

Commands a fixed-wing aircraft to enter a circular orbit; VTOL, uncrewed ground, and uncrewed maritime vehicles freeze in place.

Follow It

Exists Now

Keeps a defined X-Y-Z offset from the designated “lead” vehicle.
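A minimal sketch of the offset-keeping math, assuming a local east-north-up frame and a lead body frame with X forward, Y right, Z up (illustrative only, not the fielded control law):

```python
import numpy as np

def follow_it_target(lead_position, lead_heading_rad, offset_xyz):
    """Return the follower's commanded position: a fixed X-Y-Z offset
    held in the lead vehicle's body frame (X forward, Y right, Z up)."""
    c, s = np.cos(lead_heading_rad), np.sin(lead_heading_rad)
    dx, dy, dz = offset_xyz
    # Rotate the body-frame offset into the local east-north frame
    # (heading measured clockwise from north).
    east  = dx * s + dy * c
    north = dx * c - dy * s
    return np.asarray(lead_position) + np.array([east, north, dz])

# Hold station 50 m behind, 20 m right of, and 10 m above the lead.
print(follow_it_target((0.0, 0.0, 100.0), np.radians(90.0), (-50.0, 20.0, 10.0)))
```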

Hold Current/Commanded

Exists Now

Holds a commanded vehicle speed, heading, and altitude indefinitely.

Return-To-Base (RTB)

Exists Now

Creates a specific 3D profile to return the vehicle to a designated recovery location.

Deliver

Exists Now

Automatically computes a path for delivery of a vehicle or supplies, executes that path, performs a precision landing, and releases the payload.

Path Plan/"5Ds"

Exists Now

Computes a path around any known or sensed static or dynamic obstacles and continuously computes an optimal path. Works in real-time and in 2D and 3D. Uses spline interpolation to more accurately calculate optimal maneuvers and paths that reflect real-world vehicle turning dynamics. Takes cost functions into account to assist in “dodge, duck, dive, dart, and dodge” behaviors for obstacle avoidance.
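A toy sketch of the two ingredients named above, spline interpolation of a coarse route and an obstacle-clearance cost term (Python with SciPy; the waypoints, clearance value, and cost form are invented for illustration):

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Coarse 2D route from a grid/graph planner (x, y in metres).
waypoints = np.array([[0, 0], [40, 10], [80, -5], [120, 30]], dtype=float)

# Spline interpolation smooths the corners into a path that better
# reflects real-world vehicle turning dynamics.
s = np.linspace(0, 1, len(waypoints))
spline = CubicSpline(s, waypoints, axis=0)
path = spline(np.linspace(0, 1, 200))          # densified smooth path

def obstacle_cost(path_xy, obstacles, clearance=15.0):
    """Sum of penalties for path points that come within `clearance`
    of any known or sensed obstacle (the 'dodging' cost term)."""
    cost = 0.0
    for ox, oy in obstacles:
        d = np.hypot(path_xy[:, 0] - ox, path_xy[:, 1] - oy)
        cost += np.sum(np.maximum(0.0, clearance - d))
    return cost

print(obstacle_cost(path, obstacles=[(60.0, 0.0)]))
```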

Follow This Path

Exists Now

When selected, the operator traces a desired path on the 2D map and the designated vehicle(s) follow it as a ground track.

Point/Stare At That

Exists Now

The uncrewed vehicle follows a path that keeps its sensors aligned on the point-of-interest (POI). If the sensor is gimballed, the orientation of the vehicle is irrelevant.
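For the gimballed case, keeping the sensor on the POI reduces to continuously recomputing the look angles from the vehicle to the POI; a minimal sketch in local east-north-up coordinates (illustrative geometry only):

```python
import math

def gimbal_angles(vehicle_enu, poi_enu):
    """Azimuth (from north, clockwise) and depression angle that point a
    gimballed sensor from the vehicle at the point-of-interest."""
    de = poi_enu[0] - vehicle_enu[0]     # east
    dn = poi_enu[1] - vehicle_enu[1]     # north
    du = poi_enu[2] - vehicle_enu[2]     # up (negative when looking down)
    azimuth = math.degrees(math.atan2(de, dn)) % 360.0
    depression = math.degrees(math.atan2(-du, math.hypot(de, dn)))
    return azimuth, depression

# Vehicle at 120 m AGL looking at a ground POI about 300 m to the north-east.
print(gimbal_angles((0.0, 0.0, 120.0), (212.0, 212.0, 0.0)))
```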

Point At That

Exists Now

The uncrewed vehicle maneuvers as required to continually point at the designated point of interest.

Funnel

Future Work

Considered a “standing” lasso, it funnels or necks vehicles down to a single point. Once vehicles have been directed into the funnel, sequencing and deconfliction are automatically handled, even for heterogeneous vehicles, down to the target point (typically a landing area).

Intelligence, Surveillance, & Reconnaissance (ISR) Behaviors

Light It Up

Exists Now

A form of sensor-based navigation for vehicles equipped with sensors optimized for certain conditions (e.g., detection of a specific type of RF emitter). When a sensor gets “a hit”, the vehicle automatically repositions itself to achieve the ideal geometry so the sensor can precisely determine the location of the item of interest.

Track

Exists Now

This uses either the inherent target track capability of the vehicle under control (e.g., Raytheon Coyote) or our own on-board image processing/identification/tracking capability. The vehicle will move as needed to retain the lock on the target of interest.

Follow Me/Tether

Exists Now

Principally intended for sUAS and overwatch mode. The sUAS will follow a designated POI (for example, you) at a specified X/Y/Z offset.

Perch

Exists Now

Directs a small UAS (sUAS) to a designated perch location, where it lands and observes.

Observe

Exists Now

Trains sensors on a designated point or area of interest and maintains that observation state for as long as feasible.

Inspect

Exists Now

After a structure to be inspected is designated, this behavior computes an ideal 3D inspection pattern that takes vehicle performance and sensor type/capability into account, then commands that pattern.

Monitor

Exists Now

Slightly different from Observe or Surveil. Whereas Observe can draw conclusions from viewing the scene, Monitor is a non-attribution behavior. For example, Monitor may count all vehicles passing over a bridge, but it does not store any parameters or attributes about those vehicles.

Document

Exists Now

Used as an aid to post-mishap investigation efforts by documenting an accident or crime scene.

Envelop

Exists Now

After assets are selected and a point-of-interest (POI) is identified, they all take up observation positions around the POI. If 3 vehicles are involved, they take a 120-degree spread offset a distance away from the POI. If 36 vehicles, they take a 10-degree spread.
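The spread rule above is simply an even angular division of 360 degrees by the number of vehicles; a sketch (the standoff radius is an invented parameter):

```python
import math

def envelop_positions(poi_xy, n_vehicles, standoff_m=200.0):
    """Place n_vehicles evenly around the POI: 3 vehicles -> 120 deg apart,
    36 vehicles -> 10 deg apart, all at the same standoff distance."""
    spread = 360.0 / n_vehicles
    positions = []
    for i in range(n_vehicles):
        theta = math.radians(i * spread)
        positions.append((poi_xy[0] + standoff_m * math.cos(theta),
                          poi_xy[1] + standoff_m * math.sin(theta)))
    return positions

print(envelop_positions((0.0, 0.0), 3))   # three observation positions, 120 deg apart
```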

Surveil/Survey

Exists Now

Intelligently divides the selected geographic area into the most appropriate surveillance patterns based on the vehicles tasked, their fuel/energy states, and the sensors they have onboard. Maintains a persistent presence.
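One simple way to weight the division by fuel/energy state is to split the area into strips proportional to each vehicle's remaining endurance; a sketch under that assumption (the rectangular area and endurance figures are invented):

```python
def split_area_by_endurance(x_min, x_max, endurances_hr):
    """Divide a rectangular search area into east-west strips whose widths
    are proportional to each vehicle's remaining endurance."""
    total = sum(endurances_hr)
    strips, left = [], x_min
    for e in endurances_hr:
        right = left + (x_max - x_min) * e / total
        strips.append((left, right))
        left = right
    return strips

# Three vehicles with 2 h, 1 h, and 1 h of endurance split a 4 km wide area.
print(split_area_by_endurance(0.0, 4000.0, [2.0, 1.0, 1.0]))
```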

Sentinel

Exists Now

When the tripwire of external or onboard sensors is activated, the vehicle(s) launch and provide a perimeter defense/monitoring capability. Until the virtual tripwire is triggered, the UAS remains on the ground in watching/listening mode, almost indefinitely.

Comm Relay

Work In Progress

Places the vehicle into an appropriate loiter or path to serve as a communications relay. It automatically computes the appropriate position and altitude to maintain line-of-sight comm connections.
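A back-of-envelope sketch of the line-of-sight piece: the minimum relay altitude needed to keep two ground nodes within the radio horizon over smooth terrain, using the standard 4/3-earth refraction model (an assumed model, not necessarily the one the product uses):

```python
import math

EFFECTIVE_EARTH_RADIUS_M = 6371e3 * 4.0 / 3.0   # standard 4/3-earth refraction model

def radio_horizon_m(height_m):
    """Distance to the radio horizon for an antenna at height_m."""
    return math.sqrt(2.0 * EFFECTIVE_EARTH_RADIUS_M * height_m)

def min_relay_altitude_m(dist_to_node_a_m, dist_to_node_b_m, node_height_m=2.0):
    """Lowest relay altitude from which both ground nodes remain line-of-sight,
    assuming the relay loiters between them over flat terrain."""
    horizon_of_nodes = radio_horizon_m(node_height_m)
    worst = max(dist_to_node_a_m, dist_to_node_b_m) - horizon_of_nodes
    return (worst ** 2) / (2.0 * EFFECTIVE_EARTH_RADIUS_M) if worst > 0 else 0.0

# Relay loitering 40 km from one node and 60 km from the other.
print(round(min_relay_altitude_m(40e3, 60e3)), "m minimum relay altitude")
```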

Offensive Behaviors

Impact

Exists Now

Drives the vehicle directly into the designated point of interest. This is a less sophisticated version of Strike.

OCA

Exists Now

Capable of performing Offensive Counter Air roles. This behavior actually has a number of subset behaviors which initially appear as a series of “plays”. This is first being employed as a set of adversary air training tools.

Strike

Work In Progress

Orchestrates a kinetic strike. Programs the vehicle to loiter around a target area for a given time, search for targets, and attack one once it is located.

Target Grid

Exists Now

Generates Category 1 target grid coordinates for designated points-of-interest (POIs).

Distract

Future Work

Creates a series of highly visible maneuvers by the vehicle(s) under control that are meant to be attention-grabbing in an effort to distract others in the vicinity.

Self-Defensive Behaviors

Defend/Stack

Exists Now

Similar to DCA, this behavior defends “the queen”. It is largely a means to put a group of vehicles into a desired physical configuration (e.g., a wall between a high-value asset and perceived threats, a 360° coverage shield around a high-value asset, etc.)

DCA

Work In Progress

Capable of performing Defensive Counter Air roles. This behavior actually has a number of subset behaviors which initially appear as a series of “plays”. This is first being employed as a set of adversary air training tools.

Greased Pig/Dither

Exists Now

Preemptively generates a random series of quick movements as a means of self-protection against counter-UV threats. Think of constant jinking that makes the vehicle tough to target and hit.

Decoy

Work In Progress

Intentionally maneuvers vehicles in a pattern or manner that would be confusing to an observer. It enables them to be mistaken as a different platform or performing a different role.

Aerial Refueling

Future Work

When low fuel vehicles are designated, the system will navigate them to the refueling vehicle and sequence them for refueling.

Hive/Swarm Behaviors

Marsupial

Exists Now

At least one vehicle carries another on its back, releases it, and then recovers it. The behavior is a force multiplier and can be used with ground-to-air vehicles or air-to-air vehicles. Our typical use is to have a ground rover carry a UAS to a specific spot and launch it.

Morphing Swarm

Exists Now

Changes the relative positioning of a swarm of UAS to deal with known and sensed geographic or volumetric constraints. The number of swarm members can be reduced or increased.

Rejoin

Work In Progress

The software is constantly computing an optimal rejoin path to a moving ground or air target.

Stage

Future Work

A staging area is defined by a swoop of your fingers or mouse. Any vehicle directed into the staging area will remain in the "staging loiter" state until commanded out of it, or until low fuel or energy requires its departure. Sequencing and deconfliction are handled by the behavior.

Sacrifice

Work In Progress

Will use pure pursuit and intentionally generate a sacrificial path to impact a designated item or vehicle of interest.

Learner/Sharer

Work In Progress

Principally useful in swarm/hive operations when a few scouts are sent out to survey or map an area, then return to the swarm/hive (or simply communicate back) and share what was learned.

Mimic Me

Future Work

Target air vehicle mimics the actions of a “master” vehicle in pitch, roll, course, and speed.

Spawn

Work In Progress

Typically an air vehicle behavior where the “mothership” air vehicle can release other air vehicles to perform other missions.

Lasso

Exists Now

When selected, this behavior allows numerous entities to be instantly grouped into a formation or swarm. It handles alignment and deconfliction.
