CrowdAI Ship Detector

The algorithm locates every individual ship in an image. A neural network determines the position and size of each ship. The minimum detectable ship length is 7 meters.

GBDX Task Name: crowdai-ship-detector
Provider: CrowdAI Inc.
Input Data Support: Requires DigitalGlobe panchromatic imagery with 50cm GSD and an off-nadir angle of less than 10 degrees.
Output Processing Result: Output is a GeoJSON file with ship bounding boxes.

This algorithm documentation aligns with the most recent (highest) task version number. Older versions of the algorithm task which remain on GBDX for backwards compatibility may have slightly different behavior or execution parameters.

This GBDX algorithm is an extra-cost premium analytic. For information on how to purchase access, please contact

Algorithm Output Images

After: WorldView 3 image after ship detection.


QuickStart Example

This is a workflow example for basic processing.


# Initialize the environment
from gbdxtools import Interface

gbdx = Interface()

cat_id = "104001002F8DEE00"
aop_output_location = "CrowdAI/Ship_Detector/aop/"
task_output_location = "CrowdAI/Ship_Detector/task/"

# Auto ordering task parameters
order = gbdx.Task("Auto_Ordering")
order.inputs.cat_id = cat_id
order.impersonation_allowed = True
order.persist = True
order.timeout = 36000
order_data_loc = order.outputs.s3_location.value

# AOP task parameters
aop_task = gbdx.Task("AOP_Strip_Processor")
aop_task.inputs.data = order_data_loc
aop_task.inputs.bands = "PAN"
aop_task.inputs.enable_acomp = False
aop_task.inputs.enable_dra = False
aop_task.inputs.enable_pansharpen = False
aop_task.inputs.ortho_epsg = "UTM"
aop_task.inputs.ortho_pixel_size = "0.5"

# Algorithm task parameters
crowdai_ship_detector_task = gbdx.Task("crowdai-ship-detector")
crowdai_ship_detector_task.inputs.data_in = aop_task.outputs.data.value

# Set up workflow and save data
tasks = [order, aop_task, crowdai_ship_detector_task]
workflow = gbdx.Workflow(tasks)
workflow.savedata(aop_task.outputs.data, location=aop_output_location)
workflow.savedata(
    crowdai_ship_detector_task.outputs.data_out, location=task_output_location
)

# Execute workflow
workflow.execute()

Input Data Support

This algorithm requires panchromatic imagery from a DigitalGlobe sensor with a Ground Sampling Distance of 50 cm and an off-nadir angle of less than 10 degrees. Images from the WorldView-1, WorldView-2, WorldView-3, and GeoEye-1 sensors are supported.
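The sensor and off-nadir constraints above can be pre-checked before ordering imagery. The sketch below filters candidate catalog records in plain Python; the metadata keys used here ("platformName", "offNadirAngle") are assumptions for illustration, so verify them against the fields your catalog search actually returns.

```python
# Illustrative pre-filter for candidate catalog records.
# NOTE: the record field names are assumptions, not a documented schema.
SUPPORTED_SENSORS = {"WORLDVIEW01", "WORLDVIEW02", "WORLDVIEW03", "GEOEYE01"}
MAX_OFF_NADIR_DEG = 10.0

def is_supported(record: dict) -> bool:
    """Return True when a record meets the sensor and off-nadir requirements."""
    return (record.get("platformName") in SUPPORTED_SENSORS
            and record.get("offNadirAngle", 90.0) < MAX_OFF_NADIR_DEG)

records = [
    {"platformName": "WORLDVIEW03", "offNadirAngle": 7.2},   # OK
    {"platformName": "WORLDVIEW03", "offNadirAngle": 18.4},  # too oblique
    {"platformName": "LANDSAT08", "offNadirAngle": 3.0},     # unsupported sensor
]
print([is_supported(r) for r in records])  # → [True, False, False]
```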

Output Processing Result

The output is a GeoJSON file with the detected ship length as a property.
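Because the output is standard GeoJSON, it can be read with any JSON parser. Below is a minimal sketch of iterating over detections; the feature layout and the property name for ship length ("length") are assumptions for illustration, so check them against an actual output file.

```python
import json

# A hypothetical detection in the shape of a GeoJSON FeatureCollection.
# The "length" property name is an assumption, not the documented schema.
sample = """{
  "type": "FeatureCollection",
  "features": [
    {"type": "Feature",
     "geometry": {"type": "Polygon",
                  "coordinates": [[[103.80, 1.26], [103.80, 1.27],
                                   [103.81, 1.27], [103.81, 1.26],
                                   [103.80, 1.26]]]},
     "properties": {"length": 142.5}}
  ]
}"""

collection = json.loads(sample)
for feature in collection["features"]:
    ring = feature["geometry"]["coordinates"][0]   # bounding-box ring
    length_m = feature["properties"]["length"]     # detected ship length
    print(f"ship of {length_m} m with {len(ring) - 1} corners")
```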

Task Processing Options

The following table lists all crowdai-ship-detector input processing options.
Mandatory (optional) settings are listed as Required = True (Required = False).

Name     Required  Default  Valid Values  Description
data_in  True      N/A      directory     Input data directory.

The following table lists all crowdai-ship-detector output processing options.
Mandatory (optional) settings are listed as Required = True (Required = False).

Name      Required  Default  Valid Values  Description
data_out  True      N/A      directory     Output data directory.

Training/Testing Locations and Seasons

This algorithm has been tuned specifically for use in the United States. It does not detect docks, ships on land (other than ships in dry dock), or stationary floating objects such as fish farms. Additionally, the model is designed for use over ocean- or sea-facing ports and is not designed to detect ships in lakes, although it may still produce adequate results in some areas.

This algorithm was trained in:

  • Hong Kong
  • Singapore
  • Panama
  • United States of America
  • Iran
  • Russian Federation
  • China
  • United Arab Emirates
  • Saudi Arabia

Note: Algorithms return the best results when your spatial processing area and temporal data acquisition timeframe are similar to the locations and seasons used when the algorithm was trained and tested.

Accuracy Verification and Validation

The algorithm developer has reported accuracies of 0.72 recall, 0.72 precision, and F1 Score of 0.72 for the verification and validation they have performed using a set of sample input testing datasets.

DigitalGlobe also performed an accuracy assessment during the algorithm certification review process in the following locations:

Location   Image Type
Singapore  WorldView-3
Tianjin    WorldView-3
Vancouver  WorldView-3

For this limited set of testing datasets the observed accuracies were 0.83 (0.79 - 0.86) recall, 0.96 (0.93 - 0.98) precision, and F1 Score of 0.89 (0.86 - 0.91) (95% confidence limits reported within parentheses).
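The F1 score reported above is the harmonic mean of precision and recall, which can be checked directly from the point estimates:

```python
def f1_score(precision: float, recall: float) -> float:
    """Harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

# Point estimates from DigitalGlobe's certification review:
print(round(f1_score(precision=0.96, recall=0.83), 2))  # → 0.89

# The developer-reported figures (0.72 precision, 0.72 recall) give F1 = 0.72,
# since the harmonic mean of two equal values is that value.
print(round(f1_score(precision=0.72, recall=0.72), 2))  # → 0.72
```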

Accuracy metrics estimated by DigitalGlobe (2.5, 25, 50, 75, and 97.5 quantiles)


Note: The sample input testing datasets used by the algorithm developer and DigitalGlobe may vary, resulting in different reported accuracies. Furthermore, these accuracy metrics were characterized while testing the algorithm over a limited set of geographic locations and seasonal timeframes so they do not represent a guarantee of absolute algorithm accuracy.

Processing Speed Benchmark

During algorithm certification, DigitalGlobe observed a processing time of approximately 2 hours 45 minutes per 1 GB of pan-sharpened imagery.
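That benchmark can be used for rough capacity planning, assuming runtime scales roughly linearly with imagery volume (an assumption; actual runtime depends on scene content and infrastructure):

```python
# Rough runtime estimate from the observed ~2h45m per GB benchmark.
# Linear scaling is an assumption, not a documented guarantee.
MINUTES_PER_GB = 2 * 60 + 45  # 165 minutes

def estimated_minutes(imagery_gb: float) -> float:
    """Estimated processing time in minutes for a given imagery volume."""
    return imagery_gb * MINUTES_PER_GB

print(estimated_minutes(4.0))  # → 660.0 (i.e. 11 hours for a 4 GB strip)
```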


If you have any questions or issues with this task, please contact