{"_id":"57dc3ba1ea7c0d1700f1d4a9","category":{"_id":"573b4f62ef164e2900a2b881","__v":0,"project":"55faeacad0e22017005b8265","version":"55faeacad0e22017005b8268","sync":{"url":"","isSync":false},"reference":false,"createdAt":"2016-05-17T17:05:38.443Z","from_sync":false,"order":7,"slug":"algorithm-guide","title":"Certified Algorithms"},"project":"55faeacad0e22017005b8265","user":"55fae9d4825d5f19001fa379","__v":1,"parentDoc":null,"version":{"_id":"55faeacad0e22017005b8268","project":"55faeacad0e22017005b8265","__v":33,"createdAt":"2015-09-17T16:31:06.800Z","releaseDate":"2015-09-17T16:31:06.800Z","categories":["55faeacbd0e22017005b8269","55faf550764f50210095078e","55faf5b5626c341700fd9e96","55faf8a7825d5f19001fa386","560052f91503430d007cc88f","560054f73aa0520d00da0b1a","56005aaf6932a00d00ba7c62","56005c273aa0520d00da0b3f","5601ae7681a9670d006d164d","5601ae926811d00d00ceb487","5601aeb064866b1900f4768d","5601aee850ee460d0002224c","5601afa02499c119000faf19","5601afd381a9670d006d1652","561d4c78281aec0d00eb27b6","561d588d8ca8b90d00210219","563a5f934cc3621900ac278c","5665c5763889610d0008a29e","566710a36819320d000c2e93","56ddf6df8a5ae10e008e3926","56e1c96b2506700e00de6e83","56e1ccc4e416450e00b9e48c","56e1ccdfe63f910e00e59870","56e1cd10bc46be0e002af26a","56e1cd21e416450e00b9e48e","56e3139a51857d0e008e77be","573b4f62ef164e2900a2b881","57c9d1335fd8ca0e006308ed","57e2bd9d1e7b7220000d7fa5","57f2b992ac30911900c7c2b6","58adb5c275df0f1b001ed59b","58c81b5c6dc7140f003c3c46","595412446ed4d9001b3e7b37"],"is_deprecated":false,"is_hidden":false,"is_beta":false,"is_stable":true,"codename":"v1","version_clean":"1.0.0","version":"1"},"updates":[],"next":{"pages":[],"description":""},"createdAt":"2016-09-16T18:36:17.260Z","link_external":false,"link_url":"","githubsync":"","sync_unique":"","hidden":false,"api":{"settings":"","results":{"codes":[]},"auth":"required","params":[],"url":""},"isReference":false,"order":5,"body":"## Table of Contents\n\nSection | Description\n--- | ---\n[Imagery Examples](#Imagery Examples) | Before and after examples\n[Quickstart](#Quickstart) | Get started with a Python-based quickstart tutorial\n[Task Runtime](#Task Runtime) | Benchmark runtimes for the algorithm\n[Input Options](#Input Options) | Required and optional task inputs\n[Outputs](#Outputs) | Task outputs and example contents\n[Advanced Options](#Advanced Options) | Additional information for advanced users\n[Known Issues](#Known Issues) | Issues users should be aware of\n\n\n## <a name=\"Imagery Examples\"></a>Imagery Example\n[block:image]\n{\n  \"images\": [\n    {\n      \"image\": [\n        \"https://files.readme.io/bf47964-LULC800x320.png\",\n        \"LULC800x320.png\",\n        800,\n        320,\n        \"#443d2f\"\n      ],\n      \"caption\": \"Output image with Automated Land Cover Classification applied\"\n    }\n  ]\n}\n[/block]\n## <a name=\"Quickstart\"></a>Quickstart Tutorial\n\nThis script gives the example of Automated Land Cover Classification with a single tif file as input.\n[block:code]\n{\n  \"codes\": [\n    {\n      \"code\": \"# Quickstart Example producing an unsupervised Landuse Landcover Classification from a tif file.\\n# First Initialize the Environment\\n\\nfrom gbdxtools import Interface\\ngbdx = Interface()\\n\\n#Edit the following path to reflect a specific path to an image\\nraster = 's3://gbd-customer-data/CustomerAccount#/PathToImage/'\\nprototask = gbdx.Task(\\\"protogenV2LULC\\\", raster=raster)\\n\\nworkflow = gbdx.Workflow([ prototask ])\\n#Edit the following line(s) to reflect 
specific folder(s) for the output file (example location provided)  \\nworkflow.savedata(prototask.outputs.data, location='LULC')\\nworkflow.execute()\\n\\nprint workflow.id\\nprint workflow.status\",\n      \"language\": \"python\"\n    }\n  ]\n}\n[/block]\n## <a name=\"Task Runtime\"></a>Task Runtime\nThese are the average runtimes for this algorithm. All benchmark tests were run using a standard set of images, based on our most common customer scenarios. Runtime benchmarks apply to the specific algorithm, and don’t represent the runtime of a complete workflow.\n\n  Sensor Name  |  Total Pixels  |  Total Area (k2)  |  Time(secs)  |  Time/Area k2\n--------|:----------:|-----------|----------------|--------------\nWV02|35,872,942|329.87|328.19 |0.99\nWV03|35,371,971|196.27| 459.06|2.34\n\n   \n\n## <a name=\"Input Options\"></a>Input Options\nThis task will process only WorldView 2 or WorldView 3 multi-spectral imagery (8-band optical and VNIR data sets) that has been atmospherically compensated by the Advanced Image Preprocessor. Supported formats are .TIF, .TIL, .VRT, .HDR.\n\n## <a name=\"Outputs\"></a>Outputs\n\nRGB .TIF image of type UINT8x3. The data will be displayed using the following color codes:\n\n Color |  RGB Value     |Class Description\n:-------|:----------------|--------\n  Green  | [0,255,0] |All types of vegetation (healthy chlorophyll content)\n   Blue  | [0,0,128] | All types of water, excluding flood waters (murky)\n  Brown | [128,64,0} | All types of soils, excluding rocks and stone\n  Light Blue  | [128,255,255] | All types of clouds excluding smoke\n  Purple  | [164,74,164] | Shadows\n  Gray | [128,128,128]  |  Unclassified (equivalent to man-made  materials, rock, stone)    \n  Black  | [0,0,0]   | No-data  \n\n\n\n## <a name=\"Advanced Options\"></a>Advanced Options\nIf you need to generate the 8-Band multi-spectral data required as input for this task, you can use  the following example script to preprocess your data. This example runs the Advanced Image Preprocessor and Automated Land Cover Classification from end to end.  \n[block:code]\n{\n  \"codes\": [\n    {\n      \"code\": \"# Initialize the gbdxtools Interface\\n gbdx = Interface()\\n\\n # Make sure DRA is disabled if you are processing both the PAN+MS files\\n #Edit the following line(s) to reflect specific folder(s) for the output file (example location provided)  \\n data='s3://gbd-customer-data/CustomerAccount#/PathToImage/'\\n aoptask = gbdx.Task(\\\"AOP_Strip_Processor\\\", data=data, enable_acomp=True, bands=\\\"MS\\\", enable_pansharpen=False, enable_dra=False)\\n\\n# Capture AOP task outputs\\nlog = task.get_output('log')\\northoed_output = task.get_output('data')\\n\\n# Stage AOP output for the Protogen Task using the Protogen Prep Task\\npp_task = gbdx.Task(\\\"ProtogenPrep\\\",raster=aoptask.outputs.data.value)    \\n\\n# Setup ProtogenV2LULC Task\\nprot_lulc = gbdx.Task(\\\"protogenV2LULC\\\",raster=pp_task.outputs.data.value)\\n\\t\\t\\n# Run Combined Workflow\\nworkflow = gbdx.Workflow([ aoptask, pp_task, prot_lulc ])\\n\\n# Send output to  s3 Bucket. 
\\n# Once you are familiar with the process it is not necessary to save the output from the intermediate steps.\\n#Edit the following line(s) to reflect specific folder(s) for the output file (example location provided)  \\nworkflow.savedata(aoptask.outputs.data,location='s3://gbd-customer-data/CustomerAccount#/Protogen_LULC/')\\nworkflow.savedata(pp_task.outputs.data,location='s3://gbd-customer-data/CustomerAccount#/ProtoPrep/')\\nworkflow.savedata(prot_lulc.outputs.data,location='s3://gbd-customer-data/CustomerAccount#/Protogen_LULC/LULC/')\\nworkflow.execute()\\n\\nprint(workflow.id)\\nprint(workflow.status)\",\n      \"language\": \"python\"\n    }\n  ]\n}\n[/block]\n\t\n\n##<a name=\"Known Issues\"></a>Known Issues\n\nVegetation: Thin cloud (cloud edges) might be misinterpreted as vegetation.\n\nWater: False positives maybe present due to certain types of concrete roofs or shadows.\n\nSoils: Ceramic roofing material and some types of asphalt may be misinterpreted as soil.\n\nClouds: Certain types of concrete might be misinterpreted as cloud. Thin cloud areas may be interpreted as soil or vegetation.\n\nLimitations: The layer uses smoothing operators in cross-class interfaces for noise reduction. This might result in loss/misinterpretation of small class patches 8m^2.","excerpt":"This task produces coarse, unsupervised classification, best used to narrow the scope of an area of interest for further processing. It produces 7 basic classes: Vegetation, Water, Soil, Rock and Man-made materials, Clouds, Shadows, and No-Data. \n\n**GBDX Registered Name**: protogenV2LULC\n**Provider**: GBDX\n**Inputs**: TIF, .TIL,  .HDR\n**Outputs**: RGB .TIF image of type UINT8x3\n**Compatible bands & sensors**: WorldView 2 or WorldView 3 multi-spectral imagery (8-band optical and VNIR data sets) that has been atmospherically compensated by the AOP processor","slug":"automated-land-cover-classification","type":"basic","title":"Automated Land Cover Classification"}

# Automated Land Cover Classification

This task produces a coarse, unsupervised classification, best used to narrow the scope of an area of interest for further processing. It produces 7 basic classes: Vegetation, Water, Soil, Rock and Man-made materials, Clouds, Shadows, and No-Data.

**GBDX Registered Name**: protogenV2LULC
**Provider**: GBDX
**Inputs**: .TIF, .TIL, .HDR
**Outputs**: RGB .TIF image of type UINT8x3
**Compatible bands & sensors**: WorldView 2 or WorldView 3 multi-spectral imagery (8-band optical and VNIR data sets) that has been atmospherically compensated by the AOP processor

## Table of Contents

Section | Description
--- | ---
[Imagery Examples](#Imagery Examples) | Before and after examples
[Quickstart](#Quickstart) | Get started with a Python-based quickstart tutorial
[Task Runtime](#Task Runtime) | Benchmark runtimes for the algorithm
[Input Options](#Input Options) | Required and optional task inputs
[Outputs](#Outputs) | Task outputs and example contents
[Advanced Options](#Advanced Options) | Additional information for advanced users
[Known Issues](#Known Issues) | Issues users should be aware of

## <a name="Imagery Examples"></a>Imagery Example

![Output image with Automated Land Cover Classification applied](https://files.readme.io/bf47964-LULC800x320.png)
*Output image with Automated Land Cover Classification applied*

## <a name="Quickstart"></a>Quickstart Tutorial

This script gives an example of Automated Land Cover Classification with a single .TIF file as input.

```python
# Quickstart example producing an unsupervised land use / land cover classification from a .TIF file.
# First, initialize the environment.
from gbdxtools import Interface
gbdx = Interface()

# Edit the following path to reflect a specific path to an image
raster = 's3://gbd-customer-data/CustomerAccount#/PathToImage/'
prototask = gbdx.Task("protogenV2LULC", raster=raster)

workflow = gbdx.Workflow([prototask])
# Edit the following line(s) to reflect specific folder(s) for the output file (example location provided)
workflow.savedata(prototask.outputs.data, location='LULC')
workflow.execute()

print(workflow.id)
print(workflow.status)
```

## <a name="Task Runtime"></a>Task Runtime

These are the average runtimes for this algorithm. All benchmark tests were run using a standard set of images, based on our most common customer scenarios. Runtime benchmarks apply to the specific algorithm and don't represent the runtime of a complete workflow.

Sensor Name | Total Pixels | Total Area (km²) | Time (secs) | Time/Area (secs/km²)
--------|:----------:|-----------|----------------|--------------
WV02 | 35,872,942 | 329.87 | 328.19 | 0.99
WV03 | 35,371,971 | 196.27 | 459.06 | 2.34

## <a name="Input Options"></a>Input Options

This task will process only WorldView 2 or WorldView 3 multi-spectral imagery (8-band optical and VNIR data sets) that has been atmospherically compensated by the Advanced Image Preprocessor. Supported formats are .TIF, .TIL, .VRT, and .HDR.

## <a name="Outputs"></a>Outputs

RGB .TIF image of type UINT8x3. The data will be displayed using the following color codes:

Color | RGB Value | Class Description
:-------|:----------------|--------
Green | [0,255,0] | All types of vegetation (healthy chlorophyll content)
Blue | [0,0,128] | All types of water, excluding flood waters (murky)
Brown | [128,64,0] | All types of soils, excluding rocks and stone
Light Blue | [128,255,255] | All types of clouds, excluding smoke
Purple | [164,74,164] | Shadows
Gray | [128,128,128] | Unclassified (equivalent to man-made materials, rock, stone)
Black | [0,0,0] | No-data
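Because the output is an ordinary 3-band GeoTIFF, the color codes above can be used to summarize how much of a scene fell into each class. The following is a minimal sketch, not part of the task itself, that tallies per-class pixel coverage with `rasterio` and `numpy` after the classified image has been downloaded from the workflow's output location; the filename `lulc_output.tif` is a placeholder.

```python
# Minimal sketch: tally class coverage in the LULC output GeoTIFF.
# Assumes the classified RGB .TIF has been downloaded locally from the
# workflow's output location; 'lulc_output.tif' is a hypothetical filename.
import numpy as np
import rasterio

# RGB color codes from the Outputs table mapped to class names
CLASS_COLORS = {
    (0, 255, 0): "Vegetation",
    (0, 0, 128): "Water",
    (128, 64, 0): "Soil",
    (128, 255, 255): "Clouds",
    (164, 74, 164): "Shadows",
    (128, 128, 128): "Unclassified",
    (0, 0, 0): "No-data",
}

with rasterio.open("lulc_output.tif") as src:
    rgb = src.read()  # shape: (3, rows, cols), UINT8

pixels = rgb.reshape(3, -1).T  # shape: (rows*cols, 3)
total = pixels.shape[0]

for color, name in CLASS_COLORS.items():
    count = np.all(pixels == np.array(color, dtype=np.uint8), axis=1).sum()
    print(f"{name}: {100.0 * count / total:.2f}% of pixels")
```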
## <a name="Advanced Options"></a>Advanced Options

If you need to generate the 8-band multi-spectral data required as input for this task, you can use the following example script to preprocess your data. This example runs the Advanced Image Preprocessor and Automated Land Cover Classification from end to end.

```python
# Initialize the gbdxtools Interface
from gbdxtools import Interface
gbdx = Interface()

# Make sure DRA is disabled if you are processing both the PAN+MS files
# Edit the following line(s) to reflect a specific path to an image (example location provided)
data = 's3://gbd-customer-data/CustomerAccount#/PathToImage/'
aoptask = gbdx.Task("AOP_Strip_Processor", data=data, enable_acomp=True, bands="MS", enable_pansharpen=False, enable_dra=False)

# Capture AOP task outputs
log = aoptask.get_output('log')
orthoed_output = aoptask.get_output('data')

# Stage AOP output for the Protogen task using the Protogen Prep task
pp_task = gbdx.Task("ProtogenPrep", raster=aoptask.outputs.data.value)

# Set up the protogenV2LULC task
prot_lulc = gbdx.Task("protogenV2LULC", raster=pp_task.outputs.data.value)

# Run the combined workflow
workflow = gbdx.Workflow([aoptask, pp_task, prot_lulc])

# Send output to an S3 bucket.
# Once you are familiar with the process, it is not necessary to save the output from the intermediate steps.
# Edit the following line(s) to reflect specific folder(s) for the output files (example locations provided)
workflow.savedata(aoptask.outputs.data, location='s3://gbd-customer-data/CustomerAccount#/Protogen_LULC/')
workflow.savedata(pp_task.outputs.data, location='s3://gbd-customer-data/CustomerAccount#/ProtoPrep/')
workflow.savedata(prot_lulc.outputs.data, location='s3://gbd-customer-data/CustomerAccount#/Protogen_LULC/LULC/')
workflow.execute()

print(workflow.id)
print(workflow.status)
```

## <a name="Known Issues"></a>Known Issues

Vegetation: Thin cloud (cloud edges) might be misinterpreted as vegetation.

Water: False positives may be present due to certain types of concrete roofs or shadows.

Soils: Ceramic roofing material and some types of asphalt may be misinterpreted as soil.

Clouds: Certain types of concrete might be misinterpreted as cloud. Thin cloud areas may be interpreted as soil or vegetation.

Limitations: The layer uses smoothing operators at cross-class interfaces for noise reduction. This might result in loss or misinterpretation of small class patches (on the order of 8 m²).
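Both the Quickstart and Advanced Options scripts above end by printing the workflow ID and status immediately after `workflow.execute()`. Because GBDX workflows run asynchronously, you will usually want to poll until the workflow completes before downloading results. The snippet below is a minimal sketch, assuming it is appended to either script above and that the gbdxtools `Workflow` object exposes the `complete` and `status` attributes, as in recent gbdxtools releases.

```python
# Minimal polling sketch: wait for a submitted GBDX workflow to finish.
# Assumes 'workflow' is a gbdxtools Workflow that has already been executed,
# and that the Workflow object exposes .complete and .status (a dict with
# 'state' and 'event' keys) as in recent gbdxtools releases.
import time

while not workflow.complete:
    print(workflow.status)   # e.g. {'state': 'running', 'event': 'started'}
    time.sleep(60)           # check once a minute

print("Final status:", workflow.status)
```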