{"_id":"59dfb9aa1a63a80024086d59","project":"55faeacad0e22017005b8265","version":{"_id":"55faeacad0e22017005b8268","project":"55faeacad0e22017005b8265","__v":35,"createdAt":"2015-09-17T16:31:06.800Z","releaseDate":"2015-09-17T16:31:06.800Z","categories":["55faeacbd0e22017005b8269","55faf550764f50210095078e","55faf5b5626c341700fd9e96","55faf8a7825d5f19001fa386","560052f91503430d007cc88f","560054f73aa0520d00da0b1a","56005aaf6932a00d00ba7c62","56005c273aa0520d00da0b3f","5601ae7681a9670d006d164d","5601ae926811d00d00ceb487","5601aeb064866b1900f4768d","5601aee850ee460d0002224c","5601afa02499c119000faf19","5601afd381a9670d006d1652","561d4c78281aec0d00eb27b6","561d588d8ca8b90d00210219","563a5f934cc3621900ac278c","5665c5763889610d0008a29e","566710a36819320d000c2e93","56ddf6df8a5ae10e008e3926","56e1c96b2506700e00de6e83","56e1ccc4e416450e00b9e48c","56e1ccdfe63f910e00e59870","56e1cd10bc46be0e002af26a","56e1cd21e416450e00b9e48e","56e3139a51857d0e008e77be","573b4f62ef164e2900a2b881","57c9d1335fd8ca0e006308ed","57e2bd9d1e7b7220000d7fa5","57f2b992ac30911900c7c2b6","58adb5c275df0f1b001ed59b","58c81b5c6dc7140f003c3c46","595412446ed4d9001b3e7b37","59e76ce41938310028037295","5a009de510890d001c2aabfe"],"is_deprecated":false,"is_hidden":false,"is_beta":false,"is_stable":true,"codename":"v1","version_clean":"1.0.0","version":"1"},"category":{"_id":"573b4f62ef164e2900a2b881","__v":0,"project":"55faeacad0e22017005b8265","version":"55faeacad0e22017005b8268","sync":{"url":"","isSync":false},"reference":false,"createdAt":"2016-05-17T17:05:38.443Z","from_sync":false,"order":8,"slug":"algorithm-guide","title":"Algorithms"},"user":"55fae9d4825d5f19001fa379","__v":0,"parentDoc":null,"updates":[],"next":{"pages":[],"description":""},"createdAt":"2017-10-12T18:51:22.895Z","link_external":false,"link_url":"","githubsync":"","sync_unique":"","hidden":false,"api":{"results":{"codes":[]},"settings":"","auth":"required","params":[],"url":""},"isReference":false,"order":5,"body":"## Table of Contents\n\nSection | Description\n--- | ---\n[Overview](#Overview) | Detailed description\n[Imagery Examples](#Imagery Examples) | Before and after examples\n[Quickstart](#Quickstart) | Get started with a Python-based quickstart tutorial\n[Task Runtime](#Task Runtime) | Benchmark runtimes for the algorithm\n[Input Options](#Input Options) | Required and optional task inputs\n[Outputs](#Outputs) | Task outputs and example contents\n[Known Issues](#Known Issues) | Issues users should be aware of\n\n\n## <a name=\"Overview\"></a>Overview\nThe underlying algorithm classifies each pixel in the input image using spectral fitting to known material spectral signatures and certain class-specific shape and size filters. 
The input image must be an atmospherically compensated WorldView-2 or WorldView-3 multispectral image.\n\nBy default, the task produces an RGB image where each class is color coded.\n\nClass     | Color               | Description\n--------------|--------------------|------------\nvegetation    | green [0,255,0]                    | All types of vegetation (healthy chlorophyll content)\nwater         | blue [0,0,128]                     | All types of water, including murky/impure water\nbare soil     | brown [128,64,0]                   | All types of soils, excluding rocks and stone\nclouds        | light blue [128,255,255]           | All types of clouds excluding smoke\nshadows       | purple [164,74,164]                | Shadows\nunclassified  | gray [128,128,128]                 | Unclassified (equivalent to man-made materials, rock, stone)\n\nThe task can also produce a mask for selected classes, where pixels corresponding to the selected classes are white and all remaining pixels are black.\n\nNo data zones in the input image are colored black in the output image.\n\n\n\n\n\n\n## <a name=\"Imagery Examples\"></a>Imagery Example\n[block:image]\n{\n  \"images\": [\n    {\n      \"image\": [\n        \"https://files.readme.io/3b1aed2-lulc_rgb.png\",\n        \"lulc rgb.png\",\n        714,\n        685,\n        \"#7c552d\"\n      ],\n      \"caption\": \"This example shows the rgb output from the Automated Land Cover Classification\"\n    }\n  ]\n}\n[/block]\n\n[block:image]\n{\n  \"images\": [\n    {\n      \"image\": [\n        \"https://files.readme.io/006eb84-lulc_unclassified_mask.png\",\n        \"lulc unclassified_mask.png\",\n        712,\n        683,\n        \"#070707\"\n      ],\n      \"caption\": \"This is the corresponding unclassified mask\"\n    }\n  ]\n}\n[/block]\nSee [Know Issues](#section--a-name-known-issues-a-known-issues) for help in interpreting the results. \n\n## <a name=\"Quickstart\"></a>Quickstart Tutorial\n\nIn a Python terminal:\n[block:code]\n{\n  \"codes\": [\n    {\n      \"code\": \"import gbdxtools\\n\\ngbdx = gbdxtools.Interface()\\n\\nlulc = gbdx.Task('lulc')\\nlulc.inputs.image = 's3://gbd-customer-data/32cbab7a-4307-40c8-bb31-e2de32f940c2/platform-stories/coastal-change/images/pre'\\n\\n# Run workflow and save results\\nwf = gbdx.Workflow([lulc])\\nwf.savedata(lulc.outputs.image, 'platform-stories/trial-runs/lulc')\\nwf.execute()\",\n      \"language\": \"python\"\n    }\n  ]\n}\n[/block]\n## <a name=\"Task Runtime\"></a>Task Runtime\nThere is no runtime data available for this algorithm.\n\n   \n\n## <a name=\"Input Options\"></a>Input Options\n\n| Name  | Type |  Description | Required |\n|-------|--------------|----------------|----------------|\n| image | Directory | Contains input image. The input image must be a WV02/WV03 multispectral image which is atmospherically compensated. If more than one images are contained in this directory, one is picked arbitrarily. | True |\n| vegetation | String | If True, the output is a vegetation mask. Default is False. | False |\n| water | String | If True, the output is a water mask. Default is False. | False |\n| soil | String | If True, the output is a bare soil mask. Default is False. | False |\n| clouds | String | If True, the output is a cloud mask. Default is False. | False |\n| shadows | String | If True, the output is a shadow mask. Default is False. | False |\n| unclassified | String | If True, the output is an unclassified material mask. Default is False. 
| False |\n| tiles | String | Number of tiles to tile input image into if it is too big. In that case, the recommended number is 2. Only use this if the default option fails. Default is 1. | False |\n| verbose | String | If True, save algorithm config files in output directory. To be used for debugging purposes. Default is False. | False |\n\nNote that if more than one class is set to True, the corresponding mask includes all the classes set to True.\n\n## <a name=\"Outputs\"></a>Outputs\n\n| Name  | Type | Description                                    |\n|-------|---------|---------------------------------------------------|\n| image | Directory | Contains output image. |\n\n##<a name=\"Known Issues\"></a>Known Issues\n\n* Shadows may be misinterpreted as water.\n* Thin water bodies may be discarded.\n* Small vegetation patches may be lost.\n* Cloud holes are to be expected.\n* Regions than appear as bare soil in the original image and interpreted as vegetation in the LULC are due to spatial aggregation of small grass patches which are not necessary evident.\n* The shadows class contains a limited subset of the true set of all shadow regions.\n* The unclassified class can be used as a rough approximation of built-up.","excerpt":"This task performs unsupervised land use land cover classification on the GBDX platform. There are six classes: vegetation, water, bare soil, clouds, shadows, and unclassified.\n\n**GBDX Registered Name**: lulc\n**Provider**: GBDX\n**Inputs**: This task requires an atmospherically compensated WorldView-2 or WorldView-3 multispectral image\n**Outputs**: RGB \n**Compatible bands & sensors**: WorldView 2 or WorldView 3 multi-spectral imagery (8-band optical and VNIR data sets) that has been atmospherically compensated by the Advanced Image Preprocessor task.","slug":"automated-land-cover-classification-1","type":"basic","title":"Automated Land Cover Classification (LULC)"}

# Automated Land Cover Classification (LULC)

This task performs unsupervised land use land cover classification on the GBDX platform. There are six classes: vegetation, water, bare soil, clouds, shadows, and unclassified.

**GBDX Registered Name**: lulc
**Provider**: GBDX
**Inputs**: This task requires an atmospherically compensated WorldView-2 or WorldView-3 multispectral image
**Outputs**: RGB
**Compatible bands & sensors**: WorldView-2 or WorldView-3 multispectral imagery (8-band optical and VNIR data sets) that has been atmospherically compensated by the Advanced Image Preprocessor task.

## Table of Contents

Section | Description
--- | ---
[Overview](#Overview) | Detailed description
[Imagery Examples](#Imagery Examples) | Before and after examples
[Quickstart](#Quickstart) | Get started with a Python-based quickstart tutorial
[Task Runtime](#Task Runtime) | Benchmark runtimes for the algorithm
[Input Options](#Input Options) | Required and optional task inputs
[Outputs](#Outputs) | Task outputs and example contents
[Known Issues](#Known Issues) | Issues users should be aware of


## <a name="Overview"></a>Overview
The underlying algorithm classifies each pixel in the input image using spectral fitting to known material spectral signatures, together with class-specific shape and size filters. The input image must be an atmospherically compensated WorldView-2 or WorldView-3 multispectral image.

By default, the task produces an RGB image in which each class is color coded.

Class     | Color               | Description
--------------|--------------------|------------
vegetation    | green [0,255,0]                    | All types of vegetation (healthy chlorophyll content)
water         | blue [0,0,128]                     | All types of water, including murky/impure water
bare soil     | brown [128,64,0]                   | All types of soils, excluding rocks and stone
clouds        | light blue [128,255,255]           | All types of clouds, excluding smoke
shadows       | purple [164,74,164]                | Shadows
unclassified  | gray [128,128,128]                 | Unclassified (equivalent to man-made materials, rock, stone)

The task can also produce a mask for selected classes, where pixels corresponding to the selected classes are white and all remaining pixels are black.

No-data zones in the input image are colored black in the output image.

## <a name="Imagery Examples"></a>Imagery Examples
[block:image]
{
  "images": [
    {
      "image": [
        "https://files.readme.io/3b1aed2-lulc_rgb.png",
        "lulc rgb.png",
        714,
        685,
        "#7c552d"
      ],
      "caption": "This example shows the RGB output from the Automated Land Cover Classification"
    }
  ]
}
[/block]

[block:image]
{
  "images": [
    {
      "image": [
        "https://files.readme.io/006eb84-lulc_unclassified_mask.png",
        "lulc unclassified_mask.png",
        712,
        683,
        "#070707"
      ],
      "caption": "This is the corresponding unclassified mask"
    }
  ]
}
[/block]
See [Known Issues](#section--a-name-known-issues-a-known-issues) for help in interpreting the results.

## <a name="Quickstart"></a>Quickstart Tutorial

In a Python terminal:
[block:code]
{
  "codes": [
    {
      "code": "import gbdxtools\n\ngbdx = gbdxtools.Interface()\n\nlulc = gbdx.Task('lulc')\nlulc.inputs.image = 's3://gbd-customer-data/32cbab7a-4307-40c8-bb31-e2de32f940c2/platform-stories/coastal-change/images/pre'\n\n# Run workflow and save results\nwf = gbdx.Workflow([lulc])\nwf.savedata(lulc.outputs.image, 'platform-stories/trial-runs/lulc')\nwf.execute()",
      "language": "python"
    }
  ]
}
[/block]

## <a name="Task Runtime"></a>Task Runtime
There is no runtime data available for this algorithm.

## <a name="Input Options"></a>Input Options

| Name  | Type |  Description | Required |
|-------|--------------|----------------|----------------|
| image | Directory | Contains the input image. The input image must be an atmospherically compensated WV02/WV03 multispectral image. If more than one image is contained in this directory, one is picked arbitrarily. | True |
| vegetation | String | If True, the output is a vegetation mask. Default is False. | False |
| water | String | If True, the output is a water mask. Default is False. | False |
| soil | String | If True, the output is a bare soil mask. Default is False. | False |
| clouds | String | If True, the output is a cloud mask. Default is False. | False |
| shadows | String | If True, the output is a shadow mask. Default is False. | False |
| unclassified | String | If True, the output is an unclassified material mask. Default is False. | False |
| tiles | String | Number of tiles to split the input image into if it is too big; in that case, the recommended number is 2. Only use this option if the default fails. Default is 1. | False |
| verbose | String | If True, save the algorithm config files in the output directory, for debugging purposes. Default is False. | False |

Note that if more than one class is set to True, the resulting mask includes all the classes set to True, as shown in the example below.
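For example, to request a single combined vegetation and water mask instead of the default RGB map, set the corresponding inputs to 'True'. This is a minimal sketch based on the Quickstart above; the input image location is reused from the Quickstart, and the 'platform-stories/trial-runs/lulc-masks' output prefix is a placeholder.
[block:code]
{
  "codes": [
    {
      "code": "import gbdxtools\n\ngbdx = gbdxtools.Interface()\n\n# Same task and input image as in the Quickstart\nlulc = gbdx.Task('lulc')\nlulc.inputs.image = 's3://gbd-customer-data/32cbab7a-4307-40c8-bb31-e2de32f940c2/platform-stories/coastal-change/images/pre'\n\n# Request a combined mask: pixels of the selected classes are white, all others black\nlulc.inputs.vegetation = 'True'\nlulc.inputs.water = 'True'\n\n# Run workflow and save results (the output location is a placeholder)\nwf = gbdx.Workflow([lulc])\nwf.savedata(lulc.outputs.image, 'platform-stories/trial-runs/lulc-masks')\nwf.execute()",
      "language": "python"
    }
  ]
}
[/block]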
## <a name="Outputs"></a>Outputs

| Name  | Type | Description                                    |
|-------|---------|---------------------------------------------------|
| image | Directory | Contains the output image. |

For one way to read the default color-coded output back into class labels, see the example at the end of this page.

## <a name="Known Issues"></a>Known Issues

* Shadows may be misinterpreted as water.
* Thin water bodies may be discarded.
* Small vegetation patches may be lost.
* Cloud holes are to be expected.
* Regions that appear as bare soil in the original image but are interpreted as vegetation in the LULC result are due to spatial aggregation of small grass patches which are not necessarily evident.
* The shadows class contains a limited subset of the true set of all shadow regions.
* The unclassified class can be used as a rough approximation of built-up areas.
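As a reference for working with the default color-coded output described in the [Overview](#Overview), the sketch below maps output pixel colors back to class labels and counts the pixels per class. It assumes the saved output directory contains a 3-band RGB GeoTIFF and that numpy and rasterio are available locally; the 'lulc_output.tif' filename is a placeholder.
[block:code]
{
  "codes": [
    {
      "code": "import numpy as np\nimport rasterio\n\n# Color table from the Overview section (RGB value per class)\nCLASS_COLORS = {\n    'vegetation': (0, 255, 0),\n    'water': (0, 0, 128),\n    'bare soil': (128, 64, 0),\n    'clouds': (128, 255, 255),\n    'shadows': (164, 74, 164),\n    'unclassified': (128, 128, 128),\n    'no data': (0, 0, 0)\n}\n\n# 'lulc_output.tif' is a placeholder for the GeoTIFF in the task output directory\nwith rasterio.open('lulc_output.tif') as src:\n    rgb = src.read()  # shape: (3, rows, cols)\n\n# Count the pixels assigned to each class\nfor name, (r, g, b) in CLASS_COLORS.items():\n    mask = (rgb[0] == r) & (rgb[1] == g) & (rgb[2] == b)\n    print('{}: {} pixels'.format(name, int(mask.sum())))",
      "language": "python"
    }
  ]
}
[/block]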