Flooding has become a regular part of day-to-day life in Ho Chi Minh City (HCMC), Vietnam, and so has urbanization. HCMC is Vietnam's biggest metropolis and one of the fastest-growing economic centers in Asia, and yet 40 to 45% of the city lies less than 1 m above sea level.
Above GIF: Interactive map of precipitation changes by the 2050s, zooming into HCMC where significant increases to precipitation are predicted. Author: Dipika Kadaba, 2018.
Today, it is estimated that a 100-year flood could cause $200 to 300 million in infrastructure damages, with real estate damages possibly reaching $1.5 billion. With the projected growth of HCMC, compounded by climate change, it is estimated that by 2050 a flood of the same probability would do 3x the physical damage and have 20x the ripple effects. And this isn't just an issue in HCMC; it's a common threat to many coastal cities in Asia. Moreover, the impacts on local economies could have global consequences, especially for industries like apparel and textiles. Most importantly, without proper preparation and response, thousands of lives could be lost and livelihoods devastated.
Above Image: Infographic of projected 100-year flood losses, both direct and indirect, for HCMC, Vietnam under forecasted conditions for 2050. © McKinsey & Company.
Changes in a city's buildings over time directly reflect the dynamics of urbanization, the economy, and the population. Today, many cities maintain cadaster data as a means of studying these phenomena, but for rapidly growing cities that face high quantities of undocumented buildings, as HCMC does, keeping this data accurate and up-to-date may seem like an impossible task. What's worse, failing to keep this data current means failing to accurately project both the risks and the opportunities of urbanization and climate change, especially when it comes to flood risk in coastal megacities like HCMC. Current open data sources like OpenStreetMap have fairly reliable information for roads and waterways, but very sparse building information.
Above Image: Pleiades data of HCMC, Vietnam with OpenStreetMap data overlain. PLEIADES © CNES 2021, Distribution Airbus DS.
Automatic building extraction in the urban environment has been an ongoing challenge due to the high-density, complex surroundings and various shapes of the buildings. That's where leveraging very high resolution (VHR) satellite data, cutting-edge deep learning technology, and scalable cloud processing come together to form a trifecta of a solution. While there are other options for generating building footprints, none have the speed of deep learning solutions while still maintaining high degrees of accuracy. Studies have shown that deep-learning paradigms on high-resolution images can perform with high degrees of accuracy not just for building detection/urban estimation but also for building classification.
Above Image: Comparison of building extraction among a U-net (deep learning) approach, an object-based image analysis, and a Random Forest, where OA stands for overall accuracy. Source: Pan et al., 2020. https://www.mdpi.com/2072-4292/12/10/1574
There are many building detection, settlement mapping, and urban estimation algorithms on the UP42 platform from partners like Aventior, Pink Matter, Vasundharaa, and Picterra, just to name a few. However, for this case study, we used a newly released algorithm from our partner, Spacept, a leader in infrastructure and risk analytics. For this example, we will look at just one timestamp from 2020 to run building detection and conduct further analysis. In order to do this, we need to:
- Sign up to get started with the UP42 platform, unlocking 100 EUR worth of credits for testing, and then create a project and workflow
- Easily click together blocks in UI, namely Pleiades Display (Streaming), Raster Tiling, and Spacept's Building Detector algorithm
- Using the Catalog Search, find a dataset with minimal cloud cover*
- Import the dataset IDs and AOI parameters into the Job Configuration page
- Ensure tile parameters are 768 x 768 and run the job!
*Ideally, data should not be taken right after extreme weather conditions (drought, flood, etc.), so that environmental conditions generalize well.
If you're not familiar with deep learning, you may wonder why tiling is a necessary step before running the building detector algorithm. The answer: raster tiling serves as a data preparation step, splitting larger raster datasets into subsets or "tiles" to be fed into the neural network, both normalizing the data for processing and reducing memory demand as the algorithm runs. Any time you use a deep learning algorithm, you'll need raster tiling.
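To make the tiling step concrete, here is a minimal sketch in Python of what a raster tiler does conceptually. The function name and array sizes are illustrative, not UP42's implementation; the platform's Raster Tiling block handles this for you.

```python
import numpy as np

def tile_raster(arr, tile_size=768):
    """Split a (bands, H, W) raster array into tile_size x tile_size tiles,
    zero-padding the edges so every tile has the same shape."""
    bands, h, w = arr.shape
    pad_h = (-h) % tile_size  # rows needed to reach a multiple of tile_size
    pad_w = (-w) % tile_size  # columns needed to reach a multiple of tile_size
    padded = np.pad(arr, ((0, 0), (0, pad_h), (0, pad_w)))
    tiles = []
    for i in range(0, padded.shape[1], tile_size):
        for j in range(0, padded.shape[2], tile_size):
            tiles.append(padded[:, i:i + tile_size, j:j + tile_size])
    return tiles

# Example: a 3-band 1000 x 1500 pixel scene becomes 2 x 2 = 4 tiles of 768 x 768
scene = np.zeros((3, 1000, 1500), dtype=np.uint16)
tiles = tile_raster(scene)
```

Each tile is now a fixed-size array the neural network can ingest, which is why the job configuration must match the 768 x 768 tile parameter mentioned above.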
Left Image: Selecting VHR (Pleiades Display for processing) data and your AOI with the UP42 Catalog Search. Right Image: Click together the Pleiades Display block, Raster Tiling, and Spacept's Building Detector block using the UP42 UI.
The results of this building detection algorithm have a high degree of accuracy, and for a 50 sq km area, the analysis took only 23 minutes. This particular algorithm was benchmarked against human annotators on 10,000+ Pleiades images, and the model showed an accuracy of up to 90%, depending on the area, and is continuously being improved. The model was trained using labeled images taken from, but not limited to, parts of the United States, Mexico, France, Belgium, Portugal, India, Nepal, Indonesia, and Australia.
Spacept's Building Detector returns raster tiles of probability values between 0 and 1, where a pixel value of 1 represents a high likelihood of being a building and 0 a low likelihood, as shown in the sample below.
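To turn those probability tiles into a usable building mask, you simply threshold them. A minimal sketch with numpy, using an illustrative 2 x 2 tile and an assumed cutoff of 0.5 (tune this per scene):

```python
import numpy as np

# Hypothetical probability tile as returned by the detector: floats in [0, 1]
prob = np.array([[0.95, 0.10],
                 [0.60, 0.02]])

# Pixels at or above the chosen cutoff become building pixels (1), the rest 0
building_mask = (prob >= 0.5).astype(np.uint8)
```

The resulting binary raster is what the refinement steps below operate on.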
Above Image: Results of Spacept Building Detector algorithm (transparency of 50%), overlaying Pleiades VHR data © CNES 2021.
Despite the quality of these results, there are some straightforward steps to improve them further using just the UP42 platform and open-source software like QGIS. One suggestion is to run an NDVI algorithm on the same dataset, as shown in the figure below, and then use QGIS to reclassify the NDVI raster against a threshold, creating a binary raster of vegetation and non-vegetation. Or, to make this process even simpler, run UP42's NDVI Threshold algorithm directly from the console. For this example, the threshold was defined as:
- NDVI > 0.36 is vegetation
- NDVI ≤ 0.36 is non-vegetated area
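The same threshold rule can be sketched in a few lines of numpy. The band values below are toy numbers for illustration; in practice you would read the red and near-infrared bands from the Pleiades scene.

```python
import numpy as np

def ndvi_mask(red, nir, threshold=0.36):
    """Return a binary vegetation mask: 1 = vegetation (NDVI > threshold), 0 = non-vegetation."""
    red = red.astype(np.float64)
    nir = nir.astype(np.float64)
    denom = nir + red
    # NDVI = (NIR - Red) / (NIR + Red); guard against division by zero
    ndvi = np.divide(nir - red, denom, out=np.zeros_like(denom), where=denom != 0)
    return (ndvi > threshold).astype(np.uint8)

# Toy 2 x 2 example: left column vegetated, right column bare
red = np.array([[200, 800], [150, 900]])
nir = np.array([[900, 850], [950, 880]])
veg = ndvi_mask(red, nir)
```

This reproduces the reclassification step you would otherwise do interactively in QGIS.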
Above Image: Results of NDVI calculation on Pleiades data of HCMC, Vietnam with vegetated areas depicted in red.
Next, mosaic the building detection raster tiles, reclassifying values as you see fit for buildings. With these binary rasters, we can convert both the building area and the vegetated area into polygon shapefiles, and with these shapefiles, you can easily erase or exclude all vegetated areas from the building areas to refine your results.
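In QGIS, excluding the vegetated areas is the Erase (difference) tool. The same geometric step can be sketched in Python with shapely (assuming shapely is installed); the two rectangles below stand in for a detected building footprint and an overlapping vegetation polygon:

```python
from shapely.geometry import box

# Hypothetical polygons: a detected footprint and trees overhanging its east edge
building = box(0, 0, 10, 10)    # 10 x 10 footprint, area 100
vegetation = box(8, 0, 12, 10)  # vegetated strip overlapping x = 8..10

# "Erase" the vegetated area from the building polygon
refined = building.difference(vegetation)
```

The refined polygon keeps only the non-vegetated 80 units of the original footprint, which is exactly the effect shown in the slider comparison below.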
Above Image: Slider of two images comparing the results of the refined building polygons and the vegetation polygons extracted from the NDVI threshold calculation.
So you may be wondering: now that you know how to run off-the-shelf building detection easily, what can you do to derive deeper insights? Here are some ideas to start you off:
This building detection can serve as a building mask, allowing you to assess environmental parameters.
- With the same VHR data used to derive the building detection, you can run NDVI, excluding the building areas, to determine what percent of the urban environment is green space or could be converted to green space.
- Leveraging OpenStreetMap (OSM) data, you can convert street polylines into polygons of blocks to conduct a quick block-level analysis, such as urban density per block or green area ratios per block.
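The block-level idea above can be sketched with shapely: street centerlines are noded and polygonized into blocks, and a vegetation polygon is intersected with each block to get a green-area ratio. All geometries here are toy examples, not real OSM data:

```python
from shapely.geometry import LineString, box
from shapely.ops import polygonize, unary_union

# Hypothetical street centerlines forming two 10 x 10 blocks side by side
streets = [
    LineString([(0, 0), (20, 0)]),
    LineString([(0, 10), (20, 10)]),
    LineString([(0, 0), (0, 10)]),
    LineString([(10, 0), (10, 10)]),
    LineString([(20, 0), (20, 10)]),
]

# unary_union splits lines at intersections (noding), which polygonize requires
noded = unary_union(streets)
blocks = list(polygonize(noded))

# Hypothetical vegetation polygon covering half of one block
veg = box(0, 0, 5, 10)
green_ratios = [block.intersection(veg).area / block.area for block in blocks]
```

With real data, the same loop gives you per-block green-area ratios or, swapping in building polygons, per-block urban density.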
The inverse of the above point is also true. As shown in the previous section, you can convert the building raster to polygons and then derive more parameters on the buildings and potentially yield socio-economic estimates.
- Pairing refined building results with other parameters, such as the aforementioned block-level analysis, as well as geometric information on the polygons, such as shape area and/or perimeter-area ratios, you could derive true building identification results, also known as urban morphologies.
Helpful hint: Be careful with multipart polygons, making sure to convert them to single-part polygons so that size and shape information is accurate.
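This hint is easy to demonstrate with shapely: a multipart feature reports the combined area of all its parts, so size and shape metrics are misleading until it is split. The two rooftops below are hypothetical:

```python
from shapely.geometry import MultiPolygon, box

# Hypothetical multipart result: two separate rooftops extracted as one feature
multipart = MultiPolygon([box(0, 0, 10, 10), box(50, 0, 60, 12)])
combined_area = multipart.area  # 220 for the whole feature: misleading as a "building size"

# Split into single parts before computing size and shape metrics
parts = list(multipart.geoms)
metrics = [(p.area, p.length / p.area) for p in parts]  # (area, perimeter-area ratio)
```

In QGIS the equivalent is the Multipart to Singleparts tool; in geopandas it is `GeoDataFrame.explode()`.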
- Depending on your level of expertise, you could conduct everything from simple unsupervised classification, like K-means clustering, to various supervised methods, such as Support Vector Machines (SVM). With your building mask, you can classify only the rooftop pixels to yield an initial basis for building identification and further refine building polygons.
Helpful hint: SVM algorithms are effective in high-dimensional spaces, so you could create additional derivatives from the Pleiades data such as Principal Component Analysis (PCA) to train your model.
Another helpful hint: Leveraging free OSM street polylines can help you further refine your building detection results to exclude pixels of road from your analysis.
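As a rough sketch of the supervised route, here is an SVM trained on a handful of per-pixel band values with scikit-learn. The band values and rooftop labels are entirely made up for illustration; in practice you would sample pixels inside the building mask and label them manually (and could append PCA components as extra features, per the hint above):

```python
import numpy as np
from sklearn.svm import SVC

# Hypothetical training samples: per-pixel band values (e.g. R, G, B, NIR) drawn
# only from pixels inside the building mask, with manually labeled rooftop types
X_train = np.array([
    [180, 90, 60, 70],   # metal sheet (label 0)
    [175, 95, 65, 75],
    [120, 60, 50, 40],   # tile (label 1)
    [115, 55, 45, 35],
])
y_train = np.array([0, 0, 1, 1])

clf = SVC(kernel="linear").fit(X_train, y_train)

# Classify new rooftop pixels from the masked scene
pred = clf.predict(np.array([[178, 92, 62, 72],
                             [118, 58, 48, 38]]))
```

Run per pixel over the masked scene, this yields a rooftop-type raster like the one shown in the SVM example image further below.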
Above Image: Slider of two images comparing the results of the refined building polygons with one multipart polygon (a discontinuous polygon, where the area was calculated as > 2000 sq m) selected and the results of the same selection after a multipart to singlepart function was applied (one continuous polygon, where the area was calculated as between 90 and 200 sq m).
Above Image: Slider of two images comparing the results of two PCAs, where for RGB, R = PCA1, G = PCA2, and B = PCA3. The first (left) image is a PCA without any corrections, adjustments, or masking of the data, and the second (right) image is the result of a PCA using only pixels from the areas identified as buildings.
Above Image: Example results of what a quick SVM classification of rooftop types (or building material) may look like, after using refined building detection results.
Automating building extraction can be a highly cumbersome task, often with less than desirable results. However, by kick-starting your analysis with off-the-shelf deep learning analytics, you set yourself up for success and empower yourself to derive deeper insights. Moreover, these building extraction results are vital for risk mitigation and regional planning, especially in rapidly growing cities like HCMC, where cadaster data is lacking or outdated. When it comes to building data, keeping this data relevant translates to more accurate projections of risks and an increased ability to face the challenges of urbanization and climate change. This building information is critical for flood management, not only for identifying vulnerable infrastructure but also for drilling down into socio-economic factors like estimated population or building cost, based on derived attributes like building size and rooftop material.
Building detection algorithms, powered by deep learning and the scalable UP42 infrastructure, are a great way to get situational awareness of the urban landscape from a distance. Even more components can be derived from geospatial data that can help indicate urban conditions like ground elevation and building heights from DTMs and DSMs, subsidence information from SAR and PSI, and even weather data. All of these not only contribute to a comprehensive picture of the urban landscape, but are readily available through the UP42 platform. Sign up to get started with the platform today and unlock 100 EUR worth of credits that you can use to replicate this case study.
To learn more about this particular case study, visit Mapscaping's blog to read more or listen to the Urban Estimation episode. Try out Spacept's capabilities on the UP42 marketplace, as well as the many previously mentioned data sources like OSM and Pleiades.