Sidewalk Extraction from Big Visual Data
Sidewalk extraction using aerial and street view images
A reliable, up-to-date, and spatially accurate sidewalk dataset is vital for identifying where the urban environment can be improved to enhance multi-modal accessibility, social cohesion, and residents’ physical activity. This project develops a new spatial procedure to extract sidewalks by integrating detection results from aerial and street view imagery. We first train neural networks to extract sidewalks from aerial images, and then use pre-trained models to restore occluded and missing sidewalks from street view images.
By combining the results from both data sources, a complete sidewalk network can be produced. Our case study covers four counties in the U.S., where both precision and recall reach about 0.9. The street view imagery restores occluded sidewalks and markedly improves the network’s connectivity by linking 20% of the dangles (dangling segment endpoints).
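The dangle-linking idea can be sketched as follows. This is a minimal illustration, not the project’s implementation: the polyline format and the 5 m search tolerance are assumptions.

```python
from math import hypot

def link_dangles(polylines, tolerance=5.0):
    """Return connector segments joining dangling endpoints of different
    polylines that lie within `tolerance` of each other."""
    endpoints = []
    for i, line in enumerate(polylines):
        endpoints.append((i, line[0]))   # start vertex
        endpoints.append((i, line[-1]))  # end vertex
    connectors = []
    for a in range(len(endpoints)):
        for b in range(a + 1, len(endpoints)):
            (ia, pa), (ib, pb) = endpoints[a], endpoints[b]
            if ia == ib:
                continue  # never join a line back to itself
            if 0 < hypot(pa[0] - pb[0], pa[1] - pb[1]) <= tolerance:
                connectors.append((pa, pb))
    return connectors

# Two sidewalk segments separated by a small gap (e.g., a driveway crossing)
segments = [[(0, 0), (10, 0)], [(13, 0), (25, 0)]]
print(link_dangles(segments))  # [((10, 0), (13, 0))]
```

In practice a spatial index would replace the pairwise scan, and the connectors would be merged back into the network geometry.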
Specifically, this study focuses on developing an automatic method to extract sidewalks from aerial and street view imagery. We first extract sidewalks from aerial images, then restore the occluded segments from street view images. Aerial imagery is the primary data source for sidewalk extraction because its data volume is relatively small and it covers the entire area of interest. Street view imagery discriminates sidewalks more reliably, but it involves a larger data volume and more computation, and a sophisticated workflow is needed to restore the geographic location of detected objects. In addition, full coverage of an area of interest is difficult to obtain, especially on private properties that street view mapping vehicles cannot access.
Therefore, we treat street view images as supplementary data and use them to restore sidewalks that are occluded or otherwise missing in aerial images. The extracted sidewalks can be represented as polygons or polylines, depending on the resolution of the aerial imagery: if the resolution is no better than 0.3 m, sidewalks appear as linear objects in aerial images. Since we use 0.3 m aerial images in the case study, the extracted sidewalks are converted to polylines.
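As an illustration of the polygon-to-polyline conversion, a narrow sidewalk polygon digitized as two parallel edge chains can be collapsed to a centerline by averaging paired vertices. This is a simplified sketch, not the authors’ method: real footprints would need a proper medial-axis/skeleton algorithm, and the one-to-one vertex pairing here is an assumption.

```python
RESOLUTION_M = 0.3  # aerial image resolution used in the case study

def polygon_to_centerline(left_edge, right_edge):
    """Approximate a narrow polygon's centerline by averaging paired
    vertices of its two long edges (assumes equal vertex counts)."""
    assert len(left_edge) == len(right_edge)
    return [((lx + rx) / 2, (ly + ry) / 2)
            for (lx, ly), (rx, ry) in zip(left_edge, right_edge)]

# A 1.5 m-wide sidewalk polygon digitized as two parallel chains
left = [(0.0, 0.0), (10.0, 0.0), (20.0, 1.0)]
right = [(0.0, 1.5), (10.0, 1.5), (20.0, 2.5)]
print(polygon_to_centerline(left, right))
# [(0.0, 0.75), (10.0, 0.75), (20.0, 1.75)]
```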
Publication: Ning, H., Ye, X., Chen, Z., Liu, T., & Cao, T. (2022). Sidewalk extraction using aerial and street view images. Environment and Planning B: Urban Analytics and City Science, 49(1), 7-22.
Converting street view images to land cover maps: sidewalk network extraction for wheelchair users
In this study, we present a method to convert street view images into measurable land cover maps using their associated depthmap data. The proposed method can autonomously extract and measure land cover objects over large areas covered by a mosaic of street view images. In the case study, we demonstrate the use of land cover maps derived from Google Street View images to extract sidewalk features and measure sidewalk widths for wheelchair users. Sidewalk slopes were also extracted from the metadata of street view images.
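The core projection step can be sketched as follows. For an equirectangular panorama, each pixel column maps to an azimuth and each row to an elevation angle; with the per-pixel depth, the ground offset from the camera follows from spherical coordinates. This is a hedged illustration, not the paper’s pipeline: the panorama dimensions, camera heading, and flat-ground assumption are all hypothetical.

```python
from math import sin, cos, radians

def pixel_to_ground(col, row, width, height, depth, heading_deg=0.0):
    """Return the (east, north) offset in metres of a panorama pixel,
    given its depthmap value and the camera heading."""
    azimuth = radians(heading_deg + 360.0 * col / width)  # clockwise from north
    elevation = radians(90.0 - 180.0 * row / height)      # +90 top, -90 bottom
    horizontal = depth * cos(elevation)                   # distance along the ground
    return (horizontal * sin(azimuth), horizontal * cos(azimuth))

# A pixel on the horizon, due east of the camera, 5 m away
print(pixel_to_ground(col=256, row=512, width=1024, height=1024, depth=5.0))
```

Adding the camera’s geographic position to these offsets georeferences every classified pixel, which is what turns a per-image segmentation into a mosaic land cover map.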
In the Washington, D.C. (U.S.) study area, our method extracted a sidewalk network 2,500 km in length with a precision of 0.8662 and a recall of 0.8525. For sidewalks 1–2 m wide, the mean width error was 0.24 m, and the mean slope error was 0.676°. These results demonstrate that land cover maps converted from street view images can be used for metric mapping. The extracted sidewalk network can serve as a valuable inventory for urban planners seeking to promote equitable walkability for mobility-disabled users.
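Once pixels are projected into a metric land cover map, sidewalk width can be read as the run of sidewalk-class cells along a profile perpendicular to the centerline, times the cell size. The sketch below shows the idea only; the class code and the 5 cm cell size are assumptions, not values from the study.

```python
SIDEWALK = 1        # assumed land cover class code for sidewalk
CELL_SIZE_M = 0.05  # assumed 5 cm ground sampling of the land cover map

def profile_width(profile, class_id=SIDEWALK, cell_size=CELL_SIZE_M):
    """Width in metres of the longest run of sidewalk cells in a
    cross-section profile of land cover class codes."""
    best = run = 0
    for c in profile:
        run = run + 1 if c == class_id else 0
        best = max(best, run)
    return best * cell_size

# Cross-section: road (0), then 1.5 m of sidewalk (30 cells), then grass (2)
profile = [0] * 10 + [1] * 30 + [2] * 5
print(profile_width(profile))  # 1.5
```

Taking the longest contiguous run makes the measurement robust to isolated misclassified cells elsewhere in the profile.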
Publication under review: Ning H., Li Z., Wang C., Hodgson M., Huang X., Li X., Using deep learning to convert street view images to land cover maps for metric mapping: a case study on sidewalk network extraction for the wheelchair users, Computers, Environment and Urban Systems