Accurate prediction of the depth of the subsurface bearing stratum is essential for earthquake-resistant construction.
In earthquake-prone regions like Tokyo, where the risk of soil liquefaction is high, knowing this depth helps engineers design safer buildings and prevent soil-related disasters.
Conventional methods such as the Standard Penetration Test (SPT) have long been used for this purpose. Although proven reliable, these methods are time-consuming, labour-intensive, and costly.
Researchers from the Shibaura Institute of Technology (SIT), Japan, have now demonstrated that machine learning (ML) can offer a powerful, cost-effective alternative. Their study shows that ML not only improves depth prediction accuracy but also enables scalable and efficient disaster risk assessments in urban areas.
Applying ML to geotechnical engineering
The research team used a large dataset consisting of 942 geological surveys and SPT records from the Tokyo metropolitan area. They applied three popular ML algorithms to predict the bearing layer depth: random forest (RF), artificial neural network (ANN), and support vector machine (SVM).
To evaluate the effectiveness of these models, two scenarios were tested. The first (Case-1) used only geographical data such as latitude, longitude, and elevation. The second (Case-2) added detailed stratigraphic classification data, which includes the type and structure of underground soil layers.
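To make the workflow concrete, a minimal sketch of this kind of comparison using scikit-learn might look as follows. The file name, column names (such as "stratum_class"), and model settings are illustrative assumptions, not the authors' exact pipeline.

```python
# Illustrative sketch (not the study's code): compare the two feature sets
# (Case-1 vs Case-2) across RF, ANN, and SVM regressors with scikit-learn.
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.neural_network import MLPRegressor
from sklearn.svm import SVR
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

# Hypothetical table of the 942 boring/SPT records; the stratigraphic
# classification is assumed to be pre-encoded as a numeric code.
df = pd.read_csv("tokyo_borings.csv")

feature_sets = {
    "Case-1": ["latitude", "longitude", "elevation"],
    "Case-2": ["latitude", "longitude", "elevation", "stratum_class"],
}
models = {
    "RF": RandomForestRegressor(n_estimators=300, random_state=0),
    "ANN": MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0),
    "SVM": SVR(kernel="rbf", C=10.0),
}

for case, cols in feature_sets.items():
    X_train, X_test, y_train, y_test = train_test_split(
        df[cols], df["bearing_layer_depth_m"], test_size=0.2, random_state=0
    )
    for name, model in models.items():
        model.fit(X_train, y_train)
        mae = mean_absolute_error(y_test, model.predict(X_test))
        print(f"{case} {name}: MAE = {mae:.2f} m")
```

A held-out test split and mean absolute error, as in this sketch, are a standard way to compare how much the added stratigraphic features help each model.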
Among the three ML algorithms, the random forest model consistently delivered the most accurate and reliable results. In Case-2, which included the more detailed data, the RF model achieved a mean absolute error of just 0.86 metres, significantly outperforming the other methods and improving on its own Case-1 result of 1.26 metres.
This improvement underscores the importance of including stratigraphic data in ML models for geotechnical applications. It not only enhances prediction accuracy but also increases robustness to data noise, an important consideration when working with real-world datasets.
Impact of data density on accuracy
Taking their research further, the team investigated how the density of spatial data points affects prediction accuracy. They created six datasets with varying densities, ranging from 0.5 to 3.0 data points per square kilometre. Results showed that the prediction accuracy of the RF model improved with higher spatial data density. This indicates that denser datasets, when available, can significantly enhance the reliability of ML predictions.
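One way to probe this effect, sketched below under assumptions rather than the study's exact procedure, is to thin the point dataset to a target density, retrain the RF model, and track the error at each level. The study-area size and column names here are illustrative placeholders.

```python
# Illustrative sketch: subsample borings to target spatial densities and
# re-check random-forest accuracy with cross-validated MAE at each level.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

df = pd.read_csv("tokyo_borings.csv")  # hypothetical file of survey points
AREA_KM2 = 300.0                       # assumed study-area size, for illustration

rng = np.random.default_rng(0)
for density in [0.5, 1.0, 1.5, 2.0, 2.5, 3.0]:   # data points per square kilometre
    n = min(len(df), int(density * AREA_KM2))
    sample = df.iloc[rng.choice(len(df), size=n, replace=False)]
    X = sample[["latitude", "longitude", "elevation", "stratum_class"]]
    y = sample["bearing_layer_depth_m"]
    scores = cross_val_score(
        RandomForestRegressor(n_estimators=300, random_state=0),
        X, y, scoring="neg_mean_absolute_error", cv=5,
    )
    print(f"{density:.1f} pts/km^2 ({n} borings): MAE = {-scores.mean():.2f} m")
```

Plotting the resulting error against density would show the trend the team reports: the sparser the coverage, the less reliable the prediction.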
ML-based prediction models offer a practical solution for urban planners and civil engineers working in seismically active regions. Unlike traditional SPT surveys, ML models require less time and fewer resources, making them ideal for large-scale applications.
As computing technologies evolve and more geological data becomes accessible, ML can be integrated into real-time systems for dynamic infrastructure planning. This approach holds promise for smarter, safer urban development, especially in earthquake-prone cities like Tokyo. From optimising the locations of buildings and bridges to planning underground transportation networks, ML offers a versatile tool for resilient infrastructure.
This study marks a significant step forward in geotechnical engineering. By combining machine learning with existing geological data, stakeholders can reduce costs, enhance safety, and streamline the planning process.