
Downscaling shallow water simulations using artificial neural networks and boosted trees

  • *Corresponding author: Antoine Rousseau
  • We present the application of two statistical artificial intelligence tools for multi-scale shallow water simulations. Artificial neural networks (ANNs) and boosted trees (BTs) are used to model the relationship between low-resolution (LR) and high-resolution (HR) information derived from the simulations provided in the learning phase. The two statistical models are analyzed (and compared) through hyper-parameters such as the number of epochs and the network structure for ANNs, and the learning rate, tree depth and tree number for BTs. This analysis is carried out through four numerical experiments, in which the input datasets for the learning, validation and test phases are varied via the boundary conditions of the numerical flow simulation. A minimal sketch of this LR-to-HR learning setup is given after the abstract.

    The performance of the ANNs is remarkably consistent, regardless of the choice made for the training/validation/testing sets. The performance improves with the number of epochs and the number of neurons. For a given number of neurons, a single-layer structure performs better than multi-layer structures. BTs perform significantly better than ANNs in two experiments, with an error 10 to 100 times smaller, albeit at a computational cost 5 to 10 times larger. However, when the validation datasets differ from the training datasets, the performance of BTs is strongly degraded, with a modelling error more than one order of magnitude larger than that of ANNs.

    Used in conjunction with upscaled flood models such as porosity models, these techniques appear as a promising operational alternative to direct flood hazard assessment from HR flow simulations.

    Mathematics Subject Classification: Primary: 86A05, 86A32; Secondary: 68T07.
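
    As a minimal illustration of the LR-to-HR learning setup described above (a sketch, not the authors' code: the array shapes, the block-averaging used to build the LR inputs, and the use of scikit-learn are all assumptions):

    import numpy as np
    from sklearn.neural_network import MLPRegressor

    ur = 20                                    # upscaling ratio (hypothetical choice)
    rng = np.random.default_rng(0)
    h_hr = rng.random((551, 800))              # placeholder HR water-depth snapshots:
                                               # 800 HR cells (L = 100 m, dx_HR = 0.125 m)

    # LR input: block-average each group of `ur` HR cells (assumed upscaling operator)
    h_lr = h_hr.reshape(h_hr.shape[0], -1, ur).mean(axis=2)

    # Regression: LR profile -> HR cells inside one LR cell (here, the first one)
    model = MLPRegressor(hidden_layer_sizes=(100,), batch_size=32, max_iter=500)
    model.fit(h_lr, h_hr[:, :ur])
    h_hr_hat = model.predict(h_lr)             # downscaled (HR) reconstruction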

  • Figure 1.  Upscaling and downscaling. Definition sketch

    Figure 2.  Definition sketch for shallow water model structure and variables. The red and blue lines represent the (steady) bottom elevation and the (unsteady) free surface elevation, respectively

    Figure 3.  Definition sketch for the experiment plan. Top: representation in the $ (x, h) $ plane for a given time $ t $. Bottom: water depth contour lines in the $ (x, t) $ plane. Owing to solution self-similarity, the speeds of Points A and B are constant, hence the straight $ h $-contour lines in the $ (x, t) $ plane (bottom); the governing equations and characteristic speeds are recalled after the figure list

    Figure 4.  Experiment 1 - ANN and BT best Mean Squared Error (MSE) as a function of the Upscaling Ratio (UR) for the various reconstructed variables

    Figure 5.  Experiment 2 - ANN and BT best Mean Squared Error (MSE) as a function of the Upscaling Ratio (UR) for the various reconstructed variables

    Figure 6.  Experiment 3 - ANN and BT best Mean Squared Error (MSE) as a function of the Upscaling Ratio (UR) for the various reconstructed variables

    Figure 7.  Experiment 4 - ANN and BT best Mean Squared Error (MSE) as a function of the Upscaling Ratio (UR) for the various reconstructed variables
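
    For reference (Figures 2 and 3), the variables $ h $ (water depth), $ q $ (unit discharge) and the bottom elevation $ z_b $ obey the one-dimensional shallow water (Saint-Venant [11]) equations, recalled here in their standard conservative form:

    \begin{equation} \partial_t h + \partial_x q = 0, \qquad \partial_t q + \partial_x \left( \frac{q^2}{h} + \frac{g h^2}{2} \right) = - g h \, \partial_x z_b, \end{equation}

    with characteristic speeds $ \lambda_\pm = u \pm \sqrt{gh} $, where $ u = q/h $. For a self-similar Riemann solution over a flat bottom, each wave travels at a constant speed, which is why the $ h $-contour lines in Figure 3 are straight in the $ (x, t) $ plane.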

    Table 1.  BVPs: model parameters. The associated Upscaling Ratio (UR) is defined after the table

    Symbol Meaning Numerical value
    $ L $ Domain length 100 m
    $ \Delta x_{HR} $ HR cell size 0.125 m
    $ \Delta x_{LR} $ LR cell size 0.625 m, 5 m, 10 m
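
    Assuming the Upscaling Ratio (UR) reported in the figures and tables below is the LR-to-HR cell-size ratio,

    \begin{equation} \mathrm{UR} = \frac{\Delta x_{LR}}{\Delta x_{HR}}, \qquad \text{e.g.} \quad \frac{0.625 \ \text{m}}{0.125 \ \text{m}} = 5, \qquad \frac{10 \ \text{m}}{0.125 \ \text{m}} = 80. \end{equation}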

    Table 2.  ANN hyperparameters. The number of epochs is the number of times the entire data set is used in the training process. An illustrative configuration sketch follows the table

    Hyperparameters Numerical values
    Number of epochs 50, 150 or 500
    Batch size 32
    Number of neurons (1-layer configuration) 100 or 500
    Number of neurons (2-layer configuration) (50, 50) or (100, 100)
    Number of neurons (3-layer configuration) (50, 50, 50) or (75, 75, 75)
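
    A minimal sketch of the ANN configurations of Table 2. The paper does not name its software; scikit-learn's MLPRegressor is assumed here for illustration (with its stochastic 'adam' solver, max_iter counts epochs):

    from itertools import product
    from sklearn.neural_network import MLPRegressor

    epochs = [50, 150, 500]
    layer_structures = [
        (100,), (500,),              # 1-layer configurations
        (50, 50), (100, 100),        # 2-layer configurations
        (50, 50, 50), (75, 75, 75),  # 3-layer configurations
    ]
    ann_models = [
        MLPRegressor(hidden_layer_sizes=layers, solver="adam",
                     batch_size=32, max_iter=n_epochs)
        for n_epochs, layers in product(epochs, layer_structures)
    ]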

    Table 3.  BT hyperparameters. An illustrative configuration sketch follows the table

    Hyperparameter Numerical values
    Learning rate 0.1
    Maximum depth 2 or 4
    Minimum samples per leaf 1 sample, 2% of set
    Number of trees 7, 20 or 50
    Subsample share used for each tree 50%
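
    A minimal sketch of the BT configurations of Table 3, assuming a gradient-boosting implementation such as scikit-learn's GradientBoostingRegressor, whose min_samples_leaf accepts either a sample count or a fraction of the training set (matching the "1 sample, 2% of set" options):

    from itertools import product
    from sklearn.ensemble import GradientBoostingRegressor

    bt_models = [
        GradientBoostingRegressor(learning_rate=0.1,     # Table 3 value
                                  max_depth=depth,
                                  min_samples_leaf=leaf,
                                  n_estimators=n_trees,
                                  subsample=0.5)         # 50% of samples per tree
        for depth, leaf, n_trees in product([2, 4], [1, 0.02], [7, 20, 50])
    ]

    Note that a gradient-boosted tree predicts a scalar, so reconstructing a vector of HR cells would require one model per HR cell (or a wrapper such as sklearn.multioutput.MultiOutputRegressor); whether the authors proceed this way is not stated here.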

    Table 4.  Experiment plan. $ h_0 = 1 $ m for all simulations

    Experiment Training Validation Test
    $ h_1 $ (m) / sample size $ h_1 $ (m) / sample size $ h_1 $ (m) / sample size
    1 $ \{0.7, 0.9\} $ 880 $ \{0.7, 0.9\} $ 222 $ 0.8 $ 551
    2 $ \{0.7, 0.9\} $ 1102 $ \{0.75, 0.85\} $ 1102 $ 0.8 $ 551
    3 $ \{0.7, 0.8\} $ 880 $ \{0.7, 0.8\} $ 222 $ 0.9 $ 551
    4 $ \{0.7, 0.85\} $ 1102 $ \{0.75, 0.8\} $ 1102 $ 0.9 $ 551
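
    The tables below report mean squared errors (MSE) between reconstructed and reference HR fields. Assuming the usual definition (consistent with the m$ ^2 $ unit when the reconstructed variable is $ h $),

    \begin{equation} \mathrm{MSE} = \frac{1}{N} \sum_{i = 1}^{N} \left( \hat{h}_i - h_i \right)^2, \end{equation}

    where $ \hat{h}_i $ is the reconstructed HR value, $ h_i $ the reference HR simulation value and $ N $ the number of HR cells in the test set.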

    Table 5.  Experiment 1 - ANN best performance. UR: Upscaling Ratio. CPU time format: $ (T_1, T_2) $, with $ T_1 $ the training time, $ T_2 $ the evaluation time. Hyperparameter format (Param): $ [a, [b_i]] $, with $ a $ the number of epochs, $ b_i $ the number of neurons in Layer $ i $

    UR Best performance Reconstructed variable
    $ (h, q) $ $ \sqrt{h} $ $ h $
    5 MSE (m$ ^2 $) $ 1.2 \times 10^{-5} $ $ 4.4\times 10^{-5} $ $ 1.7\times 10^{-5} $
    CPU time (s) (170, 0.38) (83, 0.23) (63, 0.24)
    Param [500, [100]] [500, [100]] [500, [100]]
    20 MSE (m$ ^2 $) $ 1.7\times 10^{-5} $ $ 3.9\times 10^{-5} $ $ 1.3\times 10^{-5} $
    CPU time (s) (140, 0.15) (83, 0.23) (66, 0.27)
    Param [500, [100,100]] [500, [100]] [500, [100]]
    80 MSE (m$ ^2 $) $ 7.4\times 10^{-5} $ $ 7.5\times 10^{-5} $ $ 5.5\times 10^{-5} $
    CPU time (s) (57, 0.14) (130, 0.067) (69, 0.32)
    Param [500, [75, 75, 75]] [500, [500]] [500, [100,100]]

    Table 6.  Experiment 1 - BT best performance. CPU time format: $ (T_1, T_2) $, with $ T_1 $ the training time, $ T_2 $ the evaluation time. Hyperparameter format (Param): $ [a, b, c] $, with $ a $ the maximum depth, $ b $ the minimum fraction of samples per leaf, $ c $ the number of trees

    UR Best performance Reconstructed variable
    $ (h, q) $ $ \sqrt{h} $ $ h $
    5 MSE (m$ ^2 $) $ 2.1\times 10^{-6} $ $ 3.9\times 10^{-7} $ $ 4.3\times 10^{-7} $
    CPU time (s) ($ 1.8\times 10^{3} $, $ 8.8\times 10^{-1} $) ($ 3.8\times 10^{2} $, $ 3.6\times 10^{-1} $) ($ 4.2\times 10^{2} $, $ 3.7\times 10^{-1} $)
    Param [4, 1.0, 50] [4, 1.0, 50] [4, 1.0, 50]
    20 MSE (m$ ^2 $) $ 4\times 10^{-6} $ $ 1\times 10^{-6} $ $ 8\times 10^{-7} $
    CPU time (s) ($ 4.4\times 10^{2} $, $ 6.7\times 10^{-1} $) ($ 1.3\times 10^{2} $, $ 3.1\times 10^{-1} $) ($ 1.3\times 10^{2} $, $ 3.1\times 10^{-1} $)
    Param [4, 1.0, 50] [4, 1.0, 50] [4, 1.0, 50]
    80 MSE (m$ ^2 $) $ 1.1\times 10^{-5} $ $ 1.1\times 10^{-5} $ $ 9.4\times 10^{-6} $
    CPU time (s) ($ 1.5\times 10^{2} $, $ 6.3\times 10^{-1} $) ($ 3.8\times 10^{1} $, $ 2.9\times 10^{-1} $) ($ 4.5\times 10^{1} $, $ 3.8\times 10^{-1} $)
    Param [4, 1.0, 50] [4, 0.02, 50] [2, 1.0, 50]

    Table 7.  Experiment 1 - ANN performance for various hyperparameter sets. Reconstructed variable: $ h $. UR: 20

    Hyperparameter Value MSE (m$ ^2 $) Training time (s) Evaluation time (ms)
    Epochs 50 $ 3.0\times 10^{-4} $ 10.9 0.23
    150 $ 7.2\times 10^{-5} $ 24.7 0.24
    500 $ 3.0\times 10^{-5} $ 80.6 0.16
    Layer structure [100] $ 1.1\times 10^{-4} $ 31.3 0.20
    [500] $ 6.8\times 10^{-5} $ 64.1 0.18
    [50, 50] $ 2.0\times 10^{-4} $ 38.7 0.25
    [100,100] $ 1.1\times 10^{-4} $ 32.9 0.16
    [50, 50, 50] $ 2.0\times 10^{-4} $ 38.2 0.23
    [75, 75, 75] $ 1.1\times 10^{-4} $ 27.1 0.22

    Table 8.  Experiment 1 - BT performance for various hyperparameter sets. Reconstructed variable: $ h $. UR: 20. A sketch of the underlying sweep and timing procedure follows the table

    Hyperparameter Value MSE (m$ ^2 $) Training time (s) Evaluation time (s)
    Maximum depth 2 $ 4.9\times 10^{-4} $ 47.4 0.33
    4 $ 4.7\times 10^{-4} $ 60.5 0.30
    Minimum samples per leaf 1 $ 4.7\times 10^{-4} $ 58.8 0.33
    2% $ 4.8\times 10^{-4} $ 49.2 0.31
    Number of trees 7 $ 1.3\times 10^{-3} $ 17.3 0.30
    20 $ 9.2\times 10^{-5} $ 43.3 0.31
    50 $ 2.6\times 10^{-6} $ 101 0.35
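
    The "best performance" entries of Tables 5-8 pair the lowest MSE with training ($ T_1 $) and evaluation ($ T_2 $) CPU times. A minimal sketch of such a sweep (the selection-on-validation-MSE protocol and the timing method are assumptions):

    import time
    from sklearn.metrics import mean_squared_error

    def sweep(models, X_train, y_train, X_val, y_val):
        best = None
        for model in models:
            t0 = time.perf_counter()
            model.fit(X_train, y_train)
            t1 = time.perf_counter()                       # T1 = t1 - t0 (training)
            mse = mean_squared_error(y_val, model.predict(X_val))
            t2 = time.perf_counter()                       # T2 = t2 - t1 (evaluation)
            if best is None or mse < best[0]:
                best = (mse, t1 - t0, t2 - t1, model)
        return best  # (MSE, T1, T2, fitted model)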

    Table 9.  Experiment 2 - ANN best performance. UR: Upscaling Ratio. CPU time format: $ (T_1, T_2) $, with $ T_1 $ the training time, $ T_2 $ the evaluation time. Hyperparameter format (Param): $ [a, [b_i]] $, with $ a $ the number of epochs, $ b_i $ the number of neurons in Layer $ i $

    UR Best performance Reconstructed variable
    $ (h, q) $ $ \sqrt{h} $ $ h $
    5 MSE (m$ ^2 $) $ 5.7\times 10^{-5} $ $ 1.8\times 10^{-5} $ $ 3.8\times 10^{-6} $
    CPU time (s) ($ 3.6\times 10^{1} $, $ 4.0\times 10^{-1} $) ($ 1.8\times 10^{2} $, $ 1.3\times 10^{-1} $) ($ 1.8\times 10^{2} $, $ 1.3\times 10^{-1} $)
    Param [150, [100]] [500, [500]] [500, [500]]
    20 MSE (m$ ^2 $) $ 2\times 10^{-5} $ $ 1.7\times 10^{-5} $ $ 6.7\times 10^{-6} $
    CPU time (s) ($ 1.1\times 10^{2} $, $ 4.4\times 10^{-1} $) ($ 6.9\times 10^{1} $, $ 4.9\times 10^{-1} $) ($ 7.9\times 10^{1} $, $ 2.7\times 10^{-1} $)
    Param [500, [100]] [500, [100]] [500, [100]]
    80 MSE (m$ ^2 $) $ 1.8\times 10^{-4} $ $ 6.7\times 10^{-5} $ $ 4.1\times 10^{-5} $
    CPU time (s) ($ 1.4\times 10^{2} $, $ 3.9\times 10^{-1} $) ($ 6.6\times 10^{1} $, $ 1.9\times 10^{-1} $) ($ 6.4\times 10^{1} $, $ 1.7\times 10^{-1} $)
    Param [500, [100]] [500, [75, 75, 75]] [500, [75, 75, 75]]

    Table 10.  Experiment 2 - BT best performance. CPU time format: $ (T_1, T_2) $, with $ T_1 $ the training time, $ T_2 $ the evaluation time. Hyperparameter format (Param): $ [a, b, c] $, with $ a $ the maximum depth, $ b $ the minimum fraction of samples per leaf, $ c $ the number of trees

    UR Best performance Reconstructed variable
    $ (h, q) $ $ \sqrt{h} $ $ h $
    5 MSE (m$ ^2 $) $ 6.5\times 10^{-3} $ $ 1.6\times 10^{-3} $ $ 1.6\times 10^{-3} $
    CPU time (s) ($ 2.3\times 10^{3} $, 3.6) ($ 5.3\times 10^{2} $, 1.2) ($ 5.3\times 10^{2} $, 1.2)
    Param [4, 1.0, 50] [4, 1.0, 50] [4, 1.0, 50]
    20 MSE (m$ ^2 $) $ 6.0\times 10^{-3} $ $ 1.3\times 10^{-3} $ $ 1.4\times 10^{-3} $
    CPU time (s) ($ 5.5\times 10^{2} $, 2.0) ($ 1.9\times 10^{2} $, $ 8.9\times 10^{-1} $) ($ 1.2\times 10^{2} $, $ 8.4\times 10^{-1} $)
    Param [4, 1.0, 50] [4, 1.0, 50] [4, 0.02, 50]
    80 MSE (m$ ^2 $) $ 5.5\times 10^{-3} $ $ 1.6\times 10^{-3} $ $ 1.6\times 10^{-3} $
    CPU time (s) ($ 1.5\times 10^{2} $, 1.6) ($ 4.4\times 10^{1} $, $ 7.8\times 10^{-1} $) ($ 5.2\times 10^{1} $, $ 7.7\times 10^{-1} $)
    Param [4, 0.02, 50] [4, 0.02, 50] [4, 0.02, 50]

    Table 11.  Experiment 2 - ANN performance for various hyperparameter sets. Reconstructed variable: $ h $. UR: 20

    Hyperparameter Value MSE (m$ ^2 $) Training time (s) Evaluation time (ms)
    Epochs 50 $ 2.6\times 10^{-4} $ 11.4 0.32
    150 $ 5.3\times 10^{-5} $ 32.8 0.27
    500 $ 2.4\times 10^{-5} $ 92.3 0.27
    Layer structure [100] $ 1.1\times 10^{-4} $ 37.7 0.29
    [500] $ 4.1\times 10^{-5} $ 76.0 0.20
    [50, 50] $ 1.4\times 10^{-4} $ 44.7 0.31
    [100,100] $ 1.2\times 10^{-4} $ 46.6 0.38
    [50, 50, 50] $ 1.6\times 10^{-4} $ 33.5 0.22
    [75, 75, 75] $ 1.1\times 10^{-4} $ 34.5 0.31

    Table 12.  Experiment 2 - BT performance for various hyperparameter sets. Reconstructed variable: $ h $. UR: 20

    Hyperparameter Value MSE (m$ ^2 $) Training time (s) Evaluation time (s)
    Maximum depth 2 $ 2\times 10^{-3} $ 57.6 0.67
    4 $ 2\times 10^{-3} $ 75.2 0.67
    Minimum samples per leaf 1 $ 2\times 10^{-3} $ 72.4 0.70
    2% $ 2\times 10^{-3} $ 60.4 0.64
    Number of trees 7 $ 2\times 10^{-3} $ 21.3 0.49
    20 $ 2\times 10^{-3} $ 53.6 0.62
    50 $ 2\times 10^{-3} $ 124.3 0.89

    Table 13.  Experiment 3 - ANN best performance. UR: Upscaling Ratio. CPU time format: $ (T_1, T_2) $, with $ T_1 $ the training time, $ T_2 $ the evaluation time. Hyperparameter format (Param): $ [a, [b_i]] $, with $ a $ the number of epochs, $ b_i $ the number of neurons in Layer $ i $

    UR Best performance Reconstructed variable
    $ (h, q) $ $ \sqrt{h} $ $ h $
    5 MSE (m$ ^2 $) $ 1.5\times 10^{-5} $ $ 7.1\times 10^{-5} $ $ 1.1\times 10^{-5} $
    CPU time (s) ($ 1.4\times 10^{2} $, $ 1.6\times 10^{-1} $) ($ 5.6\times 10^{1} $, $ 1.3\times 10^{-1} $) ($ 6.2\times 10^{1} $, $ 2.4\times 10^{-1} $)
    Param [500, [50, 50, 50]] [500, [75, 75, 75]] [500, [100]]
    20 MSE (m$ ^2 $) $ 1.3\times 10^{-5} $ $ 2.6\times 10^{-5} $ $ 9.8\times 10^{-6} $
    CPU time (s) $ (7.9\times 10^{1}, 1.7\times 10^{-1}) $ $ (5.5\times 10^{1}, 1.7\times 10^{-1}) $ $ (5.9\times 10^{1}, 3.2\times 10^{-1}) $
    Param [500, [75, 75, 75]] [500, [75, 75, 75]] [500, [100]]
    80 MSE (m$ ^2 $) $ 3.76\times 10^{-5} $ $ 5.99\times 10^{-5} $ $ 4.45\times 10^{-5} $
    CPU time (s) $ (8.0\times 10^{1}, 1.6\times 10^{-1}) $ $ (8.3\times 10^{1}, 3.1\times 10^{-1}) $ $ (8.3\times 10^{1}, 7.7\times 10^{-2}) $
    Param [500, [50, 50, 50]] [500, [100,100]] [500, [75, 75, 75]]

    Table 14.  Experiment 3 - BT best performance. CPU time format: $ (T_1, T_2) $, with $ T_1 $ the training time, $ T_2 $ the evaluation time. Hyperparameter format (Param): $ [a, b, c] $, with $ a $ the maximum depth, $ b $ the minimum fraction of samples per leaf, $ c $ the number of trees

    UR Best performance Reconstructed variable
    $ (h, q) $ $ \sqrt{h} $ $ h $
    5 MSE (m$ ^2 $) $ 1.9\times 10^{-6} $ $ 4.9\times 10^{-7} $ $ 4.1\times 10^{-7} $
    CPU time (s) $ (2.7\times 10^{3}, 1.5) $ $ (4.6\times 10^{2}, 3.7\times 10^{-1}) $ $ (3.9\times 10^{2}, 3.8\times 10^{-1}) $
    Param [4, 1.0, 50] [4, 1.0, 50] [4, 1.0, 50]
    20 MSE (m$ ^2 $) $ 3.4\times 10^{-6} $ $ 7.8\times 10^{-7} $ $ 6.9\times 10^{-7} $
    CPU time (s) $ (7.2\times 10^{2}, 1.3) $ $ (1.4\times 10^{2}, 3.2\times 10^{-1}) $ $ (1.4\times 10^{2}, 3.2\times 10^{-1}) $
    Param [4, 1.0, 50] [4, 1.0, 50] [4, 1.0, 50]
    80 MSE (m$ ^2 $) $ 1.78\times 10^{-5} $ $ 7.34\times 10^{-6} $ $ 7.53\times 10^{-6} $
    CPU time (s) $ (1.7\times 10^{2}, 6.7\times 10^{-1}) $ $ (4.5\times 10^{1}, 3.8\times 10^{-1}) $ $ (4.6\times 10^{1}, 3.8\times 10^{-1}) $
    Param [4, 1.0, 50] [2, 1.0, 50] [2, 1.0, 50]

    Table 15.  Experiment 3 - ANN performance for various hyperparameter sets. Reconstructed variable: $ h $. UR: 20

    Hyperparameter Value MSE (m$ ^2 $) Training time (s) Evaluation time (ms)
    Epochs 50 $ 3.4\times 10^{-4} $ 10.3 0.21
    150 $ 7.5\times 10^{-5} $ 28.1 0.31
    500 $ 1.5\times 10^{-5} $ 71.5 0.20
    Layer structure [100] $ 1.3\times 10^{-4} $ 29.4 0.24
    [500] $ 5.9\times 10^{-5} $ 62.7 0.18
    [50, 50] $ 1.771\times 10^{-4} $ 37.5 0.22
    [100,100] $ 1.3\times 10^{-4} $ 31.3 0.26
    [50, 50, 50] $ 2.1\times 10^{-4} $ 28.2 0.31
    [75, 75, 75] $ 1.5\times 10^{-4} $ 30.7 0.23

    Table 16.  Experiment 3 - BT performance for various hyperparameter sets. Reconstructed variable: $ h $. UR: 20

    Hyperparameter Value MSE (m$ ^2 $) Training time (s) Evaluation time (s)
    Maximum depth 2 $ 4.8\times 10^{-4} $ 50.1 0.34
    4 $ 4.6\times 10^{-4} $ 66.1 0.32
    Minimum samples per leaf 1 $ 4.7\times 10^{-4} $ 63.1 0.33
    2% $ 4.8\times 10^{-4} $ 53.1 0.33
    Number of trees 7 $ 1.3\times 10^{-3} $ 18.2 0.30
    20 $ 9.3\times 10^{-5} $ 50.1 0.34
    50 $ 3.8\times 10^{-6} $ 106 0.36

    Table 17.  Experiment 4 - ANN best performance. UR: Upscaling Ratio. CPU time format: $ (T_1, T_2) $, with $ T_1 $ the training time, $ T_2 $ the evaluation time. Hyperparameter format (Param): $ [a, [b_i]] $, with $ a $ the number of epochs, $ b_i $ the number of neurons in Layer $ i $

    UR Best performance Reconstructed variable
    $ (h, q) $ $ \sqrt{h} $ $ h $
    5 MSE (m$ ^2 $) $ 9.3\times 10^{-5} $ $ 8.4\times 10^{-5} $ $ 1.8\times 10^{-5} $
    CPU time (s) $ (1.4\times 10^{2}, 1.0) $ $ (1.4\times 10^{2}, 8.1\times 10^{-1}) $ $ (1.0\times 10^{2}, 9.4\times 10^{-1}) $
    Param [150, [500]] [500, [100, 100]] [500, [100]]
    20 MSE (m$ ^2 $) $ 7.6\times 10^{-5} $ $ 5.6\times 10^{-5} $ $ 2.1\times 10^{-5} $
    CPU time (s) ($ 3.3\times 10^{2} $, $ 1.6\times 10^{-1} $) ($ 1.9\times 10^{2} $, $ 2.2\times 10^{-1} $) ($ 9.9\times 10^{1} $, $ 6.1\times 10^{-1} $)
    Param [500, [500]] [500, [500]] [500, [50, 50]]
    80 MSE (m$ ^2 $) $ 2.8\times 10^{-4} $ $ 1.0\times 10^{-4} $ $ 1.0\times 10^{-4} $
    CPU time (s) $ (1.1\times 10^{2}, 4.9\times 10^{-1}) $ $ (9.2\times 10^{1}, 4.4\times 10^{-1}) $ $ (5.1\times 10^{1}, 1.2\times 10^{-1}) $
    Param [500, [100]] [500, [100]] [500, [75, 75, 75]]

    Table 18.  Experiment 4 - BT best performance. CPU time format: $ (T_1, T_2) $, with $ T_1 $ the training time, $ T_2 $ the evaluation time. Hyperparameter format (Param): $ [a, b, c] $, with $ a $ the maximum depth, $ b $ the minimum fraction of samples per leaf, $ c $ the number of trees

    UR Best performance Reconstructed variable
    $ (h, q) $ $ \sqrt{h} $ $ h $
    5 MSE (m$ ^2 $) $ 4.4\times 10^{-3} $ $ 1.6\times 10^{-3} $ $ 1.6\times 10^{-3} $
    CPU time (s) $ (2.0\times 10^{3}, 6.0) $ $ (4.6\times 10^{2}, 1.7) $ $ (4.2\times 10^{2}, 1.5) $
    Param [2, 1.0, 50] [2, 0.02, 50] [2, 0.02, 50]
    20 MSE (m$ ^2 $) $ 4.4\times 10^{-3} $ $ 1.5\times 10^{-3} $ $ 1.6\times 10^{-3} $
    CPU time (s) $ (5.6\times 10^{2}, 3.1) $ $ (1.5\times 10^{2}, 1.2) $ $ (1.3\times 10^{2}, 1.0) $
    Param [2, 1.0, 50] [2, 1.0, 50] [2, 0.02, 50]
    80 MSE (m$ ^2 $) $ 4.4\times 10^{-3} $ $ 1.6\times 10^{-3} $ $ 1.6\times 10^{-3} $
    CPU time (s) $ (2.0\times 10^{2}, 1.9) $ $ (6.8\times 10^{1}, 8.9\times 10^{-1}) $ $ (5.8\times 10^{1}, 9.5\times 10^{-1}) $
    Param [2, 0.02, 50] [2, 1.0, 50] [2, 0.02, 50]

    Table 19.  Experiment 4 - ANN performance for various hyperparameter sets. Reconstructed variable: $ h $. UR: 20

    Hyperparameter Value MSE (m$ ^2 $) Training time (s) Evaluation time (ms)
    Epochs 50 $ 5.8\times 10^{-4} $ 14.1 0.61
    150 $ 1.3\times 10^{-4} $ 41.5 0.61
    500 $ 2.9\times 10^{-5} $ 118 0.32
    Layer structure [100] $ 2.2\times 10^{-4} $ 50.7 0.59
    [500] $ 7.8\times 10^{-5} $ 100 0.45
    [50, 50] $ 3.2\times 10^{-4} $ 47.4 0.51
    [100, 100] $ 2.5\times 10^{-4} $ 64.0 0.52
    [50, 50, 50] $ 3.9\times 10^{-4} $ 42.3 0.59
    [75, 75, 75] $ 2.2\times 10^{-4} $ 42.6 0.44

    Table 20.  Experiment 4 - BT performance for various hyperparameter sets. Reconstructed variable: $ h $. UR: 20

    Hyperparameter Value MSE (m$ ^2 $) Training time (s) Evaluation time (s)
    Maximum depth 2 $ 2\times 10^{-3} $ 66.4 0.76
    4 $ 2\times 10^{-3} $ 87.2 0.79
    Minimum samples per leaf 1 $ 2\times 10^{-3} $ 82.4 0.79
    2% $ 2\times 10^{-3} $ 71.2 0.76
    Number of trees 7 $ 4\times 10^{-3} $ 24.3 0.52
    20 $ 2\times 10^{-3} $ 64.1 0.78
    50 $ 2\times 10^{-3} $ 142 1.02
  • [1] J. L. Auriault, Heterogeneous medium. Is an equivalent macroscopic description possible?, International Journal of Engineering Science, 29 (1991), 785-795.  doi: 10.1016/0020-7225(91)90001-J.
    [2] J. L. Auriault and P. M. Adler, Taylor dispersion in porous media: analysis by multiple scale expansions, Advances in Water Resources, 18 (1995), 217-226.  doi: 10.1016/0309-1708(95)00011-7.
    [3] A. Bensoussan, J.-L. Lions and G. Papanicolaou, Asymptotic Analysis for Periodic Structures, North-Holland, Amsterdam, 1978.
    [4] F. Bernardin, M. Bossy, C. Chauvin, J.-F. Jabir and A. Rousseau, Stochastic Lagrangian method for downscaling problems in computational fluid dynamics, ESAIM: M2AN, 44 (2010), 885-920.  doi: 10.1051/m2an/2010046.
    [5] C. M. Bishop, Pattern Recognition and Machine Learning, Springer, 2006.
    [6] M. Bruwier, P. Archambeau, S. Erpicum, M. Pirotton and B. Dewals, Shallow-water models with anisotropic porosity and merging for flood modelling on Cartesian grids, Journal of Hydrology, 554 (2017), 693-709.  doi: 10.1016/j.jhydrol.2017.09.051.
    [7] J. G. Caldas Steinstraesser, V. Guinot and A. Rousseau, Modified parareal method for solving the two-dimensional nonlinear shallow water equations using finite volumes, The SMAI Journal of Computational Mathematics, 7 (2021), 159-184. https://smai-jcm.centre-mersenne.org/articles/10.5802/smai-jcm.75/.
    [8] A. J. Cannon and P. H. Whitfield, Downscaling recent streamflow conditions in British Columbia, Canada using ensemble neural network models, Journal of Hydrology, 259 (2002), 136-151.  doi: 10.1016/S0022-1694(01)00581-9.
    [9] J. Carreau and V. Guinot, A PCA spatial pattern based artificial neural network downscaling model for urban flood hazard assessment, Advances in Water Resources, 147 (2021), 103821.  doi: 10.1016/j.advwatres.2020.103821.
    [10] A. Chen, B. Evans, S. Djordjevic and D. Savic, A coarse-grid approach to represent building blockage effects in 2D urban flood modelling, Journal of Hydrology, 426-427 (2012), 1-16.
    [11] A. J.-C. de Saint-Venant, Théorie du mouvement non-permanent des eaux, avec application aux crues des rivières et à l'introduction des marées dans leur lit, Comptes Rendus de l'Académie des Sciences, 73 (1871), 147-154. 
    [12] A. Defina, Two-dimensional shallow flow equations for partially dry areas, Water Resour. Res., 36 (2000), 3251-3264.  doi: 10.1029/2000WR900167.
    [13] Y. B. Dibike and P. Coulibaly, Temporal neural networks for downscaling climate variability and extremes, Neural Networks, 19 (2006), 135-144.  doi: 10.1109/IJCNN.2005.1556124.
    [14] H. Ene and E. Sanchez-Palencia, Non-Homogeneous Media and Vibration Theory, Springer, Berlin, 1980.
    [15] C. L. Farmer, Upscaling: A review, International Journal for Numerical Methods in Fluids, 40 (2002), 63-78.  doi: 10.1002/fld.267.
    [16] N. Fraehr, Q. Wang, W. Wu and R. Nathan, Upskilling low-fidelity hydrodynamic models of flood inundation through spatial analysis and Gaussian process learning, Water Resources Research, 58 (2022).  doi: 10.1029/2022WR032248.
    [17] Y. Freund, Boosting a weak learning algorithm by majority, Information and Computation, 121 (1995), 256-285, Also appeared in COLT90. doi: 10.1016/B978-1-55860-146-8.50019-9.
    [18] Y. Freund and R. E. Schapire, Experiments with a new boosting algorithm, in Proceedings of the 13th International Conference on Machine Learning, Morgan Kaufmann, 1996, 148-156.
    [19] Y. Freund and R. Schapire, A short introduction to boosting, in Proceedings of the Sixteenth International Joint Conference on Artificial Intelligence, Morgan Kaufmann, 1999, 1401-1406.
    [20] J. H. Friedman, Greedy function approximation: A gradient boosting machine, The Annals of Statistics, 29 (2001), 1189-1232. http://www.jstor.org/stable/2699986. doi: 10.1214/aos/1013203451.
    [21] J. Friedman, T. Hastie and R. Tibshirani, Additive logistic regression: A statistical view of boosting, Annals of Statistics, 28 (2000), 337-407.
    [22] J.-F. Gerbeau and B. Perthame, Derivation of viscous Saint-Venant system for laminar shallow water; Numerical validation, Discrete and Continuous Dynamical Systems-Series B., 1 (2001), 89-102.  doi: 10.3934/dcdsb.2001.1.89.
    [23] S. Godunov, A difference method for numerical calculation of discontinuous solutions of the equations of hydrodynamics, Mat. Sb., 47 (1959), 271-306. 
    [24] I. Goodfellow, Y. Bengio and A. Courville, Deep Learning, MIT Press, Cambridge, MA, USA, 2016. http://www.deeplearningbook.org.
    [25] V. Guinot, Multiple porosity shallow water models for macroscopic modelling of urban floods, Advances in Water Resources, 37 (2012), 40-72.  doi: 10.1016/j.advwatres.2011.11.002.
    [26] V. Guinot, C. Delenne and S. Soares-Frazão, Urban dambreak experiments, in River Flow 2018 International Conference, 2018.
    [27] V. Guinot, B. F. Sanders and J. E. Schubert, Dual integral porosity shallow water model for urban flood modelling, Advances in Water Resources, 103 (2017), 16-31.  doi: 10.1016/j.advwatres.2017.02.009.
    [28] V. Guinot and S. Soares-Frazão, Flux and source term discretization in two-dimensional shallow water models with porosity on unstructured grids, International Journal for Numerical Methods in Fluids, 50 (2006), 309-345.  doi: 10.1002/fld.1059.
    [29] A. Harten, P. D. Lax and B. van Leer, On upstream differencing and Godunov-type schemes for hyperbolic conservation laws, SIAM Review, 25 (1983), 35-61.  doi: 10.1137/1025002.
    [30] J. Hervouet, R. Samie and B. Moreau, Modelling urban areas in dam-break floodwave numerical simulations, in Proceedings of the International Seminar and Workshop on Rescue Actions based on Dambreak Flow Analysis, Seinäjoki, Finland, 1-6 October 2000.
    [31] P. Indelman and G. Dagan, Upscaling of permeability of anisotropic heterogeneous formations. 1. The general framework, Water Resources Research, 29 (1993), 917-923.  doi: 10.1029/92WR02446.
    [32] B. Kim, B. F. Sanders, J. S. Famiglietti and V. Guinot, Urban flood modeling with porous shallow-water equations: A case study of model errors in the presence of anisotropic porosity, Journal of Hydrology, 523 (2015), 680-692.  doi: 10.1016/j.jhydrol.2015.01.059.
    [33] P. D. Lax, Hyperbolic systems of conservation laws II, Communications on Pure and Applied Mathematics, 10 (1957), 537-566.  doi: 10.1002/cpa.3160100406.
    [34] Y. LeCun, Y. Bengio and G. Hinton, Deep learning, Nature, 521 (2015), 436-444.  doi: 10.1038/nature14539.
    [35] A. Luke, B. F. Sanders, K. A. Goodrich, D. L. Feldman, D. Boudreau, A. Eguiarte, K. Serrano, A. Reyes, J. E. Schubert, A. AghaKouchak, V. Basolo and R. A. Matthew, Going beyond the flood insurance rate map: Insights from flood hazard map co-production, Natural Hazards and Earth System Sciences, 18 (2018), 1097-1120.  doi: 10.5194/nhess-18-1097-2018.
    [36] I. Özgen, J. Zhao, D. Liang and R. Hinkelmann, Urban flood modeling using shallow water equations with depth-dependent anisotropic porosity, Journal of Hydrology, 541 (2016), 1165-1184.
    [37] P. Renard and G. D. Marsily, Calculating equivalent permeability: A review, Advances in Water Resources, 20 (1997), 253-278.  doi: 10.1016/S0309-1708(96)00050-4.
    [38] M. V. T. Salameh, P. Drobinski and P. Naveau, Statistical downscaling of near surface wind field over complex terrain in southern France, Meteorology and Atmospheric Physics, 103 (2009), 253-265.
    [39] B. F. Sanders and J. E. Schubert, PRIMo: Parallel raster inundation model, Advances in Water Resources, 126 (2019), 79-95.  doi: 10.1016/j.advwatres.2019.02.007.
    [40] B. Sanders, J. Schubert and H. Gallegos, Integral formulation of shallow water models with anisotropic porosity for urban flood modelling, Journal of Hydrology, 362 (2008), 19-38.
    [41] J. Schmidhuber, Deep learning in neural networks: An overview, Neural Networks, 61 (2015), 85-117.  doi: 10.1016/j.neunet.2014.09.003.
    [42] J. E. Schubert and B. F. Sanders, Building treatments for urban flood inundation models and implications for predictive skill and modeling efficiency, Advances in Water Resources, 41 (2012), 49-64.  doi: 10.1016/j.advwatres.2012.02.012.
    [43] D. P. Viero, Modelling urban floods using a finite element staggered scheme with an anisotropic dual porosity model, Journal of Hydrology, 568 (2019), 247-259.  doi: 10.1016/j.jhydrol.2018.10.055.
    [44] D. P. Viero and M. Valipour, Modeling anisotropy in free-surface overland and shallow inundation flows, Advances in Water Resources, 104 (2017), 1-14.  doi: 10.1016/j.advwatres.2017.03.007.
    [45] M. Vrac and P. V. Ayar, Influence of bias correcting predictors on statistical downscaling models, Journal of Applied Meteorology and Climatology, 56 (2016), 5-26.  doi: 10.1175/JAMC-D-16-0079.1.