Concise API Reference#

This page documents the sensitivity analysis methods supported by SALib.

FAST - Fourier Amplitude Sensitivity Test#

SALib.sample.fast_sampler.sample(problem, N, M=4, seed=None)[source]

Generate model inputs for extended Fourier Amplitude Sensitivity Test.

Returns a NumPy matrix containing the model inputs required by the extended Fourier Amplitude sensitivity test. The resulting matrix contains N * D rows and D columns, where D is the number of parameters.

The samples generated are intended to be used by SALib.analyze.fast.analyze().

Parameters:
  • problem (dict) – The problem definition

  • N (int) – The number of samples to generate

  • M (int) – The interference parameter, i.e., the number of harmonics to sum in the Fourier series decomposition (default 4)

  • seed (int) – Seed to generate a random number

References

  1. Cukier, R.I., Fortuin, C.M., Shuler, K.E., Petschek, A.G., Schaibly, J.H., 1973. Study of the sensitivity of coupled reaction systems to uncertainties in rate coefficients. I theory. Journal of Chemical Physics 59, 3873-3878. https://doi.org/10.1063/1.1680571

  2. Saltelli, A., S. Tarantola, and K. P.-S. Chan (1999). A Quantitative Model-Independent Method for Global Sensitivity Analysis of Model Output. Technometrics, 41(1):39-56, doi:10.1080/00401706.1999.10485594.

SALib.analyze.fast.analyze(problem, Y, M=4, num_resamples=100, conf_level=0.95, print_to_console=False, seed=None)[source]

Perform extended Fourier Amplitude Sensitivity Test on model outputs.

Returns a dictionary with keys ‘S1’ and ‘ST’, where each entry is a list of size D (the number of parameters) containing the indices in the same order as the parameter file.

Notes

Compatible with:

fast_sampler : SALib.sample.fast_sampler.sample()

Examples

>>> X = fast_sampler.sample(problem, 1000)
>>> Y = Ishigami.evaluate(X)
>>> Si = fast.analyze(problem, Y, print_to_console=False)
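
For convenience, a self-contained version of the snippet above is sketched below; it assumes the three-parameter Ishigami problem definition used in the discrepancy example at the end of this page, and the sample size is illustrative only.

>>> import numpy as np
>>> from SALib.sample import fast_sampler
>>> from SALib.analyze import fast
>>> from SALib.test_functions import Ishigami
>>> problem = {
...     'num_vars': 3,
...     'names': ['x1', 'x2', 'x3'],
...     'bounds': [[-np.pi, np.pi]] * 3
... }
>>> X = fast_sampler.sample(problem, 1000)      # N * D = 3000 rows
>>> Y = Ishigami.evaluate(X)
>>> Si = fast.analyze(problem, Y)
>>> Si['S1'], Si['ST']                          # first-order and total-order indices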
Parameters:
  • problem (dict) – The problem definition

  • Y (numpy.array) – A NumPy array containing the model outputs

  • M (int) – The interference parameter, i.e., the number of harmonics to sum in the Fourier series decomposition (default 4)

  • print_to_console (bool) – Print results directly to console (default False)

  • seed (int) – Seed to generate a random number

References

  1. Cukier, R. I., C. M. Fortuin, K. E. Shuler, A. G. Petschek, and J. H. Schaibly (1973). Study of the sensitivity of coupled reaction systems to uncertainties in rate coefficients. J. Chem. Phys., 59(8):3873-3878 doi:10.1063/1.1680571

  2. Saltelli, A., S. Tarantola, and K. P.-S. Chan (1999). A Quantitative Model-Independent Method for Global Sensitivity Analysis of Model Output. Technometrics, 41(1):39-56, doi:10.1080/00401706.1999.10485594.

  3. Pujol, G. (2006) fast99 - R sensitivity package cran/sensitivity

RBD-FAST - Random Balance Designs Fourier Amplitude Sensitivity Test#

SALib.sample.latin.sample(problem, N, seed=None)[source]

Generate model inputs using Latin hypercube sampling (LHS).

Returns a NumPy matrix containing the model inputs generated by Latin hypercube sampling. The resulting matrix contains N rows and D columns, where D is the number of parameters.

Parameters:
  • problem (dict) – The problem definition

  • N (int) – The number of samples to generate

  • seed (int) – Seed to generate a random number

References

  1. McKay, M.D., Beckman, R.J., Conover, W.J., 1979.

    A comparison of three methods for selecting values of input variables in the analysis of output from a computer code. Technometrics 21, 239-245. https://doi.org/10.2307/1268522

  2. Iman, R.L., Helton, J.C., Campbell, J.E., 1981.

    An Approach to Sensitivity Analysis of Computer Models: Part I—Introduction, Input Variable Selection and Preliminary Variable Assessment. Journal of Quality Technology 13, 174-183. https://doi.org/10.1080/00224065.1981.11978748

SALib.analyze.rbd_fast.analyze(problem, X, Y, M=10, num_resamples=100, conf_level=0.95, print_to_console=False, seed=None)[source]

Performs the Random Balance Design Fourier Amplitude Sensitivity Test (RBD-FAST) on model outputs.

Returns a dictionary with key ‘S1’, a list of size D (the number of parameters) containing the indices in the same order as the parameter file.

Notes

Compatible with:

all samplers

Examples

>>> X = latin.sample(problem, 1000)
>>> Y = Ishigami.evaluate(X)
>>> Si = rbd_fast.analyze(problem, X, Y, print_to_console=False)
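
Since RBD-FAST operates on a given input/output sample, it is not tied to Latin hypercube sampling. A rough sketch using plain uniform random sampling within the Ishigami bounds (sample size and seed are illustrative):

>>> import numpy as np
>>> from SALib.analyze import rbd_fast
>>> from SALib.test_functions import Ishigami
>>> problem = {'num_vars': 3, 'names': ['x1', 'x2', 'x3'], 'bounds': [[-np.pi, np.pi]] * 3}
>>> rng = np.random.default_rng(42)
>>> X = rng.uniform(-np.pi, np.pi, size=(1000, 3))   # any (N, D) sample within the bounds
>>> Y = Ishigami.evaluate(X)
>>> Si = rbd_fast.analyze(problem, X, Y)
>>> Si['S1']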
Parameters:
  • problem (dict) – The problem definition

  • X (numpy.array) – A NumPy array containing the model inputs

  • Y (numpy.array) – A NumPy array containing the model outputs

  • M (int) – The interference parameter, i.e., the number of harmonics to sum in the Fourier series decomposition (default 10)

  • print_to_console (bool) – Print results directly to console (default False)

  • seed (int) – Seed to generate a random number

References

  1. S. Tarantola, D. Gatelli and T. Mara (2006) Random Balance Designs for the Estimation of First Order Global Sensitivity Indices, Reliability Engineering and System Safety, 91:6, 717-727 https://doi.org/10.1016/j.ress.2005.06.003

  2. Elmar Plischke (2010)

    An effective algorithm for computing global sensitivity indices (EASI), Reliability Engineering & System Safety, 95:4, 354-360. doi:10.1016/j.ress.2009.11.005

  3. Jean-Yves Tissot, Clémentine Prieur (2012)

    Bias correction for the estimation of sensitivity indices based on random balance designs, Reliability Engineering and System Safety, Elsevier, 107, 205-213. doi:10.1016/j.ress.2012.06.010

  4. Jeanne Goffart, Mickael Rabouille & Nathan Mendes (2015)

    Uncertainty and sensitivity analysis applied to hygrothermal simulation of a brick building in a hot and humid climate, Journal of Building Performance Simulation. doi:10.1080/19401493.2015.1112430

Method of Morris#

SALib.sample.morris.sample(problem: Dict, N: int, num_levels: int = 4, optimal_trajectories: int = None, local_optimization: bool = True, seed: int = None) → ndarray[source]

Generate model inputs using the Method of Morris.

Three variants of Morris’ sampling for elementary effects are supported:

  • Vanilla Morris (see [1]) when optimal_trajectories is None/False and local_optimization is False

  • Optimised trajectories when optimal_trajectories is set (an integer number of trajectories), using Campolongo’s enhancements (see [2]) and, optionally, Ruano’s enhancement (see [3]) when local_optimization=True

  • Morris with groups when the problem definition specifies groups of parameters

Results from these model inputs are intended to be used with SALib.analyze.morris.analyze().

Notes

Campolongo et al. [2] introduce an optimal trajectories approach which attempts to maximize the parameter space scanned for a given number of trajectories (where optimal_trajectories \(\in \{2, \ldots, N\}\)). The approach accomplishes this by randomly generating a large number of candidate trajectories (500 to 1000 in [2]) and selecting the subset of r trajectories with the highest spread in parameter space. The r variable in [2] corresponds to the optimal_trajectories parameter here.

Calculating all possible combinations of trajectories can be computationally expensive. The number of factors makes little difference, but the ratio between the number of optimal trajectories and the sample size results in an exponentially increasing number of scores that must be computed to find the optimal combination of trajectories. We suggest selecting no more than 4 optimal trajectories from a pool of 100 samples with this “brute force” approach.

Ruano et al. [3] proposed an alternative approach based on an iterative process that maximizes the distance between subgroups of generated trajectories, from which the final set of trajectories is selected, again maximizing the distance between each. The approach is not guaranteed to produce the optimal spread of trajectories, but the spread is at least locally maximized and the time taken to select trajectories is significantly reduced. With local_optimization = True (the default), it is possible to go higher than the previously suggested 4 optimal trajectories from a pool of 100 samples.
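
A rough sketch of the two trajectory-selection options discussed above, using the three-parameter Ishigami problem as a stand-in (all sizes are illustrative, not recommendations):

>>> import numpy as np
>>> from SALib.sample import morris
>>> problem = {'num_vars': 3, 'names': ['x1', 'x2', 'x3'], 'bounds': [[-np.pi, np.pi]] * 3}
>>> # Ruano-style local optimisation: pick 10 optimal trajectories from 100 candidates
>>> X_opt = morris.sample(problem, N=100, num_levels=4,
...                       optimal_trajectories=10, local_optimization=True)
>>> # Campolongo-style brute force: keep both the pool and the subset small
>>> X_bf = morris.sample(problem, N=20, num_levels=4,
...                      optimal_trajectories=4, local_optimization=False)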

Parameters:
  • problem (dict) – The problem definition

  • N (int) – The number of trajectories to generate

  • num_levels (int, default=4) – The number of grid levels (should be even)

  • optimal_trajectories (int) – The number of optimal trajectories to sample (between 2 and N)

  • local_optimization (bool, default=True) – Flag whether to use local optimization according to Ruano et al. (2012). Speeds up the process tremendously for larger N and num_levels. If set to False, the brute-force method is used, unless gurobipy is available

  • seed (int) – Seed to generate a random number

Returns:

sample_morris – Array containing the model inputs required for the Method of Morris. The resulting matrix has \((G+1) \times T\) rows and \(D\) columns, where \(D\) is the number of parameters and \(G\) is the number of groups (if no groups are specified, \(G = D\)). \(T\) is the number of trajectories: \(N\), or optimal_trajectories if selected.

Return type:

np.ndarray

References

  1. Morris, M.D., 1991. Factorial Sampling Plans for Preliminary Computational Experiments. Technometrics 33, 161-174. https://doi.org/10.1080/00401706.1991.10484804

  2. Campolongo, F., Cariboni, J., & Saltelli, A. 2007. An effective screening design for sensitivity analysis of large models. Environmental Modelling & Software, 22(10), 1509-1518. https://doi.org/10.1016/j.envsoft.2006.10.004

  3. Ruano, M.V., Ribes, J., Seco, A., Ferrer, J., 2012. An improved sampling strategy based on trajectory design for application of the Morris method to systems with many input factors. Environmental Modelling & Software 37, 103-109. https://doi.org/10.1016/j.envsoft.2012.03.008

SALib.analyze.morris.analyze(problem: Dict, X: ndarray, Y: ndarray, num_resamples: int = 100, conf_level: float = 0.95, scaled: bool = False, print_to_console: bool = False, num_levels: int = 4, seed=None) → Dict[source]

Perform Morris Analysis on model outputs.

Returns a result set with keys mu, mu_star, sigma, and mu_star_conf, where each entry corresponds to the parameters defined in the problem spec or parameter file.

  • mu metric indicates the mean of the distribution

  • mu_star metric indicates the mean of the distribution of absolute values

  • sigma is the standard deviation of the distribution

When scaled is True, the elementary effects are scaled by the ratio of the standard deviations of X and Y according to [3]. When using this option it is important to ensure that X contains the actual values passed into the model, as the elementary effects are divided by the step calculated from X rather than by the delta derived from the number of levels used in the sample. This may not be the case if you post-process the sampled values before passing them to the model.

Scaled elementary effects are useful when comparing different model outputs with each other when the inputs and outputs have different scales. The ranking obtained from the scaled elementary effects should match that obtained from the ordinary elementary effects.

Notes

When applied with groups, the mu metric is less reliable as the effects from parameters within a group become averaged out.

The mu_star metric avoids this issue as it indicates the mean of the absolute values. If the direction of effects is important, Campolongo et al., [2] suggest comparing mu_star with mu. If mu is low and mu_star is high, then the effects are of different signs.

sigma is used as an indicator of interactions between parameters, or groups of parameters.

Compatible with:

morris : SALib.sample.morris.sample()

Examples

>>> X = morris.sample(problem, 1000, num_levels=4)
>>> Y = Ishigami.evaluate(X)
>>> Si = morris.analyze(problem, X, Y, conf_level=0.95,
...                     print_to_console=True, num_levels=4)
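
Continuing the snippet above, a sketch of how the returned entries and the scaled option might be used (the loop is illustrative; any of the keys listed under Returns can be inspected the same way):

>>> for name, mu_s, sig in zip(Si['names'], Si['mu_star'], Si['sigma']):
...     print(name, mu_s, sig)     # rank factors by mu_star; sigma hints at interactions
>>> Si_scaled = morris.analyze(problem, X, Y, num_levels=4,
...                            scaled=True)   # X must hold the values actually passed to the model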
Parameters:
  • problem (dict) – The problem definition

  • X (numpy.array) – The NumPy matrix containing the model inputs of dtype=float

  • Y (numpy.array) – The NumPy array containing the model outputs of dtype=float

  • scaled (bool, default=False) – If True, the elementary effects are scaled by the ratio of standard deviation of X and Y according to [3]

  • num_resamples (int) – The number of resamples used to compute the confidence intervals (default 100)

  • conf_level (float) – The confidence interval level (default 0.95)

  • print_to_console (bool) – Print results directly to console (default False)

  • num_levels (int) – The number of grid levels, must be identical to the value passed to SALib.sample.morris (default 4)

  • seed (int) – Seed to generate a random number

Returns:

Si – A dictionary of sensitivity indices containing the following entries.

  • mu - the mean elementary effect

  • mu_star - the mean of the absolute elementary effects

  • sigma - the standard deviation of the elementary effect

  • mu_star_conf - the bootstrapped confidence interval

  • names - the names of the parameters

Return type:

dict

References

  1. Morris, M. (1991). Factorial Sampling Plans for Preliminary Computational Experiments. Technometrics, 33(2):161-174, doi:10.1080/00401706.1991.10484804.

  2. Campolongo, F., J. Cariboni, and A. Saltelli (2007). An effective screening design for sensitivity analysis of large models. Environmental Modelling & Software, 22(10):1509-1518, doi:10.1016/j.envsoft.2006.10.004.

  3. Sin, G. and Gernaey, K.V. (2009). Improving the Morris Method for Sensitivity Analysis by Scaling the Elementary Effects. 19th European Symposium on Computer Aided Process Engineering (ESCAPE19), 925-930.

  4. Moret et al. (2017)

    Characterization of input uncertainties in strategic energy planning models. Applied Energy, Volume 202, 15 September 2017, Pages 597-617 https://doi.org/10.1016/j.apenergy.2017.05.106

Sobol’ Sensitivity Analysis#

SALib.sample.saltelli.sample(problem: Dict, N: int, calc_second_order: bool = True, skip_values: int = None)[source]

Generates model inputs using Saltelli’s extension of the Sobol’ sequence

The Sobol’ sequence is a popular quasi-random low-discrepancy sequence used to generate uniform samples of parameter space.

Returns a NumPy matrix containing the model inputs using Saltelli’s sampling scheme. Saltelli’s scheme extends the Sobol’ sequence in a way that reduces the error rates in the resulting sensitivity index calculations. If calc_second_order is False, the resulting matrix has N * (D + 2) rows, where D is the number of parameters. If calc_second_order is True, the resulting matrix has N * (2D + 2) rows. These model inputs are intended to be used with SALib.analyze.sobol.analyze().

Deprecated since version 1.4.6.

Notes

The initial points of the Sobol’ sequence have some repetition (see Table 2 in Campolongo [1]), which can be avoided by setting the skip_values parameter. Skipping values reportedly improves the uniformity of samples. However, it has been shown that naively skipping values may reduce accuracy, increasing the number of samples needed to achieve convergence (see Owen [2]).

A recommendation adopted here is that both skip_values and N be a power of 2, where N is the desired number of samples (see [2] and discussion in [5] for further context). It is also suggested therein that skip_values >= N.

The method now defaults to setting skip_values to a power of two that is >= N. If skip_values is provided, the method now raises a UserWarning in cases where sample sizes may be sub-optimal according to the recommendation above.
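
A minimal sketch following the power-of-two recommendation, using the three-parameter problem definition from the discrepancy example on this page (the row count follows from the N * (2D + 2) rule above):

>>> import numpy as np
>>> from SALib.sample import saltelli
>>> problem = {'num_vars': 3, 'names': ['x1', 'x2', 'x3'], 'bounds': [[-np.pi, np.pi]] * 3}
>>> X = saltelli.sample(problem, 512)   # 512 * (2*3 + 2) = 4096 rows with calc_second_order=True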

Parameters:
  • problem (dict) – The problem definition

  • N (int) – The number of samples to generate. Ideally a power of 2 and <= skip_values.

  • calc_second_order (bool) – Calculate second-order sensitivities (default True)

  • skip_values (int or None) – Number of points in the Sobol’ sequence to skip, ideally a power of 2 (default: a power of 2 >= N, or 16, whichever is greater)

References

  1. Campolongo, F., Saltelli, A., Cariboni, J., 2011.

    From screening to quantitative sensitivity analysis. A unified approach. Computer Physics Communications 182, 978-988. https://doi.org/10.1016/j.cpc.2010.12.039

  2. Owen, A. B., 2020.

    On dropping the first Sobol’ point. arXiv:2008.08051 [cs, math, stat]. Available at: http://arxiv.org/abs/2008.08051 (Accessed: 20 April 2021).

  3. Saltelli, A., 2002.

    Making best use of model evaluations to compute sensitivity indices. Computer Physics Communications 145, 280-297. https://doi.org/10.1016/S0010-4655(02)00280-1

  4. Sobol’, I.M., 2001.

    Global sensitivity indices for nonlinear mathematical models and their Monte Carlo estimates. Mathematics and Computers in Simulation, The Second IMACS Seminar on Monte Carlo Methods 55, 271-280. https://doi.org/10.1016/S0378-4754(00)00270-6

  5. Discussion: scipy/scipy#10844

SALib.sample.sobol.sample(problem: Dict, N: int, *, calc_second_order: bool = True, scramble: bool = True, skip_values: int = 0, seed: int | Generator | None = None)[source]

Generates model inputs using Saltelli’s extension of the Sobol’ sequence.

The Sobol’ sequence is a popular quasi-random low-discrepancy sequence used to generate uniform samples of parameter space. The general approach is described in [1].

Returns a NumPy matrix containing the model inputs using Saltelli’s sampling scheme.

Saltelli’s scheme reduces the number of required model runs from N(2D+1) to N(D+1) (see [2]).

If calc_second_order is False, the resulting matrix has N * (D + 2) rows, where D is the number of parameters.

If calc_second_order is True, the resulting matrix has N * (2D + 2) rows.

These model inputs are intended to be used with SALib.analyze.sobol.analyze().

Notes

The initial points of the Sobol’ sequence have some repetition (see Table 2 in Campolongo [3]), which can be avoided by scrambling the sequence.

Another option, not recommended and available only for educational purposes, is to use the skip_values parameter. Skipping values reportedly improves the uniformity of samples. However, it has been shown that naively skipping values may reduce accuracy, increasing the number of samples needed to achieve convergence (see Owen [4]).
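
A minimal sketch of the scrambled (default) usage with a fixed seed for reproducibility, again using the three-parameter problem definition from this page:

>>> import numpy as np
>>> from SALib.sample import sobol
>>> problem = {'num_vars': 3, 'names': ['x1', 'x2', 'x3'], 'bounds': [[-np.pi, np.pi]] * 3}
>>> X = sobol.sample(problem, 1024, calc_second_order=True, scramble=True, seed=42)
>>> X.shape   # 1024 * (2*3 + 2) rows by 3 columns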

Parameters:
  • problem (dict) – The problem definition.

  • N (int) – The number of samples to generate. Ideally a power of 2 and <= skip_values.

  • calc_second_order (bool, optional) – Calculate second-order sensitivities. Default is True.

  • scramble (bool, optional) – If True, use LMS+shift scrambling. Otherwise, no scrambling is done. Default is True.

  • skip_values (int, optional) – Number of points in the Sobol’ sequence to skip, ideally a power of 2. It is recommended not to change this value and to use scramble instead; scramble and skip_values can be used together. Default is 0.

  • seed ({None, int, numpy.random.Generator}, optional) – If seed is None the numpy.random.Generator generator is used. If seed is an int, a new Generator instance is used, seeded with seed. If seed is already a Generator instance then that instance is used. Default is None.

References

  1. Sobol’, I.M., 2001. Global sensitivity indices for nonlinear mathematical models and their Monte Carlo estimates. Mathematics and Computers in Simulation, The Second IMACS Seminar on Monte Carlo Methods 55, 271-280. https://doi.org/10.1016/S0378-4754(00)00270-6

  2. Saltelli, A. (2002). Making best use of model evaluations to compute sensitivity indices. Computer Physics Communications, 145(2), 280-297. https://doi.org/10.1016/S0010-4655(02)00280-1

  3. Campolongo, F., Saltelli, A., Cariboni, J., 2011. From screening to quantitative sensitivity analysis. A unified approach. Computer Physics Communications 182, 978-988. https://doi.org/10.1016/j.cpc.2010.12.039

  4. Owen, A. B., 2020. On dropping the first Sobol’ point. arXiv:2008.08051 [cs, math, stat]. Available at: http://arxiv.org/abs/2008.08051 (Accessed: 20 April 2021).

SALib.analyze.sobol.analyze(problem, Y, calc_second_order=True, num_resamples=100, conf_level=0.95, print_to_console=False, parallel=False, n_processors=None, keep_resamples=False, seed=None)[source]

Perform Sobol Analysis on model outputs.

Returns a dictionary with keys ‘S1’, ‘S1_conf’, ‘ST’, and ‘ST_conf’, where each entry is a list of size D (the number of parameters) containing the indices in the same order as the parameter file. If calc_second_order is True, the dictionary also contains keys ‘S2’ and ‘S2_conf’.

There are several approaches to estimating sensitivity indices. The general approach is described in [1]. The implementation offered here follows [2] for first- and total-order indices, whereas estimation of second-order sensitivities follows [3]. A noteworthy improvement that reduces error rates in the sensitivity estimates is introduced in [4].

Notes

Compatible with:

saltelli : SALib.sample.saltelli.sample()

sobol : SALib.sample.sobol.sample()

Examples

>>> X = saltelli.sample(problem, 512)
>>> Y = Ishigami.evaluate(X)
>>> Si = sobol.analyze(problem, Y, print_to_console=True)
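
Continuing the snippet above, second-order interactions are returned as a D x D matrix when calc_second_order is True (the default); the indexing below assumes S2 follows the parameter order given in the problem definition:

>>> Si['S1'], Si['ST']       # first-order and total-order indices
>>> Si['S2'][0, 1]           # second-order interaction between the first two parameters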
Parameters:
  • problem (dict) – The problem definition

  • Y (numpy.array) – A NumPy array containing the model outputs

  • calc_second_order (bool) – Calculate second-order sensitivities (default True)

  • num_resamples (int) – The number of resamples (default 100)

  • conf_level (float) – The confidence interval level (default 0.95)

  • print_to_console (bool) – Print results directly to console (default False)

  • parallel (bool) – Perform analysis in parallel if True

  • n_processors (int) – Number of parallel processes (only used if parallel is True)

  • keep_resamples (bool) – Whether or not to store intermediate resampling results (default False)

  • seed (int) – Seed to generate a random number

References

  1. Sobol, I. M. (2001). Global sensitivity indices for nonlinear mathematical models and their Monte Carlo estimates. Mathematics and Computers in Simulation, 55(1-3):271-280, doi:10.1016/S0378-4754(00)00270-6.

  2. Saltelli, A., P. Annoni, I. Azzini, F. Campolongo, M. Ratto, and S. Tarantola (2010). Variance based sensitivity analysis of model output. Design and estimator for the total sensitivity index. Computer Physics Communications, 181(2):259-270, doi:10.1016/j.cpc.2009.09.018.

  3. Saltelli, A. (2002). Making best use of model evaluations to compute sensitivity indices. Computer Physics Communications, 145(2):280-297 doi:10.1016/S0010-4655(02)00280-1.

  4. Sobol’, I. M., Tarantola, S., Gatelli, D., Kucherenko, S. S., & Mauntz, W. (2007). Estimating the approximation error when fixing unessential factors in global sensitivity analysis. Reliability Engineering & System Safety, 92(7), 957-960. https://doi.org/10.1016/j.ress.2006.07.001

Delta Moment-Independent Measure#

SALib.analyze.delta.analyze(problem: Dict, X: ndarray, Y: ndarray, num_resamples: int = 100, conf_level: float = 0.95, print_to_console: bool = False, seed: int = None, y_resamples: int = None, method: str = 'all') → Dict[source]

Perform Delta Moment-Independent Analysis on model outputs.

Returns a dictionary with keys ‘delta’, ‘delta_conf’, ‘S1’, and ‘S1_conf’ (first-order Sobol’ indices), where each entry is a list of size D (the number of parameters) containing the indices in the same order as the parameter file.

Notes

Compatible with:

all samplers

Examples

>>> X = latin.sample(problem, 1000)
>>> Y = Ishigami.evaluate(X)
>>> Si = delta.analyze(problem, X, Y, print_to_console=True)
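
Continuing the example above, the method argument can limit the computation to one family of indices; a brief sketch:

>>> Si_delta = delta.analyze(problem, X, Y, method="delta")   # skip the first-order Sobol' estimates
>>> Si_delta['delta'], Si_delta['delta_conf']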
Parameters:
  • problem (dict) – The problem definition

  • X (numpy.matrix) – A NumPy matrix containing the model inputs

  • Y (numpy.array) – A NumPy array containing the model outputs

  • num_resamples (int) – The number of resamples when computing confidence intervals (default 100)

  • conf_level (float) – The confidence interval level (default 0.95)

  • print_to_console (bool) – Print results directly to console (default False)

  • y_resamples (int, optional) – Number of samples to use when resampling (bootstrap) (default None)

  • method ({"all", "delta", "sobol"}, optional) – Whether to compute “delta”, “sobol” or both (“all”) indices (default “all”)

References

  1. Borgonovo, E. (2007). “A new uncertainty importance measure.” Reliability Engineering & System Safety, 92(6):771-784, doi:10.1016/j.ress.2006.04.015.

  2. Plischke, E., E. Borgonovo, and C. L. Smith (2013). “Global sensitivity measures from given data.” European Journal of Operational Research, 226(3):536-550, doi:10.1016/j.ejor.2012.11.047.

Derivative-based Global Sensitivity Measure (DGSM)#

SALib.analyze.dgsm.analyze(problem, X, Y, num_resamples=100, conf_level=0.95, print_to_console=False, seed=None)[source]

Calculates Derivative-based Global Sensitivity Measure on model outputs.

Returns a dictionary with keys ‘vi’, ‘vi_std’, ‘dgsm’, and ‘dgsm_conf’, where each entry is a list of size D (the number of parameters) containing the indices in the same order as the parameter file.

Notes

Compatible with:

finite_diff : SALib.sample.finite_diff.sample()

Examples

>>> X = finite_diff.sample(problem, 1000)
>>> Y = Ishigami.evaluate(X)
>>> Si = dgsm.analyze(problem, X, Y, print_to_console=False)
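
A self-contained sketch of the same workflow, with the imports spelled out and the Ishigami problem definition used elsewhere on this page (sample size is illustrative):

>>> import numpy as np
>>> from SALib.sample import finite_diff
>>> from SALib.analyze import dgsm
>>> from SALib.test_functions import Ishigami
>>> problem = {'num_vars': 3, 'names': ['x1', 'x2', 'x3'], 'bounds': [[-np.pi, np.pi]] * 3}
>>> X = finite_diff.sample(problem, 1000)
>>> Y = Ishigami.evaluate(X)
>>> Si = dgsm.analyze(problem, X, Y)
>>> Si['dgsm'], Si['vi']     # 'dgsm' and 'vi' measures per parameter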
Parameters:
  • problem (dict) – The problem definition

  • X (numpy.matrix) – The NumPy matrix containing the model inputs

  • Y (numpy.array) – The NumPy array containing the model outputs

  • num_resamples (int) – The number of resamples used to compute the confidence intervals (default 100)

  • conf_level (float) – The confidence interval level (default 0.95)

  • print_to_console (bool) – Print results directly to console (default False)

  • seed (int) – Seed to generate a random number

References

  1. Sobol, I. M. and S. Kucherenko (2009). “Derivative based global sensitivity measures and their link with global sensitivity indices.” Mathematics and Computers in Simulation, 79(10):3009-3017, doi:10.1016/j.matcom.2009.01.023.

Fractional Factorial#

SALib.sample.ff.sample(problem, seed=None)[source]

Generates model inputs using a fractional factorial sample.

Returns a NumPy matrix containing the model inputs required for a fractional factorial analysis. The resulting matrix has D columns, where D is the smallest power of 2 that is greater than the number of parameters. These model inputs are intended to be used with SALib.analyze.ff.analyze().

The problem file is padded with a number of dummy variables called dummy_0 required for this procedure. These dummy variables can be used as a check for errors in the analyze procedure.

This algorithm is an implementation of the one described in Saltelli et al. (2008) [1].

Parameters:
  • problem (dict) – The problem definition

  • seed (int) – Seed to generate a random number

Returns:

sample

Return type:

numpy.array

References

  1. Saltelli, A., Ratto, M., Andres, T., Campolongo, F., Cariboni, J., Gatelli, D., Saisana, M., Tarantola, S., 2008. Global Sensitivity Analysis: The Primer. Wiley, West Sussex, U.K. http://doi.org/10.1002/9780470725184

SALib.analyze.ff.analyze(problem, X, Y, second_order=False, print_to_console=False, seed=None)[source]

Perform a fractional factorial analysis

Returns a dictionary with keys ‘ME’ (main effect) and ‘IE’ (interaction effect). The technique bulks out the number of parameters with dummy parameters to the nearest 2**n. Any results involving dummy parameters could indicate a problem with the model runs.

Notes

Compatible with:

ff : SALib.sample.ff.sample()

Examples

>>> X = sample(problem)
>>> Y = X[:, 0] + (0.1 * X[:, 1]) + ((1.2 * X[:, 2]) * (0.2 + X[:, 0]))
>>> analyze(problem, X, Y, second_order=True, print_to_console=True)
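
The snippet above assumes sample and analyze were imported directly from the ff modules; a sketch with the imports made explicit (the aliases only avoid a name clash and are not part of the API; the dummy-variable padding described above is applied internally):

>>> from SALib.sample.ff import sample as ff_sample
>>> from SALib.analyze.ff import analyze as ff_analyze
>>> problem = {'num_vars': 3, 'names': ['x1', 'x2', 'x3'], 'bounds': [[0, 1]] * 3}
>>> X = ff_sample(problem)
>>> Y = X[:, 0] + (0.1 * X[:, 1]) + ((1.2 * X[:, 2]) * (0.2 + X[:, 0]))
>>> Si = ff_analyze(problem, X, Y, second_order=True)
>>> Si['ME'], Si['IE']       # main effects and interaction effects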
Parameters:
  • problem (dict) – The problem definition

  • X (numpy.matrix) – The NumPy matrix containing the model inputs

  • Y (numpy.array) – The NumPy array containing the model outputs

  • second_order (bool, default=False) – Include interaction effects

  • print_to_console (bool, default=False) – Print results directly to console

  • seed (int) – Seed to generate a random number

Returns:

Si – A dictionary of sensitivity indices, including main effects ME, and interaction effects IE (if second_order is True)

Return type:

dict

References

  1. Saltelli, A., Ratto, M., Andres, T., Campolongo, F., Cariboni, J., Gatelli, D., Saisana, M., Tarantola, S., 2008. Global Sensitivity Analysis: The Primer. Wiley, West Sussex, U.K. http://doi.org/10.1002/9780470725184

PAWN Sensitivity Analysis#

SALib.analyze.pawn.analyze(problem: Dict, X: ndarray, Y: ndarray, S: int = 10, print_to_console: bool = False, seed: int = None)[source]

Performs PAWN sensitivity analysis.

The PAWN method [1] is a moment-independent approach to Global Sensitivity Analysis (GSA). It is described as producing robust results at relatively low sample sizes (see [2]) for the purpose of factor ranking and screening.

The distribution of model outputs is examined rather than their variation, as is typical in other common GSA approaches. The PAWN method further distinguishes itself from other moment-independent approaches by characterizing outputs by their cumulative distribution function (CDF) rather than their probability density function. Because it works with output CDFs rather than variances, PAWN can be applied when outputs are highly skewed or multi-modal, cases for which variance-based methods may produce unreliable results.

PAWN characterizes the relationship between inputs and outputs by quantifying the variation in the output distributions after conditioning an input. A factor is deemed non-influential if distributions coincide at all S conditioning intervals. The Kolmogorov-Smirnov statistic is used as a measure of distance between the distributions.

This implementation reports the PAWN index at the min, mean, median, and max across the slides/conditioning intervals as well as the coefficient of variation (CV). The median value is the typically reported value. As the CV is (standard deviation / mean), it indicates the level of variability across the slides, with values closer to zero indicating lower variation.

Notes

Compatible with:

all samplers

This implementation ignores all NaNs.

When applied to grouped factors, the analysis is conducted on each factor individually, and the mean of their results is reported.

Examples

>>> X = latin.sample(problem, 1000)
>>> Y = Ishigami.evaluate(X)
>>> Si = pawn.analyze(problem, X, Y, S=10, print_to_console=False)
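
Continuing the example above, the summary statistics described earlier can be read from the result set; the key names used below (median, CV) are assumed to match the min/mean/median/max/CV summary reported by this implementation and may differ between SALib versions:

>>> Si['median'], Si['CV']   # typically reported PAWN index and its variability across intervals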
Parameters:
  • problem (dict) – The problem definition

  • X (numpy.array) – A NumPy array containing the model inputs

  • Y (numpy.array) – A NumPy array containing the model outputs

  • S (int) – Number of slides; the conditioning intervals (default 10)

  • print_to_console (bool) – Print results directly to console (default False)

  • seed (int) – Seed value to ensure deterministic results

References

  1. Pianosi, F., Wagener, T., 2015.

    A simple and efficient method for global sensitivity analysis based on cumulative distribution functions. Environmental Modelling & Software 67, 1-11. https://doi.org/10.1016/j.envsoft.2015.01.004

  2. Pianosi, F., Wagener, T., 2018.

    Distribution-based sensitivity analysis from a generic input-output sample. Environmental Modelling & Software 108, 197-207. https://doi.org/10.1016/j.envsoft.2018.07.019

  3. Baroni, G., Francke, T., 2020.

    An effective strategy for combining variance- and distribution-based global sensitivity analysis. Environmental Modelling & Software, 134, 104851. https://doi.org/10.1016/j.envsoft.2020.104851

  4. Baroni, G., Francke, T., 2020.

    GSA-cvd Combining variance- and distribution-based global sensitivity analysis baronig/GSA-cvd

High-Dimensional Model Representation#

SALib.analyze.hdmr.analyze(problem: Dict, X: ndarray, Y: ndarray, maxorder: int = 2, maxiter: int = 100, m: int = 2, K: int = 20, R: int = None, alpha: float = 0.95, lambdax: float = 0.01, print_to_console: bool = False, seed: int = None) → Dict[source]

Compute global sensitivity indices using the meta-modeling technique known as High-Dimensional Model Representation (HDMR).

HDMR itself is not a sensitivity analysis method but a surrogate modeling approach. It constructs a map of the relationship between sets of high-dimensional inputs and the output system variables [1]. This I/O relation can be constructed using different basis functions (orthonormal polynomials, splines, etc.). The model decomposition can be expressed as

\[\widehat{y} = \sum_{u \subseteq \{1, 2, ..., d \}} f_u\]

where \(u\) represents any subset, including the empty set.

HDMR becomes extremely useful when the computational cost of obtaining sufficient Monte Carlo samples is prohibitive, as may be the case with the Sobol’ method. It uses least-squares regression to reduce the required number of samples and thus the number of function (model) evaluations. Another advantage of this method is that it can account for correlation among the model inputs. Unlike other variance-based methods, the main effects are the combination of structural (uncorrelated) and correlated contributions.

This method uses as input:

  • an N x d matrix of N different d-vectors of model inputs (factors/parameters)

  • an N x 1 vector of corresponding model outputs

Notes

Compatible with:

all samplers

Sets an emulate method allowing re-use of the emulator.

Examples

sp = ProblemSpec({
    'names': ['X1', 'X2', 'X3'],
    'bounds': [[-np.pi, np.pi]] * 3,
    # 'groups': ['A', 'B', 'A'],
    'outputs': ['Y']
})

(sp.sample_saltelli(2048)
    .evaluate(Ishigami.evaluate)
    .analyze_hdmr()
)

sp.emulate()
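
Equivalently, the analyze function can be called directly on an existing sample; a minimal sketch assuming Latin hypercube inputs and the Ishigami test function (sample size and options are illustrative):

>>> import numpy as np
>>> from SALib.sample import latin
>>> from SALib.analyze import hdmr
>>> from SALib.test_functions import Ishigami
>>> problem = {'num_vars': 3, 'names': ['X1', 'X2', 'X3'], 'bounds': [[-np.pi, np.pi]] * 3}
>>> X = latin.sample(problem, 2048)
>>> Y = Ishigami.evaluate(X)
>>> Si = hdmr.analyze(problem, X, Y, maxorder=2)
>>> Si['Sa'], Si['ST']       # uncorrelated contributions and total indices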
Parameters:
  • problem (dict) – The problem definition

  • X (numpy.matrix) – The NumPy matrix containing the model inputs, N rows by d columns

  • Y (numpy.array) – The NumPy array containing the model outputs for each row of X

  • maxorder (int (1-3, default: 2)) – Maximum HDMR expansion order

  • maxiter (int (1-1000, default: 100)) – Max iterations backfitting

  • m (int (2-10, default: 2)) – Number of B-spline intervals

  • K (int (1-100, default: 20)) – Number of bootstrap iterations

  • R (int (100-N/2, default: N/2)) – Number of bootstrap samples. Will be set to length of Y if K is set to 1.

  • alpha (float (0.5-1)) – Confidence interval F-test

  • lambdax (float (0-10, default: 0.01)) – Regularization term

  • print_to_console (bool) – Print results directly to console (default: False)

  • seed (int) – Seed to generate a random number

Returns:

Si – A dictionary of sensitivity indices containing the following entries:

  • Sa : Uncorrelated contribution of a term

  • Sa_conf : Confidence interval of Sa

  • Sb : Correlated contribution of a term

  • Sb_conf : Confidence interval of Sb

  • S : Total contribution of a particular term; the sum of Sa and Sb, representing first/second/third-order sensitivity indices

  • S_conf : Confidence interval of S

  • ST : Total contribution of a particular dimension/parameter

  • ST_conf : Confidence interval of ST

  • select : Number of selections (F-test)

  • Em : Emulator result set, with C1 (first-order coefficients), C2 (second-order coefficients), and C3 (third-order coefficients)

Return type:

ResultDict

References

  1. Rabitz, H. and Aliş, Ö.F., “General foundations of high dimensional model representations”, Journal of Mathematical Chemistry 25, 197-233 (1999) https://doi.org/10.1023/A:1019188517934

  2. Genyuan Li, H. Rabitz, P.E. Yelvington, O.O. Oluwole, F. Bacon, C.E. Kolb, and J. Schoendorf, “Global Sensitivity Analysis for Systems with Independent and/or Correlated Inputs”, Journal of Physical Chemistry A, Vol. 114 (19), pp. 6022 - 6032, 2010, https://doi.org/10.1021/jp9096919

Regional Sensitivity Analysis#

SALib.analyze.rsa.analyze(problem: Dict, X: ndarray, Y: ndarray, bins: int = 20, target: str = 'Y', print_to_console: bool = False, seed: int = None)[source]

Perform Regional Sensitivity Analysis (RSA), also known as Monte Carlo Filtering.

In a typical RSA, a desirable region of output space is defined. Outputs which fall within this region are categorized as being “behavioral” (\(B\)), and those outside are described as being “non-behavioral” (\(\bar{B}\)). The input factors are also partitioned into behavioral and non-behavioral subsets, such that \(f(X_{i}|B) \rightarrow (Y|B)\) and \(f(X_{i}|\bar{B}) \rightarrow (Y|\bar{B})\). The distributions of the two sub-samples are compared for each factor. The greater the difference between the two distributions, the more important the given factor is in driving model outputs.

The approach implemented in SALib partitions factor or output space into \(b\) bins (default: 20) according to their percentile values. Output space is targeted for analysis by default (target="Y"), such that \((Y|b_{i})\) is mapped back to \((X_{i}|b_{i})\). In other words, outputs falling within a given bin (\(b_{i}\)), and their corresponding inputs, are treated as behavioral, and those outside the bin as non-behavioral. This aids in answering the question “Which \(X_{i}\) contributes most toward a given range of outputs?”. Factor space can also be assessed (target="X"), such that \(f(X_{i}|b_{i}) \rightarrow (Y|b_{i})\) and \(f(X_{i}|b_{\sim i}) \rightarrow (Y|b_{\sim i})\). This aids in answering the question “To which regions of factor space are outputs most sensitive?”

The two-sample Cramér-von Mises (CvM) test is used to compare distributions. Results of the analysis indicate sensitivity across factor/output space. As the Cramér-von Mises criterion ranges from 0 to \(\infty\), a value of zero indicates that the two distributions being compared are identical, with larger values indicating greater differences.

Notes

Compatible with:

all samplers

When applied to grouped factors, the analysis is conducted on each factor individually, and the mean of the results for a group is reported.

Increasing the value of bins increases the granularity of the analysis (across factor space), but necessitates larger sample sizes.

This analysis will produce NaNs, indicating areas of factor space that did not have any samples, or for which the outputs were constant.

Analysis results are normalized against the maximum value such that 1.0 indicates the greatest sensitivity.
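
This entry has no example above, so a rough sketch of typical usage in the style of the other methods on this page (sample size is illustrative; any sampler can be used):

>>> import numpy as np
>>> from SALib.sample import latin
>>> from SALib.analyze import rsa
>>> from SALib.test_functions import Ishigami
>>> problem = {'num_vars': 3, 'names': ['x1', 'x2', 'x3'], 'bounds': [[-np.pi, np.pi]] * 3}
>>> X = latin.sample(problem, 2000)
>>> Y = Ishigami.evaluate(X)
>>> Si = rsa.analyze(problem, X, Y, bins=20, target="Y", print_to_console=True)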

Parameters:
  • problem (dict) – The problem definition

  • X (numpy.array) – A NumPy array containing the model inputs

  • Y (numpy.array) – A NumPy array containing the model outputs

  • bins (int) – The number of bins to use (default: 20)

  • target (str) – Assess factor space (“X”) or output space (“Y”) (default: “Y”)

  • print_to_console (bool) – Print results directly to console (default False)

  • seed (int) – Seed value to ensure deterministic results. Unused, but defined to maintain compatibility.

References

  1. Hornberger, G. M., and R. C. Spear. 1981.

    Approach to the preliminary analysis of environmental systems. Journal of Environmental Management 12:1. https://www.osti.gov/biblio/6396608-approach-preliminary-analysis-environmental-systems

  2. Pianosi, F., K. Beven, J. Freer, J. W. Hall, J. Rougier, D. B. Stephenson, and T. Wagener. 2016. Sensitivity analysis of environmental models: A systematic review with practical workflow. Environmental Modelling & Software 79:214-232. https://dx.doi.org/10.1016/j.envsoft.2016.02.008

  3. Saltelli, A., M. Ratto, T. Andres, F. Campolongo, J. Cariboni, D. Gatelli, M. Saisana, and S. Tarantola. 2008. Global Sensitivity Analysis: The Primer. Wiley, West Sussex, U.K. https://dx.doi.org/10.1002/9780470725184 Accessible at: http://www.andreasaltelli.eu/file/repository/Primer_Corrected_2022.pdf

Discrepancy Sensitivity Indices#

SALib.analyze.discrepancy.analyze(problem: Dict, X: ndarray, Y: ndarray, method: str = 'WD', print_to_console: bool = False, seed: int = None)[source]

Discrepancy indices.

Parameters:
  • problem (dict) – The problem definition

  • X (numpy.ndarray) – An array of model inputs.

  • Y (numpy.ndarray) – An array of model outputs.

  • method ({"WD", "CD", "MD", "L2-star"}) – Type of discrepancy. Refer to scipy.stats.qmc.discrepancy for more details. Default is “WD”.

  • print_to_console (bool, optional) – Print results directly to console (default False)

  • seed (int, optional) – Seed value to ensure deterministic results. Unused, but defined to maintain compatibility with other functions.

Notes

Compatible with:

all samplers

Based on 2D sub-projections of [Xi, Y], a discrepancy value is calculated for each Xi. This information is used as a measure of sensitivity.

Discrepancy analysis is very fast and is visually explainable. Considering two variables X1 and X2, X1 is more influential than X2 when the scatterplot of X1 against Y displays a more discernible shape than the scatterplot of X2 against Y.

For the method to work properly, the input parameter space needs to be uniformly covered, as the quality of the measure depends on the value of the discrepancy. Taking a 2D sub-projection, if the distribution of samples along Xi is not uniform, the discrepancy value will increase, i.e. the importance of this parameter would be inflated.

References

1. A. Puy, P.T. Roy and A. Saltelli. 2023. Discrepancy measures for sensitivity analysis. https://arxiv.org/abs/2206.13470

2. A. Saltelli, M. Ratto, T. Andres, F. Campolongo, J. Cariboni, D. Gatelli, M. Saisana, and S. Tarantola. 2008. Global Sensitivity Analysis: The Primer. Wiley, West Sussex, U.K. https://dx.doi.org/10.1002/9780470725184 Accessible at: http://www.andreasaltelli.eu/file/repository/Primer_Corrected_2022.pdf

Examples

>>> import numpy as np
>>> from SALib.sample import latin
>>> from SALib.analyze import discrepancy
>>> from SALib.test_functions import Ishigami
>>> problem = {
...   'num_vars': 3,
...   'names': ['x1', 'x2', 'x3'],
...   'bounds': [[-np.pi, np.pi]]*3
... }
>>> X = latin.sample(problem, 1000)
>>> Y = Ishigami.evaluate(X)
>>> Si = discrepancy.analyze(problem, X, Y, print_to_console=True)