SALib.util package#

Submodules#

SALib.util.problem module#

class SALib.util.problem.ProblemSpec(*args, **kwargs)[source]#

Bases: dict

Dictionary-like object representing an SALib Problem specification.

samples#
Type:

np.array of generated samples

results#
Type:

np.array of evaluations (i.e., model outputs)

analysis#
Type:

np.array or dict of sensitivity indices

Attributes:
analysis
results
samples

Methods

analyze(func, *args, **kwargs)

Analyze sampled results using the given function.

analyze_parallel(func, *args[, nprocs])

Analyze sampled results using the given function in parallel.

clear()

copy()

evaluate(func, *args, **kwargs)

Evaluate a given model.

evaluate_distributed(func, *args[, nprocs, ...])

Distribute model evaluation across a cluster.

evaluate_parallel(func, *args[, nprocs])

Evaluate model locally in parallel.

fromkeys(iterable[, value])

Create a new dictionary with keys from iterable and values set to value.

get(key[, default])

Return the value for key if key is in the dictionary, else default.

heatmap([metric, index, title, ax])

Plot results as a heatmap.

items()

keys()

plot(**kwargs)

Plot results as a bar chart.

pop(key[, default])

Remove the specified key and return the corresponding value. If the key is not found, return the default if given; otherwise, raise a KeyError.

popitem(/)

Remove and return a (key, value) pair as a 2-tuple.

sample(func, *args, **kwargs)

Create samples using the given function.

set_results(results)

Set previously available model results.

set_samples(samples)

Set previous samples used.

setdefault(key[, default])

Insert key with a value of default if key is not in the dictionary.

to_df()

Convert results to Pandas DataFrame.

update([E, ]**F)

If E is present and has a .keys() method: for k in E: D[k] = E[k].
If E is present and lacks a .keys() method: for k, v in E: D[k] = v.
In either case, this is followed by: for k in F: D[k] = F[k].

values()

property analysis#
analyze(func, *args, **kwargs)[source]#

Analyze sampled results using the given function.

Parameters:
  • func (function) – Analysis method to use. The provided function must accept the problem specification as the first parameter, X values if needed, then Y values, and return a numpy array.

  • *args (list) – Additional arguments to be passed to func

  • nprocs (int) – If specified, attempts to parallelize the analysis across outputs

  • **kwargs (dict) – Additional keyword arguments passed to func

Returns:

self

Return type:

ProblemSpec object
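The contract described above (problem specification first, X values if needed, then Y values, returning a numpy array) can be illustrated with a toy analysis function. `corr_analyze` below is a hypothetical example for illustration only, not part of SALib:

```python
import numpy as np

def corr_analyze(problem, X, Y):
    """Toy analysis: absolute Pearson correlation of each input with Y.

    Follows the documented contract: problem spec first, then X values,
    then Y values, returning a numpy array (one index per input).
    """
    return np.array([abs(np.corrcoef(X[:, i], Y)[0, 1])
                     for i in range(problem["num_vars"])])

problem = {"num_vars": 2, "names": ["x1", "x2"]}
rng = np.random.default_rng(42)
X = rng.uniform(size=(100, 2))
Y = 3.0 * X[:, 0] + 0.1 * X[:, 1]      # Y is driven mainly by x1
indices = corr_analyze(problem, X, Y)  # x1's index should dominate
```

A function with this shape could then be passed as `sp.analyze(corr_analyze)`.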

analyze_parallel(func, *args, nprocs=None, **kwargs)[source]#

Analyze sampled results using the given function in parallel.

Parameters:
  • func (function) – Analysis method to use. The provided function must accept the problem specification as the first parameter, X values if needed, Y values, and return a numpy array.

  • *args (list) – Additional arguments to be passed to func

  • nprocs (int) – Number of processors to use. Capped to the number of outputs or available processors.

  • **kwargs (dict) – Additional keyword arguments passed to func

Returns:

self

Return type:

ProblemSpec object

evaluate(func, *args, **kwargs)[source]#

Evaluate a given model.

Parameters:
  • func (function) – Model, or function that wraps a model, to be run/evaluated. The provided function is required to accept a numpy array of inputs as its first parameter and must return a numpy array of results.

  • *args (list) – Additional arguments to be passed to func

  • nprocs (int) – If specified, attempts to parallelize model evaluations

  • **kwargs (dict) – Additional keyword arguments passed to func

Returns:

self

Return type:

ProblemSpec object
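A model compatible with `evaluate` only needs to map a numpy array of inputs (one row per sample) to a numpy array of results. The Ishigami test function, a common sensitivity-analysis benchmark, is a convenient sketch of that contract:

```python
import numpy as np

def ishigami(X: np.ndarray, A: float = 7.0, B: float = 0.1) -> np.ndarray:
    """Ishigami test function.

    Accepts a numpy array of inputs (one row per sample, three columns)
    as its first parameter and returns a numpy array of results,
    as `evaluate` requires.
    """
    return (np.sin(X[:, 0])
            + A * np.sin(X[:, 1]) ** 2
            + B * X[:, 2] ** 4 * np.sin(X[:, 0]))

X = np.zeros((5, 3))   # five samples, all at the origin
Y = ishigami(X)        # every term vanishes at zero
```

A function of this form could then be passed as `sp.evaluate(ishigami)`, which stores the outputs on the ProblemSpec.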

evaluate_distributed(func, *args, nprocs=1, servers=None, verbose=False, **kwargs)[source]#

Distribute model evaluation across a cluster.

Usage Conditions:

  • The provided function needs to accept a numpy array of inputs as its first parameter

  • The provided function must return a numpy array of results

Parameters:
  • func (function) – Model, or function that wraps a model, to be run in parallel

  • nprocs (int) – Number of processors to use for each node. Defaults to 1.

  • servers (list[str] or None) – IP addresses or aliases for each server/node to use.

  • verbose (bool) – Display job execution statistics. Defaults to False.

  • *args (list) – Additional arguments to be passed to func

  • **kwargs (dict) – Additional keyword arguments passed to func

Returns:

self

Return type:

ProblemSpec object

evaluate_parallel(func, *args, nprocs=None, **kwargs)[source]#

Evaluate model locally in parallel.

All detected processors will be used if nprocs is not specified.

Parameters:
  • func (function) – Model, or function that wraps a model, to be run in parallel. The provided function needs to accept a numpy array of inputs as its first parameter and must return a numpy array of results.

  • nprocs (int) – Number of processors to use. Capped to the number of available processors.

  • *args (list) – Additional arguments to be passed to func

  • **kwargs (dict) – Additional keyword arguments passed to func

Returns:

self

Return type:

ProblemSpec object
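The general idea behind local parallel evaluation can be sketched without SALib: split the sample array into row-chunks, evaluate each chunk concurrently, and concatenate the results. This is an illustration only, not SALib's actual implementation (which also handles process management and caps nprocs for you); threads are used here for simplicity:

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def evaluate_parallel_sketch(func, X, nprocs=2):
    """Evaluate `func` over row-chunks of X concurrently.

    Illustrative sketch of chunked parallel evaluation; `func` follows
    the documented contract (numpy array in, numpy array out).
    """
    chunks = np.array_split(X, nprocs)              # split rows into nprocs chunks
    with ThreadPoolExecutor(max_workers=nprocs) as pool:
        results = list(pool.map(func, chunks))      # evaluate chunks concurrently
    return np.concatenate(results)                  # reassemble in original order

X = np.linspace(0.0, 1.0, 12).reshape(6, 2)
Y = evaluate_parallel_sketch(lambda x: x.sum(axis=1), X)
```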

heatmap(metric: str = None, index: str = None, title: str = None, ax=None)[source]#

Plot results as a heatmap.

Parameters:
  • metric (str or None) – Name of the output to plot (displays all if None)

  • index (str or None) – Name of the index to plot, dependent on which analysis was conducted (ST, S1, etc.; displays all if None)

  • title (str) – Title of the plot (defaults to the metric name)

  • ax (matplotlib axes object) – Axes to use for the plot. Creates a new figure if not provided.

Returns:

ax

Return type:

matplotlib axes object

plot(**kwargs)[source]#

Plot results as a bar chart.

Returns:

axes

Return type:

matplotlib axes object

property results#
sample(func, *args, **kwargs)[source]#

Create samples using the given function.

Parameters:
  • func (function) – Sampling method to use. The given function must accept the SALib problem specification as the first parameter and return a numpy array.

  • *args (list) – Additional arguments to be passed to func

  • **kwargs (dict) – Additional keyword arguments passed to func

Returns:

self

Return type:

ProblemSpec object
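A sampling function only needs to accept the problem specification and return a numpy array. `uniform_sample` below is a hypothetical sampler written to that contract, for illustration only:

```python
import numpy as np

def uniform_sample(problem, N, seed=None):
    """Draw N uniform samples within the problem's bounds.

    Matches the documented contract: problem specification first,
    returning a numpy array (N rows, num_vars columns).
    """
    rng = np.random.default_rng(seed)
    bounds = np.asarray(problem["bounds"], dtype=float)  # (num_vars, 2)
    lo, hi = bounds[:, 0], bounds[:, 1]
    return lo + (hi - lo) * rng.uniform(size=(N, problem["num_vars"]))

problem = {"num_vars": 2,
           "names": ["x1", "x2"],
           "bounds": [[0.0, 1.0], [-1.0, 1.0]]}
X = uniform_sample(problem, 8, seed=0)
```

A sampler like this could then be attached via `sp.sample(uniform_sample, 8)`.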

property samples#
set_results(results: ndarray)[source]#

Set previously available model results.

set_samples(samples: ndarray)[source]#

Set previous samples used.

to_df()[source]#

Convert results to Pandas DataFrame.

SALib.util.results module#

class SALib.util.results.ResultDict(*args, **kwargs)[source]#

Bases: dict

Dictionary holding analysis results.

Conversion methods (e.g., to Pandas DataFrames) are attached as necessary by each implementing method.

Methods

clear()

copy()

fromkeys(iterable[, value])

Create a new dictionary with keys from iterable and values set to value.

get(key[, default])

Return the value for key if key is in the dictionary, else default.

items()

keys()

plot([ax])

Create a bar chart of results.

pop(key[, default])

Remove the specified key and return the corresponding value. If the key is not found, return the default if given; otherwise, raise a KeyError.

popitem(/)

Remove and return a (key, value) pair as a 2-tuple.

setdefault(key[, default])

Insert key with a value of default if key is not in the dictionary.

to_df()

Convert dict structure into Pandas DataFrame.

update([E, ]**F)

If E is present and has a .keys() method: for k in E: D[k] = E[k].
If E is present and lacks a .keys() method: for k, v in E: D[k] = v.
In either case, this is followed by: for k in F: D[k] = F[k].

values()

plot(ax=None)[source]#

Create a bar chart of results.

to_df()[source]#

Convert dict structure into Pandas DataFrame.
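The typical conversion (one row per parameter, one column per index) can be sketched with pandas directly. The key names below mirror common Sobol' outputs, but the values are invented for illustration:

```python
import numpy as np
import pandas as pd

# A ResultDict is dict-like: index arrays keyed by index name.
# These values are hypothetical, for a three-parameter problem.
results = {
    "S1": np.array([0.31, 0.44, 0.00]),
    "ST": np.array([0.55, 0.44, 0.24]),
}
names = ["x1", "x2", "x3"]

df = pd.DataFrame(results, index=names)  # one row per parameter
```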

SALib.util.util_funcs module#

SALib.util.util_funcs.avail_approaches(pkg)[source]#

Create list of available modules.

Parameters:

pkg (module) – module to inspect

Returns:

method – A list of available submodules

Return type:

list
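Listing the immediate submodules of a package, as avail_approaches does for SALib's method packages, can be sketched with the standard pkgutil module. The stdlib json package stands in for a SALib package here; the real function may filter the list further:

```python
import json
import pkgutil

def list_submodules(pkg):
    """Return the names of a package's immediate submodules."""
    return [info.name for info in pkgutil.iter_modules(pkg.__path__)]

mods = list_submodules(json)  # e.g. includes "decoder" and "encoder"
```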

SALib.util.util_funcs.read_param_file(filename, delimiter=None)[source]#

Unpacks a parameter file into a dictionary

Reads a parameter file of format:

Param1,0,1,Group1,dist1
Param2,0,1,Group2,dist2
Param3,0,1,Group3,dist3

(Group and Dist columns are optional)

Returns a dictionary containing:
  • names - the names of the parameters

  • bounds - a list of lists of lower and upper bounds

  • num_vars - a scalar indicating the number of variables

    (the length of names)

  • groups - a list of group names (strings) for each variable

  • dists - a list of distributions for the problem,

    None if not specified or all uniform

Parameters:
  • filename (str) – The path to the parameter file

  • delimiter (str, default=None) – The delimiter used in the file to distinguish between columns
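The documented file format and return structure can be reproduced with a short stdlib sketch. This illustrates the expected layout only; it is not SALib's implementation (which also handles the optional-column and delimiter-detection details):

```python
import csv
import io

PARAM_TEXT = """\
Param1,0,1,Group1,dist1
Param2,0,1,Group2,dist2
Param3,0,1,Group3,dist3
"""

def parse_params(text, delimiter=","):
    """Parse the documented format into the documented dictionary."""
    names, bounds, groups, dists = [], [], [], []
    for row in csv.reader(io.StringIO(text), delimiter=delimiter):
        names.append(row[0])
        bounds.append([float(row[1]), float(row[2])])
        groups.append(row[3] if len(row) > 3 else None)   # Group column is optional
        dists.append(row[4] if len(row) > 4 else "unif")  # Dist column is optional
    return {"names": names, "bounds": bounds, "num_vars": len(names),
            "groups": groups, "dists": dists}

problem = parse_params(PARAM_TEXT)
```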

Module contents#

A set of utility functions

SALib.util.avail_approaches(pkg)[source]#

Create list of available modules.

Parameters:

pkg (module) – module to inspect

Returns:

method – A list of available submodules

Return type:

list

SALib.util.read_param_file(filename, delimiter=None)[source]#

Unpacks a parameter file into a dictionary

Reads a parameter file of format:

Param1,0,1,Group1,dist1
Param2,0,1,Group2,dist2
Param3,0,1,Group3,dist3

(Group and Dist columns are optional)

Returns a dictionary containing:
  • names - the names of the parameters

  • bounds - a list of lists of lower and upper bounds

  • num_vars - a scalar indicating the number of variables

    (the length of names)

  • groups - a list of group names (strings) for each variable

  • dists - a list of distributions for the problem,

    None if not specified or all uniform

Parameters:
  • filename (str) – The path to the parameter file

  • delimiter (str, default=None) – The delimiter used in the file to distinguish between columns

SALib.util.scale_samples(params: ndarray, problem: Dict)[source]#

Scale samples based on specified distribution (defaulting to uniform).

Adds an entry (sample_scaled) to the problem specification to indicate that samples have been scaled, maintaining backwards compatibility.

Parameters:
  • params (np.ndarray) – numpy array of dimensions num_params-by-N, where N is the number of samples

  • problem (dictionary) – SALib problem specification

Return type:

np.ndarray, scaled samples
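For the default uniform case, the scaling is a linear map from the unit hypercube to each parameter's bounds. The sketch below shows only that case, using an (N, num_vars) sample layout for illustration; SALib's own function additionally handles the distributions named in the problem's dists entry:

```python
import numpy as np

def scale_uniform(unit_samples, bounds):
    """Map samples in [0, 1] onto [lower, upper] per parameter.

    unit_samples : array in the unit hypercube, one column per parameter
    bounds       : sequence of [lower, upper] pairs, one per parameter
    """
    b = np.asarray(bounds, dtype=float)
    lo, width = b[:, 0], b[:, 1] - b[:, 0]
    return lo + width * unit_samples    # broadcast over rows (samples)

u = np.array([[0.0, 0.5],
              [1.0, 0.25]])
scaled = scale_uniform(u, [[0.0, 10.0], [-1.0, 1.0]])
```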