Unverified Commit aafa00bc authored by Michael Osthege, committed by GitHub

return_inferencedata option for pm.sample (#3911)

* mention arviz functions by name in warning

* convert to InferenceData already in sample function
+ convert to InferenceData and save metadata to it already in sample()
+ pass idata instead of trace to convergence check, to avoid duplicate work
+ directly use arviz diagnostics instead of pymc3 aliases

* fix refactoring bugs

* fix indentation

* add return_inferencedata option
+ set to None
+ defaults to False

* Fix numpy docstring format.

Replaced "<varname>: <type>" with "<varname> : <type>" per numpy guidelines.

Fix spelling typo.

* pass model to from_pymc3 because of deprecation warning

* add test for return_inferencedata option

* advise against keeping warmup draws in a MultiTrace

* mention #3911

* pin to arviz 0.8.0 and address review feedback

* rerun/update notebook to show inferencedata trace

* fix typo

* make all from_pymc3 accessible to the user

* remove duplicate entry and fix wording

* address review feedback

* update arviz to 0.8.1 because of bugfix

* incorporate review feedback
+ more direct use of ArviZ
+ some wording things

* use arviz plot_ppc

* also ignore Visual Studio cache

* fix warmup saving logic and test

* require latest ArviZ patch

* change warning to nudge users towards InferenceData

* update ArviZ minimum version

* address review feedback

* start showing the FutureWarning about return_inferencedata in minor release >=3.10

* require arviz>=0.8.3 for latest bugfix
Co-authored-by: rpgoldman <rpgoldman@goldman-tribe.org>
parent 30d28f44
@@ -33,7 +33,8 @@ benchmarks/html/
benchmarks/results/
.pytest_cache/
# VSCode
# Visual Studio / VSCode
.vs/
.vscode/
.mypy_cache
......
@@ -11,6 +11,7 @@
- GP covariance functions can now be exponentiated by a scalar. See PR [#3852](https://github.com/pymc-devs/pymc3/pull/3852)
- `sample_posterior_predictive` can now feed on `xarray.Dataset` - e.g. from `InferenceData.posterior`. (see [#3846](https://github.com/pymc-devs/pymc3/pull/3846))
- `SamplerReport` (`MultiTrace.report`) now has properties `n_tune`, `n_draws`, `t_sampling` for increased convenience (see [#3827](https://github.com/pymc-devs/pymc3/pull/3827))
- `pm.sample(..., return_inferencedata=True)` can now directly return the trace as `arviz.InferenceData` (see [#3911](https://github.com/pymc-devs/pymc3/pull/3911))
- `pm.sample` now has support for adapting dense mass matrix using `QuadPotentialFullAdapt` (see [#3596](https://github.com/pymc-devs/pymc3/pull/3596), [#3705](https://github.com/pymc-devs/pymc3/pull/3705), [#3858](https://github.com/pymc-devs/pymc3/pull/3858), and [#3893](https://github.com/pymc-devs/pymc3/pull/3893)). Use `init="adapt_full"` or `init="jitter+adapt_full"` to use.
- `Moyal` distribution added (see [#3870](https://github.com/pymc-devs/pymc3/pull/3870)).
- `pm.LKJCholeskyCov` now automatically computes and returns the unpacked Cholesky decomposition, the correlations and the standard deviations of the covariance matrix (see [#3881](https://github.com/pymc-devs/pymc3/pull/3881)).
@@ -21,6 +22,8 @@
### Maintenance
- Tuning results no longer leak into sequentially sampled `Metropolis` chains (see #3733 and #3796).
- We'll deprecate the `Text` and `SQLite` backends and the `save_trace`/`load_trace` functions, since this is now done with ArviZ. (see [#3902](https://github.com/pymc-devs/pymc3/pull/3902))
- ArviZ `v0.8.3` is now the minimum required version
- In named models, `pm.Data` objects now get model-relative names (see [#3843](https://github.com/pymc-devs/pymc3/pull/3843)).
- `pm.sample` now takes 1000 draws and 1000 tuning samples by default, instead of 500 previously (see [#3855](https://github.com/pymc-devs/pymc3/pull/3855)).
- Moved argument division out of `NegativeBinomial` `random` method. Fixes [#3864](https://github.com/pymc-devs/pymc3/issues/3864) in the style of [#3509](https://github.com/pymc-devs/pymc3/pull/3509).
@@ -34,7 +37,7 @@
### Deprecations
- Remove `sample_ppc` and `sample_ppc_w` that were deprecated in 3.6.
- Deprecated `sd` in version 3.7 has been replaced by `sigma` now raises `DeprecationWarning` on using `sd` in continuous, mixed and timeseries distributions. (see [#3837](https://github.com/pymc-devs/pymc3/pull/3837) and [#3688](https://github.com/pymc-devs/pymc3/issues/3688)).
- Deprecated `sd` has been replaced by `sigma` (already in version 3.7) in continuous, mixed and timeseries distributions and now raises `DeprecationWarning` when `sd` is used. (see [#3837](https://github.com/pymc-devs/pymc3/pull/3837) and [#3688](https://github.com/pymc-devs/pymc3/issues/3688)).
- We'll deprecate the `Text` and `SQLite` backends and the `save_trace`/`load_trace` functions, since this is now done with ArviZ. (see [#3902](https://github.com/pymc-devs/pymc3/pull/3902))
- Dropped some deprecated kwargs and functions (see [#3906](https://github.com/pymc-devs/pymc3/pull/3906))
- Dropped the outdated 'nuts' initialization method for `pm.sample` (see [#3863](https://github.com/pymc-devs/pymc3/pull/3863)).
......
This source diff could not be displayed because it is too large.
@@ -55,7 +55,7 @@ def save_trace(trace: MultiTrace, directory: Optional[str]=None, overwrite=False
"""
warnings.warn(
'The `save_trace` function will soon be removed.'
'Instead, use ArviZ to save/load traces.',
'Instead, use `arviz.to_netcdf` to save traces.',
DeprecationWarning,
)
@@ -98,7 +98,7 @@ def load_trace(directory: str, model=None) -> MultiTrace:
"""
warnings.warn(
'The `load_trace` function will soon be removed.'
'Instead, use ArviZ to save/load traces.',
'Instead, use `arviz.from_netcdf` to load traces.',
DeprecationWarning,
)
straces = []
......
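The reworded warnings above point users at ArviZ's netCDF round-trip instead of the deprecated `save_trace`/`load_trace`. A minimal sketch of that workflow, assuming `arviz` (with a netCDF-capable backend such as `scipy` or `netCDF4`) is installed; the toy posterior is fabricated for illustration:

```python
import os
import tempfile

import numpy as np
import arviz

# Fabricate a small posterior (2 chains x 100 draws) instead of sampling
idata = arviz.from_dict(posterior={"mu": np.random.randn(2, 100)})

# Save/load with ArviZ, as the new warning text recommends
path = os.path.join(tempfile.mkdtemp(), "trace.nc")
arviz.to_netcdf(idata, path)
restored = arviz.from_netcdf(path)

print(np.allclose(idata.posterior["mu"], restored.posterior["mu"]))  # True
```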
@@ -18,6 +18,7 @@ import enum
import typing
from ..util import is_transformed_name, get_untransformed_name
import arviz
logger = logging.getLogger('pymc3')
@@ -98,8 +99,8 @@ class SamplerReport:
if errors:
raise ValueError('Serious convergence issues during sampling.')
def _run_convergence_checks(self, trace, model):
if trace.nchains == 1:
def _run_convergence_checks(self, idata: arviz.InferenceData, model):
if idata.posterior.sizes['chain'] == 1:
msg = ("Only one chain was sampled, this makes it impossible to "
"run some convergence checks")
warn = SamplerWarning(WarningType.BAD_PARAMS, msg, 'info',
@@ -107,9 +108,6 @@
self._add_warnings([warn])
return
from pymc3 import rhat, ess
from arviz import from_pymc3
valid_name = [rv.name for rv in model.free_RVs + model.deterministics]
varnames = []
for rv in model.free_RVs:
@@ -117,12 +115,11 @@
if is_transformed_name(rv_name):
rv_name2 = get_untransformed_name(rv_name)
rv_name = rv_name2 if rv_name2 in valid_name else rv_name
if rv_name in trace.varnames:
if rv_name in idata.posterior:
varnames.append(rv_name)
idata = from_pymc3(trace, log_likelihood=False)
self._ess = ess = ess(idata, var_names=varnames)
self._rhat = rhat = rhat(idata, var_names=varnames)
self._ess = ess = arviz.ess(idata, var_names=varnames)
self._rhat = rhat = arviz.rhat(idata, var_names=varnames)
warnings = []
rhat_max = max(val.max() for val in rhat.values())
@@ -147,7 +144,7 @@
warnings.append(warn)
eff_min = min(val.min() for val in ess.values())
n_samples = len(trace) * trace.nchains
n_samples = idata.posterior.sizes['chain'] * idata.posterior.sizes['draw']
if eff_min < 200 and n_samples >= 500:
msg = ("The estimated number of effective samples is smaller than "
"200 for some parameters.")
......
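The refactored convergence check above now calls ArviZ directly and reads the chain/draw counts off `idata.posterior.sizes`. The same calls can be tried standalone; the toy data below is mine, not from the PR:

```python
import numpy as np
import arviz

# Toy InferenceData: 2 well-mixed chains of 200 draws each
rng = np.random.RandomState(42)
idata = arviz.from_dict(posterior={"mu": rng.standard_normal((2, 200))})

# The diagnostics as now used by SamplerReport._run_convergence_checks
ess = arviz.ess(idata, var_names=["mu"])
rhat = arviz.rhat(idata, var_names=["mu"])

# Total sample count, computed from the posterior's dimensions
n_samples = idata.posterior.sizes["chain"] * idata.posterior.sizes["draw"]
print(n_samples)  # 400
```

For independent draws like these, `rhat` lands very close to 1.0 and `ess` close to the total sample count; the report's warnings trigger only when they deviate substantially.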
@@ -194,7 +194,7 @@ def load(name, model=None):
"""
warnings.warn(
'The `load` function will soon be removed. '
'Please use ArviZ to save traces. '
'Please use `arviz.from_netcdf` to load traces. '
'If you have good reasons for using the `load` function, file an issue and tell us about them. ',
DeprecationWarning,
)
@@ -239,7 +239,7 @@ def dump(name, trace, chains=None):
"""
warnings.warn(
'The `dump` function will soon be removed. '
'Please use ArviZ to save traces. '
'Please use `arviz.to_netcdf` to save traces. '
'If you have good reasons for using the `dump` function, file an issue and tell us about them. ',
DeprecationWarning,
)
......
@@ -22,11 +22,13 @@ from typing import Iterable as TIterable
from collections.abc import Iterable
from collections import defaultdict
from copy import copy
import packaging
import pickle
import logging
import time
import warnings
import arviz
import numpy as np
import theano.gradient as tg
from theano.tensor import Tensor
@@ -101,20 +103,20 @@ def instantiate_steppers(model, steps, selected_steps, step_kwargs=None):
Parameters
----------
model: Model object
model : Model object
A fully-specified model object
steps: step function or vector of step functions
steps : step function or vector of step functions
One or more step functions that have been assigned to some subset of
the model's parameters. Defaults to None (no assigned variables).
selected_steps: dictionary of step methods and variables
selected_steps : dictionary of step methods and variables
The step methods and the variables that were assigned to them.
step_kwargs: dict
step_kwargs : dict
Parameters for the samplers. Keys are the lower case names of
the step method, values a dict of arguments.
Returns
-------
methods: list
methods : list
List of step methods associated with the model's variables.
"""
if step_kwargs is None:
@@ -152,21 +154,21 @@ def assign_step_methods(model, step=None, methods=STEP_METHODS, step_kwargs=None
Parameters
----------
model: Model object
model : Model object
A fully-specified model object
step: step function or vector of step functions
step : step function or vector of step functions
One or more step functions that have been assigned to some subset of
the model's parameters. Defaults to ``None`` (no assigned variables).
methods: vector of step method classes
methods : vector of step method classes
The set of step methods from which the function may choose. Defaults
to the main step methods provided by PyMC3.
step_kwargs: dict
step_kwargs : dict
Parameters for the samplers. Keys are the lower case names of
the step method, values a dict of arguments.
Returns
-------
methods: list
methods : list
List of step methods associated with the model's variables.
"""
steps = []
@@ -244,6 +246,9 @@ def sample(
discard_tuned_samples=True,
compute_convergence_checks=True,
callback=None,
*,
return_inferencedata=None,
idata_kwargs: dict = None,
**kwargs
):
"""Draw samples from the posterior using the given step methods.
@@ -252,10 +257,10 @@
Parameters
----------
draws: int
draws : int
The number of samples to draw. Defaults to 1000. The number of tuned samples are discarded
by default. See ``discard_tuned_samples``.
init: str
init : str
Initialization method to use for auto-assigned NUTS samplers.
* auto: Choose a default initialization method automatically.
@@ -275,61 +280,67 @@
* advi_map: Initialize ADVI with MAP and use MAP as starting point.
* map: Use the MAP as starting point. This is discouraged.
* adapt_full: Adapt a dense mass matrix using the sample covariances
step: function or iterable of functions
step : function or iterable of functions
A step function or collection of functions. If there are variables without step methods,
step methods for those variables will be assigned automatically. By default the NUTS step
method will be used, if appropriate to the model; this is a good default for beginning
users.
n_init: int
n_init : int
Number of iterations of initializer. Only works for 'ADVI' init methods.
start: dict, or array of dict
start : dict, or array of dict
Starting point in parameter space (or partial point)
Defaults to ``trace.point(-1)`` if there is a trace provided and model.test_point if not
(defaults to empty dict). Initialization methods for NUTS (see ``init`` keyword) can
overwrite the default.
trace: backend, list, or MultiTrace
trace : backend, list, or MultiTrace
This should be a backend instance, a list of variables to track, or a MultiTrace object
with past values. If a MultiTrace object is given, it must contain samples for the chain
number ``chain``. If None or a list of variables, the NDArray backend is used.
Passing either "text" or "sqlite" is taken as a shortcut to set up the corresponding
backend (with "mcmc" used as the base name).
chain_idx: int
chain_idx : int
Chain number used to store sample in backend. If ``chains`` is greater than one, chain
numbers will start here.
chains: int
chains : int
The number of chains to sample. Running independent chains is important for some
convergence statistics and can also reveal multiple modes in the posterior. If ``None``,
then set to either ``cores`` or 2, whichever is larger.
cores: int
cores : int
The number of chains to run in parallel. If ``None``, set to the number of CPUs in the
system, but at most 4.
tune: int
tune : int
Number of iterations to tune, defaults to 1000. Samplers adjust the step sizes, scalings or
similar during tuning. Tuning samples will be drawn in addition to the number specified in
the ``draws`` argument, and will be discarded unless ``discard_tuned_samples`` is set to
False.
progressbar: bool, optional default=True
progressbar : bool, optional default=True
Whether or not to display a progress bar in the command line. The bar shows the percentage
of completion, the sampling speed in samples per second (SPS), and the estimated remaining
time until completion ("expected time of arrival"; ETA).
model: Model (optional if in ``with`` context)
random_seed: int or list of ints
model : Model (optional if in ``with`` context)
random_seed : int or list of ints
A list is accepted if ``cores`` is greater than one.
discard_tuned_samples: bool
discard_tuned_samples : bool
Whether to discard posterior samples of the tune interval.
compute_convergence_checks: bool, default=True
compute_convergence_checks : bool, default=True
Whether to compute sampler statistics like Gelman-Rubin and ``effective_n``.
callback: function, default=None
callback : function, default=None
A function which gets called for every sample from the trace of a chain. The function is
called with the trace and the current draw and will contain all samples for a single trace.
The ``draw.chain`` argument can be used to determine which of the active chains the sample
is drawn from.
Sampling can be interrupted by throwing a ``KeyboardInterrupt`` in the callback.
return_inferencedata : bool, optional, default=False
Whether to return the trace as an `arviz.InferenceData` object (True) or a `MultiTrace` object (False).
Defaults to `False`, but we'll switch to `True` in an upcoming release.
idata_kwargs : dict, optional
Keyword arguments for `arviz.from_pymc3`
Returns
-------
trace: pymc3.backends.base.MultiTrace
A ``MultiTrace`` object that contains the samples.
trace : pymc3.backends.base.MultiTrace or arviz.InferenceData
A ``MultiTrace`` or ArviZ ``InferenceData`` object that contains the samples.
Notes
-----
@@ -339,11 +350,11 @@ def sample(
If your model uses only one step method, you can address step method kwargs
directly. In particular, the NUTS step method has several options including:
* target_accept: float in [0, 1]. The step size is tuned such that we
* target_accept : float in [0, 1]. The step size is tuned such that we
approximate this acceptance rate. Higher values like 0.9 or 0.95 often
work better for problematic posteriors
* max_treedepth: The maximum depth of the trajectory tree
* step_scale: float, default 0.25
* max_treedepth : The maximum depth of the trajectory tree
* step_scale : float, default 0.25
The initial guess for the step size scaled down by :math:`1/n**(1/4)`
If your model uses multiple step methods, aka a Compound Step, then you have
@@ -412,6 +423,25 @@ def sample(
if not isinstance(random_seed, Iterable):
raise TypeError("Invalid value for `random_seed`. Must be tuple, list or int")
if not discard_tuned_samples and not return_inferencedata:
warnings.warn(
"Tuning samples will be included in the returned `MultiTrace` object, which can lead to"
" complications in your downstream analysis. Please consider to switch to `InferenceData`:\n"
"`pm.sample(..., return_inferencedata=True)`",
UserWarning
)
if return_inferencedata is None:
v = packaging.version.parse(pm.__version__)
if v.major > 3 or v.minor >= 10:
warnings.warn(
"In an upcoming release, pm.sample will return an `arviz.InferenceData` object instead of a `MultiTrace` by default. "
"You can pass return_inferencedata=True or return_inferencedata=False to be safe and silence this warning.",
FutureWarning
)
# set the default
return_inferencedata = False
if start is not None:
for start_vals in start:
_check_start_shape(model, start_vals)
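The default-switching logic above gates the `FutureWarning` on the installed PyMC3 version. The comparison can be sketched in isolation with `packaging`; the function name is mine, and the gate is written with an explicit `major == 3` guard, which behaves identically to the committed expression for all versions >= 3:

```python
import packaging.version

def warns_by_default(version_string):
    # Mirror of the gate above: warn from minor release >= 3.10, or any 4.x
    v = packaging.version.parse(version_string)
    return v.major > 3 or (v.major == 3 and v.minor >= 10)

print(warns_by_default("3.9.3"))   # False
print(warns_by_default("3.10.0"))  # True
print(warns_by_default("4.0.0"))   # True
```

Users on affected versions can silence the warning by passing `return_inferencedata` explicitly, as the warning text says.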
@@ -561,15 +591,24 @@ def sample(
f'took {trace.report.t_sampling:.0f} seconds.'
)
idata = None
if compute_convergence_checks or return_inferencedata:
ikwargs = dict(model=model, save_warmup=not discard_tuned_samples)
if idata_kwargs:
ikwargs.update(idata_kwargs)
idata = arviz.from_pymc3(trace, **ikwargs)
if compute_convergence_checks:
if draws - tune < 100:
warnings.warn("The number of samples is too small to check convergence reliably.")
else:
trace.report._run_convergence_checks(trace, model)
trace.report._run_convergence_checks(idata, model)
trace.report._log_summary()
return trace
if return_inferencedata:
return idata
else:
return trace
def _check_start_shape(model, start):
@@ -662,29 +701,29 @@ def _sample_population(
Parameters
----------
draws: int
draws : int
The number of samples to draw
chain: int
chain : int
The number of the first chain in the population
chains: int
chains : int
The total number of chains in the population
start: list
start : list
Start points for each chain
random_seed: int or list of ints, optional
random_seed : int or list of ints, optional
A list is accepted if ``cores`` is greater than one.
step: function
step : function
Step function (should be or contain a population step method)
tune: int, optional
tune : int, optional
Number of iterations to tune, if applicable (defaults to None)
model: Model (optional if in ``with`` context)
progressbar: bool
model : Model (optional if in ``with`` context)
progressbar : bool
Show progress bars? (defaults to True)
parallelize: bool
parallelize : bool
Setting for multiprocess parallelization
Returns
-------
trace: MultiTrace
trace : MultiTrace
Contains samples of all chains
"""
# create the generator that iterates all chains in parallel
@@ -729,31 +768,31 @@ def _sample(
Parameters
----------
chain: int
chain : int
Number of the chain that the samples will belong to.
progressbar: bool
progressbar : bool
Whether or not to display a progress bar in the command line. The bar shows the percentage
of completion, the sampling speed in samples per second (SPS), and the estimated remaining
time until completion ("expected time of arrival"; ETA).
random_seed: int or list of ints
random_seed : int or list of ints
A list is accepted if ``cores`` is greater than one.
start: dict
start : dict
Starting point in parameter space (or partial point)
draws: int
draws : int
The number of samples to draw
step: function
step : function
Step function
trace: backend, list, or MultiTrace
trace : backend, list, or MultiTrace
This should be a backend instance, a list of variables to track, or a MultiTrace object
with past values. If a MultiTrace object is given, it must contain samples for the chain
number ``chain``. If None or a list of variables, the NDArray backend is used.
tune: int, optional
tune : int, optional
Number of iterations to tune, if applicable (defaults to None)
model: Model (optional if in ``with`` context)
model : Model (optional if in ``with`` context)
Returns
-------
strace: pymc3.backends.base.BaseTrace
strace : pymc3.backends.base.BaseTrace
A ``BaseTrace`` object that contains the samples for this chain.
"""
skip_first = kwargs.get("skip_first", 0)
@@ -794,26 +833,26 @@ def iter_sample(
Parameters
----------
draws: int
draws : int
The number of samples to draw
step: function
step : function
Step function
start: dict
start : dict
Starting point in parameter space (or partial point). Defaults to trace.point(-1) if
there is a trace provided and model.test_point if not (defaults to empty dict)
trace: backend, list, or MultiTrace
trace : backend, list, or MultiTrace
This should be a backend instance, a list of variables to track, or a MultiTrace object
with past values. If a MultiTrace object is given, it must contain samples for the chain
number ``chain``. If None or a list of variables, the NDArray backend is used.
chain: int, optional
chain : int, optional
Chain number used to store sample in backend. If ``cores`` is greater than one, chain numbers
will start here.
tune: int, optional
tune : int, optional
Number of iterations to tune, if applicable (defaults to None)
model: Model (optional if in ``with`` context)
random_seed: int or list of ints, optional
model : Model (optional if in ``with`` context)
random_seed : int or list of ints, optional
A list is accepted if ``cores`` is greater than one.
callback:
callback :
A function which gets called for every sample from the trace of a chain. The function is
called with the trace and the current draw and will contain all samples for a single trace.
the ``draw.chain`` argument can be used to determine which of the active chains the sample
@@ -822,7 +861,7 @@ def iter_sample(
Yields
------
trace: MultiTrace
trace : MultiTrace
Contains all samples up to the current iteration
Examples
@@ -844,31 +883,31 @@ def _iter_sample(
Parameters
----------
draws: int
draws : int
The number of samples to draw
step: function
step : function
Step function
start: dict, optional
start : dict, optional
Starting point in parameter space (or partial point). Defaults to trace.point(-1) if
there is a trace provided and model.test_point if not (defaults to empty dict)
trace: backend, list, MultiTrace, or None
trace : backend, list, MultiTrace, or None
This should be a backend instance, a list of variables to track, or a MultiTrace object
with past values. If a MultiTrace object is given, it must contain samples for the chain
number ``chain``. If None or a list of variables, the NDArray backend is used.
chain: int, optional
chain : int, optional
Chain number used to store sample in backend. If ``cores`` is greater than one, chain numbers
will start here.
tune: int, optional
tune : int, optional
Number of iterations to tune, if applicable (defaults to None)
model: Model (optional if in ``with`` context)
random_seed: int or list of ints, optional
model : Model (optional if in ``with`` context)
random_seed : int or list of ints, optional
A list is accepted if ``cores`` is greater than one.
Yields
------
strace: BaseTrace
strace : BaseTrace
The trace object containing the samples for this chain
diverging: bool
diverging : bool
Indicates if the draw is divergent. Only available with some samplers.
"""
model = modelcontext(model)
@@ -955,11 +994,11 @@ class PopulationStepper:
Parameters
----------
steppers: list
steppers : list
A collection of independent step methods, one for each chain.
parallelize: bool
parallelize : bool
Indicates if parallelization via multiprocessing is desired.
progressbar: bool
progressbar : bool
Should we display a progress bar showing relative progress?
"""
self.nchains = len(steppers)
@@ -1028,11 +1067,11 @@ class PopulationStepper:
Parameters
----------
c: int
c : int
number of this chain
stepper: BlockedStep
stepper : BlockedStep
a step method such as CompoundStep
slave_end: multiprocessing.connection.PipeConnection
slave_end : multiprocessing.connection.PipeConnection
This is our connection to the main process
"""
# re-seed each child process to make them unique
@@ -1070,14 +1109,14 @@ class PopulationStepper:
Parameters
----------
tune_stop: bool
tune_stop : bool
Indicates if the condition (i == tune) is fulfilled
population: list
population : list
Current Points of all chains
Returns
-------
update: list
update : list
List of (Point, stats) tuples for all chains
"""
updates = [None] * self.nchains
@@ -1110,27 +1149,27 @@ def _prepare_iter_population(
Parameters
----------
draws: int
draws : int
The number of samples to draw
chains: list
chains : list
The chain numbers in the population
step: function
step : function
Step function (should be or contain a population step method)
start: list
start : list
Start points for each chain
parallelize: bool
parallelize : bool
Setting for multiprocess parallelization
tune: int, optional
tune : int, optional
Number of iterations to tune, if applicable (defaults to None)
model: Model (optional if in ``with`` context)
random_seed: int or list of ints, optional
model : Model (optional if in ``with`` context)
random_seed : int or list of ints, optional
A list is accepted if ``cores`` is greater than one.
progressbar: bool
progressbar : bool
``progressbar`` argument for the ``PopulationStepper``, (defaults to True)
Returns
-------
_iter_population: generator
_iter_population : generator
Yields traces of all chains at the same time
"""
# chains contains the chain numbers, but for indexing we need indices...
@@ -1197,22 +1236,22 @@ def _iter_population(draws, tune, popstep, steppers, traces, points):
Parameters
----------
draws: int
draws : int
number of draws per chain
tune: int
tune : int
number of tuning steps
popstep: PopulationStepper
popstep : PopulationStepper
the helper object for (parallelized) stepping of chains
steppers: list
steppers : list
The step methods for each chain
traces: list
traces : list
Traces for each chain
points: list
points : list
population of chain states
Yields
------
traces: list
traces : list
List of trace objects of the individual chains
"""
try:
@@ -1258,21 +1297,21 @@ def _choose_backend(trace, chain, shortcuts=None, **kwds):
Parameters
----------
trace: backend, list, MultiTrace, or None
trace : backend, list, MultiTrace, or None
This should be a BaseTrace, backend name (e.g. text, sqlite, or hdf5),
list of variables to track, or a MultiTrace object with past values.
If a MultiTrace object is given, it must contain samples for the chain number ``chain``.
If None or a list of variables, the NDArray backend is used.
chain: int
chain : int
Number of the chain of interest.
shortcuts: dict, optional
shortcuts : dict, optional
maps backend names to a dict of backend class and name (defaults to pm.backends._shortcuts)
**kwds
**kwds :
keyword arguments to forward to the backend creation