Unverified commit 18f1e513 authored by Ahan M R, committed by GitHub

Closes #3051 - logp numpy array input fixed (#3836)

* Closes #3051 - logp numpy array input- fixed

Converts the `int` type to `<TensorType(int64, scalar)>` so the value can be passed through `astype`, allowing `logp(self, value)` to accept a numpy array argument.

* Closes #3051 - Allows numpy array input to logp

Allows `logp(self, value)` to take a `value` of type numpy array without errors.

* fixes #3051

* Fixes #3051

* updated RELEASE-NOTES.md

Added the deprecation of `sd` with `sigma` in newer version with DeprecationWarning on usage of `sd`.

* directly use floatX

* mention #3836

* move all deprecations into their own chapter, as done in previous releases

* add regression test
Co-authored-by: Michael Osthege <michael.osthege@outlook.com>
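The dtype coercion described in the commit messages above can be sketched in plain numpy. This is a simplified, hypothetical stand-in for `pymc3.theanof.floatX` (with `FLOATX` standing in for `theano.config.floatX`), illustrating why it works for a plain Python `int` such as `value.shape[-1]` of a numpy array:

```python
import numpy as np

FLOATX = "float64"  # stand-in for theano.config.floatX (an assumption here)

def floatX(x):
    """Coerce `x` to the configured float dtype (simplified sketch)."""
    try:
        return x.astype(FLOATX)             # numpy arrays / symbolic tensors
    except AttributeError:
        return np.asarray(x, dtype=FLOATX)  # plain Python scalars have no .astype

value = np.random.normal(size=(20, 3))
# shape[-1] is a plain int; coercing it via floatX avoids the dtype error
# that the old intX(...).astype(...) path hit for numpy array inputs.
k = floatX(value.shape[-1])
print(k, k.dtype)
```

Calling `floatX(3)` falls through to the `np.asarray` branch, which is why a numpy array passed to `logp` no longer trips up the scalar shape term.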
parent 727b88ad
@@ -16,19 +16,23 @@
- `pm.LKJCholeskyCov` now automatically computes and returns the unpacked Cholesky decomposition, the correlations and the standard deviations of the covariance matrix (see [#3881](https://github.com/pymc-devs/pymc3/pull/3881)).
### Maintenance
- Remove `sample_ppc` and `sample_ppc_w` that were deprecated in 3.6.
- Tuning results no longer leak into sequentially sampled `Metropolis` chains (see #3733 and #3796).
- `sd`, deprecated in version 3.7 and replaced by `sigma`, now raises a `DeprecationWarning` when used in continuous, mixed and timeseries distributions (see #3837 and #3688).
- We'll deprecate the `Text` and `SQLite` backends and the `save_trace`/`load_trace` functions, since this is now done with ArviZ. (see [#3902](https://github.com/pymc-devs/pymc3/pull/3902))
- In named models, `pm.Data` objects now get model-relative names (see [#3843](https://github.com/pymc-devs/pymc3/pull/3843)).
- `pm.sample` now takes 1000 draws and 1000 tuning samples by default, instead of 500 previously (see [#3855](https://github.com/pymc-devs/pymc3/pull/3855)).
- Dropped some deprecated kwargs and functions (see [#3906](https://github.com/pymc-devs/pymc3/pull/3906))
- Dropped the outdated 'nuts' initialization method for `pm.sample` (see [#3863](https://github.com/pymc-devs/pymc3/pull/3863)).
- Moved argument division out of `NegativeBinomial` `random` method. Fixes [#3864](https://github.com/pymc-devs/pymc3/issues/3864) in the style of [#3509](https://github.com/pymc-devs/pymc3/pull/3509).
- The Dirichlet distribution now raises a ValueError when it's initialized with <= 0 values (see [#3853](https://github.com/pymc-devs/pymc3/pull/3853)).
- Dtype bugfix in `MvNormal` and `MvStudentT` (see [#3836](https://github.com/pymc-devs/pymc3/pull/3836))
- End of sampling report now uses `arviz.InferenceData` internally and avoids storing
pointwise log likelihood (see [#3883](https://github.com/pymc-devs/pymc3/pull/3883))
### Deprecations
- Remove `sample_ppc` and `sample_ppc_w` that were deprecated in 3.6.
- `sd`, deprecated in version 3.7 and replaced by `sigma`, now raises a `DeprecationWarning` when used in continuous, mixed and timeseries distributions (see [#3837](https://github.com/pymc-devs/pymc3/pull/3837) and [#3688](https://github.com/pymc-devs/pymc3/issues/3688)).
- We'll deprecate the `Text` and `SQLite` backends and the `save_trace`/`load_trace` functions, since this is now done with ArviZ. (see [#3902](https://github.com/pymc-devs/pymc3/pull/3902))
- Dropped some deprecated kwargs and functions (see [#3906](https://github.com/pymc-devs/pymc3/pull/3906))
- Dropped the outdated 'nuts' initialization method for `pm.sample` (see [#3863](https://github.com/pymc-devs/pymc3/pull/3863)).
## PyMC3 3.8 (November 29 2019)
### New features
@@ -27,7 +27,7 @@ from theano.tensor.nlinalg import det, matrix_inverse, trace, eigh
from theano.tensor.slinalg import Cholesky
import pymc3 as pm
from pymc3.theanof import floatX, intX
from pymc3.theanof import floatX
from . import transforms
from pymc3.util import get_variable_name
from .distribution import (Continuous, Discrete, draw_values, generate_samples,
@@ -325,7 +325,7 @@ class MvNormal(_QuadFormBase):
TensorVariable
"""
quaddist, logdet, ok = self._quaddist(value)
k = intX(value.shape[-1]).astype(theano.config.floatX)
k = floatX(value.shape[-1])
norm = - 0.5 * k * pm.floatX(np.log(2 * np.pi))
return bound(norm - 0.5 * quaddist - logdet, ok)
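As a sanity check of the expression in this hunk (`norm - 0.5 * quaddist - logdet`): for an identity covariance the quadratic form reduces to `sum(x**2)` and `logdet` to 0, so the density can be verified with plain numpy. This is a hedged sketch that does not use pymc3 itself; `mvnormal_logp_identity` is a name invented for illustration:

```python
import numpy as np

def mvnormal_logp_identity(x):
    """MvNormal logp with identity covariance, mirroring the diff:
    norm - 0.5 * quaddist - logdet, with quaddist = x.x and logdet = 0."""
    k = float(x.shape[-1])                  # the floatX(value.shape[-1]) step
    norm = -0.5 * k * np.log(2 * np.pi)
    quaddist = np.sum(x ** 2, axis=-1)
    return norm - 0.5 * quaddist

x = np.zeros((4, 3))
# at the mode, each entry equals the normalizer -0.5 * k * log(2*pi)
print(mvnormal_logp_identity(x))
```

Note the result has shape `(4,)` — one log-density per row — which is exactly what the regression test below this diff asserts for the fixed `logp`.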
@@ -439,7 +439,7 @@ class MvStudentT(_QuadFormBase):
TensorVariable
"""
quaddist, logdet, ok = self._quaddist(value)
k = intX(value.shape[-1]).astype(theano.config.floatX)
k = floatX(value.shape[-1])
norm = (gammaln((self.nu + k) / 2.)
- gammaln(self.nu / 2.)
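The `MvStudentT` normalizer in this hunk begins with a ratio of gamma functions in log space; the remaining terms are truncated in the diff, so this sketch reproduces only the visible part, using the standard library's `math.lgamma` in place of theano's `gammaln` (the function name is invented for illustration):

```python
import math

def studentt_norm_visible(nu, k):
    """Visible part of the MvStudentT log-normalizer from the hunk above:
    gammaln((nu + k) / 2) - gammaln(nu / 2)."""
    return math.lgamma((nu + k) / 2.0) - math.lgamma(nu / 2.0)

# with nu=2 and k=3 dimensions, the log gamma ratio is positive
print(studentt_norm_visible(2.0, 3.0))
```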
@@ -1414,3 +1414,21 @@ def test_orderedlogistic_dimensions(shape):
assert np.allclose(clogp, expected)
assert ol.distribution.p.ndim == (len(shape) + 1)
assert np.allclose(ologp, expected)
class TestBugfixes:
@pytest.mark.parametrize('dist_cls,kwargs', [
(MvNormal, dict(mu=0)),
(MvStudentT, dict(mu=0, nu=2))
])
@pytest.mark.parametrize('dims', [1,2,4])
def test_issue_3051(self, dims, dist_cls, kwargs):
d = dist_cls.dist(**kwargs, cov=np.eye(dims), shape=(dims,))
X = np.random.normal(size=(20,dims))
actual_t = d.logp(X)
assert isinstance(actual_t, tt.TensorVariable)
actual_a = actual_t.eval()
assert isinstance(actual_a, np.ndarray)
assert actual_a.shape == (X.shape[0],)