This repository has been archived by the owner on Apr 1, 2022. It is now read-only.

Scheduled weekly dependency update for week 00 #72

Closed
wants to merge 10 commits

Conversation


pyup-bot commented Jan 1, 2018

Updates

Here's a list of all the updates bundled in this pull request. I've added some links to make it easier for you to find all the information you need.

django-braces 1.11.0 » 1.12.0 PyPI | Changelog | Repo
boto3 1.4.7 » 1.5.7 PyPI | Changelog | Repo
pandas 0.21.0 » 0.22.0 PyPI | Changelog | Homepage
raven 6.3.0 » 6.4.0 PyPI | Changelog | Repo
django-tables2 1.14.2 » 1.17.1 PyPI | Changelog | Repo
django-extensions 1.9.7 » 1.9.8 PyPI | Changelog | Repo | Docs
Werkzeug 0.12.2 » 0.14.1 PyPI | Changelog | Homepage
django-test-plus 1.0.20 » 1.0.21 PyPI | Changelog | Repo
django-debug-toolbar 1.8 » 1.9.1 PyPI | Changelog | Repo

Changelogs

boto3 1.4.7 -> 1.5.7

1.5.7

  • api-change:workspaces: [botocore] Update workspaces client to latest version

1.5.6

  • api-change:ecs: [botocore] Update ecs client to latest version
  • api-change:ec2: [botocore] Update ec2 client to latest version
  • api-change:inspector: [botocore] Update inspector client to latest version
  • api-change:sagemaker: [botocore] Update sagemaker client to latest version

1.5.5

  • api-change:ec2: [botocore] Update ec2 client to latest version
  • enhancement:Paginator: [botocore] Added paginator support for the Lambda list aliases operation (a brief usage sketch follows this list).
  • api-change:kinesisanalytics: [botocore] Update kinesisanalytics client to latest version
  • api-change:codebuild: [botocore] Update codebuild client to latest version
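
A minimal, hedged sketch of driving the new list aliases paginator (the
function name my-function is a placeholder; this assumes boto3 1.5.5+ with a
matching botocore release and configured AWS credentials):

.. code-block:: python

    import boto3

    lambda_client = boto3.client("lambda")

    # list_aliases can now be paged through the standard paginator interface
    paginator = lambda_client.get_paginator("list_aliases")
    for page in paginator.paginate(FunctionName="my-function"):  # placeholder name
        for alias in page.get("Aliases", []):
            print(alias["Name"], alias["FunctionVersion"])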

1.5.4

  • api-change:iot: [botocore] Update iot client to latest version
  • api-change:config: [botocore] Update config client to latest version

1.5.3

  • api-change:route53: [botocore] Update route53 client to latest version
  • api-change:apigateway: [botocore] Update apigateway client to latest version
  • api-change:mediastore-data: [botocore] Update mediastore-data client to latest version

1.5.2

  • bugfix:presigned-url: [botocore] Fixes a bug where content-type would be set on presigned requests for query services.
  • api-change:cloudwatch: [botocore] Update cloudwatch client to latest version

1.5.1

  • api-change:appstream: [botocore] Update appstream client to latest version

1.5.0

  • bugfix:Filters: Fixes a bug where parameters passed to resource collections could be mutated after the collections were created.
  • api-change:ses: [botocore] Update ses client to latest version
  • enhancement:credentials: [botocore] Moved the JSONFileCache from the CLI into botocore so that it can be used without importing from the cli.
  • feature:botocore dependency: Update dependency strategy to always take a floor on the most recent version of botocore. This means whenever there is a release of botocore, boto3 will release as well to account for the new version of botocore.
  • api-change:apigateway: [botocore] Update apigateway client to latest version

1.4.8

  • enhancement:botocore: Raised minor version dependency for botocore

pandas 0.21.0 -> 0.22.0

0.22.0


This is a major release from 0.21.1 and includes a single, API-breaking change.
We recommend that all users upgrade to this version after carefully reading the
release note (singular!).

.. _whatsnew_0220.api_breaking:

Backwards incompatible API changes

Pandas 0.22.0 changes the handling of empty and all-NA sums and products. The
summary is that

  • The sum of an empty or all-NA Series is now 0
  • The product of an empty or all-NA Series is now 1
  • We've added a min_count parameter to .sum() and .prod() controlling
    the minimum number of valid values for the result to be valid. If fewer than
    min_count non-NA values are present, the result is NA. The default is
    0. To return NaN, the 0.21 behavior, use min_count=1.
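
As a consolidated, hedged sketch of the new defaults (assuming pandas 0.22.0,
with numpy imported as np and pandas as pd):

.. code-block:: python

    import numpy as np
    import pandas as pd

    pd.Series([]).sum()                   # 0.0  (was nan in 0.21)
    pd.Series([np.nan]).sum()             # 0.0  (was nan in 0.21)
    pd.Series([]).prod()                  # 1.0  (was nan in 0.21)
    pd.Series([np.nan]).sum(min_count=1)  # nan, the 0.21 behavior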

Some background: In pandas 0.21, we fixed a long-standing inconsistency
in the return value of all-NA series depending on whether or not bottleneck
was installed. See :ref:whatsnew_0210.api_breaking.bottleneck. At the same
time, we changed the sum and prod of an empty Series to also be NaN.

Based on feedback, we've partially reverted those changes.

Arithmetic Operations
^^^^^^^^^^^^^^^^^^^^^

The default sum for empty or all-NA Series is now 0.

pandas 0.21.x

.. code-block:: ipython

In [1]: pd.Series([]).sum()
Out[1]: nan

In [2]: pd.Series([np.nan]).sum()
Out[2]: nan

pandas 0.22.0

.. ipython:: python

pd.Series([]).sum()
pd.Series([np.nan]).sum()

The default behavior is the same as pandas 0.20.3 with bottleneck installed. It
also matches the behavior of NumPy's np.nansum on empty and all-NA arrays.

To have the sum of an empty series return NaN (the default behavior of
pandas 0.20.3 without bottleneck, or pandas 0.21.x), use the min_count
keyword.

.. ipython:: python

pd.Series([]).sum(min_count=1)

Thanks to the skipna parameter, the .sum on an all-NA
series is conceptually the same as the .sum of an empty one with
skipna=True (the default).

.. ipython:: python

pd.Series([np.nan]).sum(min_count=1)  # skipna=True by default

The min_count parameter refers to the minimum number of non-null values
required for a non-NA sum or product.

:meth:Series.prod has been updated to behave the same as :meth:Series.sum,
returning 1 instead.

.. ipython:: python

pd.Series([]).prod()
pd.Series([np.nan]).prod()
pd.Series([]).prod(min_count=1)

These changes affect :meth:DataFrame.sum and :meth:DataFrame.prod as well.
Finally, a few less obvious places in pandas are affected by this change.

Grouping by a Categorical
^^^^^^^^^^^^^^^^^^^^^^^^^

Grouping by a Categorical and summing now returns 0 instead of
NaN for categories with no observations. The product now returns 1
instead of NaN.

pandas 0.21.x

.. code-block:: ipython

In [8]: grouper = pd.Categorical(['a', 'a'], categories=['a', 'b'])

In [9]: pd.Series([1, 2]).groupby(grouper).sum()
Out[9]:
a 3.0
b NaN
dtype: float64

pandas 0.22

.. ipython:: python

grouper = pd.Categorical(['a', 'a'], categories=['a', 'b'])
pd.Series([1, 2]).groupby(grouper).sum()

To restore the 0.21 behavior of returning NaN for unobserved groups,
use min_count>=1.

.. ipython:: python

pd.Series([1, 2]).groupby(grouper).sum(min_count=1)

Resample
^^^^^^^^

The sum and product of all-NA bins has changed from NaN to 0 for
sum and 1 for product.

pandas 0.21.x

.. code-block:: ipython

In [11]: s = pd.Series([1, 1, np.nan, np.nan],
   ...:                index=pd.date_range('2017', periods=4))
   ...: s
Out[11]:
2017-01-01 1.0
2017-01-02 1.0
2017-01-03 NaN
2017-01-04 NaN
Freq: D, dtype: float64

In [12]: s.resample('2d').sum()
Out[12]:
2017-01-01 2.0
2017-01-03 NaN
Freq: 2D, dtype: float64

pandas 0.22.0

.. ipython:: python

s = pd.Series([1, 1, np.nan, np.nan],
              index=pd.date_range('2017', periods=4))
s.resample('2d').sum()

To restore the 0.21 behavior of returning NaN, use min_count>=1.

.. ipython:: python

s.resample('2d').sum(min_count=1)

In particular, upsampling and taking the sum or product is affected, as
upsampling introduces missing values even if the original series was
entirely valid.

pandas 0.21.x

.. code-block:: ipython

In [14]: idx = pd.DatetimeIndex(['2017-01-01', '2017-01-02'])

In [15]: pd.Series([1, 2], index=idx).resample('12H').sum()
Out[15]:
2017-01-01 00:00:00 1.0
2017-01-01 12:00:00 NaN
2017-01-02 00:00:00 2.0
Freq: 12H, dtype: float64

pandas 0.22.0

.. ipython:: python

idx = pd.DatetimeIndex(['2017-01-01', '2017-01-02'])
pd.Series([1, 2], index=idx).resample("12H").sum()

Once again, the min_count keyword is available to restore the 0.21 behavior.

.. ipython:: python

pd.Series([1, 2], index=idx).resample("12H").sum(min_count=1)

Rolling and Expanding
^^^^^^^^^^^^^^^^^^^^^

Rolling and expanding already have a min_periods keyword that behaves
similarly to min_count. The only case that changes is when doing a rolling
or expanding sum with min_periods=0. Previously this returned NaN when
fewer than min_periods non-NA values were in the window. Now it
returns 0.

pandas 0.21.1

.. code-block:: ipython

In [17]: s = pd.Series([np.nan, np.nan])

In [18]: s.rolling(2, min_periods=0).sum()
Out[18]:
0 NaN
1 NaN
dtype: float64

pandas 0.22.0

.. ipython:: python

s = pd.Series([np.nan, np.nan])
s.rolling(2, min_periods=0).sum()

The default behavior of min_periods=None, implying that min_periods
equals the window size, is unchanged.
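
A small, hedged illustration of that unchanged default next to the changed
min_periods=0 case (assuming pandas 0.22.0):

.. code-block:: python

    import numpy as np
    import pandas as pd

    s = pd.Series([np.nan, np.nan])
    s.rolling(2).sum()                 # min_periods defaults to the window size: still NaN
    s.rolling(2, min_periods=0).sum()  # now 0.0 per window (NaN in 0.21)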

Compatibility

If you maintain a library that should work across pandas versions, it
may be easiest to exclude pandas 0.21 from your requirements. Otherwise, all your
sum() calls would need to check if the Series is empty before summing.

With setuptools, in your setup.py use::

install_requires=['pandas!=0.21.*', ...]

With conda, use

.. code-block:: yaml

requirements:
  run:
    - pandas !=0.21.0,!=0.21.1

Note that the inconsistency in the return value for all-NA series is still
there for pandas 0.20.3 and earlier. Avoiding pandas 0.21 will only help with
the empty case.

.. _whatsnew_0211:

0.21.1


This is a minor bug-fix release in the 0.21.x series and includes some small regression fixes,
bug fixes and performance improvements.
We recommend that all users upgrade to this version.

Highlights include:

  • Temporarily restore matplotlib datetime plotting functionality. This should
    resolve issues for users who implicitly relied on pandas to plot datetimes
    with matplotlib. See :ref:here <whatsnew_0211.converters>.
  • Improvements to the Parquet IO functions introduced in 0.21.0. See
    :ref:here <whatsnew_0211.enhancements.parquet>.

.. contents:: What's new in v0.21.1
    :local:
    :backlinks: none

.. _whatsnew_0211.converters:

Restore Matplotlib datetime Converter Registration

Pandas implements some matplotlib converters for nicely formatting the axis
labels on plots with datetime or Period values. Prior to pandas 0.21.0,
these were implicitly registered with matplotlib, as a side effect of import pandas.

In pandas 0.21.0, we required users to explicitly register the
converter. This caused problems for some users who relied on those converters
being present for regular matplotlib.pyplot plotting methods, so we're
temporarily reverting that change; pandas 0.21.1 again registers the converters on
import, just like before 0.21.0.

We've added a new option to control the converters:
pd.options.plotting.matplotlib.register_converters. By default, they are
registered. Toggling this to False removes pandas' formatters and restores
any converters we overwrote when registering them (:issue:18301).
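
A hedged sketch of both knobs mentioned above, the option and the explicit
registration function (names as given in this release's notes):

.. code-block:: python

    import pandas as pd

    # Opt out of the automatic registration performed at import time
    pd.set_option("plotting.matplotlib.register_converters", False)

    # ...and register the converters explicitly when plotting datetimes
    from pandas.plotting import register_matplotlib_converters
    register_matplotlib_converters()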

We're working with the matplotlib developers to make this easier. We're trying
to balance user convenience (automatically registering the converters) with
import performance and best practices (importing pandas shouldn't have the side
effect of overwriting any custom converters you've already set). In the future
we hope to have most of the datetime formatting functionality in matplotlib,
with just the pandas-specific converters in pandas. We'll then gracefully
deprecate the automatic registration of converters in favor of users explicitly
registering them when they want them.

.. _whatsnew_0211.enhancements:

New features

.. _whatsnew_0211.enhancements.parquet:

Improvements to the Parquet IO functionality
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

  • :func:DataFrame.to_parquet will now write non-default indexes when the
    underlying engine supports it. The indexes will be preserved when reading
    back in with :func:read_parquet (:issue:18581).
  • :func:read_parquet now allows specifying the columns to read from a parquet file (:issue:18154)
  • :func:read_parquet now allows specifying kwargs which are passed to the respective engine (:issue:18216); a brief round-trip sketch follows this list
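
A minimal, hedged round-trip sketch (assumes a parquet engine such as pyarrow
is installed; the file name and column names are placeholders):

.. code-block:: python

    import pandas as pd

    df = pd.DataFrame({"a": [1, 2], "b": [3.0, 4.0]},
                      index=pd.Index(["x", "y"], name="key"))

    # Non-default indexes are now written when the engine supports it
    df.to_parquet("example.parquet")

    # Read back only selected columns; the index is preserved as well
    pd.read_parquet("example.parquet", columns=["a"])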

.. _whatsnew_0211.enhancements.other:

Other Enhancements
^^^^^^^^^^^^^^^^^^

  • :meth:Timestamp.timestamp is now available in Python 2.7. (:issue:17329)
  • :class:Grouper and :class:TimeGrouper now have a friendly repr output (:issue:18203).

.. _whatsnew_0211.deprecations:

Deprecations

  • pandas.tseries.register has been renamed to
    :func:pandas.plotting.register_matplotlib_converters (:issue:18301)

.. _whatsnew_0211.performance:

Performance Improvements

  • Improved performance of plotting large series/dataframes (:issue:18236).

.. _whatsnew_0211.bug_fixes:

Bug Fixes

Conversion
^^^^^^^^^^

  • Bug in :class:TimedeltaIndex subtraction could incorrectly overflow when NaT is present (:issue:17791)
  • Bug in :class:DatetimeIndex subtracting datetimelike from DatetimeIndex could fail to overflow (:issue:18020)
  • Bug in :meth:IntervalIndex.copy when copying an IntervalIndex with non-default closed (:issue:18339)
  • Bug in :func:DataFrame.to_dict where columns of datetime that are tz-aware were not converted to required arrays when used with orient='records', raising TypeError (:issue:18372)
  • Bug in :class:DatetimeIndex and :meth:date_range where mismatching tz-aware start and end timezones would not raise an error if end.tzinfo is None (:issue:18431)
  • Bug in :meth:Series.fillna which raised when passed a long integer on Python 2 (:issue:18159).

Indexing
^^^^^^^^

  • Bug in a boolean comparison of a datetime.datetime and a datetime64[ns] dtype Series (:issue:17965)
  • Bug where a MultiIndex with more than a million records was not raising AttributeError when trying to access a missing attribute (:issue:18165)
  • Bug in :class:IntervalIndex constructor when a list of intervals is passed with non-default closed (:issue:18334)
  • Bug in Index.putmask when an invalid mask is passed (:issue:18368)
  • Bug in masked assignment of a timedelta64[ns] dtype Series, incorrectly coerced to float (:issue:18493)

I/O
^^^

  • Bug in :class:~pandas.io.stata.StataReader not converting date/time columns with display formatting (:issue:17990). Previously, columns with display formatting were normally left as ordinal numbers and not converted to datetime objects.
  • Bug in :func:read_csv when reading a compressed UTF-16 encoded file (:issue:18071)
  • Bug in :func:read_csv for handling null values in index columns when specifying na_filter=False (:issue:5239)
  • Bug in :func:read_csv when reading numeric category fields with high cardinality (:issue:18186)
  • Bug in :meth:DataFrame.to_csv when the table had MultiIndex columns, and a list of strings was passed in for header (:issue:5539)
  • Bug in parsing integer datetime-like columns with specified format in read_sql (:issue:17855).
  • Bug in :meth:DataFrame.to_msgpack when serializing data of the numpy.bool_ datatype (:issue:18390)
  • Bug in :func:read_json not decoding when reading line-delimited JSON from S3 (:issue:17200)
  • Bug in :func:pandas.io.json.json_normalize to avoid modification of meta (:issue:18610)
  • Bug in :func:to_latex where repeated multi-index values were not printed even though a higher level index differed from the previous row (:issue:14484)
  • Bug when reading NaN-only categorical columns in :class:HDFStore (:issue:18413)
  • Bug in :meth:DataFrame.to_latex with longtable=True where a latex multicolumn always spanned over three columns (:issue:17959)

Plotting
^^^^^^^^

  • Bug in DataFrame.plot() and Series.plot() with :class:DatetimeIndex where a figure generated by them is not pickleable in Python 3 (:issue:18439)

Groupby/Resample/Rolling
^^^^^^^^^^^^^^^^^^^^^^^^

  • Bug in DataFrame.resample(...).apply(...) when there is a callable that returns different columns (:issue:15169)
  • Bug in DataFrame.resample(...) when there is a time change (DST) and resampling frequency is 12h or higher (:issue:15549)
  • Bug in pd.DataFrameGroupBy.count() when counting over a datetimelike column (:issue:13393)
  • Bug in rolling.var where calculation is inaccurate with a zero-valued array (:issue:18430)

Reshaping
^^^^^^^^^

  • Error message in pd.merge_asof() for key datatype mismatch now includes datatype of left and right key (:issue:18068)
  • Bug in pd.concat when empty and non-empty DataFrames or Series are concatenated (:issue:18178 :issue:18187)
  • Bug in DataFrame.filter(...) when :class:unicode is passed as a condition in Python 2 (:issue:13101)
  • Bug when merging empty DataFrames when np.seterr(divide='raise') is set (:issue:17776)

Numeric
^^^^^^^

  • Bug in pd.Series.rolling.skew() and rolling.kurt() where all-equal values caused a floating point precision issue (:issue:18044)

Categorical
^^^^^^^^^^^

  • Bug in :meth:DataFrame.astype where casting to 'category' on an empty DataFrame causes a segmentation fault (:issue:18004)
  • Error messages in the testing module have been improved when items have different CategoricalDtype (:issue:18069)
  • CategoricalIndex can now correctly take a pd.api.types.CategoricalDtype as its dtype (:issue:18116)
  • Bug in Categorical.unique() returning read-only codes array when all categories were NaN (:issue:18051)
  • Bug in DataFrame.groupby(axis=1) with a CategoricalIndex (:issue:18432)

String
^^^^^^

  • :meth:Series.str.split() will now propagate NaN values across all expanded columns instead of None (:issue:18450)


raven 6.3.0 -> 6.4.0

6.4.0


  • [Core] Support for defining sanitized_keys on the client (pr/990)
  • [Django] Support for Django 2.0 Urlresolver
  • [Docs] Several fixes and improvements

django-tables2 1.14.2 -> 1.17.1

1.17.1

  • Fix typo in setup.py for extras_require.

1.17.0

  • Dropped support for Django 1.8, 1.9 and 1.10.
  • Add extra_context argument to TemplateColumn 509 by ad-m (a short sketch follows this list)
  • Remove unnecessary cast of record to str 514, fixes 511
  • Use django.test.TestCase for all tests, and remove dependency on pytest and reorganized some tests 515
  • Remove traces of django-haystack tests from the tests, there were no actual tests.
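
As a hedged sketch of the new extra_context argument (the table, template name
and context key below are placeholders, not part of the release notes):

.. code-block:: python

    import django_tables2 as tables

    class OrderTable(tables.Table):  # placeholder table
        actions = tables.TemplateColumn(
            template_name="orders/_actions.html",    # placeholder template
            extra_context={"button_label": "Edit"},  # merged into the template context
        )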

1.16.0

This is the last version supporting Django 1.8, 1.9 and 1.10. Django 1.8 is only supported until April 2018, so consider upgrading to Django 1.11!

  • Added tf dictionary to Column.attrs with default values for the footer, so footers now have a class attribute by default 501 by mpasternak

1.15.0

  • Added as=varname keyword argument to the {% querystring %} template tag,
    fixes 481
  • Updated the tutorial to reflect current state of Django a bit better.
  • Used OrderedDict rather than dict as the parent for utils.AttributeDict to make the rendered HTML more consistent across Python versions.
  • Allow reading column attrs from a column's attribute, allowing easier reuse of custom column attributes (fixes 241)
  • value and record are optionally passed to the column attrs callables for data rows. 503, fixes 500

django-extensions 1.9.7 -> 1.9.8

1.9.8


Changes:

  • Fix: show_urls, fix for Django 2.0 (Locale URL Resolvers are still broken)
  • Fix: runserver_plus, fix rendering of ipv6 link
  • Improvement: validate_templates, allow relative paths
  • Improvement: validate_templates, automatically include app templates
  • Improvement: pip_checker, could not find some packages
  • Docs: shell_plus, --print-sql usage clarification

Werkzeug 0.12.2 -> 0.14.1

0.14.1


Released on December 31st 2017

  • Resolved a regression with status code handling in the integrated
    development server.

0.14


Released on December 31st 2017

  • HTTP exceptions are now automatically caught by
    Request.application.
  • Added support for Edge as a browser.
  • Added support for platforms that lack SpooledTemporaryFile.
  • Added support for etag handling through if-match.
  • Added support for the SameSite cookie attribute (a brief sketch follows this list).
  • Added werkzeug.wsgi.ProxyMiddleware
  • Implemented has for NullCache
  • get_multi on cache clients now returns lists all the time.
  • Improved the watchdog observer shutdown for the reloader to not crash
    on exit on older Python versions.
  • Added support for filename* filename attributes according to
    RFC 2231
  • Resolved an issue where the machine ID for the reloader PIN was not
    read accurately on Windows.
  • Added a workaround for syntax errors in init files in the reloader.
  • Added support for using the reloader with console scripts on Windows.
  • The built-in HTTP server will no longer close a connection in cases
    where no HTTP body is expected (204, 304, HEAD requests, etc.)
  • The EnvironHeaders object now skips over empty content type and
    lengths if they are set to falsy values.
  • Werkzeug will no longer send the content-length header on 1xx or
    204/304 responses.
  • Cookie values are now also permitted to include slashes and equal
    signs without quoting.
  • Relaxed the regex for the routing converter arguments.
  • If cookies are sent without values they are now assumed to have an
    empty value and the parser accepts this. Previously this could have
    corrupted cookies that followed the value.
  • The test Client and EnvironBuilder now support mimetypes like
    the request object does.
  • Added support for static weights in URL rules.
  • Better handle some more complex reloader scenarios where sys.path
    contained non-directory paths.
  • EnvironHeaders no longer raises weird errors if non-string keys
    are passed to it.
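
A hedged sketch of the new SameSite support; it assumes the samesite keyword
accepted by the cookie serializer in 0.14 (treat the argument name as an
assumption to verify against your installed version):

.. code-block:: python

    from werkzeug.http import dump_cookie
    from werkzeug.wrappers import Response

    # Serialize a cookie carrying the new SameSite attribute
    header = dump_cookie("session", "abc123", samesite="Strict")

    resp = Response("ok")
    resp.headers.add("Set-Cookie", header)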

0.13


Released on December 7th 2017

  • Deprecate support for Python 2.6 and 3.3. CI tests will not run
    for these versions, and support will be dropped completely in the next
    version. (pallets/meta24_)
  • Raise TypeError when port is not an integer. (1088_)
  • Fully deprecate werkzeug.script. Use Click_ instead. (1090_)
  • response.age is parsed as a timedelta. Previously, it was
    incorrectly treated as a datetime. The header value is an integer
    number of seconds, not a date string. (414_)
  • Fix a bug in TypeConversionDict where errors are not propagated
    when using the converter. (1102_)
  • Authorization.qop is a string instead of a set, to comply with
    RFC 2617. (984_)
  • An exception is raised when an encoded cookie is larger than, by
    default, 4093 bytes. Browsers may silently ignore cookies larger than
    this. BaseResponse has a new attribute max_cookie_size and
    dump_cookie has a new argument max_size to configure this.
    (780, 1109)
  • Fix a TypeError in werkzeug.contrib.lint.GuardedIterator.close.
    (1116_)
  • BaseResponse.calculate_content_length now correctly works for
    Unicode responses on Python 3. It first encodes using
    iter_encoded. (705_)
  • Secure cookie contrib works with string secret key on Python 3.
    (1205_)
  • Shared data middleware accepts a list instead of a dict of static
    locations to preserve lookup order. (1197_)
  • HTTP header values without encoding can contain single quotes.
    (1208_)
  • The built-in dev server supports receiving requests with chunked
    transfer encoding. (1198_)

.. _Click: https://www.palletsprojects.com/p/click/
.. _pallets/meta24: https://github.com/pallets/meta/issues/24
.. _414: pallets/werkzeug#414
.. _705: pallets/werkzeug#705
.. _780: pallets/werkzeug#780
.. _984: pallets/werkzeug#984
.. _1088: pallets/werkzeug#1088
.. _1090: pallets/werkzeug#1090
.. _1102: pallets/werkzeug#1102
.. _1109: pallets/werkzeug#1109
.. _1116: pallets/werkzeug#1116
.. _1197: pallets/werkzeug#1197
.. _1198: pallets/werkzeug#1198
.. _1205: pallets/werkzeug#1205
.. _1208: pallets/werkzeug#1208

django-debug-toolbar 1.8 -> 1.9.1

1.9


This version is compatible with Django 2.0 and requires Django 1.8 or
later.

Bugfixes

  • The profiling panel now escapes reported data resulting in valid HTML.
  • Many minor cleanups and bugfixes.

That's it for now!

Happy merging! 🤖


pyup-bot commented Jan 8, 2018

Closing this in favor of #73

pyup-bot closed this Jan 8, 2018
drummonds deleted the pyup-scheduled-update-01-01-2018 branch January 8, 2018 13:58