merge in bad ways, so I'll have to merge that extra-carefully (probably manually.) Merged revisions 46495-46605 via svnmerge from svn+ssh://pythondev@svn.python.org/python/trunk ........ r46495 | tim.peters | 2006-05-28 03:52:38 +0200 (Sun, 28 May 2006) | 2 lines Added missing svn:eol-style property to text files. ........ r46497 | tim.peters | 2006-05-28 12:41:29 +0200 (Sun, 28 May 2006) | 3 lines PyErr_Display(), PyErr_WriteUnraisable(): Coverity found a cut-and-paste bug in both: `className` was referenced before being checked for NULL. ........ r46499 | fredrik.lundh | 2006-05-28 14:06:46 +0200 (Sun, 28 May 2006) | 5 lines needforspeed: added Py_MEMCPY macro (currently tuned for Visual C only), and use it for string copy operations. this gives a 20% speedup on some string benchmarks. ........ r46501 | michael.hudson | 2006-05-28 17:51:40 +0200 (Sun, 28 May 2006) | 26 lines Quality control, meet exceptions.c. Fix a number of problems with the need for speed code: One is doing this sort of thing: Py_DECREF(self->field); self->field = newval; Py_INCREF(self->field); without being very sure that self->field doesn't start with a value that has a __del__, because that almost certainly can lead to segfaults. As self->args is constrained to be an exact tuple we may as well exploit this fact consistently. This leads to quite a lot of simplification (and, hey, probably better performance). Add some error checking in places lacking it. Fix some rather strange indentation in the Unicode code. Delete some trailing whitespace. More to come, I haven't fixed all the reference leaks yet... ........ r46502 | george.yoshida | 2006-05-28 18:39:09 +0200 (Sun, 28 May 2006) | 3 lines Patch #1080727: add "encoding" parameter to doctest.DocFileSuite Contributed by Bjorn Tillenius. ........ r46503 | martin.v.loewis | 2006-05-28 18:57:38 +0200 (Sun, 28 May 2006) | 4 lines Rest of patch #1490384: Commit icon source, remove claim that Erik von Blokland is the author of the installer picture. ........ r46504 | michael.hudson | 2006-05-28 19:40:29 +0200 (Sun, 28 May 2006) | 16 lines Quality control, meet exceptions.c, round two. Make some functions that should have been static static. Fix a bunch of refleaks by fixing the definition of MiddlingExtendsException. Remove all the __new__ implementations apart from BaseException_new. Rewrite most code that needs it to cope with NULL fields (such code could get excercised anyway, the __new__-removal just makes it more likely). This involved editing the code for WindowsError, which I can't test. This fixes all the refleaks in at least the start of a regrtest -R :: run. ........ r46505 | marc-andre.lemburg | 2006-05-28 19:46:58 +0200 (Sun, 28 May 2006) | 10 lines Initial version of systimes - a module to provide platform dependent performance measurements. The module is currently just a proof-of-concept implementation, but will integrated into pybench once it is stable enough. License: pybench license. Author: Marc-Andre Lemburg. ........ r46507 | armin.rigo | 2006-05-28 21:13:17 +0200 (Sun, 28 May 2006) | 15 lines ("Forward-port" of r46506) Remove various dependencies on dictionary order in the standard library tests, and one (clearly an oversight, potentially critical) in the standard library itself - base64.py. Remaining open issues: * test_extcall is an output test, messy to make robust * tarfile.py has a potential bug here, but I'm not familiar enough with this code. Filed in as SF bug #1496501. 
* urllib2.HTTPPasswordMgr() returns a random result if there is more than one matching root path. I'm asking python-dev for clarification... ........ r46508 | georg.brandl | 2006-05-28 22:11:45 +0200 (Sun, 28 May 2006) | 4 lines The empty string is a valid import path. (fixes #1496539) ........ r46509 | georg.brandl | 2006-05-28 22:23:12 +0200 (Sun, 28 May 2006) | 3 lines Patch #1496206: urllib2 PasswordMgr ./. default ports ........ r46510 | georg.brandl | 2006-05-28 22:57:09 +0200 (Sun, 28 May 2006) | 3 lines Fix refleaks in UnicodeError get and set methods. ........ r46511 | michael.hudson | 2006-05-28 23:19:03 +0200 (Sun, 28 May 2006) | 3 lines use the UnicodeError traversal and clearing functions in UnicodeError subclasses. ........ r46512 | thomas.wouters | 2006-05-28 23:32:12 +0200 (Sun, 28 May 2006) | 4 lines Make last patch valid C89 so Windows compilers can deal with it. ........ r46513 | georg.brandl | 2006-05-28 23:42:54 +0200 (Sun, 28 May 2006) | 3 lines Fix ref-antileak in _struct.c which eventually lead to deallocating None. ........ r46514 | georg.brandl | 2006-05-28 23:57:35 +0200 (Sun, 28 May 2006) | 4 lines Correct None refcount issue in Mac modules. (Are they still used?) ........ r46515 | armin.rigo | 2006-05-29 00:07:08 +0200 (Mon, 29 May 2006) | 3 lines A clearer error message when passing -R to regrtest.py with release builds of Python. ........ r46516 | georg.brandl | 2006-05-29 00:14:04 +0200 (Mon, 29 May 2006) | 3 lines Fix C function calling conventions in _sre module. ........ r46517 | georg.brandl | 2006-05-29 00:34:51 +0200 (Mon, 29 May 2006) | 3 lines Convert audioop over to METH_VARARGS. ........ r46518 | georg.brandl | 2006-05-29 00:38:57 +0200 (Mon, 29 May 2006) | 3 lines METH_NOARGS functions do get called with two args. ........ r46519 | georg.brandl | 2006-05-29 11:46:51 +0200 (Mon, 29 May 2006) | 4 lines Fix refleak in socketmodule. Replace bogus Py_BuildValue calls. Fix refleak in exceptions. ........ r46520 | nick.coghlan | 2006-05-29 14:43:05 +0200 (Mon, 29 May 2006) | 7 lines Apply modified version of Collin Winter's patch #1478788 Renames functional extension module to _functools and adds a Python functools module so that utility functions like update_wrapper can be added easily. ........ r46522 | georg.brandl | 2006-05-29 15:53:16 +0200 (Mon, 29 May 2006) | 3 lines Convert fmmodule to METH_VARARGS. ........ r46523 | georg.brandl | 2006-05-29 16:13:21 +0200 (Mon, 29 May 2006) | 3 lines Fix #1494605. ........ r46524 | georg.brandl | 2006-05-29 16:28:05 +0200 (Mon, 29 May 2006) | 3 lines Handle PyMem_Malloc failure in pystrtod.c. Closes #1494671. ........ r46525 | georg.brandl | 2006-05-29 16:33:55 +0200 (Mon, 29 May 2006) | 3 lines Fix compiler warning. ........ r46526 | georg.brandl | 2006-05-29 16:39:00 +0200 (Mon, 29 May 2006) | 3 lines Fix #1494787 (pyclbr counts whitespace as superclass name) ........ r46527 | bob.ippolito | 2006-05-29 17:47:29 +0200 (Mon, 29 May 2006) | 1 line simplify the struct code a bit (no functional changes) ........ r46528 | armin.rigo | 2006-05-29 19:59:47 +0200 (Mon, 29 May 2006) | 2 lines Silence a warning. ........ r46529 | georg.brandl | 2006-05-29 21:39:45 +0200 (Mon, 29 May 2006) | 3 lines Correct some value converting strangenesses. ........ r46530 | nick.coghlan | 2006-05-29 22:27:44 +0200 (Mon, 29 May 2006) | 1 line When adding a module like functools, it helps to let SVN know about the file. ........ 
r46531 | georg.brandl | 2006-05-29 22:52:54 +0200 (Mon, 29 May 2006) | 4 lines Patches #1497027 and #972322: try HTTP digest auth first, and watch out for handler name collisions. ........ r46532 | georg.brandl | 2006-05-29 22:57:01 +0200 (Mon, 29 May 2006) | 3 lines Add News entry for last commit. ........ r46533 | georg.brandl | 2006-05-29 23:04:52 +0200 (Mon, 29 May 2006) | 4 lines Make use of METH_O and METH_NOARGS where possible. Use Py_UnpackTuple instead of PyArg_ParseTuple where possible. ........ r46534 | georg.brandl | 2006-05-29 23:58:42 +0200 (Mon, 29 May 2006) | 3 lines Convert more modules to METH_VARARGS. ........ r46535 | georg.brandl | 2006-05-30 00:00:30 +0200 (Tue, 30 May 2006) | 3 lines Whoops. ........ r46536 | fredrik.lundh | 2006-05-30 00:42:07 +0200 (Tue, 30 May 2006) | 4 lines fixed "abc".count("", 100) == -96 error (hopefully, nobody's relying on the current behaviour ;-) ........ r46537 | bob.ippolito | 2006-05-30 00:55:48 +0200 (Tue, 30 May 2006) | 1 line struct: modulo math plus warning on all endian-explicit formats for compatibility with older struct usage (ugly) ........ r46539 | bob.ippolito | 2006-05-30 02:26:01 +0200 (Tue, 30 May 2006) | 1 line Add a length check to aifc to ensure it doesn't write a bogus file ........ r46540 | tim.peters | 2006-05-30 04:25:25 +0200 (Tue, 30 May 2006) | 10 lines deprecated_err(): Stop bizarre warning messages when the tests are run in the order: test_genexps (or any other doctest-based test) test_struct test_doctest The `warnings` module needs an advertised way to save/restore its internal filter list. ........ r46541 | tim.peters | 2006-05-30 04:26:46 +0200 (Tue, 30 May 2006) | 2 lines Whitespace normalization. ........ r46542 | tim.peters | 2006-05-30 04:30:30 +0200 (Tue, 30 May 2006) | 2 lines Set a binary svn:mime-type property on this UTF-8 encoded file. ........ r46543 | neal.norwitz | 2006-05-30 05:18:50 +0200 (Tue, 30 May 2006) | 1 line Simplify further by using AddStringConstant ........ r46544 | tim.peters | 2006-05-30 06:16:25 +0200 (Tue, 30 May 2006) | 6 lines Convert relevant dict internals to Py_ssize_t. I don't have a box with nearly enough RAM, or an OS, that could get close to tickling this, though (requires a dict w/ at least 2**31 entries). ........ r46545 | neal.norwitz | 2006-05-30 06:19:21 +0200 (Tue, 30 May 2006) | 1 line Remove stray | in comment ........ r46546 | neal.norwitz | 2006-05-30 06:25:05 +0200 (Tue, 30 May 2006) | 1 line Use Py_SAFE_DOWNCAST for safety. Fix format strings. Remove 2 more stray | in comment ........ r46547 | neal.norwitz | 2006-05-30 06:43:23 +0200 (Tue, 30 May 2006) | 1 line No DOWNCAST is required since sizeof(Py_ssize_t) >= sizeof(int) and Py_ReprEntr returns an int ........ r46548 | tim.peters | 2006-05-30 07:04:59 +0200 (Tue, 30 May 2006) | 3 lines dict_print(): Explicitly narrow the return value from a (possibly) wider variable. ........ r46549 | tim.peters | 2006-05-30 07:23:59 +0200 (Tue, 30 May 2006) | 5 lines dict_print(): So that Neal & I don't spend the rest of our lives taking turns rewriting code that works ;-), get rid of casting illusions by declaring a new variable with the obvious type. ........ r46550 | georg.brandl | 2006-05-30 09:04:55 +0200 (Tue, 30 May 2006) | 3 lines Restore exception pickle support. #1497319. ........ r46551 | georg.brandl | 2006-05-30 09:13:29 +0200 (Tue, 30 May 2006) | 3 lines Add a test case for exception pickling. args is never NULL. ........ 
r46552 | neal.norwitz | 2006-05-30 09:21:10 +0200 (Tue, 30 May 2006) | 1 line Don't fail if the (sub)pkgname already exist. ........ r46553 | georg.brandl | 2006-05-30 09:34:45 +0200 (Tue, 30 May 2006) | 3 lines Disallow keyword args for exceptions. ........ r46554 | neal.norwitz | 2006-05-30 09:36:54 +0200 (Tue, 30 May 2006) | 5 lines I'm impatient. I think this will fix a few more problems with the buildbots. I'm not sure this is the best approach, but I can't think of anything better. If this creates problems, feel free to revert, but I think it's safe and should make things a little better. ........ r46555 | georg.brandl | 2006-05-30 10:17:00 +0200 (Tue, 30 May 2006) | 4 lines Do the check for no keyword arguments in __init__ so that subclasses of Exception can be supplied keyword args ........ r46556 | georg.brandl | 2006-05-30 10:47:19 +0200 (Tue, 30 May 2006) | 3 lines Convert test_exceptions to unittest. ........ r46557 | andrew.kuchling | 2006-05-30 14:52:01 +0200 (Tue, 30 May 2006) | 1 line Add SoC name, and reorganize this section a bit ........ r46559 | tim.peters | 2006-05-30 17:53:34 +0200 (Tue, 30 May 2006) | 11 lines PyLong_FromString(): Continued fraction analysis (explained in a new comment) suggests there are almost certainly large input integers in all non-binary input bases for which one Python digit too few is initally allocated to hold the final result. Instead of assert-failing when that happens, allocate more space. Alas, I estimate it would take a few days to find a specific such case, so this isn't backed up by a new test (not to mention that such a case may take hours to run, since conversion time is quadratic in the number of digits, and preliminary attempts suggested that the smallest such inputs contain at least a million digits). ........ r46560 | fredrik.lundh | 2006-05-30 19:11:48 +0200 (Tue, 30 May 2006) | 3 lines changed find/rfind to return -1 for matches outside the source string ........ r46561 | bob.ippolito | 2006-05-30 19:37:54 +0200 (Tue, 30 May 2006) | 1 line Change wrapping terminology to overflow masking ........ r46562 | fredrik.lundh | 2006-05-30 19:39:58 +0200 (Tue, 30 May 2006) | 3 lines changed count to return 0 for slices outside the source string ........ r46568 | tim.peters | 2006-05-31 01:28:02 +0200 (Wed, 31 May 2006) | 2 lines Whitespace normalization. ........ r46569 | brett.cannon | 2006-05-31 04:19:54 +0200 (Wed, 31 May 2006) | 5 lines Clarify wording on default values for strptime(); defaults are used when better values cannot be inferred. Closes bug #1496315. ........ r46572 | neal.norwitz | 2006-05-31 09:43:27 +0200 (Wed, 31 May 2006) | 1 line Calculate smallest properly (it was off by one) and use proper ssize_t types for Win64 ........ r46573 | neal.norwitz | 2006-05-31 10:01:08 +0200 (Wed, 31 May 2006) | 1 line Revert last checkin, it is better to do make distclean ........ r46574 | neal.norwitz | 2006-05-31 11:02:44 +0200 (Wed, 31 May 2006) | 3 lines On 64-bit platforms running test_struct after test_tarfile would fail since the deprecation warning wouldn't be raised. ........ r46575 | thomas.heller | 2006-05-31 13:37:58 +0200 (Wed, 31 May 2006) | 3 lines PyTuple_Pack is not available in Python 2.3, but ctypes must stay compatible with that. ........ r46576 | andrew.kuchling | 2006-05-31 15:18:56 +0200 (Wed, 31 May 2006) | 1 line 'functional' module was renamed to 'functools' ........ r46577 | kristjan.jonsson | 2006-05-31 15:35:41 +0200 (Wed, 31 May 2006) | 1 line Fixup the PCBuild8 project directory. 
exceptions.c have moved to Objects, and the functionalmodule.c has been replaced with _functoolsmodule.c. Other minor changes to .vcproj files and .sln to fix compilation ........ r46578 | andrew.kuchling | 2006-05-31 16:08:48 +0200 (Wed, 31 May 2006) | 15 lines [Bug #1473048] SimpleXMLRPCServer and DocXMLRPCServer don't look at the path of the HTTP request at all; you can POST or GET from / or /RPC2 or /blahblahblah with the same results. Security scanners that look for /cgi-bin/phf will therefore report lots of vulnerabilities. Fix: add a .rpc_paths attribute to the SimpleXMLRPCServer class, and report a 404 error if the path isn't on the allowed list. Possibly-controversial aspect of this change: the default makes only '/' and '/RPC2' legal. Maybe this will break people's applications (though I doubt it). We could just set the default to an empty tuple, which would exactly match the current behaviour. ........ r46579 | andrew.kuchling | 2006-05-31 16:12:47 +0200 (Wed, 31 May 2006) | 1 line Mention SimpleXMLRPCServer change ........ r46580 | tim.peters | 2006-05-31 16:28:07 +0200 (Wed, 31 May 2006) | 2 lines Trimmed trailing whitespace. ........ r46581 | tim.peters | 2006-05-31 17:33:22 +0200 (Wed, 31 May 2006) | 4 lines _range_error(): Speed and simplify (there's no real need for loops here). Assert that size_t is actually big enough, and that f->size is at least one. Wrap a long line. ........ r46582 | tim.peters | 2006-05-31 17:34:37 +0200 (Wed, 31 May 2006) | 2 lines Repaired error in new comment. ........ r46584 | neal.norwitz | 2006-06-01 07:32:49 +0200 (Thu, 01 Jun 2006) | 4 lines Remove ; at end of macro. There was a compiler recently that warned about extra semi-colons. It may have been the HP C compiler. This file will trigger a bunch of those warnings now. ........ r46585 | georg.brandl | 2006-06-01 08:39:19 +0200 (Thu, 01 Jun 2006) | 3 lines Correctly unpickle 2.4 exceptions via __setstate__ (patch #1498571) ........ r46586 | georg.brandl | 2006-06-01 10:27:32 +0200 (Thu, 01 Jun 2006) | 3 lines Correctly allocate complex types with tp_alloc. (bug #1498638) ........ r46587 | georg.brandl | 2006-06-01 14:30:46 +0200 (Thu, 01 Jun 2006) | 2 lines Correctly dispatch Faults in loads (patch #1498627) ........ r46588 | georg.brandl | 2006-06-01 15:00:49 +0200 (Thu, 01 Jun 2006) | 3 lines Some code style tweaks, and remove apply. ........ r46589 | armin.rigo | 2006-06-01 15:19:12 +0200 (Thu, 01 Jun 2006) | 5 lines [ 1497053 ] Let dicts propagate the exceptions in user __eq__(). [ 1456209 ] dictresize() vulnerability ( <- backport candidate ). ........ r46590 | tim.peters | 2006-06-01 15:41:46 +0200 (Thu, 01 Jun 2006) | 2 lines Whitespace normalization. ........ r46591 | tim.peters | 2006-06-01 15:49:23 +0200 (Thu, 01 Jun 2006) | 2 lines Record bugs 1275608 and 1456209 as being fixed. ........ r46592 | tim.peters | 2006-06-01 15:56:26 +0200 (Thu, 01 Jun 2006) | 5 lines Re-enable a new empty-string test added during the NFS sprint, but disabled then because str and unicode strings gave different results. The implementations were repaired later during the sprint, but the new test remained disabled. ........ r46594 | tim.peters | 2006-06-01 17:50:44 +0200 (Thu, 01 Jun 2006) | 7 lines Armin committed his patch while I was reviewing it (I'm sure he didn't know this), so merged in some changes I made during review. Nothing material apart from changing a new `mask` local from int to Py_ssize_t. Mostly this is repairing comments that were made incorrect, and adding new comments. 
Also a few minor code rewrites for clarity or helpful succinctness. ........ r46599 | neal.norwitz | 2006-06-02 06:45:53 +0200 (Fri, 02 Jun 2006) | 1 line Convert docstrings to comments so regrtest -v prints method names ........ r46600 | neal.norwitz | 2006-06-02 06:50:49 +0200 (Fri, 02 Jun 2006) | 2 lines Fix memory leak found by valgrind. ........ r46601 | neal.norwitz | 2006-06-02 06:54:52 +0200 (Fri, 02 Jun 2006) | 1 line More memory leaks from valgrind ........ r46602 | neal.norwitz | 2006-06-02 08:23:00 +0200 (Fri, 02 Jun 2006) | 11 lines Patch #1357836: Prevent an invalid memory read from test_coding in case the done flag is set. In that case, the loop isn't entered. I wonder if rather than setting the done flag in the cases before the loop, if they should just exit early. This code looks like it should be refactored. Backport candidate (also the early break above if decoding_fgets fails) ........ r46603 | martin.blais | 2006-06-02 15:03:43 +0200 (Fri, 02 Jun 2006) | 1 line Fixed struct test to not use unittest. ........ r46605 | tim.peters | 2006-06-03 01:22:51 +0200 (Sat, 03 Jun 2006) | 10 lines pprint functions used to sort a dict (by key) if and only if the output required more than one line. "Small" dicts got displayed in seemingly random order (the hash-induced order produced by dict.__repr__). None of this was documented. Now pprint functions always sort dicts by key, and the docs promise it. This was proposed and agreed to during the PyCon 2006 core sprint -- I just didn't have time for it before now. ........
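For orientation, a small sketch (not part of the changeset itself) of the string-method semantics that the needforspeed entries r46536, r46560 and r46562 above describe; the snippet only illustrates the intended post-change behaviour:

    >>> "abc".count("", 100)     # slice lies entirely outside the string
    0
    >>> "abc".find("b", 100)     # match would fall outside the source string
    -1
    >>> "abc".rfind("b", 100)
    -1
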
import gc
import sys
import unittest
import UserList
import weakref

from test import test_support


class C:
    def method(self):
        pass


class Callable:
    bar = None

    def __call__(self, x):
        self.bar = x


def create_function():
    def f(): pass
    return f

def create_bound_method():
    return C().method

def create_unbound_method():
    return C.method


class TestBase(unittest.TestCase):

    def setUp(self):
        self.cbcalled = 0

    def callback(self, ref):
        self.cbcalled += 1


class ReferencesTestCase(TestBase):

    def test_basic_ref(self):
        self.check_basic_ref(C)
        self.check_basic_ref(create_function)
        self.check_basic_ref(create_bound_method)
        self.check_basic_ref(create_unbound_method)

        # Just make sure the tp_repr handler doesn't raise an exception.
        # Live reference:
        o = C()
        wr = weakref.ref(o)
        `wr`
        # Dead reference:
        del o
        `wr`
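
        # Added commentary (not in the original test): the bare backtick
        # expressions above are Python 2 repr syntax, i.e. `wr` is the same
        # as repr(wr).  Evaluating them once while the referent is alive and
        # once after it has died simply exercises the weakref tp_repr slot;
        # the test passes as long as neither evaluation raises.
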
    def test_basic_callback(self):
        self.check_basic_callback(C)
        self.check_basic_callback(create_function)
        self.check_basic_callback(create_bound_method)
        self.check_basic_callback(create_unbound_method)

    def test_multiple_callbacks(self):
        o = C()
        ref1 = weakref.ref(o, self.callback)
        ref2 = weakref.ref(o, self.callback)
        del o
        self.assert_(ref1() is None,
                     "expected reference to be invalidated")
        self.assert_(ref2() is None,
                     "expected reference to be invalidated")
        self.assert_(self.cbcalled == 2,
                     "callback not called the right number of times")

    def test_multiple_selfref_callbacks(self):
        # Make sure all references are invalidated before callbacks are called
        #
        # What's important here is that we're using the first
        # reference in the callback invoked on the second reference
        # (the most recently created ref is cleaned up first).  This
        # tests that all references to the object are invalidated
        # before any of the callbacks are invoked, so that we only
        # have one invocation of _weakref.c:cleanup_helper() active
        # for a particular object at a time.
        #
        def callback(object, self=self):
            self.ref()
        c = C()
        self.ref = weakref.ref(c, callback)
        ref1 = weakref.ref(c, callback)
        del c

    def test_proxy_ref(self):
        o = C()
        o.bar = 1
        ref1 = weakref.proxy(o, self.callback)
        ref2 = weakref.proxy(o, self.callback)
        del o

        def check(proxy):
            proxy.bar

        self.assertRaises(weakref.ReferenceError, check, ref1)
        self.assertRaises(weakref.ReferenceError, check, ref2)
        self.assertRaises(weakref.ReferenceError, bool, weakref.proxy(C()))
        self.assert_(self.cbcalled == 2)

    def check_basic_ref(self, factory):
        o = factory()
        ref = weakref.ref(o)
        self.assert_(ref() is not None,
                     "weak reference to live object should be live")
        o2 = ref()
        self.assert_(o is o2,
                     "<ref>() should return original object if live")

    def check_basic_callback(self, factory):
        self.cbcalled = 0
        o = factory()
        ref = weakref.ref(o, self.callback)
        del o
        self.assert_(self.cbcalled == 1,
                     "callback did not properly set 'cbcalled'")
        self.assert_(ref() is None,
                     "ref2 should be dead after deleting object reference")

    def test_ref_reuse(self):
        o = C()
        ref1 = weakref.ref(o)
        # create a proxy to make sure that there's an intervening creation
        # between these two; it should make no difference
        proxy = weakref.proxy(o)
        ref2 = weakref.ref(o)
        self.assert_(ref1 is ref2,
                     "reference object w/out callback should be re-used")

        o = C()
        proxy = weakref.proxy(o)
        ref1 = weakref.ref(o)
        ref2 = weakref.ref(o)
        self.assert_(ref1 is ref2,
                     "reference object w/out callback should be re-used")
        self.assert_(weakref.getweakrefcount(o) == 2,
                     "wrong weak ref count for object")
        del proxy
        self.assert_(weakref.getweakrefcount(o) == 1,
                     "wrong weak ref count for object after deleting proxy")

    def test_proxy_reuse(self):
        o = C()
        proxy1 = weakref.proxy(o)
        ref = weakref.ref(o)
        proxy2 = weakref.proxy(o)
        self.assert_(proxy1 is proxy2,
                     "proxy object w/out callback should have been re-used")

    def test_basic_proxy(self):
        o = C()
        self.check_proxy(o, weakref.proxy(o))

        L = UserList.UserList()
        p = weakref.proxy(L)
        self.failIf(p, "proxy for empty UserList should be false")
        p.append(12)
        self.assertEqual(len(L), 1)
        self.failUnless(p, "proxy for non-empty UserList should be true")
        p[:] = [2, 3]
        self.assertEqual(len(L), 2)
        self.assertEqual(len(p), 2)
        self.failUnless(3 in p,
                        "proxy didn't support __contains__() properly")
        p[1] = 5
        self.assertEqual(L[1], 5)
        self.assertEqual(p[1], 5)
        L2 = UserList.UserList(L)
        p2 = weakref.proxy(L2)
        self.assertEqual(p, p2)
        ## self.assertEqual(repr(L2), repr(p2))
        L3 = UserList.UserList(range(10))
        p3 = weakref.proxy(L3)
        self.assertEqual(L3[:], p3[:])
        self.assertEqual(L3[5:], p3[5:])
        self.assertEqual(L3[:5], p3[:5])
        self.assertEqual(L3[2:5], p3[2:5])

    # The PyWeakref_* C API is documented as allowing either NULL or
    # None as the value for the callback, where either means "no
    # callback".  The "no callback" ref and proxy objects are supposed
    # to be shared so long as they exist by all callers so long as
    # they are active.  In Python 2.3.3 and earlier, this guarantee
    # was not honored, and was broken in different ways for
    # PyWeakref_NewRef() and PyWeakref_NewProxy().  (Two tests.)

    def test_shared_ref_without_callback(self):
        self.check_shared_without_callback(weakref.ref)

    def test_shared_proxy_without_callback(self):
        self.check_shared_without_callback(weakref.proxy)

    def check_shared_without_callback(self, makeref):
        o = Object(1)
        p1 = makeref(o, None)
        p2 = makeref(o, None)
        self.assert_(p1 is p2, "both callbacks were None in the C API")
        del p1, p2
        p1 = makeref(o)
        p2 = makeref(o, None)
        self.assert_(p1 is p2, "callbacks were NULL, None in the C API")
        del p1, p2
        p1 = makeref(o)
        p2 = makeref(o)
        self.assert_(p1 is p2, "both callbacks were NULL in the C API")
        del p1, p2
        p1 = makeref(o, None)
        p2 = makeref(o)
        self.assert_(p1 is p2, "callbacks were None, NULL in the C API")
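
    # Added commentary (not in the original test): at the Python level the
    # sharing guarantee exercised above means that, for an object o that
    # supports weak references and with no callback supplied,
    #     weakref.ref(o) is weakref.ref(o)        --> True
    #     weakref.proxy(o) is weakref.proxy(o)    --> True
    # whereas refs or proxies created with callbacks are always distinct
    # objects (see test_ref_reuse and test_proxy_reuse above).
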
    def test_callable_proxy(self):
        o = Callable()
        ref1 = weakref.proxy(o)

        self.check_proxy(o, ref1)

        self.assert_(type(ref1) is weakref.CallableProxyType,
                     "proxy is not of callable type")
        ref1('twinkies!')
        self.assert_(o.bar == 'twinkies!',
                     "call through proxy not passed through to original")
        ref1(x='Splat.')
        self.assert_(o.bar == 'Splat.',
                     "call through proxy not passed through to original")

        # expect due to too few args
        self.assertRaises(TypeError, ref1)

        # expect due to too many args
        self.assertRaises(TypeError, ref1, 1, 2, 3)

    def check_proxy(self, o, proxy):
        o.foo = 1
        self.assert_(proxy.foo == 1,
                     "proxy does not reflect attribute addition")
        o.foo = 2
        self.assert_(proxy.foo == 2,
                     "proxy does not reflect attribute modification")
        del o.foo
        self.assert_(not hasattr(proxy, 'foo'),
                     "proxy does not reflect attribute removal")

        proxy.foo = 1
        self.assert_(o.foo == 1,
                     "object does not reflect attribute addition via proxy")
        proxy.foo = 2
        self.assert_(
            o.foo == 2,
            "object does not reflect attribute modification via proxy")
        del proxy.foo
        self.assert_(not hasattr(o, 'foo'),
                     "object does not reflect attribute removal via proxy")

    def test_proxy_deletion(self):
        # Test clearing of SF bug #762891
        class Foo:
            result = None
            def __delitem__(self, accessor):
                self.result = accessor
        g = Foo()
        f = weakref.proxy(g)
        del f[0]
        self.assertEqual(f.result, 0)

    def test_proxy_bool(self):
        # Test clearing of SF bug #1170766
        class List(list): pass
        lyst = List()
        self.assertEqual(bool(weakref.proxy(lyst)), bool(lyst))

    def test_getweakrefcount(self):
        o = C()
        ref1 = weakref.ref(o)
        ref2 = weakref.ref(o, self.callback)
        self.assert_(weakref.getweakrefcount(o) == 2,
                     "got wrong number of weak reference objects")

        proxy1 = weakref.proxy(o)
        proxy2 = weakref.proxy(o, self.callback)
        self.assert_(weakref.getweakrefcount(o) == 4,
                     "got wrong number of weak reference objects")

        del ref1, ref2, proxy1, proxy2
        self.assert_(weakref.getweakrefcount(o) == 0,
                     "weak reference objects not unlinked from"
                     " referent when discarded.")

        # assumes ints do not support weakrefs
        self.assert_(weakref.getweakrefcount(1) == 0,
                     "got wrong number of weak reference objects for int")

    def test_getweakrefs(self):
        o = C()
        ref1 = weakref.ref(o, self.callback)
        ref2 = weakref.ref(o, self.callback)
        del ref1
        self.assert_(weakref.getweakrefs(o) == [ref2],
                     "list of refs does not match")

        o = C()
        ref1 = weakref.ref(o, self.callback)
        ref2 = weakref.ref(o, self.callback)
        del ref2
        self.assert_(weakref.getweakrefs(o) == [ref1],
                     "list of refs does not match")

        del ref1
        self.assert_(weakref.getweakrefs(o) == [],
                     "list of refs not cleared")

        # assumes ints do not support weakrefs
        self.assert_(weakref.getweakrefs(1) == [],
                     "list of refs does not match for int")

    def test_newstyle_number_ops(self):
        class F(float):
            pass
        f = F(2.0)
        p = weakref.proxy(f)
        self.assert_(p + 1.0 == 3.0)
        self.assert_(1.0 + p == 3.0)  # this used to SEGV

    def test_callbacks_protected(self):
        # Callbacks protected from already-set exceptions?
        # Regression test for SF bug #478534.
        class BogusError(Exception):
            pass
        data = {}
        def remove(k):
            del data[k]
        def encapsulate():
            f = lambda : ()
            data[weakref.ref(f, remove)] = None
            raise BogusError
        try:
            encapsulate()
        except BogusError:
            pass
        else:
            self.fail("exception not properly restored")
        try:
            encapsulate()
        except BogusError:
            pass
        else:
            self.fail("exception not properly restored")

    def test_sf_bug_840829(self):
        # "weakref callbacks and gc corrupt memory"
        # subtype_dealloc erroneously exposed a new-style instance
        # already in the process of getting deallocated to gc,
        # causing double-deallocation if the instance had a weakref
        # callback that triggered gc.
        # If the bug exists, there probably won't be an obvious symptom
        # in a release build.  In a debug build, a segfault will occur
        # when the second attempt to remove the instance from the "list
        # of all objects" occurs.

        import gc

        class C(object):
            pass

        c = C()
        wr = weakref.ref(c, lambda ignore: gc.collect())
        del c

        # There endeth the first part.  It gets worse.
        del wr

        c1 = C()
        c1.i = C()
        wr = weakref.ref(c1.i, lambda ignore: gc.collect())

        c2 = C()
        c2.c1 = c1
        del c1  # still alive because c2 points to it

        # Now when subtype_dealloc gets called on c2, it's not enough just
        # that c2 is immune from gc while the weakref callbacks associated
        # with c2 execute (there are none in this 2nd half of the test, btw).
        # subtype_dealloc goes on to call the base classes' deallocs too,
        # so any gc triggered by weakref callbacks associated with anything
        # torn down by a base class dealloc can also trigger double
        # deallocation of c2.
        del c2

    def test_callback_in_cycle_1(self):
        import gc

        class J(object):
            pass

        class II(object):
            def acallback(self, ignore):
                self.J

        I = II()
        I.J = J
        I.wr = weakref.ref(J, I.acallback)

        # Now J and II are each in a self-cycle (as all new-style class
        # objects are, since their __mro__ points back to them).  I holds
        # both a weak reference (I.wr) and a strong reference (I.J) to class
        # J.  I is also in a cycle (I.wr points to a weakref that references
        # I.acallback).  When we del these three, they all become trash, but
        # the cycles prevent any of them from getting cleaned up immediately.
        # Instead they have to wait for cyclic gc to deduce that they're
        # trash.
        #
        # gc used to call tp_clear on all of them, and the order in which
        # it does that is pretty accidental.  The exact order in which we
        # built up these things manages to provoke gc into running tp_clear
        # in just the right order (I last).  Calling tp_clear on II leaves
        # behind an insane class object (its __mro__ becomes NULL).  Calling
        # tp_clear on J breaks its self-cycle, but J doesn't get deleted
        # just then because of the strong reference from I.J.  Calling
        # tp_clear on I starts to clear I's __dict__, and just happens to
        # clear I.J first -- I.wr is still intact.  That removes the last
        # reference to J, which triggers the weakref callback.  The callback
        # tries to do "self.J", and instances of new-style classes look up
        # attributes ("J") in the class dict first.  The class (II) wants to
        # search II.__mro__, but that's NULL.  The result was a segfault in
        # a release build, and an assert failure in a debug build.
        del I, J, II
        gc.collect()

    def test_callback_in_cycle_2(self):
        import gc

        # This is just like test_callback_in_cycle_1, except that II is an
        # old-style class.  The symptom is different then:  an instance of an
        # old-style class looks in its own __dict__ first.  'J' happens to
        # get cleared from I.__dict__ before 'wr', and 'J' was never in II's
        # __dict__, so the attribute isn't found.  The difference is that
        # the old-style II doesn't have a NULL __mro__ (it doesn't have any
        # __mro__), so no segfault occurs.  Instead it got:
        #     test_callback_in_cycle_2 (__main__.ReferencesTestCase) ...
        #     Exception exceptions.AttributeError:
        #     "II instance has no attribute 'J'" in <bound method II.acallback
        #     of <?.II instance at 0x00B9B4B8>> ignored

        class J(object):
            pass

        class II:
            def acallback(self, ignore):
                self.J

        I = II()
        I.J = J
        I.wr = weakref.ref(J, I.acallback)

        del I, J, II
        gc.collect()

    def test_callback_in_cycle_3(self):
        import gc

        # This one broke the first patch that fixed the last two.  In this
        # case, the objects reachable from the callback aren't also reachable
        # from the object (c1) *triggering* the callback:  you can get to
        # c1 from c2, but not vice-versa.  The result was that c2's __dict__
        # got tp_clear'ed by the time the c2.cb callback got invoked.

        class C:
            def cb(self, ignore):
                self.me
                self.c1
                self.wr

        c1, c2 = C(), C()

        c2.me = c2
        c2.c1 = c1
        c2.wr = weakref.ref(c1, c2.cb)

        del c1, c2
        gc.collect()

    def test_callback_in_cycle_4(self):
        import gc

        # Like test_callback_in_cycle_3, except c2 and c1 have different
        # classes.  c2's class (C) isn't reachable from c1 then, so protecting
        # objects reachable from the dying object (c1) isn't enough to stop
        # c2's class (C) from getting tp_clear'ed before c2.cb is invoked.
        # The result was a segfault (C.__mro__ was NULL when the callback
        # tried to look up self.me).

        class C(object):
            def cb(self, ignore):
                self.me
                self.c1
                self.wr

        class D:
            pass

        c1, c2 = D(), C()

        c2.me = c2
        c2.c1 = c1
        c2.wr = weakref.ref(c1, c2.cb)

        del c1, c2, C, D
        gc.collect()

    def test_callback_in_cycle_resurrection(self):
        import gc

        # Do something nasty in a weakref callback:  resurrect objects
        # from dead cycles.  For this to be attempted, the weakref and
        # its callback must also be part of the cyclic trash (else the
        # objects reachable via the callback couldn't be in cyclic trash
        # to begin with -- the callback would act like an external root).
        # But gc clears trash weakrefs with callbacks early now, which
        # disables the callbacks, so the callbacks shouldn't get called
        # at all (and so nothing actually gets resurrected).

        alist = []
        class C(object):
            def __init__(self, value):
                self.attribute = value

            def acallback(self, ignore):
                alist.append(self.c)

        c1, c2 = C(1), C(2)
        c1.c = c2
        c2.c = c1
        c1.wr = weakref.ref(c2, c1.acallback)
        c2.wr = weakref.ref(c1, c2.acallback)

        def C_went_away(ignore):
            alist.append("C went away")
        wr = weakref.ref(C, C_went_away)

        del c1, c2, C  # make them all trash
        self.assertEqual(alist, [])  # del isn't enough to reclaim anything

        gc.collect()
        # c1.wr and c2.wr were part of the cyclic trash, so should have
        # been cleared without their callbacks executing.  OTOH, the weakref
        # to C is bound to a function local (wr), and wasn't trash, so that
        # callback should have been invoked when C went away.
        self.assertEqual(alist, ["C went away"])
        # The remaining weakref should be dead now (its callback ran).
        self.assertEqual(wr(), None)

        del alist[:]
        gc.collect()
        self.assertEqual(alist, [])

    def test_callbacks_on_callback(self):
        import gc

        # Set up weakref callbacks *on* weakref callbacks.
        alist = []
        def safe_callback(ignore):
            alist.append("safe_callback called")

        class C(object):
            def cb(self, ignore):
                alist.append("cb called")

        c, d = C(), C()
        c.other = d
        d.other = c
        callback = c.cb
        c.wr = weakref.ref(d, callback)     # this won't trigger
        d.wr = weakref.ref(callback, d.cb)  # ditto
        external_wr = weakref.ref(callback, safe_callback)  # but this will
        self.assert_(external_wr() is callback)

        # The weakrefs attached to c and d should get cleared, so that
        # C.cb is never called.  But external_wr isn't part of the cyclic
        # trash, and no cyclic trash is reachable from it, so safe_callback
        # should get invoked when the bound method object callback (c.cb)
        # -- which is itself a callback, and also part of the cyclic trash --
        # gets reclaimed at the end of gc.

        del callback, c, d, C
        self.assertEqual(alist, [])  # del isn't enough to clean up cycles
        gc.collect()
        self.assertEqual(alist, ["safe_callback called"])
        self.assertEqual(external_wr(), None)

        del alist[:]
        gc.collect()
        self.assertEqual(alist, [])

    def test_gc_during_ref_creation(self):
        self.check_gc_during_creation(weakref.ref)

    def test_gc_during_proxy_creation(self):
        self.check_gc_during_creation(weakref.proxy)

    def check_gc_during_creation(self, makeref):
        thresholds = gc.get_threshold()
        gc.set_threshold(1, 1, 1)
        gc.collect()
        class A:
            pass

        def callback(*args):
            pass

        referenced = A()

        a = A()
        a.a = a
        a.wr = makeref(referenced)
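
        # Added note (not part of the original test): with the thresholds set
        # to (1, 1, 1) above, a cyclic garbage collection is triggered after
        # almost every allocation, so the weakref created in the try block
        # below is very likely to be constructed while a collection runs.
        # The test passes as long as that does not crash or corrupt the
        # partially constructed weakref.
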
        try:
            # now make sure the object and the ref get labeled as
            # cyclic trash:
            a = A()
            weakref.ref(referenced, callback)

        finally:
            gc.set_threshold(*thresholds)


class SubclassableWeakrefTestCase(unittest.TestCase):

    def test_subclass_refs(self):
        class MyRef(weakref.ref):
            def __init__(self, ob, callback=None, value=42):
                self.value = value
                super(MyRef, self).__init__(ob, callback)
            def __call__(self):
                self.called = True
                return super(MyRef, self).__call__()
        o = Object("foo")
        mr = MyRef(o, value=24)
        self.assert_(mr() is o)
        self.assert_(mr.called)
        self.assertEqual(mr.value, 24)
        del o
        self.assert_(mr() is None)
        self.assert_(mr.called)

    def test_subclass_refs_dont_replace_standard_refs(self):
        class MyRef(weakref.ref):
            pass
        o = Object(42)
        r1 = MyRef(o)
        r2 = weakref.ref(o)
        self.assert_(r1 is not r2)
        self.assertEqual(weakref.getweakrefs(o), [r2, r1])
        self.assertEqual(weakref.getweakrefcount(o), 2)
        r3 = MyRef(o)
        self.assertEqual(weakref.getweakrefcount(o), 3)
        refs = weakref.getweakrefs(o)
        self.assertEqual(len(refs), 3)
        self.assert_(r2 is refs[0])
        self.assert_(r1 in refs[1:])
        self.assert_(r3 in refs[1:])

    def test_subclass_refs_dont_conflate_callbacks(self):
        class MyRef(weakref.ref):
            pass
        o = Object(42)
        r1 = MyRef(o, id)
        r2 = MyRef(o, str)
        self.assert_(r1 is not r2)
        refs = weakref.getweakrefs(o)
        self.assert_(r1 in refs)
        self.assert_(r2 in refs)

    def test_subclass_refs_with_slots(self):
        class MyRef(weakref.ref):
            __slots__ = "slot1", "slot2"
            def __new__(type, ob, callback, slot1, slot2):
                return weakref.ref.__new__(type, ob, callback)
            def __init__(self, ob, callback, slot1, slot2):
                self.slot1 = slot1
                self.slot2 = slot2
            def meth(self):
                return self.slot1 + self.slot2
        o = Object(42)
        r = MyRef(o, None, "abc", "def")
        self.assertEqual(r.slot1, "abc")
        self.assertEqual(r.slot2, "def")
        self.assertEqual(r.meth(), "abcdef")
        self.failIf(hasattr(r, "__dict__"))


class Object:
    def __init__(self, arg):
        self.arg = arg
    def __repr__(self):
        return "<Object %r>" % self.arg


class MappingTestCase(TestBase):

    COUNT = 10

    def test_weak_values(self):
        #
        #  This exercises d.copy(), d.items(), d[], del d[], len(d).
        #
        dict, objects = self.make_weak_valued_dict()
        for o in objects:
            self.assert_(weakref.getweakrefcount(o) == 1,
                         "wrong number of weak references to %r!" % o)
            self.assert_(o is dict[o.arg],
                         "wrong object returned by weak dict!")
        items1 = dict.items()
        items2 = dict.copy().items()
        items1.sort()
        items2.sort()
        self.assert_(items1 == items2,
                     "cloning of weak-valued dictionary did not work!")
        del items1, items2
        self.assert_(len(dict) == self.COUNT)
        del objects[0]
        self.assert_(len(dict) == (self.COUNT - 1),
                     "deleting object did not cause dictionary update")
        del objects, o
        self.assert_(len(dict) == 0,
                     "deleting the values did not clear the dictionary")
        # regression on SF bug #447152:
        dict = weakref.WeakValueDictionary()
        self.assertRaises(KeyError, dict.__getitem__, 1)
        dict[2] = C()
        self.assertRaises(KeyError, dict.__getitem__, 2)

    def test_weak_keys(self):
        #
        #  This exercises d.copy(), d.items(), d[] = v, d[], del d[],
        #  len(d), d.has_key().
        #
        dict, objects = self.make_weak_keyed_dict()
        for o in objects:
            self.assert_(weakref.getweakrefcount(o) == 1,
                         "wrong number of weak references to %r!" % o)
            self.assert_(o.arg is dict[o],
                         "wrong object returned by weak dict!")
        items1 = dict.items()
        items2 = dict.copy().items()
        self.assert_(set(items1) == set(items2),
                     "cloning of weak-keyed dictionary did not work!")
        del items1, items2
        self.assert_(len(dict) == self.COUNT)
        del objects[0]
        self.assert_(len(dict) == (self.COUNT - 1),
                     "deleting object did not cause dictionary update")
        del objects, o
        self.assert_(len(dict) == 0,
                     "deleting the keys did not clear the dictionary")
        o = Object(42)
        dict[o] = "What is the meaning of the universe?"
        self.assert_(dict.has_key(o))
        self.assert_(not dict.has_key(34))

    def test_weak_keyed_iters(self):
        dict, objects = self.make_weak_keyed_dict()
        self.check_iters(dict)

        # Test keyrefs()
        refs = dict.keyrefs()
        self.assertEqual(len(refs), len(objects))
        objects2 = list(objects)
        for wr in refs:
            ob = wr()
            self.assert_(dict.has_key(ob))
            self.assert_(ob in dict)
            self.assertEqual(ob.arg, dict[ob])
            objects2.remove(ob)
        self.assertEqual(len(objects2), 0)

        # Test iterkeyrefs()
        objects2 = list(objects)
        self.assertEqual(len(list(dict.iterkeyrefs())), len(objects))
        for wr in dict.iterkeyrefs():
            ob = wr()
            self.assert_(dict.has_key(ob))
            self.assert_(ob in dict)
            self.assertEqual(ob.arg, dict[ob])
            objects2.remove(ob)
        self.assertEqual(len(objects2), 0)

    def test_weak_valued_iters(self):
        dict, objects = self.make_weak_valued_dict()
        self.check_iters(dict)

        # Test valuerefs()
        refs = dict.valuerefs()
        self.assertEqual(len(refs), len(objects))
        objects2 = list(objects)
        for wr in refs:
            ob = wr()
            self.assertEqual(ob, dict[ob.arg])
            self.assertEqual(ob.arg, dict[ob.arg].arg)
            objects2.remove(ob)
        self.assertEqual(len(objects2), 0)

        # Test itervaluerefs()
        objects2 = list(objects)
        self.assertEqual(len(list(dict.itervaluerefs())), len(objects))
        for wr in dict.itervaluerefs():
            ob = wr()
            self.assertEqual(ob, dict[ob.arg])
            self.assertEqual(ob.arg, dict[ob.arg].arg)
            objects2.remove(ob)
        self.assertEqual(len(objects2), 0)

    def check_iters(self, dict):
        # item iterator:
        items = dict.items()
        for item in dict.iteritems():
            items.remove(item)
        self.assert_(len(items) == 0, "iteritems() did not touch all items")

        # key iterator, via __iter__():
        keys = dict.keys()
        for k in dict:
            keys.remove(k)
        self.assert_(len(keys) == 0, "__iter__() did not touch all keys")

        # key iterator, via iterkeys():
        keys = dict.keys()
        for k in dict.iterkeys():
            keys.remove(k)
        self.assert_(len(keys) == 0, "iterkeys() did not touch all keys")

        # value iterator:
        values = dict.values()
        for v in dict.itervalues():
            values.remove(v)
        self.assert_(len(values) == 0,
                     "itervalues() did not touch all values")

    def test_make_weak_keyed_dict_from_dict(self):
        o = Object(3)
        dict = weakref.WeakKeyDictionary({o:364})
        self.assert_(dict[o] == 364)

    def test_make_weak_keyed_dict_from_weak_keyed_dict(self):
        o = Object(3)
        dict = weakref.WeakKeyDictionary({o:364})
        dict2 = weakref.WeakKeyDictionary(dict)
        self.assert_(dict[o] == 364)

    def make_weak_keyed_dict(self):
        dict = weakref.WeakKeyDictionary()
        objects = map(Object, range(self.COUNT))
        for o in objects:
            dict[o] = o.arg
        return dict, objects

    def make_weak_valued_dict(self):
        dict = weakref.WeakValueDictionary()
        objects = map(Object, range(self.COUNT))
        for o in objects:
            dict[o.arg] = o
        return dict, objects
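
    # Added note (not part of the original tests): both helpers above rely on
    # the defining property of the weak dictionaries -- an entry silently
    # disappears as soon as the last strong reference to the weakly held
    # key or value goes away.  In CPython, for example (sketch):
    #     d = weakref.WeakValueDictionary()
    #     d['k'] = Object(1)     # nothing else keeps a strong reference
    #     'k' in d               # --> False, the entry vanished immediately
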
    def check_popitem(self, klass, key1, value1, key2, value2):
        weakdict = klass()
        weakdict[key1] = value1
        weakdict[key2] = value2
        self.assert_(len(weakdict) == 2)
        k, v = weakdict.popitem()
        self.assert_(len(weakdict) == 1)
        if k is key1:
            self.assert_(v is value1)
        else:
            self.assert_(v is value2)
        k, v = weakdict.popitem()
        self.assert_(len(weakdict) == 0)
        if k is key1:
            self.assert_(v is value1)
        else:
            self.assert_(v is value2)

    def test_weak_valued_dict_popitem(self):
        self.check_popitem(weakref.WeakValueDictionary,
                           "key1", C(), "key2", C())

    def test_weak_keyed_dict_popitem(self):
        self.check_popitem(weakref.WeakKeyDictionary,
                           C(), "value 1", C(), "value 2")

    def check_setdefault(self, klass, key, value1, value2):
        self.assert_(value1 is not value2,
                     "invalid test"
                     " -- value parameters must be distinct objects")
        weakdict = klass()
        o = weakdict.setdefault(key, value1)
        self.assert_(o is value1)
        self.assert_(weakdict.has_key(key))
        self.assert_(weakdict.get(key) is value1)
        self.assert_(weakdict[key] is value1)

        o = weakdict.setdefault(key, value2)
        self.assert_(o is value1)
        self.assert_(weakdict.has_key(key))
        self.assert_(weakdict.get(key) is value1)
        self.assert_(weakdict[key] is value1)

    def test_weak_valued_dict_setdefault(self):
        self.check_setdefault(weakref.WeakValueDictionary,
                              "key", C(), C())

    def test_weak_keyed_dict_setdefault(self):
        self.check_setdefault(weakref.WeakKeyDictionary,
                              C(), "value 1", "value 2")

    def check_update(self, klass, dict):
        #
        #  This exercises d.update(), len(d), d.keys(), d.has_key(),
        #  d.get(), d[].
        #
        weakdict = klass()
        weakdict.update(dict)
        self.assert_(len(weakdict) == len(dict))
        for k in weakdict.keys():
            self.assert_(dict.has_key(k),
                         "mysterious new key appeared in weak dict")
            v = dict.get(k)
            self.assert_(v is weakdict[k])
            self.assert_(v is weakdict.get(k))
        for k in dict.keys():
            self.assert_(weakdict.has_key(k),
                         "original key disappeared in weak dict")
            v = dict[k]
            self.assert_(v is weakdict[k])
            self.assert_(v is weakdict.get(k))

    def test_weak_valued_dict_update(self):
        self.check_update(weakref.WeakValueDictionary,
                          {1: C(), 'a': C(), C(): C()})

    def test_weak_keyed_dict_update(self):
        self.check_update(weakref.WeakKeyDictionary,
                          {C(): 1, C(): 2, C(): 3})

    def test_weak_keyed_delitem(self):
        d = weakref.WeakKeyDictionary()
        o1 = Object('1')
        o2 = Object('2')
        d[o1] = 'something'
        d[o2] = 'something'
        self.assert_(len(d) == 2)
        del d[o1]
        self.assert_(len(d) == 1)
        self.assert_(d.keys() == [o2])

    def test_weak_valued_delitem(self):
        d = weakref.WeakValueDictionary()
        o1 = Object('1')
        o2 = Object('2')
        d['something'] = o1
        d['something else'] = o2
        self.assert_(len(d) == 2)
        del d['something']
        self.assert_(len(d) == 1)
        self.assert_(d.items() == [('something else', o2)])

    def test_weak_keyed_bad_delitem(self):
        d = weakref.WeakKeyDictionary()
        o = Object('1')
        # An attempt to delete an object that isn't there should raise
        # KeyError.  It didn't before 2.3.
        self.assertRaises(KeyError, d.__delitem__, o)
        self.assertRaises(KeyError, d.__getitem__, o)

        # If a key isn't of a weakly referenceable type, __getitem__ and
        # __setitem__ raise TypeError.  __delitem__ should too.
        self.assertRaises(TypeError, d.__delitem__, 13)
        self.assertRaises(TypeError, d.__getitem__, 13)
        self.assertRaises(TypeError, d.__setitem__, 13, 13)

    def test_weak_keyed_cascading_deletes(self):
        # SF bug 742860.  For some reason, before 2.3 __delitem__ iterated
        # over the keys via self.data.iterkeys().  If things vanished from
        # the dict during this (or got added), that caused a RuntimeError.

        d = weakref.WeakKeyDictionary()
        mutate = False

        class C(object):
            def __init__(self, i):
                self.value = i
            def __hash__(self):
                return hash(self.value)
            def __eq__(self, other):
                if mutate:
                    # Side effect that mutates the dict, by removing the
                    # last strong reference to a key.
                    del objs[-1]
                return self.value == other.value

        objs = [C(i) for i in range(4)]
        for o in objs:
            d[o] = o.value
        del o  # now the only strong references to keys are in objs
        # Find the order in which iterkeys sees the keys.
        objs = d.keys()
        # Reverse it, so that the iteration implementation of __delitem__
        # has to keep looping to find the first object we delete.
        objs.reverse()

        # Turn on mutation in C.__eq__.  The first time thru the loop,
        # under the iterkeys() business the first comparison will delete
        # the last item iterkeys() would see, and that causes a
        #     RuntimeError: dictionary changed size during iteration
        # when the iterkeys() loop goes around to try comparing the next
        # key.  After this was fixed, it just deletes the last object *our*
        # "for o in obj" loop would have gotten to.
        mutate = True
        count = 0
        for o in objs:
            count += 1
            del d[o]
        self.assertEqual(len(d), 0)
        self.assertEqual(count, 2)

from test import mapping_tests

class WeakValueDictionaryTestCase(mapping_tests.BasicTestMappingProtocol):
    """Check that WeakValueDictionary conforms to the mapping protocol"""
    __ref = {"key1":Object(1), "key2":Object(2), "key3":Object(3)}
    type2test = weakref.WeakValueDictionary
    def _reference(self):
        return self.__ref.copy()

class WeakKeyDictionaryTestCase(mapping_tests.BasicTestMappingProtocol):
    """Check that WeakKeyDictionary conforms to the mapping protocol"""
    __ref = {Object("key1"):1, Object("key2"):2, Object("key3"):3}
    type2test = weakref.WeakKeyDictionary
    def _reference(self):
        return self.__ref.copy()

libreftest = """ Doctest for examples in the library reference: libweakref.tex

>>> import weakref
>>> class Dict(dict):
...     pass
...
>>> obj = Dict(red=1, green=2, blue=3)   # this object is weakly referenceable
>>> r = weakref.ref(obj)
>>> print r() is obj
True

>>> import weakref
>>> class Object:
...     pass
...
>>> o = Object()
>>> r = weakref.ref(o)
>>> o2 = r()
>>> o is o2
True
>>> del o, o2
>>> print r()
None

>>> import weakref
>>> class ExtendedRef(weakref.ref):
...     def __init__(self, ob, callback=None, **annotations):
...         super(ExtendedRef, self).__init__(ob, callback)
...         self.__counter = 0
...         for k, v in annotations.iteritems():
...             setattr(self, k, v)
...     def __call__(self):
...         '''Return a pair containing the referent and the number of
...         times the reference has been called.
...         '''
...         ob = super(ExtendedRef, self).__call__()
...         if ob is not None:
...             self.__counter += 1
...             ob = (ob, self.__counter)
...         return ob
...
>>> class A:   # not in docs from here, just testing the ExtendedRef
...     pass
...
>>> a = A()
>>> r = ExtendedRef(a, foo=1, bar="baz")
>>> r.foo
1
>>> r.bar
'baz'
>>> r()[1]
1
>>> r()[1]
2
>>> r()[0] is a
True


>>> import weakref
>>> _id2obj_dict = weakref.WeakValueDictionary()
>>> def remember(obj):
...     oid = id(obj)
...     _id2obj_dict[oid] = obj
...     return oid
...
>>> def id2obj(oid):
...     return _id2obj_dict[oid]
...
>>> a = A()             # from here, just testing
>>> a_id = remember(a)
>>> id2obj(a_id) is a
True
>>> del a
>>> try:
...     id2obj(a_id)
... except KeyError:
...     print 'OK'
... else:
...     print 'WeakValueDictionary error'
OK

"""

__test__ = {'libreftest' : libreftest}

def test_main():
    test_support.run_unittest(
        ReferencesTestCase,
        MappingTestCase,
        WeakValueDictionaryTestCase,
        WeakKeyDictionaryTestCase,
        )
    test_support.run_doctest(sys.modules[__name__])


if __name__ == "__main__":
    test_main()
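
# Added usage note (not part of the original file): this test can be run
# directly ("python Lib/test/test_weakref.py") or through the regression
# test driver of that era, e.g. "./python Lib/test/regrtest.py -v test_weakref".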