author    Steven Knight <knight@baldmt.com>  2006-12-16 01:43:01 (GMT)
committer Steven Knight <knight@baldmt.com>  2006-12-16 01:43:01 (GMT)
commit    c4d04b3b45e7b71a1b28053b90084bcf2fdf9c0e (patch)
tree      8a0d07c078ac21bf1ab689eacf06577069bb9231 /src/engine
parent    b32cd624a5ad9526d28584b8e6c4a7958f436424 (diff)
Merged revisions 1675-1736 via svnmerge from
http://scons.tigris.org/svn/scons/branches/core
........
r1689 | stevenknight | 2006-11-06 20:56:29 -0600 (Mon, 06 Nov 2006) | 1 line
0.96.D483 - Merge changes for 0.96.93 packaging from the subsidiary branch.
........
r1690 | stevenknight | 2006-11-06 20:59:30 -0600 (Mon, 06 Nov 2006) | 1 line
0.96.D484 - Update HOWTO for releases. Fix name typo in src/CHANGES.txt.
........
r1691 | stevenknight | 2006-11-08 13:55:36 -0600 (Wed, 08 Nov 2006) | 1 line
0.96.D485 - Fix MergeFlags() handling of None values. (John Pye)
........
r1692 | stevenknight | 2006-11-08 17:15:05 -0600 (Wed, 08 Nov 2006) | 1 line
0.96.D486 - Directly execute commands on Windows when possible. (Jay Kint)
........
r1693 | stevenknight | 2006-11-08 18:54:49 -0600 (Wed, 08 Nov 2006) | 1 line
0.96.D487 - Remove the semi-colon from the list of characters that determine when we use cmd
........
r1694 | stevenknight | 2006-11-09 01:34:06 -0600 (Thu, 09 Nov 2006) | 1 line
0.96.D488 - Pick up latex/bibtex 'Rerun to get citations correct' messages. (Dmitry Mikhin)
........
r1695 | stevenknight | 2006-11-11 08:36:33 -0600 (Sat, 11 Nov 2006) | 1 line
0.96.D489 - Back out the direct-execution-on-Windows change until we solve a corner case.
........
r1696 | stevenknight | 2006-11-15 10:33:10 -0600 (Wed, 15 Nov 2006) | 1 line
0.96.D490 - Fix the sconsign script when the .sconsign.dblite file is specified with its suf
........
r1697 | stevenknight | 2006-11-18 10:45:50 -0600 (Sat, 18 Nov 2006) | 4 lines
Complete move of test/sconsign/script.py to underneath test/sconsign/script/.
(This got left out of the previous checkin due to an error in the script that
resubmits Aegis changes to Subversion.)
........
r1698 | stevenknight | 2006-11-18 11:05:26 -0600 (Sat, 18 Nov 2006) | 1 line
0.96.D491 - Allow an Options converter to take the construction environment as a parameter.
........
r1699 | stevenknight | 2006-11-30 15:34:37 -0600 (Thu, 30 Nov 2006) | 1 line
0.96.D492 - Reverse the order in which we try the arguments Options converters, first a sing
........
r1700 | stevenknight | 2006-11-30 16:03:09 -0600 (Thu, 30 Nov 2006) | 1 line
0.96.D493 - Speed up rel_path() by avoiding recomputation of intermediate directory relative
........
r1701 | stevenknight | 2006-11-30 16:14:16 -0600 (Thu, 30 Nov 2006) | 1 line
0.96.D494 - More efficient get_suffix(): compute it once when we set the name.
........
r1702 | stevenknight | 2006-11-30 16:22:55 -0600 (Thu, 30 Nov 2006) | 1 line
0.96.D495 - Fix missing XML end tags.
........
r1703 | stevenknight | 2006-11-30 17:15:25 -0600 (Thu, 30 Nov 2006) | 1 line
0.96.D496 - Turn Memoizer into a simple counter for --debug=memoizer, not something that doe
........
r1704 | stevenknight | 2006-11-30 20:30:50 -0600 (Thu, 30 Nov 2006) | 1 line
0.96.D497 - Add the scons-time script, with doc and tests.
........
r1705 | stevenknight | 2006-11-30 23:28:20 -0600 (Thu, 30 Nov 2006) | 1 line
0.96.D498 - Update the copyright years string.
........
r1706 | stevenknight | 2006-12-01 11:54:22 -0600 (Fri, 01 Dec 2006) | 1 line
0.96.D499 - Fix _do_Lookup => _doLookup value-caching misspellings. (Ben Leslie)
........
r1707 | stevenknight | 2006-12-01 12:03:46 -0600 (Fri, 01 Dec 2006) | 1 line
0.96.D500 - Fix copyright test against debian build. (Walter Franzini)
........
r1708 | stevenknight | 2006-12-01 14:23:29 -0600 (Fri, 01 Dec 2006) | 1 line
0.96.D501 - Add #include lines for test portability. (Gary Oberbrunner)
........
r1709 | stevenknight | 2006-12-01 14:51:12 -0600 (Fri, 01 Dec 2006) | 1 line
0.96.D502 - Fix tests under Python versions with no profiler (pstats module).
........
r1710 | stevenknight | 2006-12-01 20:04:49 -0600 (Fri, 01 Dec 2006) | 1 line
0.96.D503 - Remove unnecessary os.path.normpath() calls. (Gary Oberbrunner)
........
r1711 | stevenknight | 2006-12-01 20:34:31 -0600 (Fri, 01 Dec 2006) | 1 line
0.96.D504 - Accommodate arbitrary white space after a SWIG %module keyword. (Anonymous)
........
r1712 | stevenknight | 2006-12-05 14:49:54 -0600 (Tue, 05 Dec 2006) | 1 line
0.96.D506 - Cache substitutions of Builder source suffixes. Use a new PathList module, and a refactored Node.FS.Rfindalldirs() method, to cache calculations of values like CPPPATH.
........
r1713 | stevenknight | 2006-12-05 18:43:36 -0600 (Tue, 05 Dec 2006) | 1 line
0.96.D507 - Use cached stat() values in diskchecks.
........
r1714 | stevenknight | 2006-12-05 21:11:24 -0600 (Tue, 05 Dec 2006) | 1 line
0.96.D508 - Fix Memoizer hit counts for methods memoizing simple values. Clean up the code for memoizing return values in a dictionary. Fix comments.
........
r1715 | stevenknight | 2006-12-06 07:23:18 -0600 (Wed, 06 Dec 2006) | 1 line
0.96.D369 - More efficient Node.FS.Dir.current() check. Fix some Windows test portability issues.
........
r1716 | stevenknight | 2006-12-06 12:24:32 -0600 (Wed, 06 Dec 2006) | 2 lines
Undo previous checkin (distributed incorrect Aegis change number).
........
r1717 | stevenknight | 2006-12-06 12:34:53 -0600 (Wed, 06 Dec 2006) | 1 line
0.96.D505 - Update ae-{cvs,svn}-ci for newer versions of aetar, and to not truncate descriptions.
........
r1718 | stevenknight | 2006-12-07 23:01:41 -0600 (Thu, 07 Dec 2006) | 1 line
0.96.D509 - Only look for mslink on Windows systems. (Sohail Somani)
........
r1719 | stevenknight | 2006-12-07 23:18:33 -0600 (Thu, 07 Dec 2006) | 1 line
0.96.D510 - Have the D compiler Tool use the same logic for shared libraries, too. (Paolo Invernizzi)
........
r1720 | stevenknight | 2006-12-07 23:29:47 -0600 (Thu, 07 Dec 2006) | 1 line
0.96.D511 - Generalize a JobTests.py test so it doesn't assume a specific order in which the operating system executes the threads.
........
r1721 | stevenknight | 2006-12-07 23:39:37 -0600 (Thu, 07 Dec 2006) | 1 line
0.96.D512 - Back out the Tool/dmd.py change; it breaks shared library linking for other languages beside D in the construction environment.
........
r1722 | stevenknight | 2006-12-07 23:47:11 -0600 (Thu, 07 Dec 2006) | 1 line
0.96.D513 - Test fixes: Windows portability, handle changes to Python 2.5 messages.
........
r1723 | stevenknight | 2006-12-08 00:00:13 -0600 (Fri, 08 Dec 2006) | 1 line
0.96.D514 - Change how the 'as' Tool is imported to accommodate the Python 2.6 'as' keyword.
........
r1724 | stevenknight | 2006-12-08 11:19:27 -0600 (Fri, 08 Dec 2006) | 1 line
0.96.D515 - Cache both Node.FS.find_file() and Node.FS.Dir.srcdir_find_file().
........
r1725 | stevenknight | 2006-12-08 17:27:35 -0600 (Fri, 08 Dec 2006) | 1 line
0.96.D516 - Better error when we try to fetch contents from an Entry that doesn't exist. (Tom Parker)
........
r1726 | stevenknight | 2006-12-08 23:28:55 -0600 (Fri, 08 Dec 2006) | 1 line
0.96.D517 - Make sure we pick up the scons-local directory regardless of where we chdir internally.
........
r1727 | stevenknight | 2006-12-11 16:25:53 -0600 (Mon, 11 Dec 2006) | 1 line
0.96.D518 - Cache results of Executor.get_unignored_sources() and Executor.process_sources(). Eliminate some map() and disambiguate() calls when scanning for implicit dependencies.
........
r1728 | stevenknight | 2006-12-12 14:32:22 -0600 (Tue, 12 Dec 2006) | 1 line
0.96.D519 - Fix SideEffect() when -j is used.
........
r1729 | stevenknight | 2006-12-12 16:58:15 -0600 (Tue, 12 Dec 2006) | 1 line
0.96.D520 - Add a srcdir keyword to Builder calls.
........
r1730 | stevenknight | 2006-12-12 21:40:59 -0600 (Tue, 12 Dec 2006) | 1 line
0.96.D521 - TeX/LaTeX updates, including handling files in subdirectories. (Joel B. Mohler, Rob Managan, Dmitry Mikhin)
........
r1731 | stevenknight | 2006-12-14 15:01:02 -0600 (Thu, 14 Dec 2006) | 1 line
0.96.D522 - Propagate TypeErrors during variable substitution for display to the user.
........
r1732 | stevenknight | 2006-12-14 20:01:49 -0600 (Thu, 14 Dec 2006) | 1 line
0.96.D523 - Fix the os.path.join() calls in EnvironmentTests.py.
........
r1733 | stevenknight | 2006-12-15 07:48:22 -0600 (Fri, 15 Dec 2006) | 1 line
0.96.D524 - Fix source directories as dependencies of an Alias (0.96.93 problem found by LilyPond).
........
r1735 | stevenknight | 2006-12-15 12:43:45 -0600 (Fri, 15 Dec 2006) | 1 line
0.96.D525 - Allow printing Debug.caller() output (or other end-of-run debugging info) when using -h.
........
r1736 | stevenknight | 2006-12-15 16:30:08 -0600 (Fri, 15 Dec 2006) | 1 line
0.96.D526 - Add an option to debug IndexError and NameError exceptions during variable substitution.
........
Diffstat (limited to 'src/engine')
-rw-r--r--  src/engine/MANIFEST.in                  |   1
-rw-r--r--  src/engine/SCons/Action.py              |  23
-rw-r--r--  src/engine/SCons/Builder.py             | 109
-rw-r--r--  src/engine/SCons/BuilderTests.py        |  20
-rw-r--r--  src/engine/SCons/Defaults.py            |   8
-rw-r--r--  src/engine/SCons/Environment.py         | 114
-rw-r--r--  src/engine/SCons/EnvironmentTests.py    |  13
-rw-r--r--  src/engine/SCons/Executor.py            |  88
-rw-r--r--  src/engine/SCons/JobTests.py            |  14
-rw-r--r--  src/engine/SCons/Memoize.py             | 948
-rw-r--r--  src/engine/SCons/MemoizeTests.py        | 192
-rw-r--r--  src/engine/SCons/Node/FS.py             | 554
-rw-r--r--  src/engine/SCons/Node/FSTests.py        |  83
-rw-r--r--  src/engine/SCons/Node/__init__.py       |  54
-rw-r--r--  src/engine/SCons/Options/__init__.py    |   5
-rw-r--r--  src/engine/SCons/PathList.py            | 217
-rw-r--r--  src/engine/SCons/PathListTests.py       | 145
-rw-r--r--  src/engine/SCons/Scanner/CTests.py      |  15
-rw-r--r--  src/engine/SCons/Scanner/D.py           |   1
-rw-r--r--  src/engine/SCons/Scanner/Fortran.py     |   3
-rw-r--r--  src/engine/SCons/Scanner/FortranTests.py |  13
-rw-r--r--  src/engine/SCons/Scanner/IDLTests.py    |  13
-rw-r--r--  src/engine/SCons/Scanner/LaTeX.py       |  72
-rw-r--r--  src/engine/SCons/Scanner/LaTeXTests.py  |   6
-rw-r--r--  src/engine/SCons/Scanner/Prog.py        |   5
-rw-r--r--  src/engine/SCons/Scanner/ProgTests.py   |  10
-rw-r--r--  src/engine/SCons/Scanner/ScannerTests.py |  29
-rw-r--r--  src/engine/SCons/Scanner/__init__.py    |  54
-rw-r--r--  src/engine/SCons/Script/Main.py         |  36
-rw-r--r--  src/engine/SCons/Script/SConscript.py   |  10
-rw-r--r--  src/engine/SCons/Script/__init__.py     |  40
-rw-r--r--  src/engine/SCons/Subst.py               |  54
-rw-r--r--  src/engine/SCons/SubstTests.py          |  65
-rw-r--r--  src/engine/SCons/Taskmaster.py          |  17
-rw-r--r--  src/engine/SCons/TaskmasterTests.py     |   2
-rw-r--r--  src/engine/SCons/Tool/386asm.py         |   4
-rw-r--r--  src/engine/SCons/Tool/dvipdf.py         |   2
-rw-r--r--  src/engine/SCons/Tool/dvips.py          |   4
-rw-r--r--  src/engine/SCons/Tool/gas.py            |   4
-rw-r--r--  src/engine/SCons/Tool/latex.py          |   2
-rw-r--r--  src/engine/SCons/Tool/mslink.py         |   8
-rw-r--r--  src/engine/SCons/Tool/msvc.xml          |   2
-rw-r--r--  src/engine/SCons/Tool/pdflatex.py       |   2
-rw-r--r--  src/engine/SCons/Tool/pdftex.py         |   8
-rw-r--r--  src/engine/SCons/Tool/swig.py           |  11
-rw-r--r--  src/engine/SCons/Tool/tex.py            |  42
-rw-r--r--  src/engine/SCons/Warnings.py            |   3
47 files changed, 1799 insertions, 1326 deletions
diff --git a/src/engine/MANIFEST.in b/src/engine/MANIFEST.in
index c762f1c..7be1c16 100644
--- a/src/engine/MANIFEST.in
+++ b/src/engine/MANIFEST.in
@@ -26,6 +26,7 @@ SCons/Options/EnumOption.py
SCons/Options/ListOption.py
SCons/Options/PackageOption.py
SCons/Options/PathOption.py
+SCons/PathList.py
SCons/Platform/__init__.py
SCons/Platform/aix.py
SCons/Platform/cygwin.py
diff --git a/src/engine/SCons/Action.py b/src/engine/SCons/Action.py
index 4576164..503dc9f 100644
--- a/src/engine/SCons/Action.py
+++ b/src/engine/SCons/Action.py
@@ -233,9 +233,6 @@ class ActionBase:
"""Base class for all types of action objects that can be held by
other objects (Builders, Executors, etc.) This provides the
common methods for manipulating and combining those actions."""
-
- if SCons.Memoize.use_memoizer:
- __metaclass__ = SCons.Memoize.Memoized_Metaclass
def __cmp__(self, other):
return cmp(self.__dict__, other)
@@ -266,15 +263,6 @@ class ActionBase:
return SCons.Executor.Executor(self, env, overrides,
tlist, slist, executor_kw)
-if SCons.Memoize.use_old_memoization():
- _Base = ActionBase
- class ActionBase(SCons.Memoize.Memoizer, _Base):
- "Cache-backed version of ActionBase"
- def __init__(self, *args, **kw):
- apply(_Base.__init__, (self,)+args, kw)
- SCons.Memoize.Memoizer.__init__(self)
-
-
class _ActionAction(ActionBase):
"""Base class for actions that create output objects."""
def __init__(self, strfunction=_null, presub=_null, chdir=None, exitstatfunc=None, **kw):
@@ -563,9 +551,6 @@ class CommandGeneratorAction(ActionBase):
class LazyAction(CommandGeneratorAction, CommandAction):
- if SCons.Memoize.use_memoizer:
- __metaclass__ = SCons.Memoize.Memoized_Metaclass
-
def __init__(self, var, *args, **kw):
if __debug__: logInstanceCreation(self, 'Action.LazyAction')
apply(CommandAction.__init__, (self, '$'+var)+args, kw)
@@ -580,7 +565,6 @@ class LazyAction(CommandGeneratorAction, CommandAction):
return CommandGeneratorAction
def _generate_cache(self, env):
- """__cacheable__"""
c = env.get(self.var, '')
gen_cmd = apply(Action, (c,)+self.gen_args, self.gen_kw)
if not gen_cmd:
@@ -599,13 +583,6 @@ class LazyAction(CommandGeneratorAction, CommandAction):
c = self.get_parent_class(env)
return c.get_contents(self, target, source, env)
-if not SCons.Memoize.has_metaclass:
- _Base = LazyAction
- class LazyAction(SCons.Memoize.Memoizer, _Base):
- def __init__(self, *args, **kw):
- SCons.Memoize.Memoizer.__init__(self)
- apply(_Base.__init__, (self,)+args, kw)
-
class FunctionAction(_ActionAction):
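The Action.py hunks above strip out the old memoization hooks: the `__metaclass__ = SCons.Memoize.Memoized_Metaclass` assignments, the `"__cacheable__"` docstring markers, and the `use_old_memoization()` wrapper classes. The removed mechanism tagged a method with a magic docstring and let a metaclass wrap it in a caching proxy. A simplified sketch of that idea, in modern Python 3 syntax; this is an illustration only, not the actual SCons.Memoize implementation (which also handled argument-keyed caches and `__cache_reset__` markers):

```python
def memoized(method):
    """Wrap a zero-argument method so its result is computed once per instance."""
    key = method.__name__
    def wrapper(self):
        cache = self.__dict__.setdefault('_memo', {})
        if key not in cache:
            cache[key] = method(self)
        return cache[key]
    return wrapper

class MemoizedMeta(type):
    """Metaclass that wraps any method whose docstring is '__cacheable__'."""
    def __new__(mcls, name, bases, namespace):
        for attr, value in list(namespace.items()):
            if callable(value) and value.__doc__ == '__cacheable__':
                namespace[attr] = memoized(value)
        return super().__new__(mcls, name, bases, namespace)

class Example(metaclass=MemoizedMeta):
    def __init__(self):
        self.calls = 0
    def expensive(self):
        "__cacheable__"
        self.calls += 1
        return self.calls
```

The appeal was that memoization could be switched on or off globally without touching the method bodies; the drawback, as the r1703 log entry above suggests, was the machinery's own overhead and opacity, which this commit trades for explicit caching code.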
diff --git a/src/engine/SCons/Builder.py b/src/engine/SCons/Builder.py
index 16f1191..d5f566a 100644
--- a/src/engine/SCons/Builder.py
+++ b/src/engine/SCons/Builder.py
@@ -126,6 +126,7 @@ import SCons.Action
from SCons.Debug import logInstanceCreation
from SCons.Errors import InternalError, UserError
import SCons.Executor
+import SCons.Memoize
import SCons.Node
import SCons.Node.FS
import SCons.Util
@@ -370,6 +371,8 @@ class BuilderBase:
if SCons.Memoize.use_memoizer:
__metaclass__ = SCons.Memoize.Memoized_Metaclass
+ memoizer_counters = []
+
def __init__(self, action = None,
prefix = '',
suffix = '',
@@ -387,6 +390,7 @@ class BuilderBase:
is_explicit = 1,
**overrides):
if __debug__: logInstanceCreation(self, 'Builder.BuilderBase')
+ self._memo = {}
self.action = action
self.multi = multi
if SCons.Util.is_Dict(prefix):
@@ -604,6 +608,16 @@ class BuilderBase:
ekw = self.executor_kw.copy()
ekw['chdir'] = chdir
if kw:
+ if kw.has_key('srcdir'):
+ def prependDirIfRelative(f, srcdir=kw['srcdir']):
+ import os.path
+ if SCons.Util.is_String(f) and not os.path.isabs(f):
+ f = os.path.join(srcdir, f)
+ return f
+ if not SCons.Util.is_List(source):
+ source = [source]
+ source = map(prependDirIfRelative, source)
+ del kw['srcdir']
if self.overrides:
env_kw = self.overrides.copy()
env_kw.update(kw)
@@ -636,9 +650,34 @@ class BuilderBase:
suffix = suffix(env, sources)
return env.subst(suffix)
+ def _src_suffixes_key(self, env):
+ return id(env)
+
+ memoizer_counters.append(SCons.Memoize.CountDict('src_suffixes', _src_suffixes_key))
+
def src_suffixes(self, env):
- "__cacheable__"
- return map(lambda x, s=self, e=env: e.subst(x), self.src_suffix)
+ """
+ Returns the list of source suffixes for this Builder.
+
+ The suffix list may contain construction variable expansions,
+ so we have to evaluate the individual strings. To avoid doing
+ this over and over, we memoize the results for each construction
+ environment.
+ """
+ memo_key = id(env)
+ try:
+ memo_dict = self._memo['src_suffixes']
+ except KeyError:
+ memo_dict = {}
+ self._memo['src_suffixes'] = memo_dict
+ else:
+ try:
+ return memo_dict[memo_key]
+ except KeyError:
+ pass
+ result = map(lambda x, s=self, e=env: e.subst(x), self.src_suffix)
+ memo_dict[memo_key] = result
+ return result
def set_src_suffix(self, src_suffix):
if not src_suffix:
@@ -673,13 +712,7 @@ class BuilderBase:
"""
self.emitter[suffix] = emitter
-if SCons.Memoize.use_old_memoization():
- _Base = BuilderBase
- class BuilderBase(SCons.Memoize.Memoizer, _Base):
- "Cache-backed version of BuilderBase"
- def __init__(self, *args, **kw):
- apply(_Base.__init__, (self,)+args, kw)
- SCons.Memoize.Memoizer.__init__(self)
+
class ListBuilder(SCons.Util.Proxy):
"""A Proxy to support building an array of targets (for example,
@@ -718,6 +751,9 @@ class MultiStepBuilder(BuilderBase):
builder is NOT invoked if the suffix of a source file matches
src_suffix.
"""
+
+ memoizer_counters = []
+
def __init__(self, src_builder,
action = None,
prefix = '',
@@ -738,8 +774,32 @@ class MultiStepBuilder(BuilderBase):
src_builder = [ src_builder ]
self.src_builder = src_builder
+ def _get_sdict_key(self, env):
+ return id(env)
+
+ memoizer_counters.append(SCons.Memoize.CountDict('_get_sdict', _get_sdict_key))
+
def _get_sdict(self, env):
- "__cacheable__"
+ """
+ Returns a dictionary mapping all of the source suffixes of all
+ src_builders of this Builder to the underlying Builder that
+ should be called first.
+
+ This dictionary is used for each target specified, so we save a
+ lot of extra computation by memoizing it for each construction
+ environment.
+ """
+ memo_key = id(env)
+ try:
+ memo_dict = self._memo['_get_sdict']
+ except KeyError:
+ memo_dict = {}
+ self._memo['_get_sdict'] = memo_dict
+ else:
+ try:
+ return memo_dict[memo_key]
+ except KeyError:
+ pass
sdict = {}
for bld in self.src_builder:
if SCons.Util.is_String(bld):
@@ -749,6 +809,7 @@ class MultiStepBuilder(BuilderBase):
continue
for suf in bld.src_suffixes(env):
sdict[suf] = bld
+ memo_dict[memo_key] = sdict
return sdict
def _execute(self, env, target, source, overwarn={}, executor_kw={}):
@@ -810,14 +871,36 @@ class MultiStepBuilder(BuilderBase):
ret.append(bld)
return ret
+ def _src_suffixes_key(self, env):
+ return id(env)
+
+ memoizer_counters.append(SCons.Memoize.CountDict('src_suffixes', _src_suffixes_key))
+
def src_suffixes(self, env):
- """Return a list of the src_suffix attributes for all
- src_builders of this Builder.
- __cacheable__
"""
+ Returns the list of source suffixes for all src_builders of this
+ Builder.
+
+ The suffix list may contain construction variable expansions,
+ so we have to evaluate the individual strings. To avoid doing
+ this over and over, we memoize the results for each construction
+ environment.
+ """
+ memo_key = id(env)
+ try:
+ memo_dict = self._memo['src_suffixes']
+ except KeyError:
+ memo_dict = {}
+ self._memo['src_suffixes'] = memo_dict
+ else:
+ try:
+ return memo_dict[memo_key]
+ except KeyError:
+ pass
suffixes = BuilderBase.src_suffixes(self, env)
for builder in self.get_src_builders(env):
suffixes.extend(builder.src_suffixes(env))
+ memo_dict[memo_key] = suffixes
return suffixes
class CompositeBuilder(SCons.Util.Proxy):
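The Builder.py hunks above replace the metaclass magic with an explicit per-instance `self._memo` dictionary, keyed by the memoized method's name and then by `id(env)` so each construction environment gets its own cached result, while `memoizer_counters` lets `--debug=memoizer` count hits. The recurring pattern can be condensed into a standalone sketch (the `FakeEnv` stand-in is hypothetical; a real construction environment's `subst()` expands `$VARIABLE` references):

```python
class FakeEnv:
    """Hypothetical stand-in for a construction environment; identity subst()."""
    def subst(self, s):
        return s

class BuilderSketch:
    """Condensed sketch of the src_suffixes() memoization introduced above."""
    def __init__(self, src_suffix):
        self._memo = {}
        self.src_suffix = src_suffix

    def src_suffixes(self, env):
        memo_key = id(env)
        try:
            memo_dict = self._memo['src_suffixes']
        except KeyError:
            # First call ever: create this method's cache dictionary.
            memo_dict = {}
            self._memo['src_suffixes'] = memo_dict
        else:
            try:
                # Cache hit: this environment was seen before.
                return memo_dict[memo_key]
            except KeyError:
                pass
        # Cache miss: evaluate the suffix expansions once for this env.
        result = [env.subst(s) for s in self.src_suffix]
        memo_dict[memo_key] = result
        return result
```

Because the cache lives in an ordinary attribute, resetting it is just `self._memo = {}`, which is exactly how the new `Executor.cleanup()` and `Clone()` code later in this diff invalidate stale entries.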
diff --git a/src/engine/SCons/BuilderTests.py b/src/engine/SCons/BuilderTests.py
index c5b428c..4e196e2 100644
--- a/src/engine/SCons/BuilderTests.py
+++ b/src/engine/SCons/BuilderTests.py
@@ -339,6 +339,11 @@ class BuilderTestCase(unittest.TestCase):
else:
raise "Did not catch expected UserError."
+ builder = SCons.Builder.Builder(action="foo")
+ target = builder(env, None, source='n22', srcdir='src_dir')[0]
+ p = target.sources[0].path
+ assert p == os.path.join('src_dir', 'n22'), p
+
def test_mistaken_variables(self):
"""Test keyword arguments that are often mistakes
"""
@@ -1182,6 +1187,13 @@ class BuilderTestCase(unittest.TestCase):
target_factory=MyNode,
source_factory=MyNode)
+ builder2a=SCons.Builder.Builder(action='foo',
+ emitter="$FOO",
+ target_factory=MyNode,
+ source_factory=MyNode)
+
+ assert builder2 == builder2a, repr(builder2.__dict__) + "\n" + repr(builder2a.__dict__)
+
tgt = builder2(env2, target='foo5', source='bar')[0]
assert str(tgt) == 'foo5', str(tgt)
assert str(tgt.sources[0]) == 'bar', str(tgt.sources[0])
@@ -1197,12 +1209,6 @@ class BuilderTestCase(unittest.TestCase):
assert 'baz' in map(str, tgt.sources), map(str, tgt.sources)
assert 'bar' in map(str, tgt.sources), map(str, tgt.sources)
- builder2a=SCons.Builder.Builder(action='foo',
- emitter="$FOO",
- target_factory=MyNode,
- source_factory=MyNode)
- assert builder2 == builder2a, repr(builder2.__dict__) + "\n" + repr(builder2a.__dict__)
-
# Test that, if an emitter sets a builder on the passed-in
# targets and passes back new targets, the new builder doesn't
# get overwritten.
@@ -1595,7 +1601,7 @@ class CompositeBuilderTestCase(unittest.TestCase):
if __name__ == "__main__":
suite = unittest.TestSuite()
tclasses = [
-# BuilderTestCase,
+ BuilderTestCase,
CompositeBuilderTestCase
]
for tclass in tclasses:
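The test added above exercises the new `srcdir` keyword (r1729), whose handling appears in the Builder.py hunk: relative string sources get the directory prepended, while absolute paths pass through. The helper can be sketched on its own like this (a simplified stand-in; the real code also leaves Node objects untouched via `SCons.Util.is_String`):

```python
import os.path

def prepend_dir_if_relative(f, srcdir):
    # Only string sources that are not already absolute get the
    # srcdir prefix; absolute paths pass through unchanged.
    if isinstance(f, str) and not os.path.isabs(f):
        f = os.path.join(srcdir, f)
    return f

sources = ['n22', os.path.abspath('n23')]
sources = [prepend_dir_if_relative(f, 'src_dir') for f in sources]
```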
diff --git a/src/engine/SCons/Defaults.py b/src/engine/SCons/Defaults.py
index 7513c0d..96c3cf8 100644
--- a/src/engine/SCons/Defaults.py
+++ b/src/engine/SCons/Defaults.py
@@ -48,9 +48,10 @@ import sys
import SCons.Action
import SCons.Builder
import SCons.Environment
-import SCons.Tool
+import SCons.PathList
import SCons.Sig
import SCons.Subst
+import SCons.Tool
# A placeholder for a default Environment (for fetching source files
# from source code management systems and the like). This must be
@@ -214,7 +215,10 @@ def _concat(prefix, list, suffix, env, f=lambda x: x, target=None, source=None):
if SCons.Util.is_List(list):
list = SCons.Util.flatten(list)
- list = f(env.subst_path(list, target=target, source=source))
+
+ l = f(SCons.PathList.PathList(list).subst_path(env, target, source))
+ if not l is None:
+ list = l
result = []
diff --git a/src/engine/SCons/Environment.py b/src/engine/SCons/Environment.py
index a5bffc8..4761ea0 100644
--- a/src/engine/SCons/Environment.py
+++ b/src/engine/SCons/Environment.py
@@ -166,6 +166,10 @@ def _set_BUILDERS(env, key, value):
env._dict[key] = BuilderDict(kwbd, env)
env._dict[key].update(value)
+def _del_SCANNERS(env, key):
+ del env._dict[key]
+ env.scanner_map_delete()
+
def _set_SCANNERS(env, key, value):
env._dict[key] = value
env.scanner_map_delete()
@@ -279,29 +283,35 @@ class SubstitutionEnvironment:
self.lookup_list = SCons.Node.arg2nodes_lookups
self._dict = kw.copy()
self._init_special()
+ #self._memo = {}
def _init_special(self):
- """Initial the dispatch table for special handling of
+ """Initial the dispatch tables for special handling of
special construction variables."""
- self._special = {}
+ self._special_del = {}
+ self._special_del['SCANNERS'] = _del_SCANNERS
+
+ self._special_set = {}
for key in reserved_construction_var_names:
- self._special[key] = _set_reserved
- self._special['BUILDERS'] = _set_BUILDERS
- self._special['SCANNERS'] = _set_SCANNERS
+ self._special_set[key] = _set_reserved
+ self._special_set['BUILDERS'] = _set_BUILDERS
+ self._special_set['SCANNERS'] = _set_SCANNERS
def __cmp__(self, other):
return cmp(self._dict, other._dict)
def __delitem__(self, key):
- "__cache_reset__"
- del self._dict[key]
+ special = self._special_del.get(key)
+ if special:
+ special(self, key)
+ else:
+ del self._dict[key]
def __getitem__(self, key):
return self._dict[key]
def __setitem__(self, key, value):
- "__cache_reset__"
- special = self._special.get(key)
+ special = self._special_set.get(key)
if special:
special(self, key, value)
else:
@@ -663,8 +673,10 @@ class SubstitutionEnvironment:
except KeyError:
orig = value
else:
- if len(orig) == 0: orig = []
- elif not SCons.Util.is_List(orig): orig = [orig]
+ if not orig:
+ orig = []
+ elif not SCons.Util.is_List(orig):
+ orig = [orig]
orig = orig + value
t = []
if key[-4:] == 'PATH':
@@ -694,6 +706,8 @@ class Base(SubstitutionEnvironment):
if SCons.Memoize.use_memoizer:
__metaclass__ = SCons.Memoize.Memoized_Metaclass
+ memoizer_counters = []
+
#######################################################################
# This is THE class for interacting with the SCons build engine,
# and it contains a lot of stuff, so we're going to try to keep this
@@ -725,6 +739,7 @@ class Base(SubstitutionEnvironment):
with the much simpler base class initialization.
"""
if __debug__: logInstanceCreation(self, 'Environment.Base')
+ self._memo = {}
self.fs = SCons.Node.FS.default_fs or SCons.Node.FS.FS()
self.ans = SCons.Node.Alias.default_ans
self.lookup_list = SCons.Node.arg2nodes_lookups
@@ -786,7 +801,6 @@ class Base(SubstitutionEnvironment):
return None
def get_calculator(self):
- "__cacheable__"
try:
module = self._calc_module
c = apply(SCons.Sig.Calculator, (module,), CalculatorArgs)
@@ -800,7 +814,6 @@ class Base(SubstitutionEnvironment):
def get_factory(self, factory, default='File'):
"""Return a factory function for creating Nodes for this
construction environment.
- __cacheable__
"""
name = default
try:
@@ -827,50 +840,54 @@ class Base(SubstitutionEnvironment):
factory = getattr(self.fs, name)
return factory
+ memoizer_counters.append(SCons.Memoize.CountValue('_gsm'))
+
def _gsm(self):
- "__cacheable__"
try:
- scanners = self._dict['SCANNERS']
+ return self._memo['_gsm']
except KeyError:
- return None
+ pass
- sm = {}
- # Reverse the scanner list so that, if multiple scanners
- # claim they can scan the same suffix, earlier scanners
- # in the list will overwrite later scanners, so that
- # the result looks like a "first match" to the user.
- if not SCons.Util.is_List(scanners):
- scanners = [scanners]
+ result = {}
+
+ try:
+ scanners = self._dict['SCANNERS']
+ except KeyError:
+ pass
else:
- scanners = scanners[:] # copy so reverse() doesn't mod original
- scanners.reverse()
- for scanner in scanners:
- for k in scanner.get_skeys(self):
- sm[k] = scanner
- return sm
+ # Reverse the scanner list so that, if multiple scanners
+ # claim they can scan the same suffix, earlier scanners
+ # in the list will overwrite later scanners, so that
+ # the result looks like a "first match" to the user.
+ if not SCons.Util.is_List(scanners):
+ scanners = [scanners]
+ else:
+ scanners = scanners[:] # copy so reverse() doesn't mod original
+ scanners.reverse()
+ for scanner in scanners:
+ for k in scanner.get_skeys(self):
+ result[k] = scanner
+
+ self._memo['_gsm'] = result
+
+ return result
def get_scanner(self, skey):
"""Find the appropriate scanner given a key (usually a file suffix).
"""
- sm = self._gsm()
- try: return sm[skey]
- except (KeyError, TypeError): return None
+ return self._gsm().get(skey)
- def _smd(self):
- "__reset_cache__"
- pass
-
def scanner_map_delete(self, kw=None):
"""Delete the cached scanner map (if we need to).
"""
- if not kw is None and not kw.has_key('SCANNERS'):
- return
- self._smd()
+ try:
+ del self._memo['_gsm']
+ except KeyError:
+ pass
def _update(self, dict):
"""Update an environment's values directly, bypassing the normal
checks that occur when users try to set items.
- __cache_reset__
"""
self._dict.update(dict)
@@ -1014,7 +1031,9 @@ class Base(SubstitutionEnvironment):
clone._dict['BUILDERS'] = BuilderDict(cbd, clone)
except KeyError:
pass
-
+
+ clone._memo = {}
+
apply_tools(clone, tools, toolpath)
# Apply passed-in variables after the new tools.
@@ -1030,7 +1049,7 @@ class Base(SubstitutionEnvironment):
return apply(self.Clone, args, kw)
def Detect(self, progs):
- """Return the first available program in progs. __cacheable__
+ """Return the first available program in progs.
"""
if not SCons.Util.is_List(progs):
progs = [ progs ]
@@ -1306,7 +1325,7 @@ class Base(SubstitutionEnvironment):
tool(self)
def WhereIs(self, prog, path=None, pathext=None, reject=[]):
- """Find prog in the path. __cacheable__
+ """Find prog in the path.
"""
if path is None:
try:
@@ -1841,12 +1860,3 @@ def NoSubstitutionProxy(subject):
self.raw_to_mode(nkw)
return apply(SCons.Subst.scons_subst, nargs, nkw)
return _NoSubstitutionProxy(subject)
-
-if SCons.Memoize.use_old_memoization():
- _Base = Base
- class Base(SCons.Memoize.Memoizer, _Base):
- def __init__(self, *args, **kw):
- SCons.Memoize.Memoizer.__init__(self)
- apply(_Base.__init__, (self,)+args, kw)
- Environment = Base
-
diff --git a/src/engine/SCons/EnvironmentTests.py b/src/engine/SCons/EnvironmentTests.py
index edf3740..f0f73da 100644
--- a/src/engine/SCons/EnvironmentTests.py
+++ b/src/engine/SCons/EnvironmentTests.py
@@ -728,6 +728,10 @@ sys.exit(1)
env.MergeFlags('-X')
assert env['CCFLAGS'] == ['-X'], env['CCFLAGS']
+ env = SubstitutionEnvironment(CCFLAGS=None)
+ env.MergeFlags('-Y')
+ assert env['CCFLAGS'] == ['-Y'], env['CCFLAGS']
+
env = SubstitutionEnvironment()
env.MergeFlags({'A':['aaa'], 'B':['bbb']})
assert env['A'] == ['aaa'], env['A']
@@ -992,7 +996,7 @@ class BaseTestCase(unittest.TestCase,TestEnvironmentFixture):
LIBLINKSUFFIX = 'bar')
def RDirs(pathlist, fs=env.fs):
- return fs.Rfindalldirs(pathlist, fs.Dir('xx'))
+ return fs.Dir('xx').Rfindalldirs(pathlist)
env['RDirs'] = RDirs
flags = env.subst_list('$_LIBFLAGS', 1)[0]
@@ -2782,7 +2786,7 @@ def generate(env):
tgt = env.Install('export', 'build')
paths = map(str, tgt)
paths.sort()
- expect = ['export/build']
+ expect = [os.path.join('export', 'build')]
assert paths == expect, paths
for tnode in tgt:
assert tnode.builder == InstallBuilder
@@ -2790,7 +2794,10 @@ def generate(env):
tgt = env.Install('export', ['build', 'build/foo1'])
paths = map(str, tgt)
paths.sort()
- expect = ['export/build', 'export/foo1']
+ expect = [
+ os.path.join('export', 'build'),
+ os.path.join('export', 'foo1'),
+ ]
assert paths == expect, paths
for tnode in tgt:
assert tnode.builder == InstallBuilder
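The EnvironmentTests.py hunks above replace hard-coded expectations like `'export/build'` with `os.path.join()` calls, matching the r1732 log entry. The point is portability of the assertion itself:

```python
import os.path

# A literal 'export/build' only matches str(node) on POSIX; building the
# expected value with os.path.join() uses the platform separator, so the
# same assertion passes on Windows ('export\\build') and POSIX ('export/build').
expect = [
    os.path.join('export', 'build'),
    os.path.join('export', 'foo1'),
]
paths = sorted([os.path.join('export', 'foo1'), os.path.join('export', 'build')])
assert paths == expect
```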
diff --git a/src/engine/SCons/Executor.py b/src/engine/SCons/Executor.py
index ffc1ba3..4b15010 100644
--- a/src/engine/SCons/Executor.py
+++ b/src/engine/SCons/Executor.py
@@ -47,6 +47,8 @@ class Executor:
if SCons.Memoize.use_memoizer:
__metaclass__ = SCons.Memoize.Memoized_Metaclass
+ memoizer_counters = []
+
def __init__(self, action, env=None, overridelist=[{}],
targets=[], sources=[], builder_kw={}):
if __debug__: logInstanceCreation(self, 'Executor.Executor')
@@ -58,6 +60,7 @@ class Executor:
self.targets = targets
self.sources = sources[:]
self.builder_kw = builder_kw
+ self._memo = {}
def set_action_list(self, action):
if not SCons.Util.is_List(action):
@@ -72,7 +75,6 @@ class Executor:
def get_build_env(self):
"""Fetch or create the appropriate build Environment
for this Executor.
- __cacheable__
"""
# Create the build environment instance with appropriate
# overrides. These get evaluated against the current
@@ -125,8 +127,7 @@ class Executor:
self.do_execute(target, exitstatfunc, kw)
def cleanup(self):
- "__reset_cache__"
- pass
+ self._memo = {}
def add_sources(self, sources):
"""Add source files to this Executor's list. This is necessary
@@ -151,25 +152,30 @@ class Executor:
def __str__(self):
- "__cacheable__"
return self.my_str()
def nullify(self):
- "__reset_cache__"
+ self.cleanup()
self.do_execute = self.do_nothing
self.my_str = lambda S=self: ''
+ memoizer_counters.append(SCons.Memoize.CountValue('get_contents'))
+
def get_contents(self):
- """Fetch the signature contents. This, along with
- get_raw_contents(), is the real reason this class exists, so we
- can compute this once and cache it regardless of how many target
- or source Nodes there are.
- __cacheable__
+ """Fetch the signature contents. This is the main reason this
+ class exists, so we can compute this once and cache it regardless
+ of how many target or source Nodes there are.
"""
+ try:
+ return self._memo['get_contents']
+ except KeyError:
+ pass
env = self.get_build_env()
get = lambda action, t=self.targets, s=self.sources, e=env: \
action.get_contents(t, s, e)
- return string.join(map(get, self.get_action_list()), "")
+ result = string.join(map(get, self.get_action_list()), "")
+ self._memo['get_contents'] = result
+ return result
def get_timestamp(self):
"""Fetch a time stamp for this Executor. We don't have one, of
@@ -219,20 +225,58 @@ class Executor:
def get_missing_sources(self):
"""
- __cacheable__
"""
return filter(lambda s: s.missing(), self.sources)
- def get_unignored_sources(self, ignore):
- """__cacheable__"""
+ def _get_unignored_sources_key(self, ignore=()):
+ return tuple(ignore)
+
+ memoizer_counters.append(SCons.Memoize.CountDict('get_unignored_sources', _get_unignored_sources_key))
+
+ def get_unignored_sources(self, ignore=()):
+ ignore = tuple(ignore)
+ try:
+ memo_dict = self._memo['get_unignored_sources']
+ except KeyError:
+ memo_dict = {}
+ self._memo['get_unignored_sources'] = memo_dict
+ else:
+ try:
+ return memo_dict[ignore]
+ except KeyError:
+ pass
+
sourcelist = self.sources
if ignore:
sourcelist = filter(lambda s, i=ignore: not s in i, sourcelist)
+
+ memo_dict[ignore] = sourcelist
+
return sourcelist
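Because `ignore` may arrive as a list, the code above normalizes it to a tuple so it can serve as a dictionary key (lists are unhashable). A standalone sketch of that two-level lookup, with hypothetical names:

```python
class SourceHolder:
    """Sketch of the argument-keyed memoization used by
    get_unignored_sources(): one _memo slot per method, holding a dict
    keyed by a hashable form of the arguments."""
    def __init__(self, sources):
        self.sources = sources
        self._memo = {}

    def get_unignored_sources(self, ignore=()):
        ignore = tuple(ignore)   # normalize to a hashable key
        try:
            memo_dict = self._memo['get_unignored_sources']
        except KeyError:
            memo_dict = {}
            self._memo['get_unignored_sources'] = memo_dict
        else:
            try:
                return memo_dict[ignore]
            except KeyError:
                pass
        result = [s for s in self.sources if s not in ignore]
        memo_dict[ignore] = result
        return result
```

Note that a list and a tuple containing the same ignore entries hit the same cache slot after normalization.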
- def process_sources(self, func, ignore=[]):
- """__cacheable__"""
- return map(func, self.get_unignored_sources(ignore))
+ def _process_sources_key(self, func, ignore=()):
+ return (func, tuple(ignore))
+
+ memoizer_counters.append(SCons.Memoize.CountDict('process_sources', _process_sources_key))
+
+ def process_sources(self, func, ignore=()):
+ memo_key = (func, tuple(ignore))
+ try:
+ memo_dict = self._memo['process_sources']
+ except KeyError:
+ memo_dict = {}
+ self._memo['process_sources'] = memo_dict
+ else:
+ try:
+ return memo_dict[memo_key]
+ except KeyError:
+ pass
+
+ result = map(func, self.get_unignored_sources(ignore))
+
+ memo_dict[memo_key] = result
+
+ return result
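`process_sources` keys its cache on both the function and the ignore list; bound methods and functions are hashable, so a `(func, tuple(ignore))` pair works directly as a dict key, as in `_process_sources_key()` above. A tiny illustration with hypothetical names:

```python
def make_cache():
    """Show that a (function, tuple) pair is a usable composite dict key."""
    cache = {}

    def process(func, items, ignore=()):
        key = (func, tuple(ignore))   # composite, hashable key
        try:
            return cache[key]
        except KeyError:
            pass
        result = [func(i) for i in items if i not in ignore]
        cache[key] = result
        return result

    return process, cache
```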
_Executor = Executor
@@ -258,13 +302,3 @@ class Null(_Executor):
return None
def cleanup(self):
pass
-
-
-
-if SCons.Memoize.use_old_memoization():
- _Base = Executor
- class Executor(SCons.Memoize.Memoizer, _Base):
- def __init__(self, *args, **kw):
- SCons.Memoize.Memoizer.__init__(self)
- apply(_Base.__init__, (self,)+args, kw)
-
diff --git a/src/engine/SCons/JobTests.py b/src/engine/SCons/JobTests.py
index d2c019f..e38e251 100644
--- a/src/engine/SCons/JobTests.py
+++ b/src/engine/SCons/JobTests.py
@@ -259,16 +259,14 @@ class ParallelTestCase(unittest.TestCase):
jobs.run()
# The key here is that we get(1) and get(2) from the
- # resultsQueue before we put(3).
+ # resultsQueue before we put(3), but get(1) and get(2) can
+ # be in either order depending on how the first two parallel
+ # tasks get scheduled by the operating system.
expect = [
- 'put(1)',
- 'put(2)',
- 'get(1)',
- 'get(2)',
- 'put(3)',
- 'get(3)',
+ ['put(1)', 'put(2)', 'get(1)', 'get(2)', 'put(3)', 'get(3)'],
+ ['put(1)', 'put(2)', 'get(2)', 'get(1)', 'put(3)', 'get(3)'],
]
- assert ThreadPoolCallList == expect, ThreadPoolCallList
+ assert ThreadPoolCallList in expect, ThreadPoolCallList
finally:
SCons.Job.ThreadPool = SaveThreadPool
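The JobTests fix above replaces an exact-sequence assertion with membership in a list of acceptable interleavings, since the OS scheduler may complete the first two parallel tasks in either order. The same technique in minimal form (a contrived worker list, not the real ThreadPool):

```python
import threading

def run_workers():
    """Run two workers whose completion order the scheduler decides,
    then record a final event that must come last."""
    events = []
    lock = threading.Lock()

    def worker(n):
        with lock:
            events.append('get(%d)' % n)

    threads = [threading.Thread(target=worker, args=(n,)) for n in (1, 2)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    events.append('get(3)')
    return events

# Either order of the first two events is legitimate.
expect = [
    ['get(1)', 'get(2)', 'get(3)'],
    ['get(2)', 'get(1)', 'get(3)'],
]
assert run_workers() in expect
```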
diff --git a/src/engine/SCons/Memoize.py b/src/engine/SCons/Memoize.py
index c2a3027..6a46350 100644
--- a/src/engine/SCons/Memoize.py
+++ b/src/engine/SCons/Memoize.py
@@ -1,66 +1,3 @@
-"""Memoizer
-
-Memoizer -- base class to provide automatic, optimized caching of
-method return values for subclassed objects. Caching is activated by
-the presence of "__cacheable__" in the doc of a method (acts like a
-decorator). The presence of "__cache_reset__" or "__reset_cache__"
-in the doc string instead indicates a method that should reset the
-cache, discarding any currently cached values.
-
-Note: current implementation is optimized for speed, not space. The
-cache reset operation does not actually discard older results, and in
-fact, all cached results (and keys) are held indefinitely.
-
-Most of the work for this is done by copying and modifying the class
-definition itself, rather than the object instances. This will
-therefore allow all instances of a class to get caching activated
-without requiring lengthy initialization or other management of the
-instance.
-
-[This could also be done using metaclassing (which would require
-Python 2.2) and decorators (which would require Python 2.4). Current
-implementation is used due to Python 1.5.2 compatability requirement
-contraint.]
-
-A few notes:
-
- * All local methods/attributes use a prefix of "_MeMoIZeR" to avoid
- namespace collisions with the attributes of the objects
- being cached.
-
- * Based on performance evaluations of dictionaries, caching is
- done by providing each object with a unique key attribute and
- using the value of that attribute as an index for dictionary
- lookup. If an object doesn't have one of these attributes,
- fallbacks are utilized (although they will be somewhat slower).
-
- * To support this unique-value attribute correctly, it must be
- removed whenever a __cmp__ operation is performed, and it must
- be updated whenever a copy.copy or copy.deepcopy is performed,
- so appropriate manipulation is provided by the Caching code
- below.
-
- * Cached values are stored in the class (indexed by the caching
- key attribute, then by the name of the method called and the
- constructed key of the arguments passed). By storing them here
- rather than on the instance, the instance can be compared,
- copied, and pickled much easier.
-
-Some advantages:
-
- * The method by which caching is implemented can be changed in a
- single location and it will apply globally.
-
- * Greatly simplified client code: remove lots of try...except or
- similar handling of cached lookup. Also usually more correct in
- that it based caching on all input arguments whereas many
- hand-implemented caching operations often miss arguments that
- might affect results.
-
- * Caching can be globally disabled very easily (for testing, etc.)
-
-"""
-
#
# __COPYRIGHT__
#
@@ -86,752 +23,247 @@ Some advantages:
__revision__ = "__FILE__ __REVISION__ __DATE__ __DEVELOPER__"
-#TBD: for pickling, should probably revert object to unclassed state...
+__doc__ = """Memoizer
-import copy
-import os
-import string
-import sys
+A metaclass implementation to count hits and misses of the computed
+values that various methods cache in memory.
-# A flag controlling whether or not we actually use memoization.
-use_memoizer = 1
+Use of this module assumes that wrapped methods are coded to cache their
+values in a consistent way. Here is an example of wrapping a method
+that returns a computed value, with no input parameters:
-#
-# Generate a key for an object that is to be used as the caching key
-# for that object.
-#
-# Current implementation: singleton generating a monotonically
-# increasing integer
+ memoizer_counters = [] # Memoization
-class MemoizerKey:
- def __init__(self):
- self._next_keyval = 0
- def __call__(self):
- r = self._next_keyval
- self._next_keyval = self._next_keyval + 1
- return str(r)
-Next_Memoize_Key = MemoizerKey()
+ memoizer_counters.append(SCons.Memoize.CountValue('foo')) # Memoization
+ def foo(self):
-#
-# Memoized Class management.
-#
-# Classes can be manipulated just like object instances; we are going
-# to do some of that here, without the benefit of metaclassing
-# introduced in Python 2.2 (it would be nice to use that, but this
-# attempts to maintain backward compatibility to Python 1.5.2).
-#
-# The basic implementation therefore is to update the class definition
-# for any objects that we want to enable caching for. The updated
-# definition performs caching activities for those methods
-# appropriately marked in the original class.
-#
-# When an object is created, its class is switched to this updated,
-# cache-enabled class definition, thereby enabling caching operations.
-#
-# To get an instance to used the updated, caching class, the instance
-# must declare the Memoizer as a base class and make sure to call the
-# Memoizer's __init__ during the instance's __init__. The Memoizer's
-# __init__ will perform the class updating.
-
-# For Python 2.2 and later, where metaclassing is supported, it is
-# sufficient to provide a "__metaclass__ = Memoized_Metaclass" as part
-# of the class definition; the metaclassing will automatically invoke
-# the code herein properly.
-
-##import cPickle
-##def ALT0_MeMoIZeR_gen_key(argtuple, kwdict):
-## return cPickle.dumps( (argtuple,kwdict) )
-
-def ALT1_MeMoIZeR_gen_key(argtuple, kwdict):
- return repr(argtuple) + '|' + repr(kwdict)
-
-def ALT2_MeMoIZeR_gen_key(argtuple, kwdict):
- return string.join(map(lambda A:
- getattr(A, '_MeMoIZeR_Key', str(A)),
- argtuple) + \
- map(lambda D:
- str(D[0])+
- getattr(D[1], '_MeMoIZeR_Key', str(D[1])),
- kwdict.items()),
- '|')
-
-def ALT3_MeMoIZeR_gen_key(argtuple, kwdict):
- ret = []
- for A in argtuple:
- X = getattr(A, '_MeMoIZeR_Key', None)
- if X:
- ret.append(X)
- else:
- ret.append(str(A))
- for K,V in kwdict.items():
- ret.append(str(K))
- X = getattr(V, '_MeMoIZeR_Key', None)
- if X:
- ret.append(X)
- else:
- ret.append(str(V))
- return string.join(ret, '|')
-
-def ALT4_MeMoIZeR_gen_key(argtuple, kwdict):
- if kwdict:
- return string.join(map(lambda A:
- getattr(A, '_MeMoIZeR_Key', None) or str(A),
- argtuple) + \
- map(lambda D:
- str(D[0])+
- (getattr(D[1], '_MeMoIZeR_Key', None) or str(D[1])),
- kwdict.items()),
- '|')
- return string.join(map(lambda A:
- getattr(A, '_MeMoIZeR_Key', None) or str(A),
- argtuple),
- '!')
-
-def ALT5_MeMoIZeR_gen_key(argtuple, kwdict):
- A = string.join(map(str, argtuple), '|')
- K = ''
- if kwdict:
- I = map(lambda K,D=kwdict: str(K)+'='+str(D[K]), kwdict.keys())
- K = string.join(I, '|')
- return string.join([A,K], '!')
-
-def ALT6_MeMoIZeR_gen_key(argtuple, kwdict):
- A = string.join(map(str, map(id, argtuple)), '|')
- K = ''
- if kwdict:
- I = map(lambda K,D=kwdict: str(K)+'='+str(id(D[K])), kwdict.keys())
- K = string.join(I, '|')
- return string.join([A,K], '!')
-
-def ALT7_MeMoIZeR_gen_key(argtuple, kwdict):
- A = string.join(map(repr, argtuple), '|')
- K = ''
- if kwdict:
- I = map(lambda K,D=kwdict: repr(K)+'='+repr(D[K]), kwdict.keys())
- K = string.join(I, '|')
- return string.join([A,K], '!')
-
-def ALT8_MeMoIZeR_gen_key(argtuple, kwdict):
- ret = []
- for A in argtuple:
- X = getattr(A, '_MeMoIZeR_Key', None)
- if X:
- ret.append(X)
- else:
- ret.append(repr(A))
- for K,V in kwdict.items():
- ret.append(str(K))
- X = getattr(V, '_MeMoIZeR_Key', None)
- if X:
- ret.append(X)
- else:
- ret.append(repr(V))
- return string.join(ret, '|')
+ try: # Memoization
+ return self._memo['foo'] # Memoization
+ except KeyError: # Memoization
+ pass # Memoization
-def ALT9_MeMoIZeR_gen_key(argtuple, kwdict):
- ret = []
- for A in argtuple:
- try:
- X = A.__dict__.get('_MeMoIZeR_Key', None) or repr(A)
- except (AttributeError, KeyError):
- X = repr(A)
- ret.append(X)
- for K,V in kwdict.items():
- ret.append(str(K))
- ret.append('=')
- try:
- X = V.__dict__.get('_MeMoIZeR_Key', None) or repr(V)
- except (AttributeError, KeyError):
- X = repr(V)
- ret.append(X)
- return string.join(ret, '|')
-
-#_MeMoIZeR_gen_key = ALT9_MeMoIZeR_gen_key # 8.8, 0.20
-_MeMoIZeR_gen_key = ALT8_MeMoIZeR_gen_key # 8.5, 0.18
-#_MeMoIZeR_gen_key = ALT7_MeMoIZeR_gen_key # 8.7, 0.17
-#_MeMoIZeR_gen_key = ALT6_MeMoIZeR_gen_key #
-#_MeMoIZeR_gen_key = ALT5_MeMoIZeR_gen_key # 9.7, 0.20
-#_MeMoIZeR_gen_key = ALT4_MeMoIZeR_gen_key # 8.6, 0.19
-#_MeMoIZeR_gen_key = ALT3_MeMoIZeR_gen_key # 8.5, 0.20
-#_MeMoIZeR_gen_key = ALT2_MeMoIZeR_gen_key # 10.1, 0.22
-#_MeMoIZeR_gen_key = ALT1_MeMoIZeR_gen_key # 8.6 0.18
-
-
-
-## This is really the core worker of the Memoize module. Any
-## __cacheable__ method ends up calling this function which tries to
-## return a previously cached value if it exists, and which calls the
-## actual function and caches the return value if it doesn't already
-## exist.
-##
-## This function should be VERY efficient: it will get called a lot
-## and its job is to be faster than what would be called.
-
-def Memoizer_cache_get(func, cdict, args, kw):
- """Called instead of name to see if this method call's return
- value has been cached. If it has, just return the cached
- value; if not, call the actual method and cache the return."""
-
- obj = args[0]
-
- ckey = obj._MeMoIZeR_Key + ':' + _MeMoIZeR_gen_key(args, kw)
-
-## try:
-## rval = cdict[ckey]
-## except KeyError:
-## rval = cdict[ckey] = apply(func, args, kw)
-
- rval = cdict.get(ckey, "_MeMoIZeR")
- if rval is "_MeMoIZeR":
- rval = cdict[ckey] = apply(func, args, kw)
-
-## rval = cdict.setdefault(ckey, apply(func, args, kw))
-
-## if cdict.has_key(ckey):
-## rval = cdict[ckey]
-## else:
-## rval = cdict[ckey] = apply(func, args, kw)
-
- return rval
-
-def Memoizer_cache_get_self(func, cdict, self):
- """Called instead of func(self) to see if this method call's
- return value has been cached. If it has, just return the cached
- value; if not, call the actual method and cache the return.
- Optimized version of Memoizer_cache_get for methods that take the
- object instance as the only argument."""
-
- ckey = self._MeMoIZeR_Key
-
-## try:
-## rval = cdict[ckey]
-## except KeyError:
-## rval = cdict[ckey] = func(self)
-
- rval = cdict.get(ckey, "_MeMoIZeR")
- if rval is "_MeMoIZeR":
- rval = cdict[ckey] = func(self)
-
-## rval = cdict.setdefault(ckey, func(self)))
-
-## if cdict.has_key(ckey):
-## rval = cdict[ckey]
-## else:
-## rval = cdict[ckey] = func(self)
-
- return rval
-
-def Memoizer_cache_get_one(func, cdict, self, arg):
- """Called instead of func(self, arg) to see if this method call's
- return value has been cached. If it has, just return the cached
- value; if not, call the actual method and cache the return.
- Optimized version of Memoizer_cache_get for methods that take the
- object instance and one other argument only."""
-
-## X = getattr(arg, "_MeMoIZeR_Key", None)
-## if X:
-## ckey = self._MeMoIZeR_Key +':'+ X
-## else:
-## ckey = self._MeMoIZeR_Key +':'+ str(arg)
- ckey = self._MeMoIZeR_Key + ':' + \
- (getattr(arg, "_MeMoIZeR_Key", None) or repr(arg))
-
-## try:
-## rval = cdict[ckey]
-## except KeyError:
-## rval = cdict[ckey] = func(self, arg)
-
- rval = cdict.get(ckey, "_MeMoIZeR")
- if rval is "_MeMoIZeR":
- rval = cdict[ckey] = func(self, arg)
+ result = self.compute_foo_value()
-## rval = cdict.setdefault(ckey, func(self, arg)))
-
-## if cdict.has_key(ckey):
-## rval = cdict[ckey]
-## else:
-## rval = cdict[ckey] = func(self, arg)
+ self._memo['foo'] = result # Memoization
- return rval
+ return result
-#
-# Caching stuff is tricky, because the tradeoffs involved are often so
-# non-obvious, so we're going to support an alternate set of functions
-# that also count the hits and misses, to try to get a concrete idea of
-# which Memoizations seem to pay off.
-#
-# Because different configurations can have such radically different
-# performance tradeoffs, interpreting the hit/miss results will likely be
-# more of an art than a science. In other words, don't assume that just
-# because you see no hits in one configuration that it's not worthwhile
-# Memoizing that method.
-#
-# Note that these are essentially cut-and-paste copies of the above
-# Memozer_cache_get*() implementations, with the addition of the
-# counting logic. If the above implementations change, the
-# corresponding change should probably be made down below as well,
-# just to try to keep things in sync.
-#
+Here is an example of wrapping a method that will return different values
+based on one or more input arguments:
-class CounterEntry:
- def __init__(self):
- self.hit = 0
- self.miss = 0
+ def _bar_key(self, argument): # Memoization
+ return argument # Memoization
-import UserDict
-class Counter(UserDict.UserDict):
- def __call__(self, obj, methname):
- k = obj.__class__.__name__ + '.' + methname
- try:
- return self[k]
- except KeyError:
- c = self[k] = CounterEntry()
- return c
-
-CacheCount = Counter()
-CacheCountSelf = Counter()
-CacheCountOne = Counter()
-
-def Dump():
- items = CacheCount.items() + CacheCountSelf.items() + CacheCountOne.items()
- items.sort()
- for k, v in items:
- print " %7d hits %7d misses %s()" % (v.hit, v.miss, k)
-
-def Count_cache_get(name, func, cdict, args, kw):
- """Called instead of name to see if this method call's return
- value has been cached. If it has, just return the cached
- value; if not, call the actual method and cache the return."""
-
- obj = args[0]
-
- ckey = obj._MeMoIZeR_Key + ':' + _MeMoIZeR_gen_key(args, kw)
-
- c = CacheCount(obj, name)
- rval = cdict.get(ckey, "_MeMoIZeR")
- if rval is "_MeMoIZeR":
- rval = cdict[ckey] = apply(func, args, kw)
- c.miss = c.miss + 1
- else:
- c.hit = c.hit + 1
-
- return rval
-
-def Count_cache_get_self(name, func, cdict, self):
- """Called instead of func(self) to see if this method call's
- return value has been cached. If it has, just return the cached
- value; if not, call the actual method and cache the return.
- Optimized version of Memoizer_cache_get for methods that take the
- object instance as the only argument."""
-
- ckey = self._MeMoIZeR_Key
-
- c = CacheCountSelf(self, name)
- rval = cdict.get(ckey, "_MeMoIZeR")
- if rval is "_MeMoIZeR":
- rval = cdict[ckey] = func(self)
- c.miss = c.miss + 1
- else:
- c.hit = c.hit + 1
-
- return rval
-
-def Count_cache_get_one(name, func, cdict, self, arg):
- """Called instead of func(self, arg) to see if this method call's
- return value has been cached. If it has, just return the cached
- value; if not, call the actual method and cache the return.
- Optimized version of Memoizer_cache_get for methods that take the
- object instance and one other argument only."""
-
- ckey = self._MeMoIZeR_Key + ':' + \
- (getattr(arg, "_MeMoIZeR_Key", None) or repr(arg))
-
- c = CacheCountOne(self, name)
- rval = cdict.get(ckey, "_MeMoIZeR")
- if rval is "_MeMoIZeR":
- rval = cdict[ckey] = func(self, arg)
- c.miss = c.miss + 1
- else:
- c.hit = c.hit + 1
-
- return rval
-
-MCG_dict = {
- 'MCG' : Memoizer_cache_get,
- 'MCGS' : Memoizer_cache_get_self,
- 'MCGO' : Memoizer_cache_get_one,
-}
-
-MCG_lambda = "lambda *args, **kw: MCG(methcode, methcached, args, kw)"
-MCGS_lambda = "lambda self: MCGS(methcode, methcached, self)"
-MCGO_lambda = "lambda self, arg: MCGO(methcode, methcached, self, arg)"
-
-def EnableCounting():
- """Enable counting of Memoizer hits and misses by overriding the
- globals that hold the non-counting versions of the functions and
- lambdas we call with the counting versions.
- """
- global MCG_dict
- global MCG_lambda
- global MCGS_lambda
- global MCGO_lambda
+ memoizer_counters.append(SCons.Memoize.CountDict('bar', _bar_key)) # Memoization
- MCG_dict = {
- 'MCG' : Count_cache_get,
- 'MCGS' : Count_cache_get_self,
- 'MCGO' : Count_cache_get_one,
- }
+ def bar(self, argument):
- MCG_lambda = "lambda *args, **kw: MCG(methname, methcode, methcached, args, kw)"
- MCGS_lambda = "lambda self: MCGS(methname, methcode, methcached, self)"
- MCGO_lambda = "lambda self, arg: MCGO(methname, methcode, methcached, self, arg)"
+ memo_key = argument # Memoization
+ try: # Memoization
+ memo_dict = self._memo['bar'] # Memoization
+ except KeyError: # Memoization
+ memo_dict = {} # Memoization
+ self._memo['bar'] = memo_dict # Memoization
+ else: # Memoization
+ try: # Memoization
+ return memo_dict[memo_key] # Memoization
+ except KeyError: # Memoization
+ pass # Memoization
+ result = self.compute_bar_value(argument)
+ memo_dict[memo_key] = result # Memoization
-class _Memoizer_Simple:
+ return result
- def __setstate__(self, state):
- self.__dict__.update(state)
- self.__dict__['_MeMoIZeR_Key'] = Next_Memoize_Key()
- #kwq: need to call original's setstate if it had one...
+At one point we avoided replicating this sort of logic in all the methods
+by putting it right into this module, but we've moved away from that at
+present (see the "Historical Note" below).
- def _MeMoIZeR_reset(self):
- self.__dict__['_MeMoIZeR_Key'] = Next_Memoize_Key()
- return 1
+Deciding what to cache is tricky, because different configurations
+can have radically different performance tradeoffs, and because the
+tradeoffs involved are often so non-obvious. Consequently, deciding
+whether or not to cache a given method will likely be more of an art than
+a science, but should still be based on available data from this module.
+Here are some VERY GENERAL guidelines about deciding whether or not to
+cache return values from a method that's being called a lot:
+ -- The first question to ask is, "Can we change the calling code
+ so this method isn't called so often?" Sometimes this can be
+ done by changing the algorithm. Sometimes the *caller* should
+ be memoized, not the method you're looking at.
-class _Memoizer_Comparable:
+ -- The memoized function should be timed with multiple configurations
+ to make sure it doesn't inadvertently slow down some other
+ configuration.
- def __setstate__(self, state):
- self.__dict__.update(state)
- self.__dict__['_MeMoIZeR_Key'] = Next_Memoize_Key()
- #kwq: need to call original's setstate if it had one...
+ -- When memoizing values based on a dictionary key composed of
+ input arguments, you don't need to use all of the arguments
+ if some of them don't affect the return values.
- def _MeMoIZeR_reset(self):
- self.__dict__['_MeMoIZeR_Key'] = Next_Memoize_Key()
- return 1
+Historical Note: The initial Memoizer implementation actually handled
+the caching of values for the wrapped methods, based on a set of generic
+algorithms for computing hashable values based on the method's arguments.
+This collected caching logic nicely, but had two drawbacks:
- def __cmp__(self, other):
- """A comparison might use the object dictionaries to
- compare, so the dictionaries should contain caching
- entries. Make new dictionaries without those entries
- to use with the underlying comparison."""
+ Running arguments through a generic key-conversion mechanism is slower
+ (and less flexible) than just coding these things directly. Since the
+ methods that need memoized values are generally performance-critical,
+ slowing them down in order to collect the logic isn't the right
+ tradeoff.
- if self is other:
- return 0
+ Use of the memoizer really obscured what was being called, because
+ all the memoized methods were wrapped with re-used generic methods.
+ This made it more difficult, for example, to use the Python profiler
+ to figure out how to optimize the underlying methods.
+"""
- # We are here as a cached object, but cmp will flip its
- # arguments back and forth and recurse attempting to get base
- # arguments for the comparison, so we might have already been
- # stripped.
+import new
- try:
- saved_d1 = self.__dict__
- d1 = copy.copy(saved_d1)
- del d1['_MeMoIZeR_Key']
- except KeyError:
- return self._MeMoIZeR_cmp(other)
- self.__dict__ = d1
+# A flag controlling whether or not we actually use memoization.
+use_memoizer = None
- # Same thing for the other, but we should try to convert it
- # here in case the _MeMoIZeR_cmp compares __dict__ objects
- # directly.
+CounterList = []
- saved_other = None
- try:
- if other.__dict__.has_key('_MeMoIZeR_Key'):
- saved_other = other.__dict__
- d2 = copy.copy(saved_other)
- del d2['_MeMoIZeR_Key']
- other.__dict__ = d2
- except (AttributeError, KeyError):
- pass
-
- # Both self and other have been prepared: perform the test,
- # then restore the original dictionaries and exit
-
- rval = self._MeMoIZeR_cmp(other)
-
- self.__dict__ = saved_d1
- if saved_other:
- other.__dict__ = saved_other
-
- return rval
-
-
-def Analyze_Class(klass):
- if klass.__dict__.has_key('_MeMoIZeR_converted'): return klass
-
- original_name = str(klass)
-
- D,R,C = _analyze_classmethods(klass.__dict__, klass.__bases__)
-
- if C:
- modelklass = _Memoizer_Comparable
- lcldict = {'_MeMoIZeR_cmp':C}
- else:
- modelklass = _Memoizer_Simple
- lcldict = {}
-
- klass.__dict__.update(memoize_classdict(klass, modelklass, lcldict, D, R))
-
- return klass
-
-
-# Note that each eval("lambda...") has a few \n's prepended to the
-# lambda, and furthermore that each of these evals has a different
-# number of \n's prepended. This is to provide a little bit of info
-# for traceback or profile output, which generate things like 'File
-# "<string>", line X'. X will be the number of \n's plus 1.
-
-# Also use the following routine to specify the "filename" portion so
-# that it provides useful information. In addition, make sure it
-# contains 'os.sep + "SCons" + os.sep' for the
-# SCons.Script.find_deepest_user_frame operation.
-
-def whoami(memoizer_funcname, real_funcname):
- return '...'+os.sep+'SCons'+os.sep+'Memoizer-'+ \
- memoizer_funcname+'-lambda<'+real_funcname+'>'
-
-def memoize_classdict(klass, modelklass, new_klassdict, cacheable, resetting):
- new_klassdict.update(modelklass.__dict__)
- new_klassdict['_MeMoIZeR_converted'] = 1
-
- for name,code in cacheable.items():
- eval_dict = {
- 'methname' : name,
- 'methcode' : code,
- 'methcached' : {},
- }
- eval_dict.update(MCG_dict)
- fc = code.func_code
- if fc.co_argcount == 1 and not fc.co_flags & 0xC:
- compiled = compile("\n"*1 + MCGS_lambda,
- whoami('cache_get_self', name),
- "eval")
- elif fc.co_argcount == 2 and not fc.co_flags & 0xC:
- compiled = compile("\n"*2 + MCGO_lambda,
- whoami('cache_get_one', name),
- "eval")
+class Counter:
+ """
+ Base class for counting memoization hits and misses.
+
+ We expect that the metaclass initialization will have filled in
+ the .name attribute that represents the name of the function
+ being counted.
+ """
+ def __init__(self, method_name):
+ """
+ """
+ self.method_name = method_name
+ self.hit = 0
+ self.miss = 0
+ CounterList.append(self)
+ def display(self):
+ fmt = " %7d hits %7d misses %s()"
+ print fmt % (self.hit, self.miss, self.name)
+ def __cmp__(self, other):
+ return cmp(self.name, other.name)
+
+class CountValue(Counter):
+ """
+ A counter class for simple, atomic memoized values.
+
+ A CountValue object should be instantiated in a class for each of
+ the class's methods that memoizes its return value by simply storing
+ the return value in its _memo dictionary.
+
+ We expect that the metaclass initialization will fill in the
+ .underlying_method attribute with the method that we're wrapping.
+ We then call the underlying_method method after counting whether
+ its memoized value has already been set (a hit) or not (a miss).
+ """
+ def __call__(self, *args, **kw):
+ obj = args[0]
+ if obj._memo.has_key(self.method_name):
+ self.hit = self.hit + 1
else:
- compiled = compile("\n"*3 + MCG_lambda,
- whoami('cache_get', name),
- "eval")
- newmethod = eval(compiled, eval_dict, {})
- new_klassdict[name] = newmethod
-
- for name,code in resetting.items():
- newmethod = eval(
- compile(
- "lambda obj_self, *args, **kw: (obj_self._MeMoIZeR_reset(), apply(rmethcode, (obj_self,)+args, kw))[1]",
- whoami('cache_reset', name),
- 'eval'),
- {'rmethcode':code}, {})
- new_klassdict[name] = newmethod
-
- return new_klassdict
-
-def _analyze_classmethods(klassdict, klassbases):
- """Given a class, performs a scan of methods for that class and
- all its base classes (recursively). Returns aggregated results of
- _scan_classdict calls where subclass methods are superimposed over
- base class methods of the same name (emulating instance->class
- method lookup)."""
-
- D = {}
- R = {}
- C = None
-
- # Get cache/reset/cmp methods from subclasses
-
- for K in klassbases:
- if K.__dict__.has_key('_MeMoIZeR_converted'): continue
- d,r,c = _analyze_classmethods(K.__dict__, K.__bases__)
- D.update(d)
- R.update(r)
- C = c or C
-
- # Delete base method info if current class has an override
-
- for M in D.keys():
- if M == '__cmp__': continue
- if klassdict.has_key(M):
- del D[M]
- for M in R.keys():
- if M == '__cmp__': continue
- if klassdict.has_key(M):
- del R[M]
-
- # Get cache/reset/cmp from current class
-
- d,r,c = _scan_classdict(klassdict)
-
- # Update accumulated cache/reset/cmp methods
-
- D.update(d)
- R.update(r)
- C = c or C
-
- return D,R,C
-
-
-def _scan_classdict(klassdict):
- """Scans the method dictionary of a class to find all methods
- interesting to caching operations. Returns a tuple of these
- interesting methods:
-
- ( dict-of-cachable-methods,
- dict-of-cache-resetting-methods,
- cmp_method_val or None)
-
- Each dict has the name of the method as a key and the corresponding
- value is the method body."""
-
- cache_setters = {}
- cache_resetters = {}
- cmp_if_exists = None
- already_cache_modified = 0
-
- for attr,val in klassdict.items():
- if not callable(val): continue
- if attr == '__cmp__':
- cmp_if_exists = val
- continue # cmp can't be cached and can't reset cache
- if attr == '_MeMoIZeR_cmp':
- already_cache_modified = 1
- continue
- if not val.__doc__: continue
- if string.find(val.__doc__, '__cache_reset__') > -1:
- cache_resetters[attr] = val
- continue
- if string.find(val.__doc__, '__reset_cache__') > -1:
- cache_resetters[attr] = val
- continue
- if string.find(val.__doc__, '__cacheable__') > -1:
- cache_setters[attr] = val
- continue
- if already_cache_modified: cmp_if_exists = 'already_cache_modified'
- return cache_setters, cache_resetters, cmp_if_exists
+ self.miss = self.miss + 1
+ return apply(self.underlying_method, args, kw)
-#
-# Primary Memoizer class. This should be a base-class for any class
-# that wants method call results to be cached. The sub-class should
-# call this parent class's __init__ method, but no other requirements
-# are made on the subclass (other than appropriate decoration).
+class CountDict(Counter):
+ """
+ A counter class for memoized values stored in a dictionary, with
+ keys based on the method's input arguments.
+
+ A CountDict object is instantiated in a class for each of the
+ class's methods that memoizes its return value in a dictionary,
+ indexed by some key that can be computed from one or more of
+ its input arguments.
+
+ We expect that the metaclass initialization will fill in the
+ .underlying_method attribute with the method that we're wrapping.
+ We then call the underlying_method method after counting whether the
+ computed key value is already present in the memoization dictionary
+ (a hit) or not (a miss).
+ """
+ def __init__(self, method_name, keymaker):
+ """
+ """
+ Counter.__init__(self, method_name)
+ self.keymaker = keymaker
+ def __call__(self, *args, **kw):
+ obj = args[0]
+ try:
+ memo_dict = obj._memo[self.method_name]
+ except KeyError:
+ self.miss = self.miss + 1
+ else:
+ key = apply(self.keymaker, args, kw)
+ if memo_dict.has_key(key):
+ self.hit = self.hit + 1
+ else:
+ self.miss = self.miss + 1
+ return apply(self.underlying_method, args, kw)
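The counters above only observe the `_memo` dictionary; they never store into it, because the wrapped method does its own caching. A decorator-style sketch of the same hit/miss bookkeeping (an illustration only, not the metaclass wiring the module actually uses):

```python
class CountValueSketch:
    """Wraps a method that memoizes a single value under its own name in
    self._memo, counting a hit when the value is already cached."""
    def __init__(self, method_name, underlying_method):
        self.method_name = method_name
        self.underlying_method = underlying_method
        self.hit = 0
        self.miss = 0

    def __call__(self, obj, *args, **kw):
        # Only inspect the cache; the wrapped method fills it.
        if self.method_name in obj._memo:
            self.hit += 1
        else:
            self.miss += 1
        return self.underlying_method(obj, *args, **kw)


class Thing:
    def __init__(self):
        self._memo = {}

    def value(self):
        try:
            return self._memo['value']
        except KeyError:
            pass
        result = 'computed'
        self._memo['value'] = result
        return result

counter = CountValueSketch('value', Thing.value)
```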
class Memoizer:
"""Object which performs caching of method calls for its 'primary'
instance."""
def __init__(self):
- self.__class__ = Analyze_Class(self.__class__)
- self._MeMoIZeR_Key = Next_Memoize_Key()
+ pass
+
+# Find out if we support metaclasses (Python 2.2 and later).
-# Find out if we are pre-2.2
+class M:
+ def __init__(cls, name, bases, cls_dict):
+ cls.has_metaclass = 1
+
+class A:
+ __metaclass__ = M
try:
- vinfo = sys.version_info
+ has_metaclass = A.has_metaclass
except AttributeError:
- """Split an old-style version string into major and minor parts. This
- is complicated by the fact that a version string can be something
- like 3.2b1."""
- import re
- version = string.split(string.split(sys.version, ' ')[0], '.')
- vinfo = (int(version[0]), int(re.match('\d+', version[1]).group()))
- del re
-
-need_version = (2, 2) # actual
-#need_version = (33, 0) # always
-#need_version = (0, 0) # never
-
-has_metaclass = (vinfo[0] > need_version[0] or \
- (vinfo[0] == need_version[0] and
- vinfo[1] >= need_version[1]))
+ has_metaclass = None
+
+del M
+del A
if not has_metaclass:
+ def Dump(title=None):
+ pass
+
class Memoized_Metaclass:
# Just a place-holder so pre-metaclass Python versions don't
# have to have special code for the Memoized classes.
pass
+ def EnableMemoization():
+ import SCons.Warnings
+ msg = 'memoization is not supported in this version of Python (no metaclasses)'
+ raise SCons.Warnings.NoMetaclassSupportWarning, msg
+
else:
- # Initialization is a wee bit of a hassle. We want to do some of
- # our own work for initialization, then pass on to the actual
- # initialization function. However, we have to be careful we
- # don't interfere with (a) the super()'s initialization call of
- # it's superclass's __init__, and (b) classes we are Memoizing
- # that don't have their own __init__ but which have a super that
- # has an __init__. To do (a), we eval a lambda below where the
- # actual init code is locally bound and the __init__ entry in the
- # class's dictionary is replaced with the _MeMoIZeR_init call. To
- # do (b), we use _MeMoIZeR_superinit as a fallback if the class
- # doesn't have it's own __init__. Note that we don't use getattr
- # to obtain the __init__ because we don't want to re-instrument
- # parent-class __init__ operations (and we want to avoid the
- # Object object's slot init if the class has no __init__).
-
- def _MeMoIZeR_init(actual_init, self, args, kw):
- self.__dict__['_MeMoIZeR_Key'] = Next_Memoize_Key()
- apply(actual_init, (self,)+args, kw)
-
- def _MeMoIZeR_superinit(self, cls, args, kw):
- apply(super(cls, self).__init__, args, kw)
+ def Dump(title=None):
+ if title:
+ print title
+ CounterList.sort()
+ for counter in CounterList:
+ counter.display()
class Memoized_Metaclass(type):
def __init__(cls, name, bases, cls_dict):
- # Note that cls_dict apparently contains a *copy* of the
- # attribute dictionary of the class; modifying cls_dict
- # has no effect on the actual class itself.
- D,R,C = _analyze_classmethods(cls_dict, bases)
- if C:
- modelklass = _Memoizer_Comparable
- cls_dict['_MeMoIZeR_cmp'] = C
- else:
- modelklass = _Memoizer_Simple
- klassdict = memoize_classdict(cls, modelklass, cls_dict, D, R)
-
- init = klassdict.get('__init__', None)
- if not init:
- # Make sure filename has os.sep+'SCons'+os.sep so that
- # SCons.Script.find_deepest_user_frame doesn't stop here
- import inspect # It's OK, can't get here for Python < 2.1
- filename = inspect.getsourcefile(_MeMoIZeR_superinit)
- if not filename:
- # This file was compiled at a path name different from
- # how it's invoked now, so just make up something.
- filename = whoami('superinit', '???')
- superinitcode = compile(
- "lambda self, *args, **kw: MPI(self, cls, args, kw)",
- filename,
- "eval")
- superinit = eval(superinitcode,
- {'cls':cls,
- 'MPI':_MeMoIZeR_superinit})
- init = superinit
-
- newinitcode = compile(
- "\n"*(init.func_code.co_firstlineno-1) +
- "lambda self, args, kw: _MeMoIZeR_init(real_init, self, args, kw)",
- whoami('init', init.func_code.co_filename),
- 'eval')
- newinit = eval(newinitcode,
- {'real_init':init,
- '_MeMoIZeR_init':_MeMoIZeR_init},
- {})
- klassdict['__init__'] = lambda self, *args, **kw: newinit(self, args, kw)
-
- super(Memoized_Metaclass, cls).__init__(name, bases, klassdict)
- # Now, since klassdict doesn't seem to have affected the class
- # definition itself, apply klassdict.
- for attr in klassdict.keys():
- setattr(cls, attr, klassdict[attr])
-
-def DisableMemoization():
- global use_memoizer
- use_memoizer = None
-
-def use_old_memoization():
- return use_memoizer and not has_metaclass
+ super(Memoized_Metaclass, cls).__init__(name, bases, cls_dict)
+
+ for counter in cls_dict.get('memoizer_counters', []):
+ method_name = counter.method_name
+
+ counter.name = cls.__name__ + '.' + method_name
+ counter.underlying_method = cls_dict[method_name]
+
+ replacement_method = new.instancemethod(counter, None, cls)
+ setattr(cls, method_name, replacement_method)
+
+ def EnableMemoization():
+ global use_memoizer
+ use_memoizer = 1
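The metaclass logic above is hard to follow from the diff alone. Below is a rough modern-Python sketch of the counting-memoizer pattern it implements: a class declares a `memoizer_counters` list, and the metaclass swaps each named method for its counter, which tallies hits and misses before delegating to the real method. All names here are illustrative stand-ins, not the actual SCons.Memoize code.

```python
class Counter:
    """Tallies cache hits/misses for one memoized method (illustrative)."""
    def __init__(self, method_name):
        self.method_name = method_name
        self.hit = 0
        self.miss = 0

class CountValue(Counter):
    """Counter for a method whose result is memoized under a single key."""
    def __call__(self, obj, *args, **kw):
        # A hit means the method will find its answer already in _memo.
        if self.method_name in obj._memo:
            self.hit += 1
        else:
            self.miss += 1
        return self.underlying_method(obj, *args, **kw)

class MemoizedMeta(type):
    """Replace each counted method so calls pass through its counter."""
    def __init__(cls, name, bases, cls_dict):
        super().__init__(name, bases, cls_dict)
        for counter in cls_dict.get('memoizer_counters', []):
            counter.underlying_method = cls_dict[counter.method_name]
            # Bind this iteration's counter via a default argument.
            def replacement(self, *a, _c=counter, **kw):
                return _c(self, *a, **kw)
            setattr(cls, counter.method_name, replacement)

class Thing(metaclass=MemoizedMeta):
    memoizer_counters = []

    def __init__(self):
        self._memo = {}

    def value(self):
        try:
            return self._memo['value']
        except KeyError:
            pass
        result = 42            # stand-in for an expensive computation
        self._memo['value'] = result
        return result

    memoizer_counters.append(CountValue('value'))
```

Note that the method itself does the memoizing by hand; the counter only observes, which is why disabling the metaclass (the `has_metaclass` check above) loses nothing but the statistics.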
diff --git a/src/engine/SCons/MemoizeTests.py b/src/engine/SCons/MemoizeTests.py
new file mode 100644
index 0000000..7102f30
--- /dev/null
+++ b/src/engine/SCons/MemoizeTests.py
@@ -0,0 +1,192 @@
+#
+# __COPYRIGHT__
+#
+# Permission is hereby granted, free of charge, to any person obtaining
+# a copy of this software and associated documentation files (the
+# "Software"), to deal in the Software without restriction, including
+# without limitation the rights to use, copy, modify, merge, publish,
+# distribute, sublicense, and/or sell copies of the Software, and to
+# permit persons to whom the Software is furnished to do so, subject to
+# the following conditions:
+#
+# The above copyright notice and this permission notice shall be included
+# in all copies or substantial portions of the Software.
+#
+# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY
+# KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE
+# WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
+# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE
+# LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
+# OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION
+# WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
+#
+
+__revision__ = "__FILE__ __REVISION__ __DATE__ __DEVELOPER__"
+
+import sys
+import unittest
+
+import SCons.Memoize
+
+
+
+class FakeObject:
+
+ __metaclass__ = SCons.Memoize.Memoized_Metaclass
+
+ memoizer_counters = []
+
+ def __init__(self):
+ self._memo = {}
+
+ def _dict_key(self, argument):
+ return argument
+
+ memoizer_counters.append(SCons.Memoize.CountDict('dict', _dict_key))
+
+ def dict(self, argument):
+
+ memo_key = argument
+ try:
+ memo_dict = self._memo['dict']
+ except KeyError:
+ memo_dict = {}
+ self._memo['dict'] = memo_dict
+ else:
+ try:
+ return memo_dict[memo_key]
+ except KeyError:
+ pass
+
+ result = self.compute_dict(argument)
+
+ memo_dict[memo_key] = result
+
+ return result
+
+ memoizer_counters.append(SCons.Memoize.CountValue('value'))
+
+ def value(self):
+
+ try:
+ return self._memo['value']
+ except KeyError:
+ pass
+
+ result = self.compute_value()
+
+ self._memo['value'] = result
+
+ return result
+
+ def get_memoizer_counter(self, name):
+ for mc in self.memoizer_counters:
+ if mc.method_name == name:
+ return mc
+ return None
+
+class Returner:
+ def __init__(self, result):
+ self.result = result
+ self.calls = 0
+ def __call__(self, *args, **kw):
+ self.calls = self.calls + 1
+ return self.result
+
+
+class CountDictTestCase(unittest.TestCase):
+
+ def test___call__(self):
+ """Calling a Memoized dict method
+ """
+ obj = FakeObject()
+
+ called = []
+
+ fd1 = Returner(1)
+ fd2 = Returner(2)
+
+ obj.compute_dict = fd1
+
+ r = obj.dict(11)
+ assert r == 1, r
+
+ obj.compute_dict = fd2
+
+ r = obj.dict(12)
+ assert r == 2, r
+
+ r = obj.dict(11)
+ assert r == 1, r
+
+ obj.compute_dict = fd1
+
+ r = obj.dict(11)
+ assert r == 1, r
+
+ r = obj.dict(12)
+ assert r == 2, r
+
+ assert fd1.calls == 1, fd1.calls
+ assert fd2.calls == 1, fd2.calls
+
+ c = obj.get_memoizer_counter('dict')
+
+ if SCons.Memoize.has_metaclass:
+ assert c.hit == 3, c.hit
+ assert c.miss == 2, c.miss
+ else:
+ assert c.hit == 0, c.hit
+ assert c.miss == 0, c.miss
+
+
+class CountValueTestCase(unittest.TestCase):
+
+ def test___call__(self):
+ """Calling a Memoized value method
+ """
+ obj = FakeObject()
+
+ called = []
+
+ fv1 = Returner(1)
+ fv2 = Returner(2)
+
+ obj.compute_value = fv1
+
+ r = obj.value()
+ assert r == 1, r
+ r = obj.value()
+ assert r == 1, r
+
+ obj.compute_value = fv2
+
+ r = obj.value()
+ assert r == 1, r
+ r = obj.value()
+ assert r == 1, r
+
+ assert fv1.calls == 1, fv1.calls
+ assert fv2.calls == 0, fv2.calls
+
+ c = obj.get_memoizer_counter('value')
+
+ if SCons.Memoize.has_metaclass:
+ assert c.hit == 3, c.hit
+ assert c.miss == 1, c.miss
+ else:
+ assert c.hit == 0, c.hit
+ assert c.miss == 0, c.miss
+
+
+if __name__ == "__main__":
+ suite = unittest.TestSuite()
+ tclasses = [
+ CountDictTestCase,
+ CountValueTestCase,
+ ]
+ for tclass in tclasses:
+ names = unittest.getTestCaseNames(tclass, 'test_')
+ suite.addTests(map(tclass, names))
+ if not unittest.TextTestRunner().run(suite).wasSuccessful():
+ sys.exit(1)
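`FakeObject.dict()` above hand-rolls the keyed memo-dict idiom that the Node/FS changes below repeat for `Rfindalldirs`, `_doLookup`, `rel_path`, and `srcdir_find_file`. A standalone sketch of just that idiom, with illustrative names:

```python
class Node:
    """Minimal holder for a per-instance _memo dict (illustrative)."""
    def __init__(self):
        self._memo = {}
        self.computations = 0

    def lookup(self, key):
        # One sub-dict per memoized method, keyed by the call argument.
        try:
            memo_dict = self._memo['lookup']
        except KeyError:
            memo_dict = {}
            self._memo['lookup'] = memo_dict
        else:
            try:
                return memo_dict[key]
            except KeyError:
                pass
        self.computations += 1   # the "expensive" work runs once per key
        result = key * 2
        memo_dict[key] = result
        return result
```

Because all cached state lives in `self._memo`, clearing it (as `clear_memoized_values()` does in the FS.py changes) invalidates every memoized method at once.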
diff --git a/src/engine/SCons/Node/FS.py b/src/engine/SCons/Node/FS.py
index 382bca3..08b8d7d 100644
--- a/src/engine/SCons/Node/FS.py
+++ b/src/engine/SCons/Node/FS.py
@@ -346,9 +346,20 @@ class DiskChecker:
self.set_ignore()
def do_diskcheck_match(node, predicate, errorfmt):
- path = node.abspath
- if predicate(path):
- raise TypeError, errorfmt % path
+ result = predicate()
+ try:
+ # If calling the predicate() cached a None value from stat(),
+ # remove it so it doesn't interfere with later attempts to
+ # build this Node as we walk the DAG. (This isn't a great way
+ # to do this; we're reaching into an interface that doesn't
+ # really belong to us, but it's all about performance, so
+ # for now we'll just document the dependency...)
+ if node._memo['stat'] is None:
+ del node._memo['stat']
+ except (AttributeError, KeyError):
+ pass
+ if result:
+ raise TypeError, errorfmt % node.abspath
def ignore_diskcheck_match(node, predicate, errorfmt):
pass
@@ -520,6 +531,8 @@ class Base(SCons.Node.Node):
object identity comparisons.
"""
+ memoizer_counters = []
+
def __init__(self, name, directory, fs):
"""Initialize a generic Node.FS.Base object.
@@ -531,6 +544,7 @@ class Base(SCons.Node.Node):
SCons.Node.Node.__init__(self)
self.name = name
+ self.suffix = SCons.Util.splitext(name)[1]
self.fs = fs
assert directory, "A directory must be provided"
@@ -550,20 +564,11 @@ class Base(SCons.Node.Node):
self.cwd = None # will hold the SConscript directory for target nodes
self.duplicate = directory.duplicate
- def clear(self):
- """Completely clear a Node.FS.Base object of all its cached
- state (so that it can be re-evaluated by interfaces that do
- continuous integration builds).
- __cache_reset__
- """
- SCons.Node.Node.clear(self)
-
def get_dir(self):
return self.dir
def get_suffix(self):
- "__cacheable__"
- return SCons.Util.splitext(self.name)[1]
+ return self.suffix
def rfile(self):
return self
@@ -576,9 +581,16 @@ class Base(SCons.Node.Node):
return self._save_str()
return self._get_str()
+ memoizer_counters.append(SCons.Memoize.CountValue('_save_str'))
+
def _save_str(self):
- "__cacheable__"
- return self._get_str()
+ try:
+ return self._memo['_save_str']
+ except KeyError:
+ pass
+ result = self._get_str()
+ self._memo['_save_str'] = result
+ return result
def _get_str(self):
if self.duplicate or self.is_derived():
@@ -587,17 +599,20 @@ class Base(SCons.Node.Node):
rstr = __str__
+ memoizer_counters.append(SCons.Memoize.CountValue('stat'))
+
def stat(self):
- "__cacheable__"
- try: return self.fs.stat(self.abspath)
- except os.error: return None
+ try: return self._memo['stat']
+ except KeyError: pass
+ try: result = self.fs.stat(self.abspath)
+ except os.error: result = None
+ self._memo['stat'] = result
+ return result
def exists(self):
- "__cacheable__"
return not self.stat() is None
def rexists(self):
- "__cacheable__"
return self.rfile().exists()
def getmtime(self):
@@ -640,7 +655,7 @@ class Base(SCons.Node.Node):
"""If this node is in a build path, return the node
corresponding to its source file. Otherwise, return
ourself.
- __cacheable__"""
+ """
dir=self.dir
name=self.name
while dir:
@@ -707,9 +722,48 @@ class Base(SCons.Node.Node):
def target_from_source(self, prefix, suffix, splitext=SCons.Util.splitext):
return self.dir.Entry(prefix + splitext(self.name)[0] + suffix)
+ def _Rfindalldirs_key(self, pathlist):
+ return pathlist
+
+ memoizer_counters.append(SCons.Memoize.CountDict('Rfindalldirs', _Rfindalldirs_key))
+
+ def Rfindalldirs(self, pathlist):
+ """
+ Return all of the directories for a given path list, including
+ corresponding "backing" directories in any repositories.
+
+ The Node lookups are relative to this Node (typically a
+ directory), so memoizing the result saves cycles from looking
+ up the same path for each target in a given directory.
+ """
+ try:
+ memo_dict = self._memo['Rfindalldirs']
+ except KeyError:
+ memo_dict = {}
+ self._memo['Rfindalldirs'] = memo_dict
+ else:
+ try:
+ return memo_dict[pathlist]
+ except KeyError:
+ pass
+
+ create_dir_relative_to_self = self.Dir
+ result = []
+ for path in pathlist:
+ if isinstance(path, SCons.Node.Node):
+ result.append(path)
+ else:
+ dir = create_dir_relative_to_self(path)
+ result.extend(dir.get_all_rdirs())
+
+ memo_dict[pathlist] = result
+
+ return result
+
def RDirs(self, pathlist):
"""Search for a list of directories in the Repository list."""
- return self.fs.Rfindalldirs(pathlist, self.cwd)
+ cwd = self.cwd or self.fs._cwd
+ return cwd.Rfindalldirs(pathlist)
class Entry(Base):
"""This is the class for generic Node.FS entries--that is, things
@@ -723,13 +777,35 @@ class Entry(Base):
pass
def disambiguate(self):
- if self.isdir() or self.srcnode().isdir():
+ """Turn this generic Entry into a Dir or a File, depending
+ on what is actually on disk."""
+ if self.isdir():
self.__class__ = Dir
self._morph()
- else:
+ elif self.isfile():
self.__class__ = File
self._morph()
self.clear()
+ else:
+ # There was nothing on-disk at this location, so look in
+ # the src directory.
+ #
+ # We can't just use self.srcnode() straight away because
+ # that would create an actual Node for this file in the src
+ # directory, and there might not be one. Instead, use the
+ entry_exists_on_disk() method to see if there's something on-disk
+ # with that name, in which case we can go ahead and call
+ # self.srcnode() to create the right type of entry.
+ srcdir = self.dir.srcnode()
+ if srcdir != self.dir and \
+ srcdir.entry_exists_on_disk(self.name) and \
+ self.srcnode().isdir():
+ self.__class__ = Dir
+ self._morph()
+ else:
+ self.__class__ = File
+ self._morph()
+ self.clear()
return self
def rfile(self):
@@ -759,7 +835,8 @@ class Entry(Base):
return self.get_contents()
if self.islink():
return '' # avoid errors for dangling symlinks
- raise AttributeError
+ msg = "No such file or directory: '%s'" % self.abspath
+ raise SCons.Errors.UserError, msg
def must_be_a_Dir(self):
"""Called to make sure a Node is a Dir. Since we're an
@@ -867,14 +944,6 @@ class LocalFS:
return ''
-if SCons.Memoize.use_old_memoization():
- _FSBase = LocalFS
- class LocalFS(SCons.Memoize.Memoizer, _FSBase):
- def __init__(self, *args, **kw):
- apply(_FSBase.__init__, (self,)+args, kw)
- SCons.Memoize.Memoizer.__init__(self)
-
-
#class RemoteFS:
# # Skeleton for the obvious methods we might need from the
# # abstraction layer for a remote filesystem.
@@ -886,6 +955,8 @@ if SCons.Memoize.use_old_memoization():
class FS(LocalFS):
+ memoizer_counters = []
+
def __init__(self, path = None):
"""Initialize the Node.FS subsystem.
@@ -897,6 +968,8 @@ class FS(LocalFS):
"""
if __debug__: logInstanceCreation(self, 'Node.FS')
+ self._memo = {}
+
self.Root = {}
self.SConstruct_dir = None
self.CachePath = None
@@ -915,10 +988,6 @@ class FS(LocalFS):
self.Top.path = '.'
self.Top.tpath = '.'
self._cwd = self.Top
-
- def clear_cache(self):
- "__cache_reset__"
- pass
def set_SConstruct_dir(self, dir):
self.SConstruct_dir = dir
@@ -942,6 +1011,11 @@ class FS(LocalFS):
raise TypeError, "Tried to lookup %s '%s' as a %s." % \
(node.__class__.__name__, node.path, klass.__name__)
+ def _doLookup_key(self, fsclass, name, directory = None, create = 1):
+ return (fsclass, name, directory)
+
+ memoizer_counters.append(SCons.Memoize.CountDict('_doLookup', _doLookup_key))
+
def _doLookup(self, fsclass, name, directory = None, create = 1):
"""This method differs from the File and Dir factory methods in
one important way: the meaning of the directory parameter.
@@ -949,7 +1023,18 @@ class FS(LocalFS):
name is expected to be an absolute path. If you try to look up a
relative path with directory=None, then an AssertionError will be
raised.
- __cacheable__"""
+ """
+ memo_key = (fsclass, name, directory)
+ try:
+ memo_dict = self._memo['_doLookup']
+ except KeyError:
+ memo_dict = {}
+ self._memo['_doLookup'] = memo_dict
+ else:
+ try:
+ return memo_dict[memo_key]
+ except KeyError:
+ pass
if not name:
# This is a stupid hack to compensate for the fact that the
@@ -990,6 +1075,7 @@ class FS(LocalFS):
self.Root[''] = directory
if not path_orig:
+ memo_dict[memo_key] = directory
return directory
last_orig = path_orig.pop() # strip last element
@@ -1040,6 +1126,9 @@ class FS(LocalFS):
directory.add_wkid(result)
else:
result = self.__checkClass(e, fsclass)
+
+ memo_dict[memo_key] = result
+
return result
def _transformPath(self, name, directory):
@@ -1067,7 +1156,8 @@ class FS(LocalFS):
# Correct such that '#/foo' is equivalent
# to '#foo'.
name = name[1:]
- name = os.path.join('.', os.path.normpath(name))
+ name = os.path.normpath(os.path.join('.', name))
+ return (name, directory)
elif not directory:
directory = self._cwd
return (os.path.normpath(name), directory)
@@ -1116,7 +1206,6 @@ class FS(LocalFS):
This method will raise TypeError if a directory is found at the
specified path.
"""
-
return self.Entry(name, directory, create, File)
def Dir(self, name, directory = None, create = 1):
@@ -1129,7 +1218,6 @@ class FS(LocalFS):
This method will raise TypeError if a normal file is found at the
specified path.
"""
-
return self.Entry(name, directory, create, Dir)
def BuildDir(self, build_dir, src_dir, duplicate=1):
@@ -1155,22 +1243,6 @@ class FS(LocalFS):
d = self.Dir(d)
self.Top.addRepository(d)
- def Rfindalldirs(self, pathlist, cwd):
- """__cacheable__"""
- if SCons.Util.is_String(pathlist):
- pathlist = string.split(pathlist, os.pathsep)
- if not SCons.Util.is_List(pathlist):
- pathlist = [pathlist]
- result = []
- for path in filter(None, pathlist):
- if isinstance(path, SCons.Node.Node):
- result.append(path)
- continue
- path, dir = self._transformPath(path, cwd)
- dir = dir.Dir(path)
- result.extend(dir.get_all_rdirs())
- return result
-
def CacheDebugWrite(self, fmt, target, cachefile):
self.CacheDebugFP.write(fmt % (target, os.path.split(cachefile)[1]))
@@ -1194,7 +1266,10 @@ class FS(LocalFS):
Climb the directory tree, and look up path names
relative to any linked build directories we find.
- __cacheable__
+
+ Even though this loops and walks up the tree, we don't memoize
+ the return value because this is really only used to process
+ the command-line targets.
"""
targets = []
message = None
@@ -1223,6 +1298,8 @@ class Dir(Base):
"""A class for directories in a file system.
"""
+ memoizer_counters = []
+
NodeInfo = DirNodeInfo
BuildInfo = DirBuildInfo
@@ -1238,7 +1315,7 @@ class Dir(Base):
Set up this directory's entries and hook it into the file
system tree. Specify that directories (this Node) don't use
signatures for calculating whether they're current.
- __cache_reset__"""
+ """
self.repositories = []
self.srcdir = None
@@ -1258,8 +1335,8 @@ class Dir(Base):
self.get_executor().set_action_list(self.builder.action)
def diskcheck_match(self):
- diskcheck_match(self, self.fs.isfile,
- "File %s found where directory expected.")
+ diskcheck_match(self, self.isfile,
+ "File %s found where directory expected.")
def __clearRepositoryCache(self, duplicate=None):
"""Called when we change the repository(ies) for a directory.
@@ -1305,13 +1382,19 @@ class Dir(Base):
def getRepositories(self):
"""Returns a list of repositories for this directory.
- __cacheable__"""
+ """
if self.srcdir and not self.duplicate:
return self.srcdir.get_all_rdirs() + self.repositories
return self.repositories
+ memoizer_counters.append(SCons.Memoize.CountValue('get_all_rdirs'))
+
def get_all_rdirs(self):
- """__cacheable__"""
+ try:
+ return self._memo['get_all_rdirs']
+ except KeyError:
+ pass
+
result = [self]
fname = '.'
dir = self
@@ -1320,6 +1403,9 @@ class Dir(Base):
result.append(rep.Dir(fname))
fname = dir.name + os.sep + fname
dir = dir.up()
+
+ self._memo['get_all_rdirs'] = result
+
return result
def addRepository(self, dir):
@@ -1331,29 +1417,54 @@ class Dir(Base):
def up(self):
return self.entries['..']
+ def _rel_path_key(self, other):
+ return str(other)
+
+ memoizer_counters.append(SCons.Memoize.CountDict('rel_path', _rel_path_key))
+
def rel_path(self, other):
"""Return a path to "other" relative to this directory.
- __cacheable__"""
- if isinstance(other, Dir):
- name = []
+ """
+ try:
+ memo_dict = self._memo['rel_path']
+ except KeyError:
+ memo_dict = {}
+ self._memo['rel_path'] = memo_dict
else:
try:
- name = [other.name]
- other = other.dir
- except AttributeError:
- return str(other)
+ return memo_dict[other]
+ except KeyError:
+ pass
+
if self is other:
- return name and name[0] or '.'
- i = 0
- for x, y in map(None, self.path_elements, other.path_elements):
- if not x is y:
- break
- i = i + 1
- path_elems = ['..']*(len(self.path_elements)-i) \
- + map(lambda n: n.name, other.path_elements[i:]) \
- + name
+
+ result = '.'
+
+ elif not other in self.path_elements:
+
+ try:
+ other_dir = other.dir
+ except AttributeError:
+ result = str(other)
+ else:
+ dir_rel_path = self.rel_path(other_dir)
+ if dir_rel_path == '.':
+ result = other.name
+ else:
+ result = dir_rel_path + os.sep + other.name
+
+ else:
+
+ i = self.path_elements.index(other) + 1
+
+ path_elems = ['..'] * (len(self.path_elements) - i) \
+ + map(lambda n: n.name, other.path_elements[i:])
- return string.join(path_elems, os.sep)
+ result = string.join(path_elems, os.sep)
+
+ memo_dict[other] = result
+
+ return result
def get_env_scanner(self, env, kw={}):
return SCons.Defaults.DirEntryScanner
@@ -1362,10 +1473,13 @@ class Dir(Base):
return SCons.Defaults.DirEntryScanner
def get_found_includes(self, env, scanner, path):
- """Return the included implicit dependencies in this file.
- Cache results so we only scan the file once per path
- regardless of how many times this information is requested.
- __cacheable__"""
+ """Return this directory's implicit dependencies.
+
+ We don't bother caching the results because the scan typically
+ shouldn't be requested more than once (as opposed to scanning
+ .h file contents, which can be requested as many times as the
+ file is #included by other files).
+ """
if not scanner:
return []
# Clear cached info for this Dir. If we already visited this
@@ -1451,7 +1565,6 @@ class Dir(Base):
return 1
def rdir(self):
- "__cacheable__"
if not self.exists():
norm_name = _my_normcase(self.name)
for dir in self.dir.get_all_rdirs():
@@ -1500,7 +1613,6 @@ class Dir(Base):
return self
def entry_exists_on_disk(self, name):
- """__cacheable__"""
try:
d = self.on_disk_entries
except AttributeError:
@@ -1515,8 +1627,14 @@ class Dir(Base):
self.on_disk_entries = d
return d.has_key(_my_normcase(name))
+ memoizer_counters.append(SCons.Memoize.CountValue('srcdir_list'))
+
def srcdir_list(self):
- """__cacheable__"""
+ try:
+ return self._memo['srcdir_list']
+ except KeyError:
+ pass
+
result = []
dirname = '.'
@@ -1533,6 +1651,8 @@ class Dir(Base):
dirname = dir.name + os.sep + dirname
dir = dir.up()
+ self._memo['srcdir_list'] = result
+
return result
def srcdir_duplicate(self, name):
@@ -1547,8 +1667,23 @@ class Dir(Base):
return srcnode
return None
+ def _srcdir_find_file_key(self, filename):
+ return filename
+
+ memoizer_counters.append(SCons.Memoize.CountDict('srcdir_find_file', _srcdir_find_file_key))
+
def srcdir_find_file(self, filename):
- """__cacheable__"""
+ try:
+ memo_dict = self._memo['srcdir_find_file']
+ except KeyError:
+ memo_dict = {}
+ self._memo['srcdir_find_file'] = memo_dict
+ else:
+ try:
+ return memo_dict[filename]
+ except KeyError:
+ pass
+
def func(node):
if (isinstance(node, File) or isinstance(node, Entry)) and \
(node.is_derived() or node.is_pseudo_derived() or node.exists()):
@@ -1562,7 +1697,9 @@ class Dir(Base):
except KeyError: node = rdir.file_on_disk(filename)
else: node = func(node)
if node:
- return node, self
+ result = (node, self)
+ memo_dict[filename] = result
+ return result
for srcdir in self.srcdir_list():
for rdir in srcdir.get_all_rdirs():
@@ -1570,9 +1707,13 @@ class Dir(Base):
except KeyError: node = rdir.file_on_disk(filename)
else: node = func(node)
if node:
- return File(filename, self, self.fs), srcdir
+ result = (File(filename, self, self.fs), srcdir)
+ memo_dict[filename] = result
+ return result
- return None, None
+ result = (None, None)
+ memo_dict[filename] = result
+ return result
def dir_on_disk(self, name):
if self.entry_exists_on_disk(name):
@@ -1720,12 +1861,14 @@ class File(Base):
"""A class for files in a file system.
"""
+ memoizer_counters = []
+
NodeInfo = FileNodeInfo
BuildInfo = FileBuildInfo
def diskcheck_match(self):
- diskcheck_match(self, self.fs.isdir,
- "Directory %s found where file expected.")
+ diskcheck_match(self, self.isdir,
+ "Directory %s found where file expected.")
def __init__(self, name, directory, fs):
if __debug__: logInstanceCreation(self, 'Node.FS.File')
@@ -1760,7 +1903,7 @@ class File(Base):
# 'RDirs' : self.RDirs}
def _morph(self):
- """Turn a file system node into a File object. __cache_reset__"""
+ """Turn a file system node into a File object."""
self.scanner_paths = {}
if not hasattr(self, '_local'):
self._local = 0
@@ -1789,7 +1932,6 @@ class File(Base):
self.dir.sconsign().set_entry(self.name, entry)
def get_stored_info(self):
- "__cacheable__"
try:
stored = self.dir.sconsign().get_entry(self.name)
except (KeyError, OSError):
@@ -1816,14 +1958,37 @@ class File(Base):
def rel_path(self, other):
return self.dir.rel_path(other)
+ def _get_found_includes_key(self, env, scanner, path):
+ return (id(env), id(scanner), path)
+
+ memoizer_counters.append(SCons.Memoize.CountDict('get_found_includes', _get_found_includes_key))
+
def get_found_includes(self, env, scanner, path):
"""Return the included implicit dependencies in this file.
Cache results so we only scan the file once per path
regardless of how many times this information is requested.
- __cacheable__"""
- if not scanner:
- return []
- return scanner(self, env, path)
+ """
+ memo_key = (id(env), id(scanner), path)
+ try:
+ memo_dict = self._memo['get_found_includes']
+ except KeyError:
+ memo_dict = {}
+ self._memo['get_found_includes'] = memo_dict
+ else:
+ try:
+ return memo_dict[memo_key]
+ except KeyError:
+ pass
+
+ if scanner:
+ result = scanner(self, env, path)
+ result = map(lambda N: N.disambiguate(), result)
+ else:
+ result = []
+
+ memo_dict[memo_key] = result
+
+ return result
def _createDir(self):
# ensure that the directories for this node are
@@ -1875,13 +2040,17 @@ class File(Base):
def built(self):
"""Called just after this node is successfully built.
- __cache_reset__"""
+ """
# Push this file out to cache before the superclass Node.built()
# method has a chance to clear the build signature, which it
# will do if this file has a source scanner.
+ #
+ # We have to clear the memoized values *before* we push it to
+ # cache so that the memoization of the self.exists() return
+ # value doesn't interfere.
+ self.clear_memoized_values()
if self.fs.CachePath and self.exists():
CachePush(self, [], None)
- self.fs.clear_cache()
SCons.Node.Node.built(self)
def visited(self):
@@ -1926,11 +2095,10 @@ class File(Base):
return self.fs.build_dir_target_climb(self, self.dir, [self.name])
def is_pseudo_derived(self):
- "__cacheable__"
return self.has_src_builder()
def _rmv_existing(self):
- '__cache_reset__'
+ self.clear_memoized_values()
Unlink(self, [], None)
def prepare(self):
@@ -1973,29 +2141,36 @@ class File(Base):
# _rexists attributes so they can be reevaluated.
self.clear()
+ memoizer_counters.append(SCons.Memoize.CountValue('exists'))
+
def exists(self):
- "__cacheable__"
+ try:
+ return self._memo['exists']
+ except KeyError:
+ pass
# Duplicate from source path if we are set up to do this.
if self.duplicate and not self.is_derived() and not self.linked:
src = self.srcnode()
- if src is self:
- return Base.exists(self)
- # At this point, src is meant to be copied in a build directory.
- src = src.rfile()
- if src.abspath != self.abspath:
- if src.exists():
- self.do_duplicate(src)
- # Can't return 1 here because the duplication might
- # not actually occur if the -n option is being used.
- else:
- # The source file does not exist. Make sure no old
- # copy remains in the build directory.
- if Base.exists(self) or self.islink():
- self.fs.unlink(self.path)
- # Return None explicitly because the Base.exists() call
- # above will have cached its value if the file existed.
- return None
- return Base.exists(self)
+ if not src is self:
+ # At this point, src is meant to be copied in a build directory.
+ src = src.rfile()
+ if src.abspath != self.abspath:
+ if src.exists():
+ self.do_duplicate(src)
+ # Can't return 1 here because the duplication might
+ # not actually occur if the -n option is being used.
+ else:
+ # The source file does not exist. Make sure no old
+ # copy remains in the build directory.
+ if Base.exists(self) or self.islink():
+ self.fs.unlink(self.path)
+ # Return None explicitly because the Base.exists() call
+ # above will have cached its value if the file existed.
+ self._memo['exists'] = None
+ return None
+ result = Base.exists(self)
+ self._memo['exists'] = result
+ return result
#
# SIGNATURE SUBSYSTEM
@@ -2063,7 +2238,6 @@ class File(Base):
self.binfo = self.gen_binfo(calc)
return self._cur2()
def _cur2(self):
- "__cacheable__"
if self.always_build:
return None
if not self.exists():
@@ -2082,8 +2256,14 @@ class File(Base):
else:
return self.is_up_to_date()
+ memoizer_counters.append(SCons.Memoize.CountValue('rfile'))
+
def rfile(self):
- "__cacheable__"
+ try:
+ return self._memo['rfile']
+ except KeyError:
+ pass
+ result = self
if not self.exists():
norm_name = _my_normcase(self.name)
for dir in self.dir.get_all_rdirs():
@@ -2092,8 +2272,10 @@ class File(Base):
if node and node.exists() and \
(isinstance(node, File) or isinstance(node, Entry) \
or not node.is_derived()):
- return node
- return self
+ result = node
+ break
+ self._memo['rfile'] = result
+ return result
def rstr(self):
return str(self.rfile())
@@ -2121,72 +2303,82 @@ class File(Base):
default_fs = None
-def find_file(filename, paths, verbose=None):
+class FileFinder:
+ """
"""
- find_file(str, [Dir()]) -> [nodes]
+ if SCons.Memoize.use_memoizer:
+ __metaclass__ = SCons.Memoize.Memoized_Metaclass
- filename - a filename to find
- paths - a list of directory path *nodes* to search in. Can be
- represented as a list, a tuple, or a callable that is
- called with no arguments and returns the list or tuple.
+ memoizer_counters = []
- returns - the node created from the found file.
+ def __init__(self):
+ self._memo = {}
- Find a node corresponding to either a derived file or a file
- that exists already.
+ def _find_file_key(self, filename, paths, verbose=None):
+ return (filename, paths)
+
+ memoizer_counters.append(SCons.Memoize.CountDict('find_file', _find_file_key))
- Only the first file found is returned, and none is returned
- if no file is found.
- __cacheable__
- """
- if verbose:
- if not SCons.Util.is_String(verbose):
- verbose = "find_file"
- if not callable(verbose):
- verbose = ' %s: ' % verbose
- verbose = lambda s, v=verbose: sys.stdout.write(v + s)
- else:
- verbose = lambda x: x
+ def find_file(self, filename, paths, verbose=None):
+ """
+ find_file(str, [Dir()]) -> [nodes]
+
+ filename - a filename to find
+ paths - a list of directory path *nodes* to search in. Can be
+ represented as a list, a tuple, or a callable that is
+ called with no arguments and returns the list or tuple.
- if callable(paths):
- paths = paths()
+ returns - the node created from the found file.
- # Give Entries a chance to morph into Dirs.
- paths = map(lambda p: p.must_be_a_Dir(), paths)
+ Find a node corresponding to either a derived file or a file
+ that exists already.
- filedir, filename = os.path.split(filename)
- if filedir:
- def filedir_lookup(p, fd=filedir):
+ Only the first file found is returned, and none is returned
+ if no file is found.
+ """
+ memo_key = self._find_file_key(filename, paths)
+ try:
+ memo_dict = self._memo['find_file']
+ except KeyError:
+ memo_dict = {}
+ self._memo['find_file'] = memo_dict
+ else:
try:
- return p.Dir(fd)
- except TypeError:
- # We tried to look up a Dir, but it seems there's already
- # a File (or something else) there. No big.
- return None
- paths = filter(None, map(filedir_lookup, paths))
-
- for dir in paths:
- verbose("looking for '%s' in '%s' ...\n" % (filename, dir))
- node, d = dir.srcdir_find_file(filename)
- if node:
- verbose("... FOUND '%s' in '%s'\n" % (filename, d))
- return node
- return None
+ return memo_dict[memo_key]
+ except KeyError:
+ pass
-def find_files(filenames, paths):
- """
- find_files([str], [Dir()]) -> [nodes]
+ if verbose:
+ if not SCons.Util.is_String(verbose):
+ verbose = "find_file"
+ if not callable(verbose):
+ verbose = ' %s: ' % verbose
+ verbose = lambda s, v=verbose: sys.stdout.write(v + s)
+ else:
+ verbose = lambda x: x
- filenames - a list of filenames to find
- paths - a list of directory path *nodes* to search in
+ filedir, filename = os.path.split(filename)
+ if filedir:
+ def filedir_lookup(p, fd=filedir):
+ try:
+ return p.Dir(fd)
+ except TypeError:
+ # We tried to look up a Dir, but it seems there's
+ # already a File (or something else) there. No big.
+ return None
+ paths = filter(None, map(filedir_lookup, paths))
- returns - the nodes created from the found files.
+ result = None
+ for dir in paths:
+ verbose("looking for '%s' in '%s' ...\n" % (filename, dir))
+ node, d = dir.srcdir_find_file(filename)
+ if node:
+ verbose("... FOUND '%s' in '%s'\n" % (filename, d))
+ result = node
+ break
- Finds nodes corresponding to either derived files or files
- that exist already.
+ memo_dict[memo_key] = result
- Only the first file found is returned for each filename,
- and any files that aren't found are ignored.
- """
- nodes = map(lambda x, paths=paths: find_file(x, paths), filenames)
- return filter(None, nodes)
+ return result
+
+find_file = FileFinder().find_file
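The hunk above converts `find_file()` from the old `__cacheable__` annotation to explicit memoization in a per-instance `_memo` dictionary, keyed by `_find_file_key()`. A minimal sketch of that pattern, with a hypothetical `Finder` class and a fake filesystem standing in for the real directory search:

```python
class Finder:
    """Sketch (not SCons itself) of the explicit _memo caching pattern."""
    def __init__(self, files):
        self.files = files       # fake filesystem: a set of 'dir/name' paths
        self._memo = {}          # one dict holds every method's cached results
        self.misses = 0          # uncached lookups, to show the caching works

    def _find_file_key(self, filename, paths):
        # memo keys must be hashable, hence the tuple() of paths
        return (filename, tuple(paths))

    def find_file(self, filename, paths):
        memo_key = self._find_file_key(filename, paths)
        try:
            memo_dict = self._memo['find_file']
        except KeyError:
            memo_dict = {}
            self._memo['find_file'] = memo_dict
        else:
            try:
                return memo_dict[memo_key]
            except KeyError:
                pass
        self.misses += 1
        result = None
        for directory in paths:          # stand-in for srcdir_find_file()
            candidate = '%s/%s' % (directory, filename)
            if candidate in self.files:
                result = candidate
                break
        memo_dict[memo_key] = result
        return result
```

Because every result lives in the one `_memo` dict, invalidating the cache is a single reassignment (`self._memo = {}`), which is what `clear_memoized_values()` does in the Node changes below.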
diff --git a/src/engine/SCons/Node/FSTests.py b/src/engine/SCons/Node/FSTests.py
index 1b38ffe..434709c 100644
--- a/src/engine/SCons/Node/FSTests.py
+++ b/src/engine/SCons/Node/FSTests.py
@@ -740,14 +740,22 @@ class FileNodeInfoTestCase(_tempdirTestCase):
test.write('fff', "fff\n")
- assert ni.timestamp != os.path.getmtime('fff'), ni.timestamp
- assert ni.size != os.path.getsize('fff'), ni.size
+ st = os.stat('fff')
+
+ mtime = st[stat.ST_MTIME]
+ assert ni.timestamp != mtime, (ni.timestamp, mtime)
+ size = st[stat.ST_SIZE]
+ assert ni.size != size, (ni.size, size)
fff.clear()
ni.update(fff)
- assert ni.timestamp == os.path.getmtime('fff'), ni.timestamp
- assert ni.size == os.path.getsize('fff'), ni.size
+ st = os.stat('fff')
+
+ mtime = st[stat.ST_MTIME]
+ assert ni.timestamp == mtime, (ni.timestamp, mtime)
+ size = st[stat.ST_SIZE]
+ assert ni.size == size, (ni.size, size)
class FileBuildInfoTestCase(_tempdirTestCase):
def test___init__(self):
@@ -1219,9 +1227,9 @@ class FSTestCase(_tempdirTestCase):
exc_caught = 0
try:
e.get_contents()
- except AttributeError:
+ except SCons.Errors.UserError:
exc_caught = 1
- assert exc_caught, "Should have caught an AttributError"
+        assert exc_caught, "Should have caught a SCons.Errors.UserError"
test.write("file", "file\n")
try:
@@ -1266,18 +1274,18 @@ class FSTestCase(_tempdirTestCase):
assert t == 0, "expected 0, got %s" % str(t)
test.subdir('tdir2')
- d = fs.Dir('tdir2')
f1 = test.workpath('tdir2', 'file1')
f2 = test.workpath('tdir2', 'file2')
test.write(f1, 'file1\n')
test.write(f2, 'file2\n')
- fs.File(f1)
- fs.File(f2)
current_time = float(int(time.time() / 2) * 2)
t1 = current_time - 4.0
t2 = current_time - 2.0
os.utime(f1, (t1 - 2.0, t1))
os.utime(f2, (t2 - 2.0, t2))
+ d = fs.Dir('tdir2')
+ fs.File(f1)
+ fs.File(f2)
t = d.get_timestamp()
assert t == t2, "expected %f, got %f" % (t2, t)
@@ -1861,9 +1869,9 @@ class EntryTestCase(_tempdirTestCase):
exc_caught = None
try:
e3n.get_contents()
- except AttributeError:
+ except SCons.Errors.UserError:
exc_caught = 1
- assert exc_caught, "did not catch expected AttributeError"
+ assert exc_caught, "did not catch expected SCons.Errors.UserError"
test.subdir('e4d')
test.write('e4f', "e4f\n")
@@ -2133,25 +2141,25 @@ class RepositoryTestCase(_tempdirTestCase):
rep2_sub_d1 = fs.Dir(test.workpath('rep2', 'sub', 'd1'))
rep3_sub_d1 = fs.Dir(test.workpath('rep3', 'sub', 'd1'))
- r = fs.Rfindalldirs(d1, fs.Top)
+ r = fs.Top.Rfindalldirs((d1,))
assert r == [d1], map(str, r)
- r = fs.Rfindalldirs([d1, d2], fs.Top)
+ r = fs.Top.Rfindalldirs((d1, d2))
assert r == [d1, d2], map(str, r)
- r = fs.Rfindalldirs('d1', fs.Top)
+ r = fs.Top.Rfindalldirs(('d1',))
assert r == [d1, rep1_d1, rep2_d1, rep3_d1], map(str, r)
- r = fs.Rfindalldirs('#d1', fs.Top)
+ r = fs.Top.Rfindalldirs(('#d1',))
assert r == [d1, rep1_d1, rep2_d1, rep3_d1], map(str, r)
- r = fs.Rfindalldirs('d1', sub)
+ r = sub.Rfindalldirs(('d1',))
assert r == [sub_d1, rep1_sub_d1, rep2_sub_d1, rep3_sub_d1], map(str, r)
- r = fs.Rfindalldirs('#d1', sub)
+ r = sub.Rfindalldirs(('#d1',))
assert r == [d1, rep1_d1, rep2_d1, rep3_d1], map(str, r)
- r = fs.Rfindalldirs(['d1', d2], fs.Top)
+ r = fs.Top.Rfindalldirs(('d1', d2))
assert r == [d1, rep1_d1, rep2_d1, rep3_d1, d2], map(str, r)
def test_rexists(self):
@@ -2223,6 +2231,7 @@ class find_fileTestCase(unittest.TestCase):
"""Testing find_file function"""
test = TestCmd(workdir = '')
test.write('./foo', 'Some file\n')
+ test.write('./foo2', 'Another file\n')
test.subdir('same')
test.subdir('bar')
test.write(['bar', 'on_disk'], 'Another file\n')
@@ -2237,7 +2246,7 @@ class find_fileTestCase(unittest.TestCase):
node_pseudo = fs.File(test.workpath('pseudo'))
node_pseudo.set_src_builder(1) # Any non-zero value.
- paths = map(fs.Dir, ['.', 'same', './bar'])
+ paths = tuple(map(fs.Dir, ['.', 'same', './bar']))
nodes = [SCons.Node.FS.find_file('foo', paths)]
nodes.append(SCons.Node.FS.find_file('baz', paths))
nodes.append(SCons.Node.FS.find_file('pseudo', paths))
@@ -2261,19 +2270,18 @@ class find_fileTestCase(unittest.TestCase):
try:
sio = StringIO.StringIO()
sys.stdout = sio
- SCons.Node.FS.find_file('foo', paths, verbose="xyz")
- expect = " xyz: looking for 'foo' in '.' ...\n" + \
- " xyz: ... FOUND 'foo' in '.'\n"
+ SCons.Node.FS.find_file('foo2', paths, verbose="xyz")
+ expect = " xyz: looking for 'foo2' in '.' ...\n" + \
+ " xyz: ... FOUND 'foo2' in '.'\n"
c = sio.getvalue()
assert c == expect, c
sio = StringIO.StringIO()
sys.stdout = sio
- SCons.Node.FS.find_file('baz', paths, verbose=1)
- expect = " find_file: looking for 'baz' in '.' ...\n" + \
- " find_file: looking for 'baz' in 'same' ...\n" + \
- " find_file: looking for 'baz' in 'bar' ...\n" + \
- " find_file: ... FOUND 'baz' in 'bar'\n"
+ SCons.Node.FS.find_file('baz2', paths, verbose=1)
+ expect = " find_file: looking for 'baz2' in '.' ...\n" + \
+ " find_file: looking for 'baz2' in 'same' ...\n" + \
+ " find_file: looking for 'baz2' in 'bar' ...\n"
c = sio.getvalue()
assert c == expect, c
@@ -2717,12 +2725,29 @@ class disambiguateTestCase(unittest.TestCase):
f = efile.disambiguate()
assert f.__class__ is fff.__class__, f.__class__
+ test.subdir('build')
+ test.subdir(['build', 'bdir'])
+ test.write(['build', 'bfile'], "build/bfile\n")
+
test.subdir('src')
+ test.write(['src', 'bdir'], "src/bdir\n")
+ test.subdir(['src', 'bfile'])
+
test.subdir(['src', 'edir'])
test.write(['src', 'efile'], "src/efile\n")
fs.BuildDir(test.workpath('build'), test.workpath('src'))
+ build_bdir = fs.Entry(test.workpath('build/bdir'))
+ d = build_bdir.disambiguate()
+ assert d is build_bdir, d
+ assert d.__class__ is ddd.__class__, d.__class__
+
+ build_bfile = fs.Entry(test.workpath('build/bfile'))
+ f = build_bfile.disambiguate()
+ assert f is build_bfile, f
+ assert f.__class__ is fff.__class__, f.__class__
+
build_edir = fs.Entry(test.workpath('build/edir'))
d = build_edir.disambiguate()
assert d.__class__ is ddd.__class__, d.__class__
@@ -2731,6 +2756,10 @@ class disambiguateTestCase(unittest.TestCase):
f = build_efile.disambiguate()
assert f.__class__ is fff.__class__, f.__class__
+ build_nonexistant = fs.Entry(test.workpath('build/nonexistant'))
+ f = build_nonexistant.disambiguate()
+ assert f.__class__ is fff.__class__, f.__class__
+
class postprocessTestCase(unittest.TestCase):
def runTest(self):
"""Test calling the postprocess() method."""
diff --git a/src/engine/SCons/Node/__init__.py b/src/engine/SCons/Node/__init__.py
index 42be5b1..e5d064e 100644
--- a/src/engine/SCons/Node/__init__.py
+++ b/src/engine/SCons/Node/__init__.py
@@ -166,6 +166,8 @@ class Node:
if SCons.Memoize.use_memoizer:
__metaclass__ = SCons.Memoize.Memoized_Metaclass
+ memoizer_counters = []
+
class Attrs:
pass
@@ -210,6 +212,8 @@ class Node:
self.post_actions = []
self.linked = 0 # is this node linked to the build directory?
+ self.clear_memoized_values()
+
# Let the interface in which the build engine is embedded
# annotate this Node with its own info (like a description of
# what line in what file created the node, for example).
@@ -223,7 +227,7 @@ class Node:
def get_build_env(self):
"""Fetch the appropriate Environment to build this node.
- __cacheable__"""
+ """
return self.get_executor().get_build_env()
def get_build_scanner_path(self, scanner):
@@ -349,8 +353,8 @@ class Node:
"""Completely clear a Node of all its cached state (so that it
can be re-evaluated by interfaces that do continuous integration
builds).
- __reset_cache__
"""
+ self.clear_memoized_values()
self.executor_cleanup()
self.del_binfo()
try:
@@ -361,13 +365,15 @@ class Node:
self.found_includes = {}
self.implicit = None
+ def clear_memoized_values(self):
+ self._memo = {}
+
def visited(self):
"""Called just after this node has been visited
without requiring a build.."""
pass
def builder_set(self, builder):
- "__cache_reset__"
self.builder = builder
def has_builder(self):
@@ -424,7 +430,6 @@ class Node:
signatures when they are used as source files to other derived files. For
example: source with source builders are not derived in this sense,
and hence should not return true.
- __cacheable__
"""
return self.has_builder() or self.side_effect
@@ -474,7 +479,6 @@ class Node:
d = filter(lambda x, seen=seen: not seen.has_key(x),
n.get_found_includes(env, scanner, path))
if d:
- d = map(lambda N: N.disambiguate(), d)
deps.extend(d)
for n in d:
seen[n] = 1
@@ -609,24 +613,34 @@ class Node:
env = self.env or SCons.Defaults.DefaultEnvironment()
return env.get_calculator()
+ memoizer_counters.append(SCons.Memoize.CountValue('calc_signature'))
+
def calc_signature(self, calc=None):
"""
Select and calculate the appropriate build signature for a node.
- __cacheable__
self - the node
calc - the signature calculation module
returns - the signature
"""
+ try:
+ return self._memo['calc_signature']
+ except KeyError:
+ pass
if self.is_derived():
import SCons.Defaults
env = self.env or SCons.Defaults.DefaultEnvironment()
if env.use_build_signature():
- return self.get_bsig(calc)
+ result = self.get_bsig(calc)
+ else:
+ result = self.get_csig(calc)
elif not self.rexists():
- return None
- return self.get_csig(calc)
+ result = None
+ else:
+ result = self.get_csig(calc)
+ self._memo['calc_signature'] = result
+ return result
def new_ninfo(self):
return self.NodeInfo(self)
@@ -661,7 +675,6 @@ class Node:
node's children's signatures. We expect that they're
already built and updated by someone else, if that's
what's wanted.
- __cacheable__
"""
if calc is None:
@@ -676,7 +689,7 @@ class Node:
def calc_signature(node, calc=calc):
return node.calc_signature(calc)
- sources = executor.process_sources(None, self.ignore)
+ sources = executor.get_unignored_sources(self.ignore)
sourcesigs = executor.process_sources(calc_signature, self.ignore)
depends = self.depends
@@ -767,7 +780,6 @@ class Node:
return self.exists()
def missing(self):
- """__cacheable__"""
return not self.is_derived() and \
not self.is_pseudo_derived() and \
not self.linked and \
@@ -850,7 +862,7 @@ class Node:
self.wkids.append(wkid)
def _children_reset(self):
- "__cache_reset__"
+ self.clear_memoized_values()
# We need to let the Executor clear out any calculated
# bsig info that it's cached so we can re-calculate it.
self.executor_cleanup()
@@ -881,11 +893,17 @@ class Node:
else:
return self.sources + self.depends + self.implicit
+ memoizer_counters.append(SCons.Memoize.CountValue('_children_get'))
+
def _children_get(self):
- "__cacheable__"
+ try:
+ return self._memo['children_get']
+ except KeyError:
+ pass
children = self._all_children_get()
if self.ignore:
children = filter(self.do_not_ignore, children)
+ self._memo['children_get'] = children
return children
def all_children(self, scan=1):
@@ -1101,14 +1119,6 @@ else:
del l
del ul
-if SCons.Memoize.use_old_memoization():
- _Base = Node
- class Node(SCons.Memoize.Memoizer, _Base):
- def __init__(self, *args, **kw):
- apply(_Base.__init__, (self,)+args, kw)
- SCons.Memoize.Memoizer.__init__(self)
-
-
def get_children(node, parent): return node.children()
def ignore_cycle(node, stack): pass
def do_nothing(node, parent): pass
diff --git a/src/engine/SCons/Options/__init__.py b/src/engine/SCons/Options/__init__.py
index 83798b3..5c30be6 100644
--- a/src/engine/SCons/Options/__init__.py
+++ b/src/engine/SCons/Options/__init__.py
@@ -163,7 +163,10 @@ class Options:
if option.converter and values.has_key(option.key):
value = env.subst('${%s}'%option.key)
try:
- env[option.key] = option.converter(value)
+ try:
+ env[option.key] = option.converter(value)
+ except TypeError:
+ env[option.key] = option.converter(value, env)
except ValueError, x:
raise SCons.Errors.UserError, 'Error converting option: %s\n%s'%(option.key, x)
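The nested try/except above is what lets an Options converter take either just the value or the value plus the construction environment (r1698 in the merge log). The dispatch can be sketched on its own; `apply_converter` is a hypothetical name, not SCons API:

```python
def apply_converter(converter, value, env):
    """Call a converter that may take either (value) or (value, env).

    Mirrors the fallback above: try the one-argument form first and
    retry with the environment on TypeError.  Note the caveat this
    implies: a one-argument converter that raises TypeError internally
    will also be retried with two arguments.
    """
    try:
        return converter(value)
    except TypeError:
        return converter(value, env)
```

For example, `apply_converter(int, "42", env)` uses the one-argument path, while a converter defined as `lambda v, env: ...` triggers the fallback and receives the environment.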
diff --git a/src/engine/SCons/PathList.py b/src/engine/SCons/PathList.py
new file mode 100644
index 0000000..b757bd3
--- /dev/null
+++ b/src/engine/SCons/PathList.py
@@ -0,0 +1,217 @@
+#
+# __COPYRIGHT__
+#
+# Permission is hereby granted, free of charge, to any person obtaining
+# a copy of this software and associated documentation files (the
+# "Software"), to deal in the Software without restriction, including
+# without limitation the rights to use, copy, modify, merge, publish,
+# distribute, sublicense, and/or sell copies of the Software, and to
+# permit persons to whom the Software is furnished to do so, subject to
+# the following conditions:
+#
+# The above copyright notice and this permission notice shall be included
+# in all copies or substantial portions of the Software.
+#
+# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY
+# KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE
+# WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
+# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE
+# LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
+# OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION
+# WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
+#
+
+__revision__ = "__FILE__ __REVISION__ __DATE__ __DEVELOPER__"
+
+__doc__ = """SCons.PathList
+
+A module for handling lists of directory paths (the sort of things
+that get set as CPPPATH, LIBPATH, etc.) with as much caching of data and
+efficiency as we can while still keeping the evaluation delayed so that we
+Do the Right Thing (almost) regardless of how the variable is specified.
+
+"""
+
+import os
+import string
+
+import SCons.Util
+
+#
+# Variables to specify the different types of entries in a PathList object:
+#
+
+TYPE_STRING_NO_SUBST = 0 # string with no '$'
+TYPE_STRING_SUBST = 1 # string containing '$'
+TYPE_OBJECT = 2 # other object
+
+def node_conv(obj):
+ """
+ This is the "string conversion" routine that we have our substitutions
+ use to return Nodes, not strings. This relies on the fact that an
+ EntryProxy object has a get() method that returns the underlying
+ Node that it wraps, which is a bit of architectural dependence
+ that we might need to break or modify in the future in response to
+ additional requirements.
+ """
+ try:
+ get = obj.get
+ except AttributeError:
+ pass
+ else:
+ obj = get()
+ return obj
+
+class _PathList:
+ """
+ An actual PathList object.
+ """
+ def __init__(self, pathlist):
+ """
+ Initializes a PathList object, canonicalizing the input and
+ pre-processing it for quicker substitution later.
+
+ The stored representation of the PathList is a list of tuples
+ containing (type, value), where the "type" is one of the TYPE_*
+ variables defined above. We distinguish between:
+
+ strings that contain no '$' and therefore need no
+ delayed-evaluation string substitution (we expect that there
+ will be many of these and that we therefore get a pretty
+ big win from avoiding string substitution)
+
+ strings that contain '$' and therefore need substitution
+ (the hard case is things like '${TARGET.dir}/include',
+ which require re-evaluation for every target + source)
+
+ other objects (which may be something like an EntryProxy
+ that needs a method called to return a Node)
+
+ Pre-identifying the type of each element in the PathList up-front
+ and storing the type in the list of tuples is intended to reduce
+ the amount of calculation when we actually do the substitution
+ over and over for each target.
+ """
+ if SCons.Util.is_String(pathlist):
+ pathlist = string.split(pathlist, os.pathsep)
+ elif SCons.Util.is_List(pathlist) or SCons.Util.is_Tuple(pathlist):
+ pathlist = SCons.Util.flatten(pathlist)
+ else:
+ pathlist = [pathlist]
+
+ pl = []
+ for p in pathlist:
+ try:
+ index = string.find(p, '$')
+ except (AttributeError, TypeError):
+ type = TYPE_OBJECT
+ else:
+ if index == -1:
+ type = TYPE_STRING_NO_SUBST
+ else:
+ type = TYPE_STRING_SUBST
+ pl.append((type, p))
+
+ self.pathlist = tuple(pl)
+
+ def __len__(self): return len(self.pathlist)
+
+ def __getitem__(self, i): return self.pathlist[i]
+
+ def subst_path(self, env, target, source):
+ """
+ Performs construction variable substitution on a pre-digested
+ PathList for a specific target and source.
+ """
+ result = []
+ for type, value in self.pathlist:
+ if type == TYPE_STRING_SUBST:
+ value = env.subst(value, target=target, source=source,
+ conv=node_conv)
+ elif type == TYPE_OBJECT:
+ value = node_conv(value)
+ result.append(value)
+ return tuple(result)
+
+
+class PathListCache:
+ """
+ A class to handle caching of PathList lookups.
+
+ This class gets instantiated once and then deleted from the namespace,
+ so it's used as a Singleton (although we don't enforce that in the
+ usual Pythonic ways). We could have just made the cache a dictionary
+ in the module namespace, but putting it in this class allows us to
+ use the same Memoizer pattern that we use elsewhere to count cache
+ hits and misses, which is very valuable.
+
+ Lookup keys in the cache are computed by the _PathList_key() method.
+ Cache lookup should be quick, so we don't spend cycles canonicalizing
+ all forms of the same lookup key. For example, 'x:y' and ['x',
+ 'y'] logically represent the same list, but we don't bother to
+ split string representations and treat those two equivalently.
+    (Note, however, that we do treat lists and tuples the same.)
+
+ The main type of duplication we're trying to catch will come from
+ looking up the same path list from two different clones of the
+ same construction environment. That is, given
+
+ env2 = env1.Clone()
+
+ both env1 and env2 will have the same CPPPATH value, and we can
+ cheaply avoid re-parsing both values of CPPPATH by using the
+ common value from this cache.
+ """
+ if SCons.Memoize.use_memoizer:
+ __metaclass__ = SCons.Memoize.Memoized_Metaclass
+
+ memoizer_counters = []
+
+ def __init__(self):
+ self._memo = {}
+
+ def _PathList_key(self, pathlist):
+ """
+ Returns the key for memoization of PathLists.
+
+ Note that we want this to be quick, so we don't canonicalize
+ all forms of the same list. For example, 'x:y' and ['x', 'y']
+ logically represent the same list, but we're not going to bother
+ massaging strings into canonical lists here.
+
+ """
+ if SCons.Util.is_List(pathlist):
+ pathlist = tuple(pathlist)
+ return pathlist
+
+ memoizer_counters.append(SCons.Memoize.CountDict('PathList', _PathList_key))
+
+ def PathList(self, pathlist):
+ """
+ Returns the cached _PathList object for the specified pathlist,
+ creating and caching a new object as necessary.
+ """
+ pathlist = self._PathList_key(pathlist)
+ try:
+ memo_dict = self._memo['PathList']
+ except KeyError:
+ memo_dict = {}
+ self._memo['PathList'] = memo_dict
+ else:
+ try:
+ return memo_dict[pathlist]
+ except KeyError:
+ pass
+
+ result = _PathList(pathlist)
+
+ memo_dict[pathlist] = result
+
+ return result
+
+PathList = PathListCache().PathList
+
+
+del PathListCache
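The new `_PathList.__init__()` tags each path element once (`TYPE_STRING_NO_SUBST`, `TYPE_STRING_SUBST`, `TYPE_OBJECT`) so that per-target substitution only touches the entries that actually contain `'$'`. A standalone sketch of that classify-then-substitute idea, assuming a caller-supplied `subst` callable in place of the real `env.subst()`:

```python
import os

TYPE_STRING_NO_SUBST = 0   # string with no '$'
TYPE_STRING_SUBST = 1      # string containing '$'
TYPE_OBJECT = 2            # anything else (e.g. a Node-like object)

def classify(pathlist):
    """Tag each element once, up front, so repeated substitutions
    can skip the entries that need no work."""
    if isinstance(pathlist, str):
        pathlist = pathlist.split(os.pathsep)
    tagged = []
    for p in pathlist:
        if not isinstance(p, str):
            tagged.append((TYPE_OBJECT, p))
        elif '$' in p:
            tagged.append((TYPE_STRING_SUBST, p))
        else:
            tagged.append((TYPE_STRING_NO_SUBST, p))
    return tuple(tagged)

def subst_path(tagged, subst):
    """Apply the subst callable only to the elements tagged as
    needing substitution; pass everything else through unchanged."""
    result = []
    for type_, value in tagged:
        if type_ == TYPE_STRING_SUBST:
            value = subst(value)
        result.append(value)
    return tuple(result)
```

The classification happens once per unique path list (and the `PathListCache` above makes even that step shared across cloned environments), while `subst_path()` runs once per target/source pair.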
diff --git a/src/engine/SCons/PathListTests.py b/src/engine/SCons/PathListTests.py
new file mode 100644
index 0000000..d6fae0e
--- /dev/null
+++ b/src/engine/SCons/PathListTests.py
@@ -0,0 +1,145 @@
+#
+# __COPYRIGHT__
+#
+# Permission is hereby granted, free of charge, to any person obtaining
+# a copy of this software and associated documentation files (the
+# "Software"), to deal in the Software without restriction, including
+# without limitation the rights to use, copy, modify, merge, publish,
+# distribute, sublicense, and/or sell copies of the Software, and to
+# permit persons to whom the Software is furnished to do so, subject to
+# the following conditions:
+#
+# The above copyright notice and this permission notice shall be included
+# in all copies or substantial portions of the Software.
+#
+# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY
+# KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE
+# WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
+# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE
+# LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
+# OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION
+# WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
+#
+
+__revision__ = "__FILE__ __REVISION__ __DATE__ __DEVELOPER__"
+
+import sys
+import unittest
+
+import SCons.PathList
+
+
+class subst_pathTestCase(unittest.TestCase):
+
+ def setUp(self):
+
+ class FakeEnvironment:
+ def __init__(self, **kw):
+ self.kw = kw
+ def subst(self, s, target=None, source=None, conv=lambda x: x):
+ if s[0] == '$':
+ s = s[1:]
+ if s == 'target':
+ s = target
+ elif s == 'source':
+ s = source
+ else:
+ s = self.kw[s]
+ return s
+
+ self.env = FakeEnvironment(AAA = 'aaa')
+
+ def test_object(self):
+ """Test the subst_path() method on an object
+ """
+
+ class A:
+ pass
+
+ a = A()
+
+ pl = SCons.PathList.PathList((a,))
+
+ result = pl.subst_path(self.env, 'y', 'z')
+
+ assert result == (a,), result
+
+ def test_object_get(self):
+ """Test the subst_path() method on an object with a get() method
+ """
+
+ class B:
+ def get(self):
+ return 'b'
+
+ b = B()
+
+ pl = SCons.PathList.PathList((b,))
+
+ result = pl.subst_path(self.env, 'y', 'z')
+
+ assert result == ('b',), result
+
+ def test_string(self):
+ """Test the subst_path() method on a non-substitution string
+ """
+
+ self.env.subst = lambda s, target, source, conv: 'NOT THIS STRING'
+
+ pl = SCons.PathList.PathList(('x'))
+
+ result = pl.subst_path(self.env, 'y', 'z')
+
+ assert result == ('x',), result
+
+ def test_subst(self):
+ """Test the subst_path() method on a substitution string
+ """
+
+ pl = SCons.PathList.PathList(('$AAA',))
+
+ result = pl.subst_path(self.env, 'y', 'z')
+
+ assert result == ('aaa',), result
+
+
+class PathListCacheTestCase(unittest.TestCase):
+
+ def test_no_PathListCache(self):
+ """Make sure the PathListCache class is not visible
+ """
+ try:
+ SCons.PathList.PathListCache
+ except AttributeError:
+ pass
+ else:
+ self.fail("Found PathListCache unexpectedly\n")
+
+
+class PathListTestCase(unittest.TestCase):
+
+ def test_PathList(self):
+ """Test the PathList() entry point
+ """
+
+ x1 = SCons.PathList.PathList(('x',))
+ x2 = SCons.PathList.PathList(['x',])
+
+ assert x1 is x2, (x1, x2)
+
+ x3 = SCons.PathList.PathList('x')
+
+        assert x1 is not x3, (x1, x3)
+
+
+if __name__ == "__main__":
+ suite = unittest.TestSuite()
+ tclasses = [
+ subst_pathTestCase,
+ PathListCacheTestCase,
+ PathListTestCase,
+ ]
+ for tclass in tclasses:
+ names = unittest.getTestCaseNames(tclass, 'test_')
+ suite.addTests(map(tclass, names))
+ if not unittest.TextTestRunner().run(suite).wasSuccessful():
+ sys.exit(1)
diff --git a/src/engine/SCons/Scanner/CTests.py b/src/engine/SCons/Scanner/CTests.py
index 2ca522b..5d6765d 100644
--- a/src/engine/SCons/Scanner/CTests.py
+++ b/src/engine/SCons/Scanner/CTests.py
@@ -179,17 +179,17 @@ class DummyEnvironment(UserDict.UserDict):
def Dictionary(self, *args):
return self.data
- def subst(self, strSubst):
+ def subst(self, strSubst, target=None, source=None, conv=None):
if strSubst[0] == '$':
return self.data[strSubst[1:]]
return strSubst
- def subst_list(self, strSubst):
+ def subst_list(self, strSubst, target=None, source=None, conv=None):
if strSubst[0] == '$':
return [self.data[strSubst[1:]]]
return [[strSubst]]
- def subst_path(self, path, target=None, source=None):
+ def subst_path(self, path, target=None, source=None, conv=None):
if type(path) != type([]):
path = [path]
return map(self.subst, path)
@@ -401,9 +401,12 @@ class CScannerTestCase13(unittest.TestCase):
def runTest(self):
"""Find files in directories named in a substituted environment variable"""
class SubstEnvironment(DummyEnvironment):
- def subst(self, arg, test=test):
- return test.workpath("d1")
- env = SubstEnvironment(CPPPATH=["blah"])
+ def subst(self, arg, target=None, source=None, conv=None, test=test):
+ if arg == "$blah":
+ return test.workpath("d1")
+ else:
+ return arg
+ env = SubstEnvironment(CPPPATH=["$blah"])
s = SCons.Scanner.C.CScanner()
path = s.path(env)
deps = s(env.File('f1.cpp'), env, path)
diff --git a/src/engine/SCons/Scanner/D.py b/src/engine/SCons/Scanner/D.py
index 2ea2614..5a0b383 100644
--- a/src/engine/SCons/Scanner/D.py
+++ b/src/engine/SCons/Scanner/D.py
@@ -46,7 +46,6 @@ def DScanner():
class D(SCons.Scanner.Classic):
def find_include(self, include, source_dir, path):
- if callable(path): path=path()
# translate dots (package separators) to slashes
inc = string.replace(include, '.', '/')
diff --git a/src/engine/SCons/Scanner/Fortran.py b/src/engine/SCons/Scanner/Fortran.py
index 8f7a6ce..31a1e16 100644
--- a/src/engine/SCons/Scanner/Fortran.py
+++ b/src/engine/SCons/Scanner/Fortran.py
@@ -78,7 +78,6 @@ class F90Scanner(SCons.Scanner.Classic):
apply(SCons.Scanner.Current.__init__, (self,) + args, kw)
def scan(self, node, env, path=()):
- "__cacheable__"
# cache the includes list in node so we only scan it once:
if node.includes != None:
@@ -112,6 +111,8 @@ class F90Scanner(SCons.Scanner.Classic):
# is actually found in a Repository or locally.
nodes = []
source_dir = node.get_dir()
+ if callable(path):
+ path = path()
for dep in mods_and_includes:
n, i = self.find_include(dep, source_dir, path)
diff --git a/src/engine/SCons/Scanner/FortranTests.py b/src/engine/SCons/Scanner/FortranTests.py
index da4a023..82db694 100644
--- a/src/engine/SCons/Scanner/FortranTests.py
+++ b/src/engine/SCons/Scanner/FortranTests.py
@@ -232,12 +232,12 @@ class DummyEnvironment:
def __delitem__(self,key):
del self.Dictionary()[key]
- def subst(self, arg):
+ def subst(self, arg, target=None, source=None, conv=None):
if arg[0] == '$':
return self[arg[1:]]
return arg
- def subst_path(self, path, target=None, source=None):
+ def subst_path(self, path, target=None, source=None, conv=None):
if type(path) != type([]):
path = [path]
return map(self.subst, path)
@@ -461,10 +461,13 @@ class FortranScannerTestCase14(unittest.TestCase):
class FortranScannerTestCase15(unittest.TestCase):
def runTest(self):
class SubstEnvironment(DummyEnvironment):
- def subst(self, arg, test=test):
- return test.workpath("d1")
+ def subst(self, arg, target=None, source=None, conv=None, test=test):
+ if arg == "$junk":
+ return test.workpath("d1")
+ else:
+ return arg
test.write(['d1', 'f2.f'], " INCLUDE 'fi.f'\n")
- env = SubstEnvironment(["junk"])
+ env = SubstEnvironment(["$junk"])
s = SCons.Scanner.Fortran.FortranScan()
path = s.path(env)
deps = s(env.File('fff1.f'), env, path)
diff --git a/src/engine/SCons/Scanner/IDLTests.py b/src/engine/SCons/Scanner/IDLTests.py
index 153951d..2332a57 100644
--- a/src/engine/SCons/Scanner/IDLTests.py
+++ b/src/engine/SCons/Scanner/IDLTests.py
@@ -199,10 +199,10 @@ class DummyEnvironment:
else:
raise KeyError, "Dummy environment only has CPPPATH attribute."
- def subst(self, arg):
+ def subst(self, arg, target=None, source=None, conv=None):
return arg
- def subst_path(self, path, target=None, source=None):
+ def subst_path(self, path, target=None, source=None, conv=None):
if type(path) != type([]):
path = [path]
return map(self.subst, path)
@@ -411,9 +411,12 @@ class IDLScannerTestCase11(unittest.TestCase):
class IDLScannerTestCase12(unittest.TestCase):
def runTest(self):
class SubstEnvironment(DummyEnvironment):
- def subst(self, arg, test=test):
- return test.workpath("d1")
- env = SubstEnvironment(["blah"])
+ def subst(self, arg, target=None, source=None, conv=None, test=test):
+ if arg == "$blah":
+ return test.workpath("d1")
+ else:
+ return arg
+ env = SubstEnvironment(["$blah"])
s = SCons.Scanner.IDL.IDLScan()
path = s.path(env)
deps = s(env.File('t1.idl'), env, path)
diff --git a/src/engine/SCons/Scanner/LaTeX.py b/src/engine/SCons/Scanner/LaTeX.py
index 6451a58..d875e6e 100644
--- a/src/engine/SCons/Scanner/LaTeX.py
+++ b/src/engine/SCons/Scanner/LaTeX.py
@@ -31,13 +31,15 @@ __revision__ = "__FILE__ __REVISION__ __DATE__ __DEVELOPER__"
import SCons.Scanner
+import string
+import os.path
def LaTeXScanner(fs = SCons.Node.FS.default_fs):
"""Return a prototype Scanner instance for scanning LaTeX source files"""
ds = LaTeX(name = "LaTeXScanner",
suffixes = '$LATEXSUFFIXES',
path_variable = 'TEXINPUTS',
- regex = '\\\\(include|includegraphics(?:\[[^\]]+\])?|input){([^}]*)}',
+ regex = '\\\\(include|includegraphics(?:\[[^\]]+\])?|input|bibliography){([^}]*)}',
recursive = 0)
return ds
@@ -46,21 +48,75 @@ class LaTeX(SCons.Scanner.Classic):
Unlike most scanners, which use regular expressions that just
return the included file name, this returns a tuple consisting
- of the keyword for the inclusion ("include", "includegraphics" or
- "input"), and then the file name itself. Base on a quick look at
- LaTeX documentation, it seems that we need a should append .tex
- suffix for "include" and "input" keywords, but leave the file name
- untouched for "includegraphics."
+ of the keyword for the inclusion ("include", "includegraphics",
+ "input", or "bibliography"), and then the file name itself.
+    Based on a quick look at LaTeX documentation, it seems that we
+    should append a .tex suffix for the "include" keyword, append
+    .tex if there is no extension for the "input" keyword, but
+    leave the file name untouched for "includegraphics."  For the
+    "bibliography" keyword we need to add .bib if there is no
+    extension.  (This needs to be revisited, since if there is no
+    extension for an "includegraphics" keyword latex will append
+    .ps or .eps to find the file, while pdftex will use other
+    extensions.)
"""
def latex_name(self, include):
filename = include[1]
- if include[0][:15] != 'includegraphics':
+ if include[0] == 'input':
+ base, ext = os.path.splitext( filename )
+ if ext == "":
+ filename = filename + '.tex'
+        if include[0] == 'include':
filename = filename + '.tex'
+ if include[0] == 'bibliography':
+ base, ext = os.path.splitext( filename )
+ if ext == "":
+ filename = filename + '.bib'
return filename
def sort_key(self, include):
return SCons.Node.FS._my_normcase(self.latex_name(include))
def find_include(self, include, source_dir, path):
- if callable(path): path=path()
i = SCons.Node.FS.find_file(self.latex_name(include),
(source_dir,) + path)
return i, include
+
+ def scan(self, node, path=()):
+ #
+ # Modify the default scan function to allow for the regular
+        # expression to return a comma-separated list of file names
+ # as can be the case with the bibliography keyword.
+ #
+ # cache the includes list in node so we only scan it once:
+ if node.includes != None:
+ includes = node.includes
+ else:
+ includes = self.cre.findall(node.get_contents())
+ node.includes = includes
+
+ # This is a hand-coded DSU (decorate-sort-undecorate, or
+ # Schwartzian transform) pattern. The sort key is the raw name
+        # of the file as specified on the #include line (including the
+ # " or <, since that may affect what file is found), which lets
+ # us keep the sort order constant regardless of whether the file
+ # is actually found in a Repository or locally.
+ nodes = []
+ source_dir = node.get_dir()
+ for include in includes:
+ #
+ # Handle multiple filenames in include[1]
+ #
+ inc_list = string.split(include[1],',')
+ for j in range(len(inc_list)):
+ include_local = [include[0],inc_list[j]]
+ n, i = self.find_include(include_local, source_dir, path)
+
+ if n is None:
+ SCons.Warnings.warn(SCons.Warnings.DependencyWarning,
+ "No dependency generated for file: %s (included from: %s) -- file not found" % (i, node))
+ else:
+ sortkey = self.sort_key(include)
+ nodes.append((sortkey, n))
+
+ nodes.sort()
+ nodes = map(lambda pair: pair[1], nodes)
+ return nodes
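The suffix rules the new docstring describes can be sketched in isolation (a simplified, hypothetical model of the scanner logic, not the actual SCons code; `latex_name` here is a free function rather than a method):

```python
import os.path

def latex_name(keyword, filename):
    # "include" always gets .tex; "input" and "bibliography" get a
    # default suffix only when none is present; "includegraphics"
    # is left untouched so latex/pdftex can pick .eps/.ps/etc.
    if keyword == 'include':
        return filename + '.tex'
    if keyword == 'input' and not os.path.splitext(filename)[1]:
        return filename + '.tex'
    if keyword == 'bibliography' and not os.path.splitext(filename)[1]:
        return filename + '.bib'
    return filename

def split_includes(keyword, value):
    # The bibliography keyword may carry a comma-separated list of
    # file names; expand it into one (keyword, name) pair per file.
    return [(keyword, name) for name in value.split(',')]
```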
diff --git a/src/engine/SCons/Scanner/LaTeXTests.py b/src/engine/SCons/Scanner/LaTeXTests.py
index 45a387a..9cfe3ca 100644
--- a/src/engine/SCons/Scanner/LaTeXTests.py
+++ b/src/engine/SCons/Scanner/LaTeXTests.py
@@ -70,17 +70,17 @@ class DummyEnvironment(UserDict.UserDict):
def Dictionary(self, *args):
return self.data
- def subst(self, strSubst):
+ def subst(self, strSubst, target=None, source=None, conv=None):
if strSubst[0] == '$':
return self.data[strSubst[1:]]
return strSubst
- def subst_list(self, strSubst):
+ def subst_list(self, strSubst, target=None, source=None, conv=None):
if strSubst[0] == '$':
return [self.data[strSubst[1:]]]
return [[strSubst]]
- def subst_path(self, path, target=None, source=None):
+ def subst_path(self, path, target=None, source=None, conv=None):
if type(path) != type([]):
path = [path]
return map(self.subst, path)
diff --git a/src/engine/SCons/Scanner/Prog.py b/src/engine/SCons/Scanner/Prog.py
index 54db9a8..e8d1669 100644
--- a/src/engine/SCons/Scanner/Prog.py
+++ b/src/engine/SCons/Scanner/Prog.py
@@ -80,8 +80,9 @@ def scan(node, env, libpath = ()):
result = []
- if callable(libpath): libpath = libpath()
-
+ if callable(libpath):
+ libpath = libpath()
+
find_file = SCons.Node.FS.find_file
adjustixes = SCons.Util.adjustixes
for lib in libs:
diff --git a/src/engine/SCons/Scanner/ProgTests.py b/src/engine/SCons/Scanner/ProgTests.py
index bac10b7..47d4afd 100644
--- a/src/engine/SCons/Scanner/ProgTests.py
+++ b/src/engine/SCons/Scanner/ProgTests.py
@@ -71,7 +71,7 @@ class DummyEnvironment:
def __delitem__(self,key):
del self.Dictionary()[key]
- def subst(self, s):
+ def subst(self, s, target=None, source=None, conv=None):
try:
if s[0] == '$':
return self._dict[s[1:]]
@@ -79,7 +79,7 @@ class DummyEnvironment:
return ''
return s
- def subst_path(self, path, target=None, source=None):
+ def subst_path(self, path, target=None, source=None, conv=None):
if type(path) != type([]):
path = [path]
return map(self.subst, path)
@@ -165,12 +165,12 @@ class ProgramScannerTestCase3(unittest.TestCase):
class ProgramScannerTestCase5(unittest.TestCase):
def runTest(self):
class SubstEnvironment(DummyEnvironment):
- def subst(self, arg, path=test.workpath("d1")):
- if arg == "blah":
+ def subst(self, arg, target=None, source=None, conv=None, path=test.workpath("d1")):
+ if arg == "$blah":
return test.workpath("d1")
else:
return arg
- env = SubstEnvironment(LIBPATH=[ "blah" ],
+ env = SubstEnvironment(LIBPATH=[ "$blah" ],
LIBS=string.split('l2 l3'))
s = SCons.Scanner.Prog.ProgramScanner()
path = s.path(env)
diff --git a/src/engine/SCons/Scanner/ScannerTests.py b/src/engine/SCons/Scanner/ScannerTests.py
index 29ca063..bd8546f 100644
--- a/src/engine/SCons/Scanner/ScannerTests.py
+++ b/src/engine/SCons/Scanner/ScannerTests.py
@@ -31,27 +31,23 @@ import SCons.Sig
import SCons.Scanner
class DummyFS:
- def __init__(self, search_result=[]):
- self.search_result = search_result
def File(self, name):
return DummyNode(name)
- def Rfindalldirs(self, pathlist, cwd):
- return self.search_result + pathlist
class DummyEnvironment(UserDict.UserDict):
def __init__(self, dict=None, **kw):
UserDict.UserDict.__init__(self, dict)
self.data.update(kw)
self.fs = DummyFS()
- def subst(self, strSubst):
+ def subst(self, strSubst, target=None, source=None, conv=None):
if strSubst[0] == '$':
return self.data[strSubst[1:]]
return strSubst
- def subst_list(self, strSubst):
+ def subst_list(self, strSubst, target=None, source=None, conv=None):
if strSubst[0] == '$':
return [self.data[strSubst[1:]]]
return [[strSubst]]
- def subst_path(self, path, target=None, source=None):
+ def subst_path(self, path, target=None, source=None, conv=None):
if type(path) != type([]):
path = [path]
return map(self.subst, path)
@@ -61,20 +57,24 @@ class DummyEnvironment(UserDict.UserDict):
return factory or self.fs.File
class DummyNode:
- def __init__(self, name):
+ def __init__(self, name, search_result=()):
self.name = name
+ self.search_result = tuple(search_result)
def rexists(self):
return 1
def __str__(self):
return self.name
+ def Rfindalldirs(self, pathlist):
+ return self.search_result + pathlist
class FindPathDirsTestCase(unittest.TestCase):
def test_FindPathDirs(self):
"""Test the FindPathDirs callable class"""
env = DummyEnvironment(LIBPATH = [ 'foo' ])
- env.fs = DummyFS(['xxx'])
+ env.fs = DummyFS()
+ dir = DummyNode('dir', ['xxx'])
fpd = SCons.Scanner.FindPathDirs('LIBPATH')
result = fpd(env, dir)
assert str(result) == "('xxx', 'foo')", result
@@ -473,10 +473,11 @@ class ClassicTestCase(unittest.TestCase):
ret = s.function(n, env, ('foo3',))
assert ret == ['def'], ret
- # Verify that overall scan results are cached even if individual
- # results are de-cached
- ret = s.function(n, env, ('foo2',))
- assert ret == ['abc'], 'caching inactive; got: %s'%ret
+ # We no longer cache overall scan results, which would be returned
+ # if individual results are de-cached. If we ever restore that
+ # functionality, this test goes back here.
+ #ret = s.function(n, env, ('foo2',))
+ #assert ret == ['abc'], 'caching inactive; got: %s'%ret
# Verify that it sorts what it finds.
n.includes = ['xyz', 'uvw']
@@ -501,8 +502,6 @@ class ClassicCPPTestCase(unittest.TestCase):
s = SCons.Scanner.ClassicCPP("Test", [], None, "")
def _find_file(filename, paths):
- if callable(paths):
- paths = paths()
return paths[0]+'/'+filename
save = SCons.Node.FS.find_file
diff --git a/src/engine/SCons/Scanner/__init__.py b/src/engine/SCons/Scanner/__init__.py
index 1fd77e5..679efca 100644
--- a/src/engine/SCons/Scanner/__init__.py
+++ b/src/engine/SCons/Scanner/__init__.py
@@ -54,25 +54,6 @@ def Scanner(function, *args, **kw):
return apply(Base, (function,) + args, kw)
-class _Binder:
- def __init__(self, bindval):
- self._val = bindval
- def __call__(self):
- return self._val
- def __str__(self):
- return str(self._val)
- #debug: return 'B<%s>'%str(self._val)
-
-BinderDict = {}
-
-def Binder(path):
- try:
- return BinderDict[path]
- except KeyError:
- b = _Binder(path)
- BinderDict[path] = b
- return b
-
class FindPathDirs:
"""A class to bind a specific *PATH variable name to a function that
@@ -80,16 +61,17 @@ class FindPathDirs:
def __init__(self, variable):
self.variable = variable
def __call__(self, env, dir, target=None, source=None, argument=None):
- # The goal is that we've made caching this unnecessary
- # because the caching takes place at higher layers.
+ import SCons.PathList
try:
path = env[self.variable]
except KeyError:
return ()
- path = env.subst_path(path, target=target, source=source)
- path_tuple = tuple(env.fs.Rfindalldirs(path, dir))
- return Binder(path_tuple)
+ dir = dir or env.fs._cwd
+ path = SCons.PathList.PathList(path).subst_path(env, target, source)
+ return tuple(dir.Rfindalldirs(path))
+
+
class Base:
"""
@@ -97,9 +79,6 @@ class Base:
straightforward, single-pass scanning of a single file.
"""
- if SCons.Memoize.use_memoizer:
- __metaclass__ = SCons.Memoize.Memoized_Metaclass
-
def __init__(self,
function,
name = "NONE",
@@ -257,14 +236,6 @@ class Base:
recurse_nodes = _recurse_no_nodes
-if SCons.Memoize.use_old_memoization():
- _Base = Base
- class Base(SCons.Memoize.Memoizer, _Base):
- "Cache-backed version of Scanner Base"
- def __init__(self, *args, **kw):
- apply(_Base.__init__, (self,)+args, kw)
- SCons.Memoize.Memoizer.__init__(self)
-
class Selector(Base):
"""
@@ -333,8 +304,6 @@ class Classic(Current):
apply(Current.__init__, (self,) + args, kw)
def find_include(self, include, source_dir, path):
- "__cacheable__"
- if callable(path): path = path()
n = SCons.Node.FS.find_file(include, (source_dir,) + tuple(path))
return n, include
@@ -342,7 +311,6 @@ class Classic(Current):
return SCons.Node.FS._my_normcase(include)
def scan(self, node, path=()):
- "__cacheable__"
# cache the includes list in node so we only scan it once:
if node.includes != None:
@@ -359,6 +327,8 @@ class Classic(Current):
# is actually found in a Repository or locally.
nodes = []
source_dir = node.get_dir()
+ if callable(path):
+ path = path()
for include in includes:
n, i = self.find_include(include, source_dir, path)
@@ -384,14 +354,10 @@ class ClassicCPP(Classic):
the contained filename in group 1.
"""
def find_include(self, include, source_dir, path):
- "__cacheable__"
- if callable(path):
- path = path() #kwq: extend callable to find_file...
-
if include[0] == '"':
- paths = Binder( (source_dir,) + tuple(path) )
+ paths = (source_dir,) + tuple(path)
else:
- paths = Binder( tuple(path) + (source_dir,) )
+ paths = tuple(path) + (source_dir,)
n = SCons.Node.FS.find_file(include[1], paths)
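With the Binder wrapper gone, the quote-versus-bracket search-order rule that ClassicCPP encodes is just plain tuple arithmetic; a minimal standalone sketch (hypothetical helper name, not SCons API):

```python
def search_paths(include, source_dir, path):
    # `include` is a (delimiter, filename) pair from the scanner
    # regex.  "file.h" searches the including file's directory
    # first; <file.h> searches it last.
    if include[0] == '"':
        return (source_dir,) + tuple(path)
    return tuple(path) + (source_dir,)
```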
diff --git a/src/engine/SCons/Script/Main.py b/src/engine/SCons/Script/Main.py
index 2c18112..6eedbab 100644
--- a/src/engine/SCons/Script/Main.py
+++ b/src/engine/SCons/Script/Main.py
@@ -309,6 +309,7 @@ command_time = 0
exit_status = 0 # exit status, assume success by default
repositories = []
num_jobs = 1 # this is modified by SConscript.SetJobs()
+delayed_warnings = []
diskcheck_all = SCons.Node.FS.diskcheck_types()
diskcheck_option_set = None
@@ -671,12 +672,13 @@ class OptParser(OptionParser):
"build all Default() targets.")
debug_options = ["count", "dtree", "explain", "findlibs",
- "includes", "memoizer", "memory",
- "nomemoizer", "objects",
+ "includes", "memoizer", "memory", "objects",
"pdb", "presub", "stacktrace", "stree",
"time", "tree"]
- def opt_debug(option, opt, value, parser, debug_options=debug_options):
+ deprecated_debug_options = [ "nomemoizer", ]
+
+ def opt_debug(option, opt, value, parser, debug_options=debug_options, deprecated_debug_options=deprecated_debug_options):
if value in debug_options:
try:
if parser.values.debug is None:
@@ -684,6 +686,9 @@ class OptParser(OptionParser):
except AttributeError:
parser.values.debug = []
parser.values.debug.append(value)
+ elif value in deprecated_debug_options:
+ w = "The --debug=%s option is deprecated and has no effect." % value
+ delayed_warnings.append((SCons.Warnings.DeprecatedWarning, w))
else:
raise OptionValueError("Warning: %s is not a valid debug type" % value)
self.add_option('--debug', action="callback", type="string",
@@ -945,6 +950,8 @@ class SConscriptSettableOptions:
def _main(args, parser):
+ global exit_status
+
# Here's where everything really happens.
# First order of business: set up default warnings and then
@@ -954,6 +961,7 @@ def _main(args, parser):
SCons.Warnings.DeprecatedWarning,
SCons.Warnings.DuplicateEnvironmentWarning,
SCons.Warnings.MissingSConscriptWarning,
+ SCons.Warnings.NoMetaclassSupportWarning,
SCons.Warnings.NoParallelSupportWarning,
SCons.Warnings.MisleadingKeywordsWarning, ]
for warning in default_warnings:
@@ -962,6 +970,9 @@ def _main(args, parser):
if options.warn:
_setup_warn(options.warn)
+ for warning_type, message in delayed_warnings:
+ SCons.Warnings.warn(warning_type, message)
+
# Next, we want to create the FS object that represents the outside
# world's file system, as that's central to a lot of initialization.
# To do this, however, we need to be in the directory from which we
@@ -1019,7 +1030,8 @@ def _main(args, parser):
# Give them the options usage now, before we fail
# trying to read a non-existent SConstruct file.
parser.print_help()
- sys.exit(0)
+ exit_status = 0
+ return
raise SCons.Errors.UserError, "No SConstruct file found."
if scripts[0] == "-":
@@ -1105,7 +1117,6 @@ def _main(args, parser):
# reading SConscript files and haven't started building
# things yet, stop regardless of whether they used -i or -k
# or anything else.
- global exit_status
sys.stderr.write("scons: *** %s Stop.\n" % e)
exit_status = 2
sys.exit(exit_status)
@@ -1134,7 +1145,8 @@ def _main(args, parser):
else:
print help_text
print "Use scons -H for help about command-line options."
- sys.exit(0)
+ exit_status = 0
+ return
# Now that we've read the SConscripts we can set the options
# that are SConscript settable:
@@ -1285,12 +1297,9 @@ def _main(args, parser):
count_stats.append(('post-', 'build'))
def _exec_main():
- all_args = sys.argv[1:]
- try:
- all_args = string.split(os.environ['SCONSFLAGS']) + all_args
- except KeyError:
- # it's OK if there's no SCONSFLAGS
- pass
+ sconsflags = os.environ.get('SCONSFLAGS', '')
+ all_args = string.split(sconsflags) + sys.argv[1:]
+
parser = OptParser()
global options
options, args = parser.parse_args(all_args)
@@ -1353,8 +1362,7 @@ def main():
#SCons.Debug.dumpLoggedInstances('*')
if print_memoizer:
- print "Memoizer (memory cache) hits and misses:"
- SCons.Memoize.Dump()
+ SCons.Memoize.Dump("Memoizer (memory cache) hits and misses:")
# Dump any development debug info that may have been enabled.
# These are purely for internal debugging during development, so
diff --git a/src/engine/SCons/Script/SConscript.py b/src/engine/SCons/Script/SConscript.py
index dc896a0..749be6d 100644
--- a/src/engine/SCons/Script/SConscript.py
+++ b/src/engine/SCons/Script/SConscript.py
@@ -183,6 +183,7 @@ def _SConscript(fs, *files, **kw):
# the builder so that it doesn't get built *again*
# during the actual build phase.
f.build()
+ f.built()
f.builder_set(None)
if f.exists():
_file_ = open(f.get_abspath(), "r")
@@ -286,9 +287,12 @@ def SConscript_exception(file=sys.stderr):
# in SCons itself. Show the whole stack.
tb = exc_tb
stack = traceback.extract_tb(tb)
- type = str(exc_type)
- if type[:11] == "exceptions.":
- type = type[11:]
+ try:
+ type = exc_type.__name__
+ except AttributeError:
+ type = str(exc_type)
+ if type[:11] == "exceptions.":
+ type = type[11:]
file.write('%s: %s:\n' % (type, exc_value))
for fname, line, func, text in stack:
file.write(' File "%s", line %d:\n' % (fname, line))
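The `exc_type.__name__` fallback introduced above covers both class exceptions and the old string exceptions; in outline (a sketch of the same pattern, where the `AttributeError` branch handles non-class exception objects):

```python
def exception_type_name(exc_type):
    # Prefer the class's own name; fall back to str() for exotic
    # exception objects, stripping the legacy "exceptions." prefix.
    try:
        name = exc_type.__name__
    except AttributeError:
        name = str(exc_type)
        if name[:11] == "exceptions.":
            name = name[11:]
    return name
```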
diff --git a/src/engine/SCons/Script/__init__.py b/src/engine/SCons/Script/__init__.py
index ce96867..55797df 100644
--- a/src/engine/SCons/Script/__init__.py
+++ b/src/engine/SCons/Script/__init__.py
@@ -44,29 +44,33 @@ import string
import sys
import UserList
-# Special chicken-and-egg handling of the "--debug=memoizer"
-# and "--debug=nomemoizer" flags:
+# Special chicken-and-egg handling of the "--debug=memoizer" flag:
#
# SCons.Memoize contains a metaclass implementation that affects how
-# the other classes are instantiated. The Memoizer handles optional
-# counting of the hits and misses by using a different, parallel set of
-# functions, so we don't slow down normal operation any more than we
-# have to. We can also tell it disable memoization completely.
+# the other classes are instantiated. The Memoizer may add shim methods
+# to classes that have methods that cache computed values in order to
+# count and report the hits and misses.
#
-# If we wait to enable the counting or disable memoization completely
-# until we've parsed the command line options normally, it will be too
-# late, because the Memoizer will have already analyzed the classes
-# that it's Memoizing and bound the (non-counting) versions of the
-# functions. So we have to use a special-case, up-front check for
-# the "--debug=memoizer" and "--debug=nomemoizer" flags and do what's
-# appropriate before we import any of the other modules that use it.
+# If we wait to enable the Memoization until after we've parsed the
+# command line options normally, it will be too late, because the Memoizer
+# will have already analyzed the classes that it's Memoizing and decided
+# to not add the shims. So we use a special-case, up-front check for
+# the "--debug=memoizer" flag and enable Memoizer before we import any
+# of the other modules that use it.
+
_args = sys.argv + string.split(os.environ.get('SCONSFLAGS', ''))
if "--debug=memoizer" in _args:
import SCons.Memoize
- SCons.Memoize.EnableCounting()
-if "--debug=nomemoizer" in _args:
- import SCons.Memoize
- SCons.Memoize.DisableMemoization()
+ import SCons.Warnings
+ try:
+ SCons.Memoize.EnableMemoization()
+ except SCons.Warnings.Warning:
+ # Some warning was thrown (inability to --debug=memoizer on
+ # Python 1.5.2 because it doesn't have metaclasses). Arrange
+ # for it to be displayed or not after warnings are configured.
+ import Main
+ exc_type, exc_value, tb = sys.exc_info()
+ Main.delayed_warnings.append((exc_type, exc_value))
del _args
import SCons.Action
@@ -77,6 +81,7 @@ import SCons.Options
import SCons.Platform
import SCons.Scanner
import SCons.SConf
+import SCons.Subst
import SCons.Tool
import SCons.Util
import SCons.Defaults
@@ -127,6 +132,7 @@ call_stack = _SConscript.call_stack
#
Action = SCons.Action.Action
+AllowSubstExceptions = SCons.Subst.SetAllowableExceptions
BoolOption = SCons.Options.BoolOption
Builder = SCons.Builder.Builder
Configure = _SConscript.Configure
diff --git a/src/engine/SCons/Subst.py b/src/engine/SCons/Subst.py
index b100473..115f7db 100644
--- a/src/engine/SCons/Subst.py
+++ b/src/engine/SCons/Subst.py
@@ -44,6 +44,24 @@ _strconv = [SCons.Util.to_String,
SCons.Util.to_String,
SCons.Util.to_String_for_signature]
+
+
+AllowableExceptions = (IndexError, NameError)
+
+def SetAllowableExceptions(*excepts):
+ global AllowableExceptions
+ AllowableExceptions = filter(None, excepts)
+
+def raise_exception(exception, target, s):
+ name = exception.__class__.__name__
+ msg = "%s `%s' trying to evaluate `%s'" % (name, exception, s)
+ if target:
+ raise SCons.Errors.BuildError, (target[0], msg)
+ else:
+ raise SCons.Errors.UserError, msg
+
+
+
class Literal:
"""A wrapper for a string. If you use this object wrapped
around a string, then it will be interpreted as literal.
@@ -377,21 +395,19 @@ def scons_subst(strSubst, env, mode=SUBST_RAW, target=None, source=None, gvars={
key = key[1:-1]
try:
s = eval(key, self.gvars, lvars)
- except AttributeError, e:
- raise SCons.Errors.UserError, \
- "Error trying to evaluate `%s': %s" % (s, e)
- except (IndexError, NameError, TypeError):
- return ''
- except SyntaxError,e:
- if self.target:
- raise SCons.Errors.BuildError, (self.target[0], "Syntax error `%s' trying to evaluate `%s'" % (e,s))
- else:
- raise SCons.Errors.UserError, "Syntax error `%s' trying to evaluate `%s'" % (e,s)
+ except KeyboardInterrupt:
+ raise
+ except Exception, e:
+ if e.__class__ in AllowableExceptions:
+ return ''
+ raise_exception(e, self.target, s)
else:
if lvars.has_key(key):
s = lvars[key]
elif self.gvars.has_key(key):
s = self.gvars[key]
+ elif not NameError in AllowableExceptions:
+ raise_exception(NameError(key), self.target, s)
else:
return ''
@@ -590,21 +606,19 @@ def scons_subst_list(strSubst, env, mode=SUBST_RAW, target=None, source=None, gv
key = key[1:-1]
try:
s = eval(key, self.gvars, lvars)
- except AttributeError, e:
- raise SCons.Errors.UserError, \
- "Error trying to evaluate `%s': %s" % (s, e)
- except (IndexError, NameError, TypeError):
- return
- except SyntaxError,e:
- if self.target:
- raise SCons.Errors.BuildError, (self.target[0], "Syntax error `%s' trying to evaluate `%s'" % (e,s))
- else:
- raise SCons.Errors.UserError, "Syntax error `%s' trying to evaluate `%s'" % (e,s)
+ except KeyboardInterrupt:
+ raise
+ except Exception, e:
+ if e.__class__ in AllowableExceptions:
+ return
+ raise_exception(e, self.target, s)
else:
if lvars.has_key(key):
s = lvars[key]
elif self.gvars.has_key(key):
s = self.gvars[key]
+ elif not NameError in AllowableExceptions:
+ raise_exception(NameError(key), self.target, s)
else:
return
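The AllowableExceptions mechanism can be modeled in isolation (a simplified sketch; in the real code, non-allowed exceptions are routed through `raise_exception` to become a BuildError or UserError rather than propagating raw):

```python
ALLOWED = (IndexError, NameError)

def safe_eval(expr, namespace, allowed=ALLOWED):
    # Expanding an unknown or out-of-range reference yields ''
    # when its exception class is allowed; anything else (e.g.
    # the TypeError from ${NONE[2]}) propagates to the caller.
    try:
        return str(eval(expr, {}, namespace))
    except allowed:
        return ''
```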
diff --git a/src/engine/SCons/SubstTests.py b/src/engine/SCons/SubstTests.py
index 4de3348..e8419f1 100644
--- a/src/engine/SCons/SubstTests.py
+++ b/src/engine/SCons/SubstTests.py
@@ -290,7 +290,6 @@ class SubstTestCase(unittest.TestCase):
"${FFF[0]}", "G",
"${FFF[7]}", "",
"${NOTHING[1]}", "",
- "${NONE[2]}", "",
# Test various combinations of strings and lists.
#None, '',
@@ -336,7 +335,11 @@ class SubstTestCase(unittest.TestCase):
while cases:
input, expect = cases[:2]
expect = cvt(expect)
- result = apply(scons_subst, (input, env), kwargs)
+ try:
+ result = apply(scons_subst, (input, env), kwargs)
+ except Exception, e:
+ print " input %s generated %s %s" % (repr(input), e.__class__.__name__, str(e))
+ failed = failed + 1
if result != expect:
if failed == 0: print
print " input %s => %s did not match %s" % (repr(input), repr(result), repr(expect))
@@ -459,8 +462,8 @@ class SubstTestCase(unittest.TestCase):
scons_subst('${foo.bar}', env, gvars={'foo':Foo()})
except SCons.Errors.UserError, e:
expect = [
- "Error trying to evaluate `${foo.bar}': bar",
- "Error trying to evaluate `${foo.bar}': Foo instance has no attribute 'bar'",
+ "AttributeError `bar' trying to evaluate `${foo.bar}'",
+ "AttributeError `Foo instance has no attribute 'bar'' trying to evaluate `${foo.bar}'",
]
assert str(e) in expect, e
else:
@@ -470,9 +473,44 @@ class SubstTestCase(unittest.TestCase):
try:
scons_subst('$foo.bar.3.0', env)
except SCons.Errors.UserError, e:
- expect1 = "Syntax error `invalid syntax' trying to evaluate `$foo.bar.3.0'"
- expect2 = "Syntax error `invalid syntax (line 1)' trying to evaluate `$foo.bar.3.0'"
- assert str(e) in [expect1, expect2], e
+ expect = [
+ # Python 1.5
+ "SyntaxError `invalid syntax' trying to evaluate `$foo.bar.3.0'",
+ # Python 2.2, 2.3, 2.4
+ "SyntaxError `invalid syntax (line 1)' trying to evaluate `$foo.bar.3.0'",
+ # Python 2.5
+ "SyntaxError `invalid syntax (<string>, line 1)' trying to evaluate `$foo.bar.3.0'",
+ ]
+ assert str(e) in expect, e
+ else:
+ raise AssertionError, "did not catch expected UserError"
+
+ # Test that we handle type errors
+ try:
+ scons_subst("${NONE[2]}", env, gvars={'NONE':None})
+ except SCons.Errors.UserError, e:
+ expect = [
+ # Python 1.5, 2.2, 2.3, 2.4
+ "TypeError `unsubscriptable object' trying to evaluate `${NONE[2]}'",
+ # Python 2.5 and later
+ "TypeError `'NoneType' object is unsubscriptable' trying to evaluate `${NONE[2]}'",
+ ]
+ assert str(e) in expect, e
+ else:
+ raise AssertionError, "did not catch expected UserError"
+
+ try:
+ def func(a, b, c):
+ pass
+ scons_subst("${func(1)}", env, gvars={'func':func})
+ except SCons.Errors.UserError, e:
+ expect = [
+ # Python 1.5
+ "TypeError `not enough arguments; expected 3, got 1' trying to evaluate `${func(1)}'",
+ # Python 2.2, 2.3, 2.4, 2.5
+ "TypeError `func() takes exactly 3 arguments (1 given)' trying to evaluate `${func(1)}'"
+ ]
+ assert str(e) in expect, repr(str(e))
else:
raise AssertionError, "did not catch expected UserError"
@@ -933,8 +971,8 @@ class SubstTestCase(unittest.TestCase):
scons_subst_list('${foo.bar}', env, gvars={'foo':Foo()})
except SCons.Errors.UserError, e:
expect = [
- "Error trying to evaluate `${foo.bar}': bar",
- "Error trying to evaluate `${foo.bar}': Foo instance has no attribute 'bar'",
+ "AttributeError `bar' trying to evaluate `${foo.bar}'",
+ "AttributeError `Foo instance has no attribute 'bar'' trying to evaluate `${foo.bar}'",
]
assert str(e) in expect, e
else:
@@ -944,9 +982,12 @@ class SubstTestCase(unittest.TestCase):
try:
scons_subst_list('$foo.bar.3.0', env)
except SCons.Errors.UserError, e:
- expect1 = "Syntax error `invalid syntax' trying to evaluate `$foo.bar.3.0'"
- expect2 = "Syntax error `invalid syntax (line 1)' trying to evaluate `$foo.bar.3.0'"
- assert str(e) in [expect1, expect2], e
+ expect = [
+ "SyntaxError `invalid syntax' trying to evaluate `$foo.bar.3.0'",
+ "SyntaxError `invalid syntax (line 1)' trying to evaluate `$foo.bar.3.0'",
+ "SyntaxError `invalid syntax (<string>, line 1)' trying to evaluate `$foo.bar.3.0'",
+ ]
+ assert str(e) in expect, e
else:
raise AssertionError, "did not catch expected SyntaxError"
diff --git a/src/engine/SCons/Taskmaster.py b/src/engine/SCons/Taskmaster.py
index 7cdecf3..2ea3f0d 100644
--- a/src/engine/SCons/Taskmaster.py
+++ b/src/engine/SCons/Taskmaster.py
@@ -240,7 +240,7 @@ class Task:
for t in self.targets:
t.disambiguate().set_state(SCons.Node.executing)
for s in t.side_effects:
- s.set_state(SCons.Node.pending)
+ s.set_state(SCons.Node.executing)
def make_ready_current(self):
"""Mark all targets in a task ready for execution if any target
@@ -256,7 +256,7 @@ class Task:
self.out_of_date.append(t)
t.set_state(SCons.Node.executing)
for s in t.side_effects:
- s.set_state(SCons.Node.pending)
+ s.set_state(SCons.Node.executing)
make_ready = make_ready_current
@@ -268,7 +268,7 @@ class Task:
parents[p] = parents.get(p, 0) + 1
for t in self.targets:
for s in t.side_effects:
- if s.get_state() == SCons.Node.pending:
+ if s.get_state() == SCons.Node.executing:
s.set_state(SCons.Node.no_state)
for p in s.waiting_parents.keys():
if not parents.has_key(p):
@@ -515,12 +515,11 @@ class Taskmaster:
T.write(' waiting on unfinished children:\n %s\n' % c)
continue
- # Skip this node if it has side-effects that are
- # currently being built:
- side_effects = reduce(lambda E,N:
- E or N.get_state() == SCons.Node.executing,
- node.side_effects,
- 0)
+ # Skip this node if it has side-effects that are currently being
+ # built themselves or waiting for something else being built.
+ side_effects = filter(lambda N:
+ N.get_state() == SCons.Node.executing,
+ node.side_effects)
if side_effects:
map(lambda n, P=node: n.add_to_waiting_s_e(P), side_effects)
if S: S.side_effects = S.side_effects + 1
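The reduce-to-filter change matters because the subsequent `map` needs the executing side-effect nodes themselves, not just a boolean; a minimal sketch with stand-in node objects (hypothetical names, not the real Taskmaster classes):

```python
EXECUTING, NO_STATE = 'executing', 'no_state'

class Node:
    def __init__(self, state):
        self.state = state
    def get_state(self):
        return self.state

def executing_side_effects(side_effects):
    # filter() keeps the busy nodes themselves, where the old
    # reduce() collapsed the whole list to a single flag.
    return [n for n in side_effects if n.get_state() == EXECUTING]
```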
diff --git a/src/engine/SCons/TaskmasterTests.py b/src/engine/SCons/TaskmasterTests.py
index 8d71d71..4fefb9d 100644
--- a/src/engine/SCons/TaskmasterTests.py
+++ b/src/engine/SCons/TaskmasterTests.py
@@ -374,7 +374,7 @@ class TaskmasterTestCase(unittest.TestCase):
tm = SCons.Taskmaster.Taskmaster([n1,n2,n3,n4,n5])
t = tm.next_task()
assert t.get_target() == n1
- assert n4.state == SCons.Node.pending, n4.state
+ assert n4.state == SCons.Node.executing, n4.state
t.executed()
t.postprocess()
t = tm.next_task()
diff --git a/src/engine/SCons/Tool/386asm.py b/src/engine/SCons/Tool/386asm.py
index f2a221b..1a59525 100644
--- a/src/engine/SCons/Tool/386asm.py
+++ b/src/engine/SCons/Tool/386asm.py
@@ -37,11 +37,11 @@ __revision__ = "__FILE__ __REVISION__ __DATE__ __DEVELOPER__"
from SCons.Tool.PharLapCommon import addPharLapPaths
import SCons.Util
-import as
+as_module = __import__('as', globals(), locals(), [])
def generate(env):
"""Add Builders and construction variables for ar to an Environment."""
- as.generate(env)
+ as_module.generate(env)
env['AS'] = '386asm'
env['ASFLAGS'] = SCons.Util.CLVar('')
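The `__import__('as', ...)` change is needed because `as` became a reserved word (for `import ... as`), so a literal `import as` is a syntax error in later Pythons; `__import__` bypasses the parser. A generic sketch of the trick, using `importlib` as the modern spelling (`json` stands in here for the keyword-named module):

```python
import importlib

def import_by_name(name):
    # Equivalent of __import__(name, globals(), locals(), []):
    # load a module whose name cannot appear after `import`
    # because it collides with a language keyword.
    return importlib.import_module(name)
```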
diff --git a/src/engine/SCons/Tool/dvipdf.py b/src/engine/SCons/Tool/dvipdf.py
index 51dfae1..179159e 100644
--- a/src/engine/SCons/Tool/dvipdf.py
+++ b/src/engine/SCons/Tool/dvipdf.py
@@ -66,7 +66,7 @@ def generate(env):
env['DVIPDF'] = 'dvipdf'
env['DVIPDFFLAGS'] = SCons.Util.CLVar('')
- env['DVIPDFCOM'] = '$DVIPDF $DVIPDFFLAGS $SOURCE $TARGET'
+ env['DVIPDFCOM'] = 'cd ${TARGET.dir} && $DVIPDF $DVIPDFFLAGS ${SOURCE.file} ${TARGET.file}'
# Deprecated synonym.
env['PDFCOM'] = ['$DVIPDFCOM']
diff --git a/src/engine/SCons/Tool/dvips.py b/src/engine/SCons/Tool/dvips.py
index b987ea1..9996fc2 100644
--- a/src/engine/SCons/Tool/dvips.py
+++ b/src/engine/SCons/Tool/dvips.py
@@ -58,7 +58,9 @@ def generate(env):
env['DVIPS'] = 'dvips'
env['DVIPSFLAGS'] = SCons.Util.CLVar('')
- env['PSCOM'] = '$DVIPS $DVIPSFLAGS -o $TARGET $SOURCE'
+ # I'm not quite sure the directories and file names are right for build_dir.
+ # We need to run in the target directory so that latex can find the .eps
+ # files pulled in by \includegraphics.
+ env['PSCOM'] = 'cd ${TARGET.dir} && $DVIPS $DVIPSFLAGS -o ${TARGET.file} ${SOURCE.file}'
env['PSPREFIX'] = ''
env['PSSUFFIX'] = '.ps'
diff --git a/src/engine/SCons/Tool/gas.py b/src/engine/SCons/Tool/gas.py
index 3b35424..e44a28d 100644
--- a/src/engine/SCons/Tool/gas.py
+++ b/src/engine/SCons/Tool/gas.py
@@ -33,13 +33,13 @@ selection method.
__revision__ = "__FILE__ __REVISION__ __DATE__ __DEVELOPER__"
-import as
+as_module = __import__('as', globals(), locals(), [])
assemblers = ['as', 'gas']
def generate(env):
"""Add Builders and construction variables for as to an Environment."""
- as.generate(env)
+ as_module.generate(env)
env['AS'] = env.Detect(assemblers) or 'as'
diff --git a/src/engine/SCons/Tool/latex.py b/src/engine/SCons/Tool/latex.py
index 72371b3..c4934c3 100644
--- a/src/engine/SCons/Tool/latex.py
+++ b/src/engine/SCons/Tool/latex.py
@@ -64,7 +64,7 @@ def generate(env):
env['LATEX'] = 'latex'
env['LATEXFLAGS'] = SCons.Util.CLVar('')
- env['LATEXCOM'] = '$LATEX $LATEXFLAGS $SOURCE'
+ env['LATEXCOM'] = 'cd ${TARGET.dir} && $LATEX $LATEXFLAGS ${SOURCE.file}'
env['LATEXRETRIES'] = 3
def exists(env):
diff --git a/src/engine/SCons/Tool/mslink.py b/src/engine/SCons/Tool/mslink.py
index 4458183..4b90e3d 100644
--- a/src/engine/SCons/Tool/mslink.py
+++ b/src/engine/SCons/Tool/mslink.py
@@ -222,8 +222,14 @@ def generate(env):
env['LDMODULECOM'] = compositeLinkAction
def exists(env):
+ platform = env.get('PLATFORM', '')
if SCons.Tool.msvs.is_msvs_installed():
# there's at least one version of MSVS installed.
return 1
- else:
+ elif platform in ('win32', 'cygwin'):
+ # Only explicitly search for a 'link' executable on Windows
+ # systems. Some other systems (e.g. Ubuntu Linux) have an
+ # executable named 'link' and we don't want that to make SCons
+ # think Visual Studio is installed.
return env.Detect('link')
+ return None
diff --git a/src/engine/SCons/Tool/msvc.xml b/src/engine/SCons/Tool/msvc.xml
index 1c0f3fa..be155bd 100644
--- a/src/engine/SCons/Tool/msvc.xml
+++ b/src/engine/SCons/Tool/msvc.xml
@@ -55,6 +55,7 @@ The default value expands to the appropriate
Microsoft Visual C++ command-line options
when the &cv-PCH; construction variable is set.
</summary>
+</cvar>
<cvar name="CCPDBFLAGS">
<summary>
@@ -93,6 +94,7 @@ the &cv-CCPDBFLAGS; variable as follows:
env['CCPDBFLAGS'] = '/Zi /Fd${TARGET}.pdb'
</example>
</summary>
+</cvar>
<cvar name="PCH">
<summary>
diff --git a/src/engine/SCons/Tool/pdflatex.py b/src/engine/SCons/Tool/pdflatex.py
index acf67b2..97420a8 100644
--- a/src/engine/SCons/Tool/pdflatex.py
+++ b/src/engine/SCons/Tool/pdflatex.py
@@ -67,7 +67,7 @@ def generate(env):
env['PDFLATEX'] = 'pdflatex'
env['PDFLATEXFLAGS'] = SCons.Util.CLVar('')
- env['PDFLATEXCOM'] = '$PDFLATEX $PDFLATEXFLAGS $SOURCE'
+ env['PDFLATEXCOM'] = 'cd ${TARGET.dir} && $PDFLATEX $PDFLATEXFLAGS ${SOURCE.file}'
env['LATEXRETRIES'] = 3
def exists(env):
diff --git a/src/engine/SCons/Tool/pdftex.py b/src/engine/SCons/Tool/pdftex.py
index ddf5a23..e740fac 100644
--- a/src/engine/SCons/Tool/pdftex.py
+++ b/src/engine/SCons/Tool/pdftex.py
@@ -82,17 +82,13 @@ def generate(env):
env['PDFTEX'] = 'pdftex'
env['PDFTEXFLAGS'] = SCons.Util.CLVar('')
- env['PDFTEXCOM'] = '$PDFTEX $PDFTEXFLAGS $SOURCE'
+ env['PDFTEXCOM'] = 'cd ${TARGET.dir} && $PDFTEX $PDFTEXFLAGS ${SOURCE.file}'
# Duplicate from latex.py. If latex.py goes away, then this is still OK.
env['PDFLATEX'] = 'pdflatex'
env['PDFLATEXFLAGS'] = SCons.Util.CLVar('')
- env['PDFLATEXCOM'] = '$PDFLATEX $PDFLATEXFLAGS $SOURCE'
+ env['PDFLATEXCOM'] = 'cd ${TARGET.dir} && $PDFLATEX $PDFLATEXFLAGS ${SOURCE.file}'
env['LATEXRETRIES'] = 3
- env['BIBTEX'] = 'bibtex'
- env['BIBTEXFLAGS'] = SCons.Util.CLVar('')
- env['BIBTEXCOM'] = '$BIBTEX $BIBTEXFLAGS ${SOURCE.base}'
-
def exists(env):
return env.Detect('pdftex')
diff --git a/src/engine/SCons/Tool/swig.py b/src/engine/SCons/Tool/swig.py
index ed066a7..a8e12a2 100644
--- a/src/engine/SCons/Tool/swig.py
+++ b/src/engine/SCons/Tool/swig.py
@@ -49,19 +49,20 @@ def swigSuffixEmitter(env, source):
else:
return '$SWIGCFILESUFFIX'
-_reSwig = re.compile(r"%include\s+(\S+)")
+_reInclude = re.compile(r'%include\s+(\S+)')
+_reModule = re.compile(r'%module\s+(.+)')
def recurse(path, searchPath):
- global _reSwig
+ global _reInclude
f = open(path)
try: contents = f.read()
finally: f.close()
found = []
# Better code for when we drop Python 1.5.2.
- #for m in _reSwig.finditer(contents):
+ #for m in _reInclude.finditer(contents):
# fname = m.group(1)
- for fname in _reSwig.findall(contents):
+ for fname in _reInclude.findall(contents):
for dpath in searchPath:
absPath = os.path.join(dpath, fname)
if os.path.isfile(absPath):
@@ -88,7 +89,7 @@ def _swigEmitter(target, source, env):
f = open(src)
try:
for l in f.readlines():
- m = re.match("%module (.+)", l)
+ m = _reModule.match(l)
if m:
mname = m.group(1)
finally:
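The swig.py change splits one regex into two: `_reInclude` for dependency scanning and `_reModule` for extracting the module name, with the latter precompiled instead of rebuilt with `re.match` on every line. A small self-contained demonstration on an illustrative interface file (the sample content is made up):

```python
import re

# The two patterns introduced in the hunk above.
_reInclude = re.compile(r'%include\s+(\S+)')
_reModule = re.compile(r'%module\s+(.+)')

sample = """\
%module example
%include "std_string.i"
%include exception.i
"""

# findall() collects every %include operand for dependency scanning.
includes = _reInclude.findall(sample)

# match() anchors at the start of each line, mirroring the per-line
# scan in _swigEmitter.
module = None
for line in sample.splitlines():
    m = _reModule.match(line)
    if m:
        module = m.group(1)
```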
diff --git a/src/engine/SCons/Tool/tex.py b/src/engine/SCons/Tool/tex.py
index d613958..0329667 100644
--- a/src/engine/SCons/Tool/tex.py
+++ b/src/engine/SCons/Tool/tex.py
@@ -43,7 +43,13 @@ import SCons.Node.FS
import SCons.Util
warning_rerun_re = re.compile("^LaTeX Warning:.*Rerun", re.MULTILINE)
-undefined_references_re = re.compile("^LaTeX Warning:.*undefined references", re.MULTILINE)
+
+rerun_citations_str = "^LaTeX Warning:.*\n.*Rerun to get citations correct"
+rerun_citations_re = re.compile(rerun_citations_str, re.MULTILINE)
+
+undefined_references_str = '(^LaTeX Warning:.*undefined references)|(^Package \w+ Warning:.*undefined citations)'
+undefined_references_re = re.compile(undefined_references_str, re.MULTILINE)
+
openout_aux_re = re.compile(r"\\openout.*`(.*\.aux)'")
# An Action sufficient to build any generic tex file.
@@ -63,7 +69,8 @@ def InternalLaTeXAuxAction(XXXLaTeXAction, target = None, source= None, env=None
"""A builder for LaTeX files that checks the output in the aux file
and decides how many times to use LaTeXAction, and BibTeXAction."""
- basename, ext = SCons.Util.splitext(str(target[0]))
+ basename = SCons.Util.splitext(str(source[0]))[0]
+ basedir = os.path.split(str(source[0]))[0]
# Run LaTeX once to generate a new aux file.
XXXLaTeXAction(target, source, env)
@@ -82,11 +89,11 @@ def InternalLaTeXAuxAction(XXXLaTeXAction, target = None, source= None, env=None
# Now decide if bibtex will need to be run.
for auxfilename in auxfiles:
- if os.path.exists(auxfilename):
- content = open(auxfilename, "rb").read()
+ if os.path.exists(os.path.join(basedir, auxfilename)):
+ content = open(os.path.join(basedir, auxfilename), "rb").read()
if string.find(content, "bibdata") != -1:
bibfile = env.fs.File(basename)
- BibTeXAction(None, bibfile, env)
+ BibTeXAction(bibfile, bibfile, env)
break
# Now decide if makeindex will need to be run.
@@ -94,7 +101,13 @@ def InternalLaTeXAuxAction(XXXLaTeXAction, target = None, source= None, env=None
if os.path.exists(idxfilename):
idxfile = env.fs.File(basename)
# TODO: if ( idxfile has changed) ...
- MakeIndexAction(None, idxfile, env)
+ MakeIndexAction(idxfile, idxfile, env)
+ XXXLaTeXAction(target, source, env)
+
+ # Now decide if latex will need to be run again due to table of contents.
+ tocfilename = basename + '.toc'
+ if os.path.exists(tocfilename):
+ # TODO: if ( tocfilename has changed) ...
XXXLaTeXAction(target, source, env)
# Now decide if latex needs to be run yet again.
@@ -104,6 +117,7 @@ def InternalLaTeXAuxAction(XXXLaTeXAction, target = None, source= None, env=None
break
content = open(logfilename, "rb").read()
if not warning_rerun_re.search(content) and \
+ not rerun_citations_re.search(content) and \
not undefined_references_re.search(content):
break
XXXLaTeXAction(target, source, env)
@@ -135,9 +149,12 @@ def TeXLaTeXFunction(target = None, source= None, env=None):
def tex_emitter(target, source, env):
base = SCons.Util.splitext(str(source[0]))[0]
target.append(base + '.aux')
+ env.Precious(base + '.aux')
target.append(base + '.log')
for f in source:
content = f.get_contents()
+ if string.find(content, r'\tableofcontents') != -1:
+ target.append(base + '.toc')
if string.find(content, r'\makeindex') != -1:
target.append(base + '.ilg')
target.append(base + '.ind')
@@ -151,7 +168,10 @@ def tex_emitter(target, source, env):
if os.path.exists(logfilename):
content = open(logfilename, "rb").read()
aux_files = openout_aux_re.findall(content)
- target.extend(filter(lambda f, b=base+'.aux': f != b, aux_files))
+ aux_files = filter(lambda f, b=base+'.aux': f != b, aux_files)
+ dir = os.path.split(base)[0]
+ aux_files = map(lambda f, d=dir: d+os.sep+f, aux_files)
+ target.extend(aux_files)
return (target, source)
@@ -194,21 +214,21 @@ def generate(env):
env['TEX'] = 'tex'
env['TEXFLAGS'] = SCons.Util.CLVar('')
- env['TEXCOM'] = '$TEX $TEXFLAGS $SOURCE'
+ env['TEXCOM'] = 'cd ${TARGET.dir} && $TEX $TEXFLAGS ${SOURCE.file}'
# Duplicate from latex.py. If latex.py goes away, then this is still OK.
env['LATEX'] = 'latex'
env['LATEXFLAGS'] = SCons.Util.CLVar('')
- env['LATEXCOM'] = '$LATEX $LATEXFLAGS $SOURCE'
+ env['LATEXCOM'] = 'cd ${TARGET.dir} && $LATEX $LATEXFLAGS ${SOURCE.file}'
env['LATEXRETRIES'] = 3
env['BIBTEX'] = 'bibtex'
env['BIBTEXFLAGS'] = SCons.Util.CLVar('')
- env['BIBTEXCOM'] = '$BIBTEX $BIBTEXFLAGS ${SOURCE.base}'
+ env['BIBTEXCOM'] = 'cd ${TARGET.dir} && $BIBTEX $BIBTEXFLAGS ${SOURCE.filebase}'
env['MAKEINDEX'] = 'makeindex'
env['MAKEINDEXFLAGS'] = SCons.Util.CLVar('')
- env['MAKEINDEXCOM'] = '$MAKEINDEX $MAKEINDEXFLAGS $SOURCES'
+ env['MAKEINDEXCOM'] = 'cd ${TARGET.dir} && $MAKEINDEX $MAKEINDEXFLAGS ${SOURCE.file}'
def exists(env):
return env.Detect('tex')
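The new `rerun_citations_re` pattern spans two log lines (the warning text and the "Rerun" advice), and `undefined_references_re` now also catches package-level undefined-citation warnings. A quick check of both patterns against illustrative log excerpts (the log lines are made up, modeled on typical LaTeX output):

```python
import re

# The patterns from the tex.py hunk above.
rerun_citations_re = re.compile(
    "^LaTeX Warning:.*\n.*Rerun to get citations correct",
    re.MULTILINE)
undefined_references_re = re.compile(
    r'(^LaTeX Warning:.*undefined references)'
    r'|(^Package \w+ Warning:.*undefined citations)',
    re.MULTILINE)

# Two-line warning: the message and its "Rerun" advice.
log_cites = ("LaTeX Warning: Label(s) may have changed.\n"
             "Rerun to get citations correct.\n")
needs_cite_rerun = bool(rerun_citations_re.search(log_cites))

# Single-line undefined-references warning.
log_refs = ("LaTeX Warning: Citation `knuth' on page 1 undefined.\n"
            "LaTeX Warning: There were undefined references.\n")
needs_ref_rerun = bool(undefined_references_re.search(log_refs))
```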
diff --git a/src/engine/SCons/Warnings.py b/src/engine/SCons/Warnings.py
index 123f36b..27614bf 100644
--- a/src/engine/SCons/Warnings.py
+++ b/src/engine/SCons/Warnings.py
@@ -54,6 +54,9 @@ class DuplicateEnvironmentWarning(Warning):
class MissingSConscriptWarning(Warning):
pass
+class NoMetaclassSupportWarning(Warning):
+ pass
+
class NoParallelSupportWarning(Warning):
pass
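The new `NoMetaclassSupportWarning` class slots into the warning hierarchy, where enabling a base class enables every subclass beneath it. A minimal sketch of that dispatch idea (this is not SCons's actual Warnings implementation; the helper names are invented for illustration):

```python
# Sketch: a class-hierarchy warning framework. A subclass such as
# NoMetaclassSupportWarning needs no registration beyond existing;
# issubclass() against the enabled classes does the dispatch.
class Warning(Exception):
    pass

class NoMetaclassSupportWarning(Warning):
    pass

_enabled = []

def enable(klass):
    _enabled.insert(0, klass)

def warn(klass, message):
    # Fire only if the warning's class (or an ancestor) is enabled.
    for enabled in _enabled:
        if issubclass(klass, enabled):
            return "%s: %s" % (klass.__name__, message)
    return None

enable(Warning)
result = warn(NoMetaclassSupportWarning, "no metaclass support")
```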