Commit messages

Lexer::ReadIdent() now sets last_token_ before returning, like
Lexer::ReadEvalString() does. As a result, all "expected identifier"
errors from callers of ReadIdent (the pool parser, the rule parser, the
let parser, and the code that parses the rule name after the ':' in a
build line) now point the "^ near here" caret at the offending text
instead of at the previous last_token_.
According to manifest_parser_perftest, this change is perf-neutral.
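A minimal sketch of the shape of that change, assuming a simplified
hand-rolled scanner rather than ninja's actual re2c-generated lexer; the
member names follow the commit message, everything else is illustrative:

```cpp
#include <cctype>
#include <cstdio>
#include <string>

struct Lexer {
  const char* ofs_ = nullptr;         // current read position in the manifest
  const char* last_token_ = nullptr;  // where the most recent token started

  // Reads an identifier at ofs_. last_token_ is recorded in both the success
  // and the failure path, so a later "expected identifier" diagnostic can
  // point its "^ near here" caret at this spot rather than at the token
  // before it.
  bool ReadIdent(std::string* out) {
    const char* start = ofs_;
    const char* p = start;
    while (*p && (std::isalnum(static_cast<unsigned char>(*p)) ||
                  *p == '_' || *p == '-' || *p == '.'))
      ++p;
    last_token_ = start;  // the fix: set before returning, like ReadEvalString()
    if (p == start)
      return false;       // not an identifier; the caller reports the error here
    out->assign(start, p - start);
    ofs_ = p;
    return true;
  }
};

int main() {
  Lexer lex;
  const char* input = "cflags = -O2";
  lex.ofs_ = input;
  std::string ident;
  if (lex.ReadIdent(&ident))
    std::printf("ident '%s' starts at offset %td\n",
                ident.c_str(), lex.last_token_ - input);
  return 0;
}
```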
This reverts commit 904c9610fe66c4f4bd63a07d6f057c8603d24394.
That commit caused issue #380; this revert fixes it and also makes the
test from the previous commit pass.
Make StringPiece data members private.
Signed-off-by: Thiago Farina <tfarina@chromium.org>
It turns out to be trickier than expected to process carriage returns
correctly, and trickier still to give a nice error message when they are
encountered. But prior to this patch the failure was silent: we would
attempt to examine paths that accidentally contained an embedded \r.
For now, fix all regexes of the form [^...] to include \r in the
excluded character class, and assert that we get a (vague) lexer error
near the problem.
In the future perhaps we can open manifest files in text mode on Windows,
or just disallow Windows-style CRLF line endings in the manual.
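A rough illustration of the fix being described, using a plain C++
predicate to stand in for ninja's re2c character classes (the real rules
live in the lexer sources; this loop and its input are only an example):

```cpp
#include <cstdio>
#include <cstring>

// Stands in for a [^...] character class used for path characters.
// Before the fix, '\r' was missing from the excluded set, so a path at the
// end of a CRLF-terminated line silently kept a trailing carriage return.
static bool IsPathChar(char c) {
  switch (c) {
    case ' ': case ':': case '$': case '|':
    case '\n':
    case '\r':  // the fix: CR now ends the token too
    case '\0':
      return false;
    default:
      return true;
  }
}

int main() {
  const char* line = "build out.o: cc in.cc\r\n";  // CRLF line ending
  const char* start = std::strstr(line, "in.cc");
  const char* p = start;
  while (IsPathChar(*p)) ++p;
  // Prints 'in.cc'; without the '\r' case it would print 'in.cc\r'.
  std::printf("'%.*s'\n", static_cast<int>(p - start), start);
  return 0;
}
```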
The lexer already mostly allowed this (bytes outside the 7-bit ASCII
range), except that chars >127 were being interpreted as negative
indexes into the lexer table.
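For illustration, a small standalone example of this bug class (the
table name and surrounding code are hypothetical, not ninja's): indexing
with a plain char sign-extends bytes above 127 to negative values when
char is signed, while converting through unsigned char keeps the index
in 0..255.

```cpp
#include <cstdio>

static int kCharClassTable[256];  // hypothetical lexer lookup table

int Classify(char c) {
  // Buggy version: with a signed char, a byte such as 0xC3 (a common lead
  // byte of UTF-8 sequences) becomes -61 and reads out of bounds:
  //   return kCharClassTable[c];

  // Fixed version: widen through unsigned char so values >127 index the
  // table correctly.
  return kCharClassTable[static_cast<unsigned char>(c)];
}

int main() {
  char byte = static_cast<char>(0xC3);
  std::printf("class = %d\n", Classify(byte));
  return 0;
}
```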
'$:' is now a valid escape sequence; it expands to ':'.
Also update the error messages and show a hint when something goes wrong.
Needed for Windows drive names.
For instance, to configure with gtest:
python configure.py --with-gtest=c$:\gtest-1.6.0
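A hedged sketch of the observable behavior: ninja's real handling lives
in the lexer's $-escape rules, but assuming the usual '$$' and '$ '
escapes alongside '$:', the expansion looks roughly like this:

```cpp
#include <cstdio>
#include <string>

// Illustrative only: expands the $-escapes relevant here so that the colon
// of a Windows drive letter written as "$:" survives into the final path
// instead of being read as the build-line separator.
std::string ExpandDollarEscapes(const std::string& in) {
  std::string out;
  for (size_t i = 0; i < in.size(); ++i) {
    if (in[i] == '$' && i + 1 < in.size() &&
        (in[i + 1] == ':' || in[i + 1] == '$' || in[i + 1] == ' ')) {
      out.push_back(in[i + 1]);  // "$:" -> ":", "$$" -> "$", "$ " -> " "
      ++i;
    } else {
      out.push_back(in[i]);
    }
  }
  return out;
}

int main() {
  // Prints "c:\gtest-1.6.0": the drive-letter colon survives.
  std::printf("%s\n", ExpandDollarEscapes("c$:\\gtest-1.6.0").c_str());
  return 0;
}
```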
This means that indented blank lines are skipped without causing errors.
Now it's consistent with other errors.
Fixes part of issue #187.
Indented comments are ignored rather than causing errors.
- Delete the old "Tokenizer" code.
- Write separate tests for the lexer, distinct from the parser tests.
- Switch the parser to use the new code.
- The new lexer's error output includes file:line numbers, so your
  editor (e.g. Emacs) can jump straight to the syntax error.
- The EvalEnv ($-interpolation) code is now part of the lexer as well.