Copper is a parser and scanner generator that generates integrated LR parsers and context-aware scanners from language specifications based on context-free grammars and regular expressions. Copper's distinguishing feature is that the generated parser provides contextual information to the scanner, in the form of the current LR parse state, when the scanner is called to return the next token. The scanner uses this information to ensure that it returns only tokens that are valid for the current state, that is, tokens for terminals whose entry in the parse table for the current state is shift, reduce, or accept (but not error). Because context-aware scanners are more discriminating than scanners that lack context, they allow simpler grammar specifications that are more likely to fall within the desired LALR(1) grammar class.
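The idea can be illustrated with a small sketch. This is not Copper's actual API; the names (`Terminal`, `scan`, the keyword-preference tie-break) are hypothetical simplifications. The point is that the same lexeme can scan as different terminals depending on which terminals the current parse state admits:

```java
import java.util.EnumSet;
import java.util.Set;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Hypothetical sketch of context-aware scanning; not Copper's real interface.
public class ContextAwareScanDemo {
    enum Terminal { KW_ASPECT, ID }  // declaration order doubles as keyword preference here

    // Regular expressions for each terminal; KW_ASPECT and ID overlap on "aspect".
    static Pattern pattern(Terminal t) {
        switch (t) {
            case KW_ASPECT: return Pattern.compile("aspect");
            default:        return Pattern.compile("[A-Za-z_][A-Za-z0-9_]*");
        }
    }

    // Scan the next token, trying only the terminals valid in the current parse state.
    // Longest match wins; ties go to the terminal declared first (a stand-in for the
    // lexical-precedence declarations a real specification would use).
    static Terminal scan(String input, Set<Terminal> validInState) {
        Terminal best = null;
        int bestLen = -1;
        for (Terminal t : validInState) {
            Matcher m = pattern(t).matcher(input);
            if (m.lookingAt() && m.end() > bestLen) {
                best = t;
                bestLen = m.end();
            }
        }
        return best;
    }

    public static void main(String[] args) {
        // In a state where only identifiers may follow, "aspect" scans as ID.
        System.out.println(scan("aspect", EnumSet.of(Terminal.ID)));
        // In a state where the keyword is also valid, it scans as KW_ASPECT.
        System.out.println(scan("aspect", EnumSet.of(Terminal.KW_ASPECT, Terminal.ID)));
    }
}
```

A conventional scanner, lacking the valid-terminal set, would have to pick one reading of "aspect" once and for all, which is exactly the kind of conflict that forces grammar contortions or hand-written workarounds.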
One language for which this approach is useful is AspectJ, an extension that adds aspect constructs to Java. Previously, AspectJ could only be parsed by hand-coding a scanner (sacrificing declarativeness) or by using a GLR-based parsing tool (sacrificing determinism). We have adapted a declarative AspectJ grammar that Copper can parse deterministically; the source code of this grammar is linked from the downloads page.
Our GPCE 2007 paper Context-Aware Scanning for Parsing Extensible Languages provides a detailed discussion of parser-based context-aware scanning.
Copper can also subject a language extension to a test guaranteeing that all extensions that pass the test can be composed together without any parse-table conflicts. This test is documented in our PLDI 2009 paper Verifiable Composition of Deterministic Grammars.
Copper is written in Java and generates parsers and scanners written in Java. It is used by our attribute grammar system Silver and distributed with it. It is also available as a stand-alone package.
Current versions of Copper are maintained on GitHub. We maintain downloads and information here for a legacy version, 0.5, used with older versions of Silver.
Development of Copper versions 0.6 and 0.7 was supported by funding from Adventium Labs.
Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation.