Generating Parsers with ANTLR 3

By Dean Wette, OCI Principal Software Engineer

November 2007


Introduction

Someone hands you a file containing information - data or metadata - and instructs you to implement the facility to read it into a program for processing. As a Java developer, what do you do? In the past, using Java 1.1 or 1.2, parsing involved general purpose classes and language facilities like String, StringBuffer, StringTokenizer, array traversal, and so on. Starting with version 1.4, the Java regular expression API added power and flexibility to the process. Java 5 improved the situation further with the Scanner API. The common thread in all of this is that we are constructing a brute force reader: we consume characters from the input and try to make sense of them within some structural context.
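To make the contrast concrete, here is a minimal sketch of the brute-force style using the regular expression API; the KEY=VALUE property format it scans for, and the class name, are hypothetical:

  import java.util.regex.Matcher;
  import java.util.regex.Pattern;

  public class BruteForceReader {
      // a hand-rolled pattern for KEY=VALUE pairs; every change to the input
      // format means revisiting code like this
      private static final Pattern PROPERTY = Pattern.compile("([\\w-]+)=(\\S+)");

      public static void main(String[] args) {
          Matcher m = PROPERTY.matcher("UNITS-IN=LBS");
          if (m.find()) {
              System.out.println("key = " + m.group(1) + ", value = " + m.group(2));
          }
      }
  }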

There is a better way, and one that is more readily adaptable and tolerant to change. Treat the structure of the input as a language, define a grammar for it, and create a parser based on the grammar. A grammar describes a language, and many kinds of structure can be thought of in terms of a language; once the concepts of language description are understood, defining grammars becomes relatively simple. Writing the parser is still the hard part. Fortunately, tools exist that automate the generation of a parser from the grammar one defines for a language. ANTLR 3 is one such tool.

ANTLR stands for ANother Tool for Language Recognition. It provides a Java-based framework for generating recognizers, parsers, and translators from a grammar. Terence Parr, a professor of computer science at the University of San Francisco, created ANTLR and continues to develop it actively. ANTLR - along with the ANTLRWorks IDE, documentation, contributions, and a wiki - is provided free to users at http://www.antlr.org. Beyond the official web site, Parr is also the author of The Definitive ANTLR Reference, an excellent book on ANTLR (also available as a PDF file).

Domain specific languages

Domain specific languages (DSLs) represent the class of programming languages that solve specific problems, unlike general purpose languages such as C++ or Java. DSLs are higher level than general purpose languages and run the gamut from scripting languages to simple configuration file formats. Writing a grammar for a DSL is easier than writing your own brute force parser, and when the DSL changes, ANTLR makes it very easy to regenerate the code for parsing and translating it.

ANTLR grammars are based on Extended Backus-Naur Form (EBNF), a notation used to describe recursive grammar rule definitions. The basic form of EBNF is

A : B

where A is a symbol that is replaced by B. If B is itself a rule, it is replaced by its right-hand side, and so on, until all non-terminal symbols (rule labels, etc.) are replaced by terminal symbols (literals).

The following grammar from the ANTLR Quick Starter guide illustrates a simple example of EBNF:

  utterance : greeting | exclamation;
  greeting : interjection subject;
  exclamation : 'Hooray!';
  interjection : 'Hello';
  subject : 'World!';
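Expanding the start rule step by step makes the substitution concrete; each step replaces one non-terminal with its definition:

  utterance
  => greeting
  => interjection subject
  => 'Hello' subject
  => 'Hello' 'World!'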

Notice that rules to the right of the colon - the non-terminal symbols - get replaced by their definitions repeatedly until only the literals (terminals) remain. A somewhat more complicated example comes from the Java Language Specification. While it is not a valid ANTLR grammar, it serves as another good example of BNF.

  Statement:
      Block
      ...
      StatementExpression ;
      ...

  Block:
      { BlockStatements }

  BlockStatements:
      { BlockStatement }

  BlockStatement:
      LocalVariableDeclarationStatement
      ClassOrInterfaceDeclaration
      Statement
This case illustrates the recursive nature of EBNF. The rule for Statement above is defined in terms of itself within the BlockStatement rule. As with all recursion, Statement must ultimately resolve to a base case that is replaced by a terminal symbol.
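A hypothetical two-rule sketch (not from the JLS) shows the same pattern in miniature; block and statement refer to each other, and the literal 'pass' provides the base case that ends the recursion:

  statement : block | 'pass' ;      // 'pass' is the terminal base case
  block     : '{' statement* '}' ;  // statement appears inside its own (indirect) definition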

An Example

An application I support that analyzes input data requires a metadata file describing what that input contains. The metadata file serves as the bridge that gives meaning to the analysis workflow implemented by the application. The details are unimportant to this discussion. The application began life using XML as the metadata format. In the early days the format was fairly straightforward, and the application included a tool to generate the metadata from sample input data. Using that generated metadata, users added some additional detail that couldn't be extracted from the data itself. As the application evolved, the XML metadata grew increasingly complex. Finally the users rebelled, stating that the format had become inefficient and too difficult to handle effectively. To make a long story short - rather than spending time creating something else they might not like - the users were asked what they would like to see. They provided a configuration file format based on a legacy tool they had used in the past, similar to this example:

  # this is repeated any number of times
  DEFINE-GROUP NAME=BODF TYPE=VMT COMPONENT=BODY SIDE=ALL
  VARIABLES
      X UNITS-IN=LBS UNITS-OUT=KIPS TEXT="X"
      Y UNITS-IN=LBS UNITS-OUT=KIPS TEXT="Y"
      Z UNITS-IN=LBS UNITS-OUT=KIPS TEXT="Z"
  END-VARIABLES
  STATIONS TEXT="Body Station (in)"
      1.0 STAT00
      1.1 STAT01
      1.2 STAT02
      1.3 STAT03
  END-STATIONS
  END-DEFINE-GROUP

The following grammar defines the rules used to recognize the new metadata format. Rule names to the left of the colons are defined by sequences on the right consisting of other rules and tokens.

  grammar MetaDef;

  metadef : group+ EOF;
  group : 'DEFINE-GROUP'
          property* NEWLINE+
          variables
          stations
          'END-DEFINE-GROUP' NEWLINE+;
  variables : 'VARIABLES' NEWLINE+
          variable*
          'END-VARIABLES' NEWLINE+;
  variable : STRING property* NEWLINE+ ;
  stations : 'STATIONS' property* NEWLINE+
          station*
          'END-STATIONS' NEWLINE+;
  station : STRING NEWLINE+;
  property : STRING EQ STRING;

  // lexer rules - must start with an uppercase letter
  EQ : '=';
  // the fragment keyword marks rules used only by other lexer rules;
  // without it, a one-character input such as X would be tokenized
  // as UC instead of STRING
  fragment DIGITS : '0'..'9' ;
  fragment LC : 'a'..'z' ;
  fragment UC : 'A'..'Z' ;
  NEWLINE : '\n'|'\r'('\n')? ;

  STRING : (LC|UC|DIGITS|'_'|'-'|','|'.')+ | ('"' (~'"')* '"');
  WS : (' '|'\t')+ { $channel=HIDDEN; } ;

The labels in all uppercase define lexer rules. Grammar recognition involves two steps for our example, and possibly more for grammars of greater complexity. The lexer - the process of lexical analysis - converts sequences of input characters into sequences of tokens. The parser then analyzes the sequence of tokens to determine the grammatical structure.
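To watch the lexer's half of the job in isolation, one can run it without the parser and dump the token stream. This is a minimal sketch, not part of the original example; it assumes the MetaDefLexer class ANTLR generates from the grammar above and uses only the ANTLR 3 runtime API:

  import org.antlr.runtime.ANTLRStringStream;
  import org.antlr.runtime.CommonTokenStream;
  import org.antlr.runtime.Token;

  public class TokenDump {
      public static void main(String[] args) {
          ANTLRStringStream input = new ANTLRStringStream("X UNITS-IN=LBS\n");
          MetaDefLexer lexer = new MetaDefLexer(input);
          CommonTokenStream tokens = new CommonTokenStream(lexer);
          // getTokens() drives the lexer to the end of the input and buffers
          // every token, including hidden-channel whitespace
          for (Object o : tokens.getTokens()) {
              Token t = (Token) o;
              System.out.println(t.getType() + " : " + t.getText());
          }
      }
  }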

However, the example given above is not enough to provide a functional solution; it only performs input recognition. The lexer and parser ANTLR generates for this grammar automatically emit warnings for input that doesn't conform to the grammar, but they don't translate the input into anything usable beyond verifying correctness. Generating output requires another step, involving one or more of several possibilities:

  1. Generate an Abstract Syntax Tree (AST), which is often needed for situations that require multiple parsing passes.
  2. Use the StringTemplate utility included with ANTLR 3.
  3. Embed actions into the grammar that ANTLR includes in the generated parser - the approach taken for our example.

Adding ANTLR actions to the grammar provides the translation and interpretation component of the overall workflow; otherwise, we only get a recognizer for the DSL. Embedding Java statements in the grammar directs the ANTLR generator to include that code in the generated lexer and parser classes. This is similar in concept to how JSP scriptlet code becomes part of the generated servlet code.

  grammar MetaDef2;

  @header {
  package com.ociweb.dsl;
  import java.util.List;
  import java.util.ArrayList;
  import java.util.Set;
  import java.util.LinkedHashSet;
  import org.apache.log4j.Logger;
  }

  @lexer::header {
  package com.ociweb.dsl;
  }

  @members {
  private static Logger logger =
      Logger.getLogger(MetaDef2Parser.class);
  private List<Group> groups = new ArrayList<Group>();
  private List<Property> controlData = new ArrayList<Property>();

  public List<Group> getGroups() {
      return groups;
  }
  } // end of @members

The three sections above illustrate the use of global actions. The @header block defines code that appears in the parser before the class definition; package and import statements belong here. The @lexer::header block works identically, but for the lexer class ANTLR generates. The @members block encloses class fields and member methods. Use this section to initialize data used by rule bodies and to define supporting methods, including public methods callable from parser client code.

The rule definitions appear next. Actions found within the rules - any code appearing in blocks enclosed by braces - become local code in the parser methods ANTLR generates to handle each rule. ANTLR inserts @init action code after its own initialization code, but before the code it generates for the rule body. @after actions can also be added to specify code that appears after the code generated for the rule body; they are not used in the example, but a brief sketch follows the listing below.

  // start rule
  metadef : (group)+ EOF;

  group
  @init {
      Group group = new Group();
      groups.add(group);
  }
      : 'DEFINE-GROUP'
        { System.out.println("begin group"); }
        (p=property { group.addProperty(p); })* NEWLINE+
        // can also use group.addProperty($property)
        // and not assign to p
        vars=variables
        { group.setVariables(vars); }
        stats=stations { group.setStations(stats); }
        'END-DEFINE-GROUP' NEWLINE+
        { System.out.println("end group"); } ;
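@after actions use the same placement as @init. The following hypothetical rule is not part of the example grammar; it assumes the logger field declared in the @members block and simply logs rule entry and exit:

  sample
  @init  { logger.debug("entering sample"); }
  @after { logger.debug("leaving sample"); }
      : 'SAMPLE' NEWLINE+ ;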

ANTLR also supports rule parameters and rule return values for actions. Rule return values create user-defined attributes, and actions in the calling rule use these attributes via labels. Notice how the action in the rule above assigns the return value of the variables rule reference to a label (vars) used by an action within the rule body. The variables rule itself instantiates the return value within a rule action of its own, illustrated in the full listing below. Rule parameters are not used in the example; when a rule declares them, they effectively create user-defined attributes available to the rule body.
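As a hypothetical illustration (this rule is not part of the MetaDef2 grammar), a parameter is declared in square brackets after the rule name, and a calling rule supplies the argument the same way. This variant of the station rule receives its owner as a parameter instead of returning a value:

  // a caller writes station[stats] to pass the argument
  station[Stations owner]
      : s=STRING { $owner.addStation(new Station($s.text)); }
        NEWLINE+ ;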

  variables returns [Set<Variable> vars]
      : 'VARIABLES' NEWLINE+
        {
            System.out.println("begin variables");
            vars = new LinkedHashSet<Variable>();
        }
        (v=variable { vars.add(v); })*
        'END-VARIABLES' NEWLINE+
        { System.out.println("end variables"); };

  variable returns [Variable variable]
      : varName=STRING
        {
            System.out.println("variable = " + $varName.text);
            variable = new Variable($varName.text);
        }
        (p=property { variable.addProperty(p); })* NEWLINE+ ;

  stations returns [Stations stations]
      : 'STATIONS'
        {
            System.out.println("begin stations");
            stations = new Stations();
        }
        (p=property { stations.addProperty(p); })* NEWLINE+
        (s=station { stations.addStation(s); })*
        'END-STATIONS' NEWLINE+
        { System.out.println("end stations"); } ;

  station returns [Station stat]
      : s=STRING
        {
            stat = new Station($s.text);
            System.out.println("station=" + $s.text);
        }
        // pass the token's text rather than the token itself
        (r=STRING { stat.addResultId($r.text); })*
        NEWLINE+;

  property returns [Property p]
      : (key=STRING EQ (value=STRING)?)
        {
            // the value token is optional, so guard against null before reading its text
            String val = $value != null ? $value.text : null;
            p = new Property($key.text, val);
            System.out.println("key = " + $key.text + ", value = " + val);
        } ;

  // lexer rules - must start with an uppercase letter
  EQ : '=';
  // fragment rules are used only by other lexer rules (see the first listing)
  fragment DIGITS : '0'..'9' ;
  fragment LC : 'a'..'z' ;
  fragment UC : 'A'..'Z' ;
  NEWLINE : '\n'|'\r'('\n')? ;
  STRING : (LC|UC|DIGITS|'_'|'-'|','|'.')+ | ('"' (~'"')* '"');
  WS : (' '|'\t')+ { $channel=HIDDEN; } ;

The Java code for invoking the parser is straightforward. Notice that the final statement of the following example invokes the getGroups() method defined in the @members block of the grammar.

  // Create an input character stream
  ANTLRInputStream input = new ANTLRInputStream(new FileInputStream(file));
  // Create a lexer that feeds from that stream
  MetaDef2Lexer lexer = new MetaDef2Lexer(input);
  // Create a stream of tokens fed by the lexer
  CommonTokenStream tokens = new CommonTokenStream(lexer);
  // Create a parser that feeds off the token stream
  MetaDef2Parser parser = new MetaDef2Parser(tokens);
  // Begin parsing at rule metadef
  parser.metadef();
  List<Group> groups = parser.getGroups();
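The fragment above omits imports and the checked exceptions involved. A complete driver might look like the following sketch; the class name and command-line handling are illustrative, and the class is assumed to live in the same com.ociweb.dsl package as the generated lexer and parser:

  package com.ociweb.dsl;

  import java.io.FileInputStream;
  import java.io.IOException;
  import java.util.List;

  import org.antlr.runtime.ANTLRInputStream;
  import org.antlr.runtime.CommonTokenStream;
  import org.antlr.runtime.RecognitionException;

  public class MetaDefDriver {
      public static void main(String[] args) throws IOException, RecognitionException {
          // build the character stream -> lexer -> token stream -> parser pipeline
          ANTLRInputStream input = new ANTLRInputStream(new FileInputStream(args[0]));
          MetaDef2Lexer lexer = new MetaDef2Lexer(input);
          CommonTokenStream tokens = new CommonTokenStream(lexer);
          MetaDef2Parser parser = new MetaDef2Parser(tokens);
          // the generated start-rule method declares RecognitionException
          parser.metadef();
          // getGroups() is the public method defined in the @members block
          List<Group> groups = parser.getGroups();
          System.out.println("parsed " + groups.size() + " group(s)");
      }
  }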

Building DSLs with ANTLR

Although the ANTLR tool itself provides the ability to generate the final lexer and parser code for Java, it doesn't include a way to integrate that step easily into a build process. While ANTLRWorks (see below) makes it easy to generate the code, it doesn't help with automated builds, since it requires user interaction in a GUI. Fortunately, the contributions found at the ANTLR web site include an Ant task for invoking the ANTLR parser generator; it is featured prominently on the front page of the ANTLR web site.

To get started, add the antlr3-task.jar file to the lib directory of your Ant installation, and add the ANTLR jar files for parser generation to your build classpath. For example:

  [project]
    |- lib/tools/antlr
         |- antlr-2.7.7.jar
         |- antlr-3.0.1.jar
         |- stringtemplate-3.1b1.jar

It's unnecessary to include these files in the runtime classpath. Deployment requires only the runtime jar file (antlr-runtime-3.0.1.jar) in the execution classpath. I define the following Ant path for the antlr3 task.

  <path id="antlr.class.path">
      <fileset dir="${lib.dir}/tools/antlr">
          <include name="**/*.jar"/>
      </fileset>
  </path>

The Ant task distribution includes an example project with an Ant build.xml file demonstrating its use. To simplify use (and reuse), I prefer creating an Ant macrodef with defaults that work for my projects, as follows:

  <macrodef name="antlr3-def">
      <attribute name="antlr.grammar.name"/>
      <attribute name="antlr.package.dir"/>
      <attribute name="antlr.gensrc.dir"/>
      <sequential>
          <echo message="antlr @{antlr.gensrc.dir}/@{antlr.grammar.name}"/>
          <antlr:antlr3 xmlns:antlr="antlib:org/apache/tools/ant/antlr"
                  target="@{antlr.gensrc.dir}/@{antlr.grammar.name}"
                  outputdirectory="${gen.antlr.dir}/@{antlr.package.dir}"
                  libdirectory="${gen.antlr.dir}/@{antlr.package.dir}"
                  multithreaded="true" report="true" profile="false">
              <classpath>
                  <path refid="antlr.class.path"/>
              </classpath>
          </antlr:antlr3>
      </sequential>
  </macrodef>

Using the Ant macrodef is straightforward. Define the following properties, either in a properties file or in build.xml, and invoke the gen-antlr target.

  gen.src.dir=gen-src
  gen.antlr.dir=${gen.src.dir}/antlr
  gen.antlr.package.dir=com/ociweb/dsl
  antlr.dsl.grammar=Datadefinition.g

  <target name="gen-antlr" depends="clean.antlr">
      <mkdir dir="${gen.antlr.dir}/${gen.antlr.package.dir}"/>
      <antlr3-def
          antlr.gensrc.dir="${gen.src.dir}"
          antlr.grammar.name="${antlr.dsl.grammar}"
          antlr.package.dir="${gen.antlr.package.dir}"/>
  </target>

There's a lot more to explore and learn about ANTLR and parser generation than is possible within the scope of this discussion. The ANTLR web site and Parr's book are excellent places to start. But something that helped me a lot is ANTLRWorks, the grammar development IDE developed by Jean Bovet. Prior to working with ANTLR, language grammars were not an intellectual focus for me, so I was happy to get help with the learning curve associated with creating my first grammar. ANTLRWorks includes a very nice debugger, as well as tools for generating the lexer and parser code in Java, an interpreter, and a syntax diagrammer. The grammar interpreter and debugger alone are reason enough to use ANTLRWorks, especially for those new to creating grammars for DSLs.
