There are many ways to write the lexical specification for a grammar, but the performance of the generated token manager can vary significantly depending on how you do this.
This section presents a few tips for writing good lexical specifications.
Try to specify as many string literals as possible.
These are recognized by a deterministic finite automaton (DFA), which is much faster than the non-deterministic finite automaton (NFA) needed to recognize other kinds of complex regular expressions.
For example, to skip blanks / tabs / new lines:
SKIP : { " " | "\t" | "\n" }
is more efficient than doing:
SKIP : { < ([" ", "\t", "\n"])+ > }
Because the first case contains only string literals, a DFA is generated, whereas the second case generates an NFA.
Try to avoid having a choice of string literals for the same token.
For example:
< NONE : "\"none\"" | "\'none\'" >
Instead, define two different token types and use a non-terminal that is a choice between them.
The above example can be written as:
< NONE1 : "\"none\"" >
|
< NONE2 : "\'none\'" >
and define a non-terminal called None()
as:
void None() : {}
{
<NONE1> | <NONE2>
}
This will make recognition much faster. Note that if the choice is between two complex regular expressions (rather than string literals), it is OK to keep the choice within a single token.
Specify all string literals in order of increasing length, i.e. all shorter string literals before longer ones.
This helps optimize the bit vectors needed for string literals.
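For example, in a hypothetical list of operator tokens (the token names here are purely illustrative), the shorter literals come first:

```java
TOKEN :
{
  < LT : "<" >
| < LE : "<=" >
| < LSHIFT : "<<" >
| < LSHIFTASSIGN : "<<=" >
}
```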
Try to minimize the use of lexical states.
When using them, try to move all your complex regular expressions into a single lexical state, leaving others to just recognize simple string literals.
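As a sketch (token names assumed for illustration), the DEFAULT state below recognizes only simple string literals, while the complex regular expressions for string-literal contents are confined to a single IN_STRING state:

```java
// DEFAULT state: simple string literals only
TOKEN : { < PLUS : "+" > | < MINUS : "-" > }
MORE  : { "\"" : IN_STRING }

// All the complex regular expressions live in one state
<IN_STRING> MORE  : { < "\\" ~[] > | < ~["\"", "\\", "\n"] > }
<IN_STRING> TOKEN : { < STRING : "\"" > : DEFAULT }
```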
Try to SKIP as much as possible if you don't care about certain patterns.
Here, you have to be a bit careful about EOF: seeing an EOF after a SKIP is fine, whereas seeing an EOF after a MORE is a lexical error.
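For example, if later phases never need to see single-line comments, they can be SKIPped outright; a file ending immediately after such a comment is not an error, since no MORE is pending:

```java
SKIP :
{
  < "//" (~["\n", "\r"])* >
}
```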
Try to avoid lexical actions and lexical state changes with SKIP specifications, especially for single-character SKIPs (such as " ", "\t", "\n", etc.).
For such cases, a simple loop is generated to eat up the SKIPped single characters, so if there is a lexical action or state change associated with one of them, this optimization cannot be applied.
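As an illustration (the lineCount field is hypothetical, and the two specifications are alternatives, not meant to coexist), the first form is eligible for the simple character-eating loop, while the second is not because of its action:

```java
// Fast: single-character SKIPs with no actions or state changes
SKIP : { " " | "\t" | "\n" }

// Alternative, slower: the lexical action forces the general
// matching machinery
SKIP : { < "\n" > { lineCount++; } }
```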
Try to avoid specifying lexical actions with MORE specifications. Generally, every MORE should eventually end up in a TOKEN (or SPECIAL_TOKEN), so where possible, perform the action there, at the TOKEN level.
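For example, in a multi-line comment recognized through MORE, an action that uses the accumulated image (processComment here is a hypothetical helper) belongs on the final SPECIAL_TOKEN rather than on the MORE rules:

```java
MORE : { "/*" : IN_COMMENT }

<IN_COMMENT> MORE : { < ~[] > }

<IN_COMMENT>
SPECIAL_TOKEN :
{
  < COMMENT : "*/" > { processComment(matchedToken.image); } : DEFAULT
}
```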
Try to use the pattern ~[]
by itself as much as possible.
For example, doing
MORE : { < ~[] > }
is better than doing
TOKEN : { < (~[])+ > }
Of course, if your grammar dictates that one of these cannot be used, then you don't have a choice, but try to use < ~[] >
as much as possible.
There is a heavy performance penalty for setting IGNORE_CASE for some regular expressions and not for others in the same lexical state.
Best practice is to set the IGNORE_CASE option at the grammar level. If that is not possible, then try to set it for all regular expressions in a lexical state.
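A sketch of both approaches (the SQL lexical state and keyword tokens are assumed for illustration):

```java
// Preferred: grammar-wide option
options {
  IGNORE_CASE = true;
}

// Otherwise: apply it uniformly to every regular-expression
// production in a given lexical state
<SQL> TOKEN [IGNORE_CASE] :
{
  < SELECT : "select" >
| < FROM : "from" >
}
```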