{{Short description|Top-down parser that parses input from left to right}}
In [[computer science]], an '''LL parser''' (left-to-right, [[leftmost derivation]]) is a [[Top-down parsing|top-down parser]] for a restricted [[context-free language]]. It parses the input from '''L'''eft to right, performing [[Context-free grammar#Derivations and syntax trees|'''L'''eftmost derivation]] of the sentence. An LL parser is called an LL(''k'') parser if it uses ''k'' [[token (parser)|tokens]] of [[Parsing#Lookahead|lookahead]] when parsing a sentence. A grammar is called an [[LL grammar|LL(''k'') grammar]] if an LL(''k'') parser can be constructed from it. A formal language is called an LL(''k'') language if it has an LL(''k'') grammar. The set of LL(''k'') languages is properly contained in that of the LL(''k''+1) languages, for each ''k'' ≥ 0.<ref>{{cite journal | last1=Rosenkrantz| first1=D. J.| last2=Stearns| first2=R. E.| title=Properties of Deterministic Top Down Grammars| journal=Information and Control| year=1970| volume=17| issue=3| pages=226–256| doi=10.1016/s0019-9958(70)90446-8| doi-access=free}}</ref> A corollary of this is that not all context-free languages can be recognized by an LL(''k'') parser.

An LL parser is called LL-regular (LLR) if it parses an [[LL-regular language]].{{clarify|reason=LLR-parsers are defined by their employed method (lookahead on regular partitions), not by their accepted language. An LLR language can well be parsed using another method.|date=August 2021}}<ref>{{cite journal |last1=Jarzabek |first1=Stanislav |last2=Krawczyk |first2=Tomasz |title=LL-Regular Grammars |journal=Instytutu Maszyn Matematycznych |date=1974 |pages=107–119}}</ref><ref>{{cite journal | url=https://www.sciencedirect.com/science/article/abs/pii/0020019075900095 | last1=Jarzabek |first1=Stanislav |last2=Krawczyk |first2=Tomasz | title=LL-Regular Grammars | journal=[[Information Processing Letters]] | volume=4 | number=2 | pages=31–37 | date=Nov 1975 | doi=10.1016/0020-0190(75)90009-5 | url-access=subscription }}</ref><ref>{{cite report | url=https://docs.lib.purdue.edu/cgi/viewcontent.cgi?article=1176&context=cstech | author=David A. Poplawski | title=Properties of LL-Regular Languages | institution=[[Purdue University]], Department of Computer Science | type=Technical Report | number=77–241 | date=Aug 1977 }}</ref> The class of [[LL-regular grammar|LLR grammars]] contains every LL(''k'') grammar for every ''k''. For every LLR grammar there exists an LLR parser that parses the grammar in linear time.{{citation needed|date=June 2022}}

Two outliers in this nomenclature are the LL(*) and LL(finite) parser types. A parser is called LL(*) or LL(finite) if it uses the LL(*) or LL(finite) parsing strategy, respectively.<ref>{{cite journal |last1=Parr |first1=Terence |last2=Fisher |first2=Kathleen |title=LL(*): the foundation of the ANTLR parser generator |journal=ACM SIGPLAN Notices |date=2011 |volume=46 |issue=6 |pages=425–436 |doi=10.1145/1993316.1993548 }}</ref><ref>{{cite arXiv |last1=Belcak |first1=Peter |title=The LL(finite) parsing strategy for optimal LL(k) parsing |year=2020 |class=cs.PL |eprint=2010.07874 }}</ref> LL(*) and LL(finite) parsers are functionally closer to [[Parsing expression grammar|PEG]] parsers. An LL(finite) parser can parse an arbitrary LL(''k'') grammar optimally in the amount of lookahead and lookahead comparisons. The class of grammars parsable by the LL(*) strategy encompasses some context-sensitive languages, due to the use of syntactic and semantic predicates, and has not been precisely characterized.
It has been suggested that LL(*) parsers are better thought of as [[Top-down parsing language|TDPL]] parsers.<ref>{{cite journal |last1=Ford |first1=Bryan |title=Parsing Expression Grammars: A Recognition-Based Syntactic Foundation |journal=ACM SIGPLAN Notices |date=2004 |doi=10.1145/982962.964011}}</ref> Contrary to a popular misconception, LL(*) parsers are not LLR in general, and are guaranteed by construction to perform worse on average (super-linear against linear time) and far worse in the worst case (exponential against linear time).

LL grammars, particularly LL(1) grammars, are of great practical interest, as parsers for these grammars are easy to construct, and many [[computer language]]s are designed to be LL(1) for this reason.<ref>{{cite book | author=Pat Terry | title=Compiling with C# and Java | url=https://books.google.com/books?id=4O9ffYfX_H0C | publisher=Pearson Education | pages=159–164| isbn=9780321263605 | year=2005 }}</ref> LL parsers may be table-based,{{citation needed|reason=All LL parsers I ever saw were recursive-descent based.|date=February 2019}} i.e. similar to [[LR parser]]s, but LL grammars can also be parsed by [[recursive descent parser]]s. According to Waite and Goos (1984),<ref>{{cite book | isbn=978-3-540-90821-0 | author=William M. Waite and Gerhard Goos | title=Compiler Construction | location=Heidelberg | publisher=Springer | series=Texts and Monographs in Computer Science | year=1984 }} Here: Sect. 5.3.2, p. 121–127; in particular, p. 123.</ref> LL(''k'') grammars were introduced by Stearns and Lewis (1969).<ref>{{cite journal | author=[[Richard E. Stearns]] and P.M. Lewis | title=Property Grammars and Table Machines | journal=[[Information and Control]] | volume=14 | number=6 | pages=524–549 | year=1969 | doi=10.1016/S0019-9958(69)90312-X | doi-access=free }}</ref>

== Overview ==
For a given [[context-free grammar]], the parser attempts to find the [[Context-free grammar#Derivations and syntax trees|leftmost derivation]].
Given an example grammar ''G'':
# <math>S \to E</math>
# <math>E \to ( E + E )</math>
# <math>E \to i</math>
the leftmost derivation for <math>w = ((i+i)+i)</math> is:
: <math>S\ \overset{(1)}{\Rightarrow}\ E\ \overset{(2)}{\Rightarrow}\ (E+E)\ \overset{(2)}{\Rightarrow}\ ((E+E)+E)\ \overset{(3)}{\Rightarrow}\ ((i+E)+E)\ \overset{(3)}{\Rightarrow}\ ((i+i)+E)\ \overset{(3)}{\Rightarrow}\ ((i+i)+i)</math>

Generally, there are multiple possibilities when selecting a rule to expand the leftmost non-terminal. In step 2 of the previous example, the parser must choose whether to apply rule 2 or rule 3:
: <math>S\ \overset{(1)}{\Rightarrow}\ E\ \overset{(?)}{\Rightarrow}\ ?</math>

To be efficient, the parser must be able to make this choice deterministically when possible, without backtracking. For some grammars, it can do this by peeking at the unread input (without reading it). In our example, if the parser knows that the next unread symbol is '''(''', the only correct rule that can be used is 2.

Generally, an LL(''k'') parser can look ahead at ''k'' symbols. However, given a grammar, the problem of determining whether there exists an LL(''k'') parser for some ''k'' that recognizes it is undecidable. For each ''k'', there is a language that cannot be recognized by an LL(''k'') parser, but can be recognized by an {{nowrap|LL(''k'' + 1)}} parser.
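For illustration only, the rule choice above can be sketched as a small recursive-descent recognizer for the example grammar ''G'', in which peeking at a single unread symbol selects between rules 2 and 3; the function names and input encoding below are merely convenient choices, not part of any standard implementation:

<syntaxhighlight lang="python">
# Illustrative sketch for the example grammar G above:
#   1. S -> E     2. E -> ( E + E )     3. E -> i
# One symbol of lookahead (peek) is enough to choose a rule deterministically.

def parse(text: str) -> bool:
    pos = 0

    def peek() -> str:
        # "$" marks the end of the input
        return text[pos] if pos < len(text) else "$"

    def expect(symbol: str) -> None:
        nonlocal pos
        if peek() != symbol:
            raise SyntaxError(f"expected {symbol!r}, got {peek()!r} at {pos}")
        pos += 1

    def parse_E() -> None:
        # LL(1) decision point: the peeked symbol picks the rule.
        if peek() == "(":        # rule 2: E -> ( E + E )
            expect("(")
            parse_E()
            expect("+")
            parse_E()
            expect(")")
        elif peek() == "i":      # rule 3: E -> i
            expect("i")
        else:
            raise SyntaxError(f"unexpected symbol {peek()!r} at {pos}")

    try:
        parse_E()                # rule 1: S -> E
        return peek() == "$"     # the whole input must be consumed
    except SyntaxError:
        return False

print(parse("((i+i)+i)"))  # True
print(parse("(i+i"))       # False
</syntaxhighlight>

In the table-driven parser described below, the same decision appears as a lookup of the peeked symbol in a parse table.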
We can use the above analysis to give the following formal definition:

Let ''G'' be a context-free grammar and {{nowrap|''k'' ≥ 1}}. We say that ''G'' is LL(''k''), if and only if for any two leftmost derivations:
# <math>S\ \Rightarrow\ \cdots\ \Rightarrow\ wA\alpha\ \Rightarrow\ \cdots\ \Rightarrow\ w\beta\alpha\ \Rightarrow\ \cdots\ \Rightarrow\ wu</math>
# <math>S\ \Rightarrow\ \cdots\ \Rightarrow\ wA\alpha\ \Rightarrow\ \cdots\ \Rightarrow\ w\gamma\alpha\ \Rightarrow\ \cdots\ \Rightarrow\ wv</math>
the following condition holds: if the prefix of the string <math>u</math> of length <math>k</math> equals the prefix of the string <math>v</math> of length <math>k</math>, then <math>\beta = \gamma</math>.

In this definition, <math>S</math> is the start symbol and <math>A</math> any non-terminal. The already derived input <math>w</math>, and the yet unread <math>u</math> and <math>v</math>, are strings of terminals. The Greek letters <math>\alpha</math>, <math>\beta</math> and <math>\gamma</math> represent any string of terminals and non-terminals (possibly empty). The prefix length corresponds to the lookahead buffer size, and the definition says that this buffer is enough to distinguish between any two derivations of different words.

== Parser ==
The LL(''k'') parser is a [[deterministic pushdown automaton]] with the ability to peek at the next ''k'' input symbols without reading them. This peek capability can be emulated by storing the lookahead buffer contents in the finite state space, since both buffer and input alphabet are finite in size. As a result, this does not make the automaton more powerful, but is a convenient abstraction.

The stack alphabet is <math>\Gamma = N \cup \Sigma</math>, where:
* <math>N</math> is the set of non-terminals;
* <math>\Sigma</math> is the set of terminal (input) symbols with a special end-of-input (EOI) symbol '''$'''.

The parser stack initially contains the starting symbol above the EOI: {{nowrap|[ S '''$''' ]}}. During operation, the parser repeatedly replaces the symbol <math>X</math> on top of the stack:
* with some <math>\alpha</math>, if <math>X \in N</math> and there is a rule <math>X \to \alpha</math>;
* with <math>\epsilon</math> (in some notations <math>\lambda</math>), i.e. <math>X</math> is popped off the stack, if <math>X \in \Sigma</math>. In this case, an input symbol <math>x</math> is read and if <math>x \neq X</math>, the parser rejects the input.

If the last symbol to be removed from the stack is the EOI, the parsing is successful; the automaton accepts via an empty stack.

The states and the transition function are not explicitly given; they are specified (generated) using a more convenient ''parse table'' instead. The table provides the following mapping:
* row: top-of-stack symbol <math>X</math>
* column: lookahead buffer contents of length {{nowrap|{{abs|''w''}} ≤ ''k''}}
* cell: rule number for <math>X \to \alpha</math>, or <math>\epsilon</math>

If the parser cannot perform a valid transition, the input is rejected (empty cells). To make the table more compact, only the non-terminal rows are commonly displayed, since the action is the same for terminals.

== Concrete example ==

=== Set up ===
To explain an LL(1) parser's workings we will consider the following small LL(1) grammar:
# S → F
# S → '''(''' S '''+''' F ''')'''
# F → '''a'''
and parse the following input:
: '''(''' '''a''' '''+''' '''a''' ''')'''

An LL(1) parsing table for a grammar has a row for each of the non-terminals and a column for each terminal (including the special terminal, represented here as '''$''', that is used to indicate the end of the input stream).
Each cell of the table may point to at most one rule of the grammar (identified by its number). For example, in the parsing table for the above grammar, the cell for the non-terminal 'S' and terminal '<nowiki/>'''('''<nowiki/>' points to the rule number 2:

: {| class="wikitable"
!
! (
! )
! a
! +
! $
|- align="center"
! S
| 2 || {{sdash}} || 1 || {{sdash}} || {{sdash}}
|- align="center"
! F
| {{sdash}} || {{sdash}} || 3 || {{sdash}} || {{sdash}}
|}

The algorithm to construct a parsing table is described in a later section, but first let's see how the parser uses the parsing table to process its input.

=== Parsing procedure ===
In each step, the parser reads the next-available symbol from the input stream, and the top-most symbol from the stack. If the input symbol and the stack-top symbol match, the parser discards them both, leaving only the unmatched symbols in the input stream and on the stack.

Thus, in its first step, the parser reads the input symbol '<nowiki/>'''('''<nowiki/>' and the stack-top symbol 'S'. The parsing table instruction comes from the column headed by the input symbol '<nowiki/>'''('''<nowiki/>' and the row headed by the stack-top symbol 'S'; this cell contains '2', which instructs the parser to apply rule (2). The parser has to rewrite 'S' to '<nowiki/>'''(''' S '''+''' F ''')'''<nowiki/>' on the stack by removing 'S' from the stack and pushing '<nowiki/>''')'''<nowiki/>', 'F', '<nowiki/>'''+'''<nowiki/>', 'S', '<nowiki/>'''('''<nowiki/>' onto the stack, and it writes the rule number 2 to the output. The stack then becomes:

: [ '''(''', S, '''+''', F, ''')''', '''$''' ]

In the second step, the parser removes the '<nowiki/>'''('''<nowiki/>' from its input stream and from its stack, since they now match. The stack now becomes:

: [ S, '''+''', F, ''')''', '''$''' ]

Now the parser has an '<nowiki/>'''a'''<nowiki/>' on its input stream and an 'S' as its stack top. The parsing table instructs it to apply rule (1) from the grammar and write the rule number 1 to the output stream. The stack becomes:

: [ F, '''+''', F, ''')''', '''$''' ]

The parser now has an '<nowiki/>'''a'''<nowiki/>' on its input stream and an 'F' as its stack top. The parsing table instructs it to apply rule (3) from the grammar and write the rule number 3 to the output stream. The stack becomes:

: [ '''a''', '''+''', F, ''')''', '''$''' ]

The parser now has an '<nowiki/>'''a'''<nowiki/>' on the input stream and an '<nowiki/>'''a'''<nowiki/>' at its stack top. Because they are the same, it removes it from the input stream and pops it from the top of the stack. The parser then has a '<nowiki/>'''+'''<nowiki/>' on the input stream and a '<nowiki/>'''+'''<nowiki/>' at the top of the stack, which, as with the '<nowiki/>'''a'''<nowiki/>', is popped from the stack and removed from the input stream. This results in:

: [ F, ''')''', '''$''' ]

In the next three steps the parser will replace 'F' on the stack by '<nowiki/>'''a'''<nowiki/>', write the rule number 3 to the output stream, and remove the '<nowiki/>'''a'''<nowiki/>' and '<nowiki/>''')'''<nowiki/>' from both the stack and the input stream. The parser thus ends with '<nowiki/>'''$'''<nowiki/>' on both its stack and its input stream.
In this case the parser will report that it has accepted the input string and write the following list of rule numbers to the output stream:

: [ 2, 1, 3, 3 ]

This is indeed a list of rules for a [[Context-free grammar#Derivations and syntax trees|leftmost derivation]] of the input string, which is:

: S → '''(''' S '''+''' F ''')''' → '''(''' F '''+''' F ''')''' → '''(''' '''a''' '''+''' F ''')''' → '''(''' '''a''' '''+''' '''a''' ''')'''

=== Parser implementation in C++ ===
Below follows a C++ implementation of a table-based LL parser for the example language:

<syntaxhighlight lang=cpp>
#include <iostream>
#include <map>
#include <stack>

enum Symbols {
	// the symbols:
	// Terminal symbols:
	TS_L_PARENS,	// (
	TS_R_PARENS,	// )
	TS_A,		// a
	TS_PLUS,	// +
	TS_EOS,		// $, in this case corresponds to '\0'
	TS_INVALID,	// invalid token

	// Non-terminal symbols:
	NTS_S,		// S
	NTS_F		// F
};

/*
Converts a valid token to the corresponding terminal symbol
*/
Symbols lexer(char c)
{
	switch (c)
	{
		case '(':  return TS_L_PARENS;
		case ')':  return TS_R_PARENS;
		case 'a':  return TS_A;
		case '+':  return TS_PLUS;
		case '\0': return TS_EOS; // end of stack: the $ terminal symbol
		default:   return TS_INVALID;
	}
}

int main(int argc, char **argv)
{
	using namespace std;

	if (argc < 2)
	{
		cout << "usage:\n\tll '(a+a)'" << endl;
		return 0;
	}

	// LL parser table, maps a <non-terminal, terminal> pair to an action
	map< Symbols, map<Symbols, int> > table;
	stack<Symbols>	ss;	// symbol stack
	char *p;	// input buffer

	// initialize the symbols stack
	ss.push(TS_EOS);	// terminal, $
	ss.push(NTS_S);		// non-terminal, S

	// initialize the symbol stream cursor
	p = &argv[1][0];

	// set up the parsing table
	table[NTS_S][TS_L_PARENS] = 2;
	table[NTS_S][TS_A] = 1;
	table[NTS_F][TS_A] = 3;

	while (ss.size() > 0)
	{
		if (lexer(*p) == ss.top())
		{
			cout << "Matched symbols: " << lexer(*p) << endl;
			p++;
			ss.pop();
		}
		else
		{
			cout << "Rule " << table[ss.top()][lexer(*p)] << endl;
			switch (table[ss.top()][lexer(*p)])
			{
				case 1:	// 1. S → F
					ss.pop();
					ss.push(NTS_F);	// F
					break;

				case 2:	// 2. S → ( S + F )
					ss.pop();
					ss.push(TS_R_PARENS);	// )
					ss.push(NTS_F);		// F
					ss.push(TS_PLUS);	// +
					ss.push(NTS_S);		// S
					ss.push(TS_L_PARENS);	// (
					break;

				case 3:	// 3. F → a
					ss.pop();
					ss.push(TS_A);	// a
					break;

				default:
					cout << "parsing table defaulted" << endl;
					return 0;
			}
		}
	}

	cout << "finished parsing" << endl;

	return 0;
}
</syntaxhighlight>
=== Parser implementation in Python ===
<syntaxhighlight lang="python">
from enum import Enum
from collections.abc import Generator


class Term(Enum):
    pass


class Rule(Enum):
    pass


# All constants are indexed from 0
class Terminal(Term):
    LPAR = 0
    RPAR = 1
    A = 2
    PLUS = 3
    END = 4
    INVALID = 5

    def __str__(self):
        return f"T_{self.name}"


class NonTerminal(Rule):
    S = 0
    F = 1

    def __str__(self):
        return f"N_{self.name}"


# Parse table
table = [[1, -1, 0, -1, -1, -1],
         [-1, -1, 2, -1, -1, -1]]

RULES = [
    [NonTerminal.F],
    [
        Terminal.LPAR,
        NonTerminal.S,
        Terminal.PLUS,
        NonTerminal.F,
        Terminal.RPAR,
    ],
    [Terminal.A],
]

stack = [Terminal.END, NonTerminal.S]


def lexical_analysis(input_string: str) -> Generator[Terminal]:
    print("Lexical analysis")
    for c in input_string:
        match c:
            case "a":
                yield Terminal.A
            case "+":
                yield Terminal.PLUS
            case "(":
                yield Terminal.LPAR
            case ")":
                yield Terminal.RPAR
            case _:
                yield Terminal.INVALID
    yield Terminal.END


def syntactic_analysis(tokens: list[Terminal]) -> None:
    print("tokens:", end=" ")
    print(*tokens, sep=", ")
    print("Syntactic analysis")
    position = 0
    while stack:
        svalue = stack.pop()
        token = tokens[position]
        if isinstance(svalue, Term):
            if svalue == token:
                position += 1
                print("pop", svalue)
                if token == Terminal.END:
                    print("input accepted")
            else:
                raise ValueError("bad term on input:", str(token))
        elif isinstance(svalue, Rule):
            print(f"{svalue = !s}, {token = !s}")
            rule = table[svalue.value][token.value]
            print(f"{rule = }")
            for r in reversed(RULES[rule]):
                stack.append(r)
        print("stacks:", end=" ")
        print(*stack, sep=", ")


if __name__ == "__main__":
    inputstring = "(a+a)"
    syntactic_analysis(list(lexical_analysis(inputstring)))
</syntaxhighlight>

Outputs:

{{sxhl|2=output|1=
Lexical analysis
tokens: T_LPAR, T_A, T_PLUS, T_A, T_RPAR, T_END
Syntactic analysis
svalue = N_S, token = T_LPAR
rule = 1
stacks: T_END, T_RPAR, N_F, T_PLUS, N_S, T_LPAR
pop T_LPAR
stacks: T_END, T_RPAR, N_F, T_PLUS, N_S
svalue = N_S, token = T_A
rule = 0
stacks: T_END, T_RPAR, N_F, T_PLUS, N_F
svalue = N_F, token = T_A
rule = 2
stacks: T_END, T_RPAR, N_F, T_PLUS, T_A
pop T_A
stacks: T_END, T_RPAR, N_F, T_PLUS
pop T_PLUS
stacks: T_END, T_RPAR, N_F
svalue = N_F, token = T_A
rule = 2
stacks: T_END, T_RPAR, T_A
pop T_A
stacks: T_END, T_RPAR
pop T_RPAR
stacks: T_END
pop T_END
input accepted
stacks:
}}

== Remarks ==
As can be seen from the example, the parser performs three types of steps depending on whether the top of the stack is a nonterminal, a terminal or the special symbol '''$''':
* If the top is a nonterminal then the parser looks up in the parsing table, on the basis of this nonterminal and the symbol on the input stream, which rule of the grammar it should use to replace the nonterminal on the stack. The number of the rule is written to the output stream. If the parsing table indicates that there is no such rule then the parser reports an error and stops.
* If the top is a terminal then the parser compares it to the symbol on the input stream and if they are equal they are both removed. If they are not equal the parser reports an error and stops.
* If the top is '''$''' and on the input stream there is also a '''$''' then the parser reports that it has successfully parsed the input, otherwise it reports an error. In both cases the parser will stop.
These steps are repeated until the parser stops, and then it will have either completely parsed the input and written a [[Context-free grammar#Derivations and syntax trees|leftmost derivation]] to the output stream or it will have reported an error.

== Constructing an LL(1) parsing table ==
In order to fill the parsing table, we have to establish what grammar rule the parser should choose if it sees a nonterminal ''A'' on the top of its stack and a symbol ''a'' on its input stream. It is easy to see that such a rule should be of the form ''A'' → ''w'' and that the language corresponding to ''w'' should have at least one string starting with ''a''. For this purpose we define the ''First-set'' of ''w'', written here as '''Fi'''(''w''), as the set of terminals that can be found at the start of some string derived from ''w'', plus ε if the empty string can also be derived from ''w''.

Given a grammar with the rules ''A''<sub>1</sub> → ''w''<sub>1</sub>, ..., ''A''<sub>''n''</sub> → ''w''<sub>''n''</sub>, we can compute the '''Fi'''(''w''<sub>''i''</sub>) and '''Fi'''(''A''<sub>''i''</sub>) for every rule as follows:
# initialize every '''Fi'''(''A''<sub>''i''</sub>) with the empty set
# add Fi(''w''<sub>''i''</sub>) to '''Fi'''(''A''<sub>''i''</sub>) for every rule ''A''<sub>''i''</sub> → ''w''<sub>''i''</sub>, where Fi is defined as follows:
#* {{math|1=Fi(''a{{prime|w}}'') = {{mset| ''a'' }}}} for every terminal ''a''
#* {{math|1=Fi(''A{{prime|w}}'') = '''Fi'''(''A'')}} for every nonterminal ''A'' with ε not in '''Fi'''(''A'')
#* {{math|1=Fi(''A{{prime|w}}'') = ('''Fi'''(''A'') ∖ { ε }) ∪ Fi(''{{prime|w}}'')}} for every nonterminal ''A'' with ε in '''Fi'''(''A'')
#* Fi(ε) = { ε }
# add Fi(''w''<sub>''i''</sub>) to '''Fi'''(''A''<sub>''i''</sub>) for every rule ''A''<sub>''i''</sub> → ''w''<sub>''i''</sub>
# do steps 2 and 3 until all '''Fi''' sets stay the same.

The result is the least fixed point solution to the following system:
* {{math|1='''Fi'''(''A'') ⊇ '''Fi'''(''w'')}} for each rule ''A'' → ''w''
* {{math|1='''Fi'''(''a'') ⊇ {{mset| ''a'' }},}} for each terminal ''a''
* {{math|1='''Fi'''(''w''<sub>0</sub> ''w''<sub>1</sub>) ⊇ '''Fi'''(''w''<sub>0</sub>)·'''Fi'''(''w''<sub>1</sub>),}} for all words ''w''<sub>0</sub> and ''w''<sub>1</sub>
* {{math|1='''Fi'''(''ε'') ⊇ {{mset|''ε''}}}}
where, for sets of words ''U'' and ''V'', the truncated product is defined by <math>U \cdot V = \{ (uv):1 \mid u \in U, v \in V \}</math>, and ''w'':1 denotes the initial length-1 prefix of words ''w'' of length 2 or more, or ''w'' itself, if ''w'' has length 0 or 1.
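As an illustration, this fixed-point iteration can be sketched in a few lines of Python; the grammar encoding (rules as pairs of a nonterminal and a right-hand-side string of single-character symbols) and the helper names are merely convenient choices for this sketch:

<syntaxhighlight lang="python">
# Illustrative sketch of the First-set computation described above.
# A rule is a (nonterminal, right-hand side) pair; "" denotes the empty
# string ε, and every terminal or nonterminal is a single character.

EPSILON = ""

def first_of_string(symbols, first, nonterminals):
    """Fi(w) for a string w of terminals and nonterminals."""
    result = set()
    for symbol in symbols:
        if symbol not in nonterminals:        # terminal: Fi(aw') = {a}
            result.add(symbol)
            return result
        result |= first[symbol] - {EPSILON}   # nonterminal A
        if EPSILON not in first[symbol]:      # A cannot vanish: stop here
            return result
    result.add(EPSILON)                       # every symbol can derive ε
    return result

def compute_first(rules):
    nonterminals = {lhs for lhs, _ in rules}
    first = {A: set() for A in nonterminals}
    changed = True
    while changed:                            # iterate to the least fixed point
        changed = False
        for lhs, rhs in rules:
            new = first_of_string(rhs, first, nonterminals)
            if not new <= first[lhs]:
                first[lhs] |= new
                changed = True
    return first

# The grammar of the concrete example: S -> F, S -> (S+F), F -> a
rules = [("S", "F"), ("S", "(S+F)"), ("F", "a")]
print(compute_first(rules))  # e.g. {'S': {'(', 'a'}, 'F': {'a'}}
</syntaxhighlight>

For the grammar of the concrete example this yields '''Fi'''(S) = { '''(''', '''a''' } and '''Fi'''(F) = { '''a''' }.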
Unfortunately, the First-sets are not sufficient to compute the parsing table. This is because a right-hand side ''w'' of a rule might ultimately be rewritten to the empty string. So the parser should also use the rule ''A'' → ''w'' if ε is in '''Fi'''(''w'') and it sees on the input stream a symbol that could follow ''A''. Therefore, we also need the ''Follow-set'' of ''A'', written as '''Fo'''(''A'') here, which is defined as the set of terminals ''a'' such that there is a string of symbols ''αAaβ'' that can be derived from the start symbol. We use '''$''' as a special terminal indicating end of input stream, and ''S'' as the start symbol.

Computing the Follow-sets for the nonterminals in a grammar can be done as follows:
# initialize '''Fo'''(''S'') with { '''$''' } and every other '''Fo'''(''A''<sub>''i''</sub>) with the empty set
# if there is a rule of the form ''A''<sub>''j''</sub> → ''wA<sub>i</sub>{{prime|w}}'', then
#* if the terminal ''a'' is in '''Fi'''(''{{prime|w}}''), then add ''a'' to '''Fo'''(''A''<sub>''i''</sub>)
#* if ε is in '''Fi'''(''{{prime|w}}''), then add '''Fo'''(''A''<sub>''j''</sub>) to '''Fo'''(''A''<sub>''i''</sub>)
#* if ''{{prime|w}}'' has length 0, then add '''Fo'''(''A''<sub>''j''</sub>) to '''Fo'''(''A''<sub>''i''</sub>)
# repeat step 2 until all '''Fo''' sets stay the same.

This provides the least fixed point solution to the following system:
* {{math|'''Fo'''(''S'') ⊇ {{mset|'''$'''}}}}
* {{math|'''Fo'''(''A'') ⊇ '''Fi'''(''w'')·'''Fo'''(''B'')}} for each rule of the form {{tmath|B \to \dots A w}}

Now we can define exactly which rules will appear where in the parsing table. If ''T''[''A'', ''a''] denotes the entry in the table for nonterminal ''A'' and terminal ''a'', then
: ''T''[''A'', ''a''] contains the rule ''A'' → ''w'' if and only if
:: ''a'' is in '''Fi'''(''w''), or
:: ε is in '''Fi'''(''w'') and ''a'' is in '''Fo'''(''A'').
Equivalently: ''T''[''A'', ''a''] contains the rule ''A'' → ''w'' for each {{math|''a'' ∈ '''Fi'''(''w'')·'''Fo'''(''A'').}}

If the table contains at most one rule in every one of its cells, then the parser will always know which rule it has to use and can therefore parse strings without backtracking. It is in precisely this case that the grammar is called an ''LL(1) grammar''.
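Continuing the illustrative sketch above, the Follow-sets and the LL(1) table can be computed with the same kind of fixed-point iteration; <code>compute_first</code>, <code>first_of_string</code>, <code>EPSILON</code> and the grammar encoding are the hypothetical helpers from the previous fragment:

<syntaxhighlight lang="python">
# Sketch of the Follow-set and LL(1) table construction described above.
# Assumes EPSILON, first_of_string and compute_first from the previous sketch.

END = "$"  # end-of-input marker

def compute_follow(rules, start, first):
    nonterminals = {lhs for lhs, _ in rules}
    follow = {A: set() for A in nonterminals}
    follow[start].add(END)
    changed = True
    while changed:                                  # least fixed point again
        changed = False
        for lhs, rhs in rules:
            for i, symbol in enumerate(rhs):
                if symbol not in nonterminals:
                    continue
                tail = first_of_string(rhs[i + 1:], first, nonterminals)
                new = tail - {EPSILON}
                if EPSILON in tail:                 # the rest can vanish
                    new |= follow[lhs]
                if not new <= follow[symbol]:
                    follow[symbol] |= new
                    changed = True
    return follow

def build_ll1_table(rules, start):
    first = compute_first(rules)
    nonterminals = {lhs for lhs, _ in rules}
    follow = compute_follow(rules, start, first)
    table = {}
    for number, (lhs, rhs) in enumerate(rules, start=1):
        fi = first_of_string(rhs, first, nonterminals)
        lookaheads = fi - {EPSILON}
        if EPSILON in fi:                           # rule can derive ε
            lookaheads |= follow[lhs]
        for a in lookaheads:
            if (lhs, a) in table:                   # two rules in one cell
                raise ValueError(f"grammar is not LL(1): conflict at {(lhs, a)}")
            table[(lhs, a)] = number
    return table

# Same grammar as in the previous sketch: S -> F, S -> (S+F), F -> a
rules = [("S", "F"), ("S", "(S+F)"), ("F", "a")]
print(build_ll1_table(rules, "S"))
# e.g. {('S', 'a'): 1, ('S', '('): 2, ('F', 'a'): 3}
</syntaxhighlight>

For the example grammar this reproduces the table shown earlier; a cell that would receive a second rule signals an LL(1) conflict, as discussed in the [[#Conflicts|Conflicts]] section below.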
== Constructing an LL(''k'') parsing table ==
The construction for LL(1) parsers can be adapted to LL(''k'') for ''k'' > 1 with the following modifications:
* the truncated product is defined as <math>U \cdot V = \{ (uv):k \mid u \in U, v \in V \}</math>, where ''w'':''k'' denotes the initial length-''k'' prefix of words of length > ''k'', or ''w'' itself, if ''w'' has length ''k'' or less,
* '''Fo'''(''S'') = {'''$'''<sup>k</sup>}
* Apply '''Fi'''(''αβ'') = '''Fi'''(''α'')·'''Fi'''(''β'') also in step 2 of the '''Fi''' construction given for LL(1).
* In step 2 of the '''Fo''' construction, for ''A<sub>j</sub>'' → ''wA<sub>i</sub>{{prime|w}}'' simply add '''Fi'''(''{{prime|w}}'')·'''Fo'''(''A<sub>j</sub>'') to '''Fo'''(''A<sub>i</sub>'').
Here, the input is suffixed by ''k'' end-markers '''$''', to fully account for the ''k''-symbol lookahead context. This approach eliminates special cases for ε, and can be applied equally well in the LL(1) case.

Until the mid-1990s, it was widely believed that {{clarify span|LL(''k'') parsing|reason=Should be 'table-based LL(k) parsing'? I saw several ad hoc recursive-descent parsers in common use in the 1970s and 1980s. Moreover, if the *language* is LL(k), the *parser* can't do anything about it.|date=February 2024}} (for ''k'' > 1) was impractical,<ref>{{cite book |last1=Fritzson |first1=Peter A. |title=Compiler Construction: 5th International Conference, CC '94, Edinburgh, U.K., April 7–9, 1994. Proceedings |date=23 March 1994 |publisher=Springer Science & Business Media |isbn=978-3-540-57877-2 |language=en}}</ref>{{rp|263–265}} since the parser table would have [[exponential function|exponential]] size in ''k'' in the worst case. This perception changed gradually after the release of the [[Antlr|Purdue Compiler Construction Tool Set]] around 1992, when it was demonstrated that many [[programming language]]s can be parsed efficiently by an LL(''k'') parser without triggering the worst-case behavior of the parser. Moreover, in certain cases LL parsing is feasible even with unlimited lookahead. By contrast, traditional parser generators like [[yacc]] use [[LALR parser|LALR(1)]] parser tables to construct a restricted [[LR parser]] with a fixed one-token lookahead.

== Conflicts ==
As described in the introduction, LL(1) parsers recognize languages that have LL(1) grammars, which are a special case of context-free grammars; LL(1) parsers cannot recognize all context-free languages. The LL(1) languages are a proper subset of the LR(1) languages, which in turn are a proper subset of all context-free languages. In order for a context-free grammar to be an LL(1) grammar, certain conflicts must not arise, which we describe in this section.

=== Terminology ===
Let ''A'' be a non-terminal. FIRST(''A'') is (defined to be) the set of terminals that can appear in the first position of any string derived from ''A''. FOLLOW(''A'') is the union over:<ref>{{Cite web|title=LL Grammars|url=http://www.cs.uaf.edu/~cs331/notes/LL.pdf|url-status=live|archive-url=https://web.archive.org/web/20100618193203/http://www.cs.uaf.edu/~cs331/notes/LL.pdf|archive-date=2010-06-18|access-date=2010-05-11}}</ref>
# FIRST(''B'') where ''B'' is any non-terminal that immediately follows ''A'' in the right-hand side of a [[Formal grammar#The syntax of grammars|production rule]].
# FOLLOW(''B'') where ''B'' is any head of a rule of the form {{nowrap|''B'' → ''wA''}}.

=== LL(1) conflicts ===
There are two main types of LL(1) conflicts:

==== FIRST/FIRST conflict ====
The FIRST sets of two different grammar rules for the same non-terminal intersect.
An example of an LL(1) FIRST/FIRST conflict:

 S -> E | E 'a'
 E -> 'b' | ε

FIRST(''E'') = {''b'', ε} and FIRST(''E'' ''a'') = {{brace|''b'', ''a''}}, so when the table is drawn, there is a conflict under the terminal ''b'' for the production rules of ''S''.

===== Special case: left recursion =====
[[Left recursion]] will cause a FIRST/FIRST conflict with all alternatives.

 E -> E '+' term | alt1 | alt2

==== FIRST/FOLLOW conflict ====
The FIRST and FOLLOW sets of a grammar rule overlap. With an [[empty string]] (ε) in the FIRST set, it is unknown which alternative to select.
An example of an LL(1) conflict:

 S -> A 'a' 'b'
 A -> 'a' | ε

The FIRST set of ''A'' is {''a'', ε}, and its FOLLOW set is {''a''}.

=== Solutions to LL(1) conflicts ===

==== Left factoring ====
A common left-factor is "factored out".

 A -> X | X Y Z

becomes

 A -> X B
 B -> Y Z | ε

This can be applied when two alternatives start with the same symbol, as in a FIRST/FIRST conflict.

Another, more complex example, using the FIRST/FIRST conflict example above:

 S -> E | E 'a'
 E -> 'b' | ε

becomes (merging into a single non-terminal)

 S -> 'b' | ε | 'b' 'a' | 'a'

which, through left-factoring, becomes

 S -> 'b' E | E
 E -> 'a' | ε

==== Substitution ====
Substituting a rule into another rule to remove indirect or FIRST/FOLLOW conflicts.
Note that this may cause a FIRST/FIRST conflict.

==== Left recursion removal ====
: ''See <ref>[https://books.google.com/books?id=zkpFTBtK7a4C&q=%22left+recursion%22 Modern Compiler Design], Grune, Bal, Jacobs and Langendoen</ref>''

For a general method, see [[Left recursion#Removing left recursion|removing left recursion]].
A simple example of left recursion removal: the following production rule has left recursion on E

 E -> E '+' T
 E -> T

This rule is nothing but a list of Ts separated by '+'. In regular expression form: T ('+' T)*. So the rule could be rewritten as

 E -> T Z
 Z -> '+' T Z
 Z -> ε

Now there is no left recursion and no conflicts on either of the rules.

However, not all context-free grammars have an equivalent LL(k)-grammar, e.g.:

 S -> A | B
 A -> 'a' A 'b' | ε
 B -> 'a' B 'b' 'b' | ε

It can be shown that there does not exist any LL(k)-grammar accepting the language generated by this grammar.

== See also ==
* [[Comparison of parser generators]]
* [[Parse tree]]
* [[Top-down parsing]]
* [[Bottom-up parsing]]

== Notes ==
{{reflist}}

== External links ==
* [https://web.archive.org/web/20131228024914/http://www.itu.dk/people/kfl/parsernotes.pdf A tutorial on implementing LL(1) parsers in C# (archived)]
* [http://www.supereasyfree.com/software/simulators/compilers/principles-techniques-and-tools/parsing-simulator/parsing-simulator.php Parsing Simulator] This simulator generates LL(1) parsing tables and resolves the exercises of the book.
* [https://cs.stackexchange.com/q/43 Language theoretic comparison of LL and LR grammars]
* [http://www.h8dems.com/llkparse.html LL(k) Parsing Theory]

{{Parsers}}

{{DEFAULTSORT:Ll Parser}}
[[Category:Parsing algorithms]]
[[Category:Articles with example C++ code]]
[[Category:Articles with example Python (programming language) code]]