Mirror of https://github.com/Mercury-Language/mercury.git
Synced 2026-04-15 01:13:30 +00:00

Commit: Fix indentation.
@@ -87,7 +87,7 @@
 :- func *(T) = regexp <= (regexp(T)).
 
     % One of the following two functions may be deprecated in future,
-    % depending upon whether there's a consensus concerning
+    % depending upon whether there is a consensus concerning
     % which is preferable. Both express alternation.
     %
 :- func T1 \/ T2 = regexp <= (regexp(T1), regexp(T2)).
@@ -153,7 +153,7 @@
     %
 :- func charset(char_range) = charset.
 
-    % Creates a union of all char ranges in the list. Returns the empty set
+    % Create a union of all char ranges in the list. Return the empty set
     % if the list is empty. Any invalid codepoints are ignored.
     %
 :- func charset_from_ranges(list(char_range)) = charset.
@@ -181,23 +181,22 @@
 :- func (T1 -> token_creator(Tok)) = pair(regexp, token_creator(Tok))
     <= regexp(T1).
 
-    % Construct a lexer from which we can generate running
-    % instances.
+    % Construct a lexer from which we can generate running instances.
     %
-    % NOTE: If several lexemes match the same string only
-    % the token generated by the one closest to the start
-    % of the list of lexemes is returned.
+    % NOTE: If several lexemes match the same string, this returns only
+    % the token generated by the one closest to the start of the
+    % list of lexemes.
     %
 :- func init(list(lexeme(Tok))::in, read_pred(Src)::in(read_pred))
     = (lexer(Tok, Src)::out) is det.
 
-    % Construct a lexer from which we can generate running
-    % instances. If we construct a lexer with init/4, we
-    % can additionally ignore specific tokens.
+    % Construct a lexer from which we can generate running instances.
+    % If we construct a lexer with init/4, we can additionally ignore
+    % specific tokens.
     %
-    % NOTE: If several lexemes match the same string only
-    % the token generated by the one closest to the start
-    % of the list of lexemes is returned.
+    % NOTE: If several lexemes match the same string, this returns only
+    % the token generated by the one closest to the start of the
+    % list of lexemes.
     %
 :- func init(list(lexeme(Tok))::in, read_pred(Src)::in(read_pred),
     ignore_pred(Tok)::in(ignore_pred)) = (lexer(Tok, Src)::out) is det.
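For context, the declarations touched by this commit might be used along the following lines. This is an illustrative sketch only: the `token` type, the `ident` and `ws` constructors, the `read_from_string` reader, and the `ignore_ws` predicate are assumptions for illustration and do not appear in the diff; only `init`, `\/`, and `->` come from the interface above.

```mercury
% Hypothetical token type and ignore predicate, not part of the commit.
:- type token
    --->    ident(string)
    ;       ws.

:- pred ignore_ws(token::in) is semidet.
ignore_ws(ws).

:- func make_lexer = lexer(token, string).
make_lexer = Lexer :-
    Lexemes = [
        % \/ expresses alternation; -> pairs a regexp with a token_creator.
        (("foo" \/ "bar") -> (func(Match) = ident(Match))),
        ((" " \/ "\t") -> (func(_) = ws))
    ],
    % init/3 (as opposed to init/2) additionally drops any token
    % accepted by the ignore predicate, per the NOTE above.
    Lexer = init(Lexemes, read_from_string, ignore_ws).
```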