Estimated hours taken: 6

NEWS:
	Documented the changed interfaces to list, std_util and graph.

configure.in:
	Added the number of bytes per word (calculated as sizeof(void *))
	as a configuration variable.

compiler/goal_util.m:
	Added an optional sanity check for ensuring that all variables in a goal
	get renamed in goal_util__rename_vars_in_goal[s].
	Also fixed a bug in goal_util__create_variables which was giving
	wrong names to some variables (which led to very confusing hlds dumps).

compiler/excess.m:
	In the calls to goal_util__rename_vars_in_goals, added the bool which
	indicates that we do not want the sanity check that makes sure
	that *all* variables get renamed.

compiler/inlining.m:
	In the calls to goal_util__rename_vars_in_goals, added the bool which
	indicates that we do want the sanity check that makes sure
	that *all* variables get renamed.
	Also fixed up calls to goal_util__create_variables for the bug fix
	described above.

compiler/quantification.m:
	In the calls to goal_util__rename_vars_in_goals, added the bool which
	indicates that we do not want the sanity check that makes sure
	that *all* variables get renamed.
	Also fixed up calls to goal_util__create_variables for the bug fix
	described above.

compiler/lookup_switch.m:
	Changed lookup_switch to use a configuration option "word_size" to
	find out the number of bytes (and hence the number of bits) per
	word, rather than having a magic number.

compiler/options.m:
	Added "word_size" for the number of bytes per word. Defaults to 4,
	but my next checkin will add a configuration parameter to mc.in.
	Don't port to any 16 bit machines in the next couple of days. ;-)
	Also changed req_density to dense_switch_req_density and added
	lookup_switch_req_density for the minimum density of lookup switches.

compiler/switch_gen.m:
	Changed uses of req_density to dense_switch_req_density and
	lookup_switch_req_density as appropriate.

library/graph.m:
	Added lots of comments.
	Fixed the interface to make it more consistent.
	Fixed some bugs.

library/list.m:
	Added some HO stuff from Philip:
		list__filter/3, list__filter/4
		list__filter_map, list__sort/3 (takes a comparison predicate).
	Moved the HO interface stuff into the interface at the
	top of the file.
	Removed list__map_maybe/3.

library/std_util.m:
	Added a pair/3 predicate from Philip for avoiding type ambiguities
	when using -/2.
	Added maybe_pred/3.

doc/user_guide.texi:
	Added documentation for the changes to the command-line options.

\input texinfo
@setfilename mercury_user_guide.info
@settitle The Mercury User's Guide
@ignore
@ifinfo
@format
START-INFO-DIR-ENTRY
* Mercury: (mercury). The Mercury User's Guide.
END-INFO-DIR-ENTRY
@end format
@end ifinfo
@end ignore
@c @smallbook
@c @cropmarks
@finalout
@setchapternewpage off
@ifinfo
This file documents the Mercury implementation.
Copyright (C) 1995 University of Melbourne.
Permission is granted to make and distribute verbatim copies of
this manual provided the copyright notice and this permission notice
are preserved on all copies.
@ignore
Permission is granted to process this file through TeX and print the
results, provided the printed document carries copying permission
notice identical to this one except for the removal of this paragraph
(this paragraph not being relevant to the printed manual).
@end ignore
Permission is granted to copy and distribute modified versions of this
manual under the conditions for verbatim copying, provided also that
the entire resulting derived work is distributed under the terms of a
permission notice identical to this one.
Permission is granted to copy and distribute translations of this manual
into another language, under the above conditions for modified versions.
@end ifinfo
@titlepage
@c @finalout
@title The Mercury User's Guide
@author Fergus Henderson
@author Thomas Conway
@author Zoltan Somogyi
@author Peter Ross
@page
@vskip 0pt plus 1filll
Copyright @copyright{} 1995 University of Melbourne.
Permission is granted to make and distribute verbatim copies of
this manual provided the copyright notice and this permission notice
are preserved on all copies.
Permission is granted to copy and distribute modified versions of this
manual under the conditions for verbatim copying, provided also that
the entire resulting derived work is distributed under the terms of a
permission notice identical to this one.
Permission is granted to copy and distribute translations of this manual
into another language, under the above conditions for modified versions.
@end titlepage
@page
@ifinfo
@node Top,,, (mercury)
@top The Mercury User's Guide
This guide describes the compilation environment of Mercury ---
how to build and debug Mercury programs.
@menu
* Introduction:: General overview
* Filenames:: File naming conventions
* Using mc:: Compiling and linking programs with the Mercury compiler
* Using Prolog:: Building and debugging Mercury programs with Prolog
* Using Mmake:: ``Mercury Make'', a tool for building Mercury programs
* Profiling:: The Mercury profiler @samp{mprof}, a tool for analyzing
program performance
* Invocation:: List of options for the Mercury compiler
* Environment:: Environment variables used by the compiler and utilities
* C compilers:: How to use a C compiler other than GNU C
@end menu
@end ifinfo
@node Introduction
@chapter Introduction
This document describes the compilation environment of Mercury.
It describes how to use @samp{mc}, the Mercury compiler;
how to use @samp{mmake}, the ``Mercury make'' program,
a tool built on top of ordinary or GNU make
to simplify the handling of Mercury programs;
and how to use Prolog to debug Mercury programs.
We strongly recommend that programmers use @samp{mmake} rather
than invoking @samp{mc} directly, because @samp{mmake} is generally
easier to use and avoids unnecessary recompilation.
Since the Mercury implementation is essentially a native-code compiler
which happens to compile via GNU C, not an interpreter,
debugging of compiled Mercury programs would require
a dedicated Mercury debugger program - or at least some
significant extensions to gdb. It would also require
the Mercury compiler to arrange for the object
code to contain suitable debugging information.
Although it is possible to debug the intermediate C code, this is
not productive, since the intermediate C code is extremely low-level
and bears little resemblance to the source code.
As an alternative, Mercury programmers may wish to use a Prolog system
to execute their Mercury programs in order to gain access to this facility.
The feasibility of this technique is dependent upon the program
being written in the intersection of the Prolog and Mercury languages,
which is possible because the two languages have almost the same syntax.
The Mercury implementation allows you to run a Mercury program
using NU-Prolog or SICStus Prolog (@pxref{Using Prolog}).
@node Filenames
@chapter File naming conventions
Mercury source files should be named @file{*.m}.
Each Mercury source file must contain a single Mercury module
whose name should be the same as the filename without
the @samp{.m} extension.
Files ending in @file{.int} and @file{.int2} are interface files;
these are generated automatically by the compiler,
using the @samp{--make-interface} option.
(The @file{.int} files are for direct dependencies,
while the @file{.int2} files are a shorter version
used for indirect dependencies.)
Since the interface of a module changes less often than its implementation,
the @file{.int} and @file{.int2} files will remain unchanged
on many compilations.
To avoid unnecessary recompilations of the clients of the module,
the timestamps on the @file{.int} and @file{.int2} files
are updated only if their contents change.
A @file{.date} file associated with the module is used as a date stamp;
it is used when deciding whether the interface files need to be regenerated.
Files ending in @file{.d} are automatically-generated Makefile fragments
which contain the dependencies for a module.
Files ending in @file{.dep} are automatically-generated Makefile fragments
which contain the rules for an entire program.
In the source code for the Mercury runtime library,
we use a few files ending in @file{.mod};
these are preprocessed using the perl script @samp{mod2c} to produce C files.
Originally the Mercury compiler also produced @file{.mod} files,
but now we compile directly to C.
As usual, @file{.c} files are C source code, @file{.o} files are object code,
@file{.no} files are NU-Prolog object code, and
@file{.ql} files are SICStus Prolog object code.
@node Using mc
@chapter Using the Mercury compiler
Following a long Unix tradition,
the Mercury compiler is called @samp{mc}.
Some of its options (e.g. @samp{-c}, @samp{-o}, and @samp{-I})
have a similar meaning to that in other Unix compilers.
Arguments to @samp{mc} that name Mercury source files
may omit the @samp{.m} suffix.
To compile a program which consists of just a single module,
use the command
@example
mc @var{module}.m
@end example
Unlike traditional Unix compilers, however,
@samp{mc} will put the executable into a file called @file{@var{module}},
not @file{a.out}.
To compile a module to object code without creating an executable,
use the command
@example
mc -c @var{module}.m
@end example
@samp{mc} will put the object code into a file called @file{@var{module}.o}.
It also will leave the intermediate C code in a file called
@file{@var{module}.c}.
Before you can compile a module,
you must make the interface files
for the modules that it imports (directly or indirectly).
You can create the interface files for one or more modules using the command
@example
mc -i @var{module1}.m @var{module2}.m ...
@end example
Given that you have made all the interface files,
one way to create an executable for a multi-module program
is to compile all the modules at the same time
using the command
@example
mc @var{module1}.m @var{module2}.m ...
@end example
This will by default put the resulting executable in @file{@var{module1}},
but you can use the @samp{-o @var{filename}} option to specify a different
name for the output file, if you so desire.
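For example, to compile two modules and call the resulting executable
@file{myprog} (an illustrative name), you could use
@example
mc -o myprog @var{module1}.m @var{module2}.m
@end example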
The other way to create an executable for a multi-module program
is to compile each module separately using @samp{mc -c},
and then link the resulting object files together.
The linking is a two stage process.
First, you must create and compile an @emph{initialization file},
which is a C source file
containing calls to automatically generated initialization functions
contained in the C code of the modules of the program:
@example
c2init @var{module1}.c @var{module2}.c ... > @var{main_module}_init.c
mgnuc -c @var{main_module}_init.c
@end example
The @samp{c2init} command line must contain
the name of the C file of every module in the program.
The order of the arguments is not important.
The @samp{mgnuc} command is the Mercury GNU C compiler;
it is a shell script that invokes the GNU C compiler @samp{gcc}
@c (or some other C compiler if GNU C is not available)
with the options appropriate for compiling
the C programs generated by Mercury.
You then link the object code of each module
with the object code of the initialization file to yield the executable:
@example
ml -o @var{main_module} @var{module1}.o @var{module2}.o ... @var{main_module}_init.o
@end example
@samp{ml}, the Mercury linker, is another shell script
that invokes a C compiler with options appropriate for Mercury,
this time for linking. @samp{ml} also pipes any error messages
from the linker through @samp{mdemangle}, the Mercury symbol demangler,
so that error messages refer to predicate names in the Mercury source
code rather than to the names used in the intermediate C code.
The above command puts the executable in the file @file{@var{main_module}}.
The same command line without the @samp{-o} option
would put the executable into the file @file{a.out}.
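Putting the pieces together, the complete separate-compilation sequence
for a two-module program might look like this
(the module names are placeholders):
@example
mc -i @var{main_module}.m @var{other_module}.m
mc -c @var{main_module}.m
mc -c @var{other_module}.m
c2init @var{main_module}.c @var{other_module}.c > @var{main_module}_init.c
mgnuc -c @var{main_module}_init.c
ml -o @var{main_module} @var{main_module}.o @var{other_module}.o @var{main_module}_init.o
@end example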
@samp{mc} and @samp{ml} both accept a @samp{-v} (verbose) option.
You can use that option to see what is actually going on.
For the full set of options of @samp{mc}, see @ref{Invocation}.
Once you have created an executable for a Mercury program,
you can go ahead and execute it. You may however wish to specify
certain options to the Mercury runtime system.
The Mercury runtime accepts
options via the @samp{MERCURY_OPTIONS} environment variable.
Setting @samp{MERCURY_OPTIONS} to @samp{-h}
will list the available options.
The most useful of these are the options that set the size of the stacks.
The det stack and the nondet stack
are allocated fixed sizes at program start-up.
The default size is 512k for the det stack and 128k for the nondet stack,
but these can be overridden with the @samp{-sd} and @samp{-sn} options,
whose arguments are the desired sizes of the det and nondet stacks
respectively, in units of kilobytes.
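For example, under a Bourne-style shell you could run a program
with larger stacks as follows (the sizes shown are purely illustrative):
@example
MERCURY_OPTIONS="-sd 1024 -sn 256"
export MERCURY_OPTIONS
./@var{main_module}
@end example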
On operating systems that provide the appropriate support,
the Mercury runtime will ensure that stack overflow
is trapped by the virtual memory system.
With conservative garbage collection (the default),
the heap will start out with a zero size,
and will be dynamically expanded as needed.
When not using conservative garbage collection,
the heap has a fixed size like the stacks.
The default size is 4 Mb, but this can be overridden
with the @samp{-sh} option.
@node Using Prolog
@chapter Using Prolog
Since the current Mercury implementation does not yet provide any
useful support for debugging, we recommend that you use a Prolog
system for debugging your Mercury programs.
However, there is no point in using a Prolog debugger to track down a bug
that can be detected statically by the Mercury compiler.
The command
@example
mc -e @var{module1}.m ...
@end example
causes the Mercury compiler
to perform all its syntactic and semantic checks on the named modules,
but not to generate any code.
In our experience, omitting that step is not wise.
If you do omit it, you often waste a lot of time
debugging problems that the compiler could have detected for you.
@menu
* Using NU-Prolog:: Building and debugging Mercury programs with NU-Prolog
* Using SICStus:: Building and debugging Mercury programs with SICStus Prolog
* Prolog hazards:: The hazards of executing Mercury programs using Prolog
@end menu
@node Using NU-Prolog
@section Using NU-Prolog
You can compile a Mercury source module using NU-Prolog via the command
@example
mnc @var{module1}.m ...
@end example
@samp{mnc} is the Mercury variant of @samp{nc}, the NU-Prolog compiler.
It adapts @code{nc} to compile Mercury programs,
e.g. by defining do-nothing predicates for the various Mercury declarations
(which are executed by @code{nc}).
Some invocations of @samp{mnc} will result in warnings such as
@example
Warning: main is a system predicate.
It shouldn't be used as a non-terminal.
@end example
Such warnings should be ignored.
@samp{mnc} compiles the modules it is given into NU-Prolog bytecode,
stored in files with a @file{.no} suffix.
You can link these together using the command
@example
mnl -o @var{main_module} @var{module1}.no ...
@end example
Ignore any warnings such as
@example
Warning: main/2 redefined
Warning: solutions/2 redefined
Warning: !/0 redefined
@end example
@samp{mnl}, the Mercury NU-Prolog linker,
will put the executable (actually a shell script invoking a save file)
into the file @file{@var{main_module}.nu}.
This can be executed normally using
@example
./@var{main_module}.nu @var{arguments}
@end example
Alternatively, one can execute such programs using @samp{mnp},
the Mercury version of np, the NU-Prolog interpreter.
The command
@example
mnp
@end example
will start up the Mercury NU-Prolog interpreter.
Inside the interpreter, you can load your source files
with a normal consulting command such as
@example
['@var{module}.m'].
@end example
You can also use the @file{--debug} option to @samp{mnl} when linking.
This will produce an executable whose entry point is the NU-Prolog interpreter,
rather than main/2 in your program.
In both cases, you can start executing your program by typing
@example
r("@var{program-name} @var{arguments}").
@end example
at the prompt of the NU-Prolog interpreter.
All the NU-Prolog debugging commands work as usual.
The most useful ones are
the @code{trace} and @code{spy} commands at the main prompt
to turn on complete or selective tracing respectively,
and the @code{l} (leap), @code{s} (skip), and @code{r} (redo)
commands of the tracer.
For more information, see the NU-Prolog documentation.
By default the debugger only displays the top levels of terms;
you can use the @samp{|} command to enter an interactive term browser.
(Within the term browser, type @samp{h.} for help.)
Also note that in the debugger, we use a version of @code{error/1}
which fails rather than aborting after printing the ``Software Error:'' message.
This makes debugging easier,
but will of course change the behaviour after an error occurs.
@node Using SICStus
@section Using SICStus Prolog
Using SICStus Prolog is similar to using NU-Prolog,
except that the commands to use are @samp{msc}, @samp{msl}, and
@samp{msp} rather than @samp{mnc}, @samp{mnl}, and @samp{mnp}.
Due to shortcomings in SICStus Prolog
(in particular, the lack of backslash escapes in character strings),
you need to use @samp{sicstus_conv} to convert Mercury @samp{.m} files
to the @samp{.pl} files that SICStus Prolog expects
before you can load them into the interpreter.
The command to use is just
@example
sicstus_conv @var{module}.m
@end example
By default, @samp{msc} compiles files to machine code using
SICStus Prolog's @samp{fastcode} mode. If space is more important
than speed, you can use the @samp{--mode compactcode} option,
which instructs @code{msc} to use SICStus Prolog's @samp{compactcode}
mode, which compiles files to a bytecode format.
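For example, assuming @samp{msc} takes its file arguments
in the same way as @samp{mnc}, you could compile a module to bytecode with
@example
msc --mode compactcode @var{module1}.m
@end example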
@node Prolog hazards
@section Hazards of using Prolog
There are some Mercury programs which are not valid Prolog programs.
In particular, Mercury will always reorder goals
to ensure that they are mode-correct
(or report a mode error if it cannot do so),
but Prolog systems will not always do so,
and will sometimes just silently give the wrong result.
For example, in Mercury the following predicate will usually succeed,
whereas in Prolog it will always fail.
@example
:- pred p(list(int)::in, list(int)::out) is semidet.
p(L0, L) :-
        L \= [],
        q(L0, L).
:- pred q(list(int)::in, list(int)::out) is det.
@end example
The reason is that in Mercury,
the test @samp{L \= []} is reordered to after the call to @code{q/2},
but in Prolog, it executes even though @code{L} is not bound,
and consequently the test always fails.
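One way to make this particular example behave the same way under Prolog
is to do the reordering yourself:
@example
p(L0, L) :-
        q(L0, L),       % call q/2 first, so that L is bound
        L \= [].
@end example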
NU-Prolog has logical alternatives
to the non-logical Prolog operations,
and since Mercury supports both syntaxes,
you can use NU-Prolog's logical alternatives to avoid this problem.
However, during the development of the Mercury compiler
we had to abandon their use for efficiency reasons.
Another hazard is that NU-Prolog does not have a garbage collector.
@node Using Mmake
@chapter Using Mmake
Mmake, short for ``Mercury Make'',
is a tool for building Mercury programs
that is built on top of ordinary or GNU Make @footnote{
We aim to eventually add support for ordinary ``Make'' programs,
but currently only GNU Make is supported.}.
With Mmake, building even a complicated Mercury program
consisting of a number of modules is as simple as
@example
mmake @var{main-module}.depend
mmake @var{main-module}
@end example
Mmake only recompiles those files that need to be recompiled,
based on automatically generated dependency information.
Most of the dependencies are stored in @file{.d} files that are
automatically recomputed every time you recompile,
so they are never out-of-date.
A little bit of the dependency information is stored in @file{.dep} files
which are more expensive to recompute.
The @samp{mmake @var{main-module}.depend} command
which recreates the @file{@var{main-module}.dep} file
needs to be repeated only when you add or remove a module from your program,
and there is no danger of getting an inconsistent executable
if you forget this step --- instead you will get a compile or link error.
@samp{mmake} allows you to build more than one program in the same directory.
Each program must have its own @file{.dep} file,
and therefore you must run @samp{mmake @var{program}.depend}
for each program.
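For example, to build two programs in the same directory
(the program names are placeholders):
@example
mmake @var{program1}.depend
mmake @var{program2}.depend
mmake @var{program1}
mmake @var{program2}
@end example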
If there is a file called @samp{Mmake} in the current directory,
Mmake will include that file in its automatically-generated Makefile.
The @samp{Mmake} file can override the default values of
various variables used by Mmake's builtin rules,
or it can add additional rules, dependencies, and actions.
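For example, a minimal @samp{Mmake} file might contain nothing more than
a line that selects a non-default grade:
@example
GRADE = asm_fast.gc
@end example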
Mmake's builtin rules are defined by the file
@file{@var{prefix}/lib/mercury/mmake/Mmake.rules}
(where @var{prefix} is @file{/usr/local/mercury-@var{version}} by default,
and @var{version} is the version number, e.g. @samp{0.6}),
as well as the rules in the automatically-generated @file{.dep} files.
These rules define the following targets:
@table @file
@item @var{main-module}.depend
Creates the file @file{@var{main-module}.dep} from @file{@var{main-module}.m}
and the modules it imports.
This step must be performed first.
@item @var{main-module}.ints
Ensure that the interface files for @var{main-module}
and its imported modules are up-to-date.
(If the underlying @samp{make} program does not handle transitive dependencies,
this step may be necessary before
attempting to make @file{@var{main-module}} or @file{@var{main-module}.check};
if the underlying @samp{make} is GNU Make, this step should not be necessary.)
@item @var{main-module}.check
Perform semantic checking on @var{main-module} and its imported modules.
Error messages are placed in @file{.err} files.
@item @var{main-module}
Compiles and links @var{main-module} using the Mercury compiler.
Error messages are placed in @file{.err2} files.
@item @var{main-module}.split
Compiles and links @var{main-module} using the Mercury compiler,
with the @samp{--split-c-files} option enabled.
@xref{Output-level (LLDS->C) optimization options} for more information
about @samp{--split-c-files}.
@item @var{main-module}.nu
Compiles and links @var{main-module} using NU-Prolog.
@item @var{main-module}.nu.debug
Compiles and links @var{main-module} using NU-Prolog.
The resulting executable will start up in the NU-Prolog interpreter
rather than calling main/2.
@item @var{main-module}.sicstus
Compiles and links @var{main-module} using SICStus Prolog.
@item @var{main-module}.sicstus.debug
Compiles and links @var{main-module} using SICStus Prolog.
The resulting executable will start up in the SICStus Prolog interpreter
rather than calling main/2.
@item @var{main-module}.clean
Removes the automatically generated files
that contain the compiled code of the program
and the error messages produced by the compiler.
Specifically, this will remove all the @samp{.c}, @samp{.s}, @samp{.o},
@samp{.no}, @samp{.ql}, @samp{.err}, and @samp{.err2} files
belonging to the named @var{main-module} or its imported modules.
@item @var{main-module}.realclean
Removes all the automatically generated files.
In addition to the files removed by @var{main-module}.clean, this
removes the @samp{.date}, @samp{.int}, @samp{.int2},
@samp{.d}, and @samp{.dep} files belonging to the modules of the program,
and also the various possible executables for the program ---
@samp{@var{main-module}},
@samp{@var{main-module}.nu},
@samp{@var{main-module}.nu.save},
@samp{@var{main-module}.nu.debug},
@samp{@var{main-module}.nu.debug.save},
@samp{@var{main-module}.sicstus}, and
@samp{@var{main-module}.sicstus.debug}.
@item clean
This makes @samp{@var{main-module}.clean} for every @var{main-module}
for which there is a @file{@var{main-module}.dep} file in the current
directory.
@item realclean
This makes @samp{@var{main-module}.realclean} for every @var{main-module}
for which there is a @file{@var{main-module}.dep} file in the current
directory.
@end table
The variables used by the builtin rules are defined in
@file{@var{prefix}/lib/mercury/mmake/Mmake.vars}.
@c ---------------------------------------------------------------------------
@node Profiling
@chapter Profiling
@menu
* Profiling introduction:: What is profiling useful for?
* Building profiled applications:: How to enable profiling.
* Creating the profile:: How to create profile data.
* Displaying the profile:: How to display the profile data.
* Analysis of results:: How to interpret the output.
@end menu
@node Profiling introduction
@section Introduction
The Mercury profiler @samp{mprof} is a tool which can be used to
analyze a Mercury program's performance, so that the programmer can
determine which predicates are taking up a disproportionate amount of
the execution time.
To obtain the best trade-off between productivity and efficiency,
programmers should not spend too much time optimizing their code
until they know which parts of the code are really taking up most
of the time. Only once the code has been profiled should the
programmer consider making optimizations that would improve
efficiency at the expense of readability or ease of maintenance.
A good profiler is a tool that should be part of every software
engineer's toolkit.
@node Building profiled applications
@section Building profiled applications
To enable profiling, your program must be built in a @samp{.prof} grade.
For example, if the default grade on your system is @samp{asm_fast.gc}, then
to compile with profiling enabled, use grade @samp{asm_fast.gc.prof}. You
can do this by setting the @samp{GRADE} variable in your Mmake file, e.g. by
adding the line @samp{GRADE=asm_fast.gc.prof}.
@xref{Compilation model options} for more information about the
different grades.
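For example, after adding that line to your @samp{Mmake} file,
you can force a rebuild in the new grade as follows
(the @samp{clean} step simply ensures that object files
from the old grade are not reused):
@example
mmake @var{main-module}.clean
mmake @var{main-module}.depend
mmake @var{main-module}
@end example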
Enabling profiling has several effects. Firstly, it causes the
compiler to generate slightly modified code which counts the number
of times each predicate is called, and for every predicate call,
records the caller and callee. Secondly, your program will be linked
with versions of the library and runtime that were compiled with
profiling enabled. (It also has the effect that, for each source file,
the compiler generates the static call graph for that file in
@samp{@var{module}.prof}.)
@node Creating the profile
@section Creating the profile
The next step is to run your program. The profiling version of your
program will collect profiling information during execution, and
save this information in the files @samp{Prof.Counts}, @samp{Prof.Decl},
and @samp{Prof.CallPair}.
(@samp{Prof.Decl} contains the names of the predicates and their
associated addresses, @samp{Prof.CallPair} records the number of times
each predicate was called by each different caller, and @samp{Prof.Counts}
records the number of times that execution was in each predicate
when a profiling interrupt occurred.)
It would be nice if there was a way to combine profiling data from
multiple runs of the same program, but unfortunately there is not yet
any way to do that.
Due to a known timing-related bug in our code, you may occasionally get
segmentation violations when running your program with profiling enabled.
If this happens, just run it again --- the problem does not occur frequently.
This bug will hopefully be fixed soon.
@node Displaying the profile
@section Displaying the profile
To display the profile, just type @samp{mprof}. This will read the
@file{Prof.*} files and display the flat profile in a nice human-readable
format. If you also want to see the call graph profile, which takes a lot
longer to generate, type @samp{mprof -c}.
Note that @samp{mprof} can take quite a while to execute, and will
usually produce quite a lot of output, so you will usually want to
redirect the output into a file with a command such as
@samp{mprof > mprof.out}.
@node Analysis of results
@section Analysis of results
The profile output consists of three major sections. These are
named the call graph profile, the flat profile and the alphabetic listing.
The call graph profile presents the local call graph of each
predicate. For each predicate it shows the parents (callers) and
children (callees) of that predicate, and shows the execution time and
call counts for each parent and child. It is sorted on the total
amount of time spent in the predicate and all of its descendents (i.e.
all of the predicates that it calls, directly or indirectly).
The flat profile presents just the execution time spent in each predicate.
It does not count the time spent in descendents of a predicate.
The alphabetic listing just lists the predicates in alphabetical order,
along with their index number in the call graph profile, so that you can
quickly find the entry for a particular predicate in the call graph profile.
The profiler works by interrupting the program at frequent intervals,
and each time recording the currently active predicate and its caller.
It uses these counts to determine the proportion of the total time spent in
each predicate. This means that the figures calculated for these times
are only a statistical approximation to the real values, and so they
should be treated with some caution.
The time spent in a predicate and its descendents is calculated by
propagating the times up the call graph, assuming that each call to a
predicate from a particular caller takes the same amount of time.
This assumption is usually reasonable, but again the results should
be treated with caution.
Note that any time spent in a C function (e.g. time spent in
@samp{GC_malloc()}, which does memory allocation and garbage collection) is
credited to the predicate that called that C function.
Here is a small portion of the call graph profile from an example program.
@example
                                    called/total       parents
index  %time      self  descendents  called+self   name                  index
                                    called/total       children
                                                 <spontaneous>
[1]    100.0      0.00    0.75      0        call_engine_label [1]
                  0.00    0.75      1/1          do_interpreter [3]
-----------------------------------------------
                  0.00    0.75      1/1          do_interpreter [3]
[2]    100.0      0.00    0.75      1        io__run/0(0) [2]
                  0.00    0.00      1/1          io__init_state/2(0) [11]
                  0.00    0.74      1/1          main/2(0) [4]
-----------------------------------------------
                  0.00    0.75      1/1          call_engine_label [1]
[3]    100.0      0.00    0.75      1        do_interpreter [3]
                  0.00    0.75      1/1          io__run/0(0) [2]
-----------------------------------------------
                  0.00    0.74      1/1          io__run/0(0) [2]
[4]     99.9      0.00    0.74      1        main/2(0) [4]
                  0.00    0.74      1/1          sort/2(0) [5]
                  0.00    0.00      1/1          print_list/3(0) [16]
                  0.00    0.00      1/10         io__write_string/3(0) [18]
-----------------------------------------------
                  0.00    0.74      1/1          main/2(0) [4]
[5]     99.9      0.00    0.74      1        sort/2(0) [5]
                  0.05    0.65      1/1          list__perm/2(0) [6]
                  0.00    0.09  40320/40320      sorted/1(0) [10]
-----------------------------------------------
                                    8            list__perm/2(0) [6]
                  0.05    0.65      1/1          sort/2(0) [5]
[6]     86.6      0.05    0.65      1+8      list__perm/2(0) [6]
                  0.00    0.60   5914/5914       list__insert/3(2) [7]
                                    8            list__perm/2(0) [6]
-----------------------------------------------
                  0.00    0.60   5914/5914       list__perm/2(0) [6]
[7]     80.0      0.00    0.60   5914        list__insert/3(2) [7]
                  0.60    0.60   5914/5914       list__delete/3(3) [8]
-----------------------------------------------
                                40319            list__delete/3(3) [8]
                  0.60    0.60   5914/5914       list__insert/3(2) [7]
[8]     80.0      0.60    0.60   5914+40319  list__delete/3(3) [8]
                                40319            list__delete/3(3) [8]
-----------------------------------------------
                  0.00    0.00      3/69283      tree234__set/4(0) [15]
                  0.09    0.09  69280/69283      sorted/1(0) [10]
[9]     13.3      0.10    0.10  69283        compare/3(0) [9]
                  0.00    0.00      3/3          __Compare___io__stream/0(0) [20]
                  0.00    0.00  69280/69280      builtin_compare_int/3(0) [27]
-----------------------------------------------
                  0.00    0.09  40320/40320      sort/2(0) [5]
[10]    13.3      0.00    0.09  40320        sorted/1(0) [10]
                  0.09    0.09  69280/69283      compare/3(0) [9]
-----------------------------------------------
@end example
The first entry is @samp{call_engine_label} and its parent is
@samp{<spontaneous>}, meaning that it is the root of the call graph.
(The first three entries, @samp{call_engine_label}, @samp{do_interpreter},
and @samp{io__run/0} are all part of the Mercury runtime;
@samp{main/2} is the entry point to the user's program.)
Each entry of the call graph profile consists of three sections: the parent
predicates, the current predicate, and the child predicates.
Reading across from the left, for the current predicate the fields are:
@itemize @bullet
@item
The unique index number for the current predicate.
(The index numbers are used only to make it easier to find
a particular entry in the call graph.)
@item
The percentage of total execution time spent in the current predicate
and all its descendents.
As noted above, this is only a statistical approximation.
@item
The ``self'' time: the time spent executing code that is
part of the current predicate.
As noted above, this is only a statistical approximation.
@item
The descendent time: the time spent in the
current predicate and all its descendents.
As noted above, this is only a statistical approximation.
@item
The number of times a predicate is called.
If a predicate is (directly) recursive, this column
will contain the number of calls from other predicates,
a plus sign, and then the number of recursive calls.
These numbers are exact, not approximate.
@item
The name of the predicate followed by its index number.
@end itemize
The predicate names are followed not just by their arity but also by
their mode in brackets. A mode of zero corresponds to the first mode
declaration of that predicate in the source code. For example,
@samp{list__delete/3(3)} corresponds to the @samp{(out, out, in)} mode
of @samp{list__delete/3}.
For the parent and child predicates, the self and descendent times have
slightly different meanings. For the parent predicates, the self and descendent
times represent the proportion of the current predicate's self and descendent
time that is due to calls from that parent. These times are obtained using the
assumption that each call contributes equally to the total time of the current predicate.
@c XXX is that really true? Do we really make that assumption?
@node Invocation
@chapter Invocation
This section contains a brief description of all the options
available for @samp{mc}, the Mercury compiler.
Sometimes this list is a little out-of-date;
use @samp{mc --help} to get the most up-to-date list.
@menu
* Invocation overview::
* Verbosity options::
* Warning options::
* Output options::
* Auxiliary output options::
* Compilation model options::
* Code generation options::
* Optimization options::
* Link options::
* Miscellaneous options::
@end menu
@node Invocation overview
@section Invocation overview
@code{mc} is invoked as
@example
mc [@var{options}] @var{modules}
@end example
For module names, the trailing @file{.m} is optional.
Options are either short (single-letter) options preceded by a single @samp{-},
or long options preceded by @samp{--}.
Options are case-sensitive.
We call options that do not take arguments @dfn{flags}.
Single-letter flags may be grouped with a single @samp{-}, e.g. @samp{-vVc}.
Single-letter flags may be negated
by appending another trailing @samp{-}, e.g. @samp{-v-}.
Long flags may be negated by preceding them with @samp{no-},
e.g. @samp{--no-verbose}.
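For example, the following command compiles a module to object code
with verbose progress messages, optimization level 2,
and singleton-variable warnings disabled:
@example
mc -v -O 2 --no-warn-singleton-variables -c @var{module}.m
@end example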
@node Warning options
@section Warning options
@table @code
@item -w
@itemx --inhibit-warnings
Disable all warning messages.
@sp 1
@item --halt-at-warn
This option causes the compiler to treat all
warnings as if they were errors. This means that
if any warning is issued, the compiler will not
generate code --- instead, it will return a
non-zero exit status.
@sp 1
@item --halt-at-syntax-error
This option causes the compiler to halt immediately
after syntax checking and not do any semantic checking
if it finds any syntax errors in the program.
@sp 1
@item --no-warn-singleton-variables
Don't warn about variables which only occur once.
@sp 1
@item --no-warn-missing-det-decls
For predicates that are local to a module (those that
are not exported), don't issue a warning if the @samp{pred}
or @samp{mode} declaration does not have a determinism annotation.
Use this option if you want the compiler to perform automatic
determinism inference for non-exported predicates.
@sp 1
@item --no-warn-det-decls-too-lax
Don't warn about determinism declarations
which could have been stricter.
@sp 1
@item --no-warn-nothing-exported
Don't warn about modules whose interface sections have no
exported predicates, modes or types.
@sp 1
@item --warn-unused-args
Warn about predicate arguments which are not used.
@end table
@node Verbosity options
@section Verbosity options
@table @code
@item -v
@itemx --verbose
Output progress messages at each stage in the compilation.
@sp 1
@item -V
@itemx --very-verbose
Output very verbose progress messages.
@sp 1
@item -E
@itemx --verbose-error-messages
Explain error messages. Asks the compiler to give you a more
detailed explanation of any errors it finds in your program.
@sp 1
@item -S
@itemx --statistics
Output messages about the compiler's time/space usage.
At the moment this option implies @samp{--no-trad-passes},
so you get information at the boundaries between phases of the compiler.
@sp 1
@item -T
@itemx --debug-types
Output detailed debugging traces of the type checking.
@sp 1
@item -N
@itemx --debug-modes
Output detailed debugging traces of the mode checking.
@sp 1
@item --vndebug @var{n}
Output detailed debugging traces of the value numbering optimization pass.
The different bits in the number argument of this option control the
printing of different types of tracing messages.
@end table
@node Output options
@section Output options
These options are mutually exclusive.
If more than one of these options is specified, only the first in
this list will apply.
If none of these options are specified, the default action is to
compile and link the modules named on the command line to produce
an executable.
@table @code
@item -M
@itemx --generate-dependencies
Output ``Make''-style dependencies for the module
and all of its dependencies to @file{@var{module}.dep}.
@sp 1
@item -i
@itemx --make-interface
Write the module interface to @file{@var{module}.int}.
Also write the short interface to @file{@var{module}.int2}.
@sp 1
@item -G
@itemx --convert-to-goedel
Convert the Mercury code to Goedel. Output to file @file{@var{module}.loc}.
The translation is not perfect; some Mercury constructs cannot be easily
translated into Goedel.
@sp 1
@item -P
@itemx --pretty-print
@itemx --convert-to-mercury
Convert to Mercury. Output to file @file{@var{module}.ugly}.
This option acts as a Mercury ugly-printer.
(It would be a pretty-printer, except that comments are stripped
and nested if-then-elses are indented too much --- so the result
is rather ugly.)
@sp 1
@item --typecheck-only
Just check the syntax and type-correctness of the code.
Don't invoke the mode analysis and later passes of the compiler.
When converting Prolog code to Mercury,
it can sometimes be useful to get the types right first
and worry about modes second;
this option supports that approach.
@sp 1
@item -e
@itemx --errorcheck-only
Check the module for errors, but do not generate any code.
@sp 1
@item -C
@itemx --compile-to-c
@itemx --compile-to-C
Generate C code in @file{@var{module}.c}, but not object code.
@sp 1
@item -c
@itemx --compile-only
Generate C code in @file{@var{module}.c}
and object code in @file{@var{module}.o}
but do not attempt to link the named modules.
@end table
@node Auxiliary output options
@section Auxiliary output options
@table @code
@item --auto-comments
Output comments in the @file{@var{module}.c} file.
This is primarily useful for trying to understand
how the generated C code relates to the source code,
e.g. in order to debug the compiler.
The code may be easier to understand if you also use the
@samp{--no-llds-optimize} option.
@sp 1
@item -n
@itemx --line-numbers
Output source line numbers in the generated code.
Only works with the @samp{--convert-to-goedel} and @samp{--convert-to-mercury}
options.
@sp 1
@item --show-dependency-graph
Write out the dependency graph to @file{@var{module}.dependency_graph}.
@sp 1
@item -d @var{stage}
@itemx --dump-hlds @var{stage}
Dump the HLDS (intermediate representation) after
the specified stage number or stage name to
@file{@var{module}.hlds_dump.@var{num}-@var{name}}.
Stage numbers range from 1 to 99; not all stage numbers are valid.
The special stage name @samp{all} causes the dumping of all stages.
Multiple dump options accumulate.
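For example, to dump the HLDS after every stage while compiling a module,
you might use
@example
mc -c --dump-hlds all @var{module}.m
@end example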
@sp 1
@item -D
@itemx --verbose-dump-hlds
With @samp{--dump-hlds}, dumps some additional info.
@end table
@node Compilation model options
@section Compilation model options
The following compilation options affect the generated
code in such a way that the entire program must be
compiled with the same setting of these options,
and it must be linked to a version of the Mercury
library which has been compiled with the same setting.
Rather than setting them individually, you must
specify them all at once by selecting a particular
compilation model (``grade'').
@table @asis
@item @code{-s @var{grade}}
@itemx @code{--grade @var{grade}}
Select the compilation model. The @var{grade} should be one of
@samp{debug}, @samp{none}, @samp{reg}, @samp{jump}, @samp{asm_jump},
@samp{fast}, @samp{asm_fast},
or one of those with @samp{.gc}, @samp{.prof} or @samp{.gc.prof} appended.
The default grade is system-dependent; it is chosen at installation time
by @samp{configure}, the auto-configuration script, but can be overridden
with the @samp{MERCURY_DEFAULT_GRADE} environment variable if desired.
Depending on your particular installation, only a subset
of these possible grades will have been installed.
Attempting to use a grade which has not been installed
will result in an error at link time.
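For example, to compile and link a single-module program in a particular
grade (the grade shown is just an illustration):
@example
mc --grade asm_fast.gc @var{module}.m
@end example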
The following table shows the options that are selected by each grade; it
is followed by descriptions of those options.
@table @asis
@item @var{Grade}
@var{Options implied}.
@item @samp{debug}
@code{--debug --no-c-optimize}.
@item @samp{none}
None.
@item @samp{reg}
@code{--gcc-global-registers}.
@item @samp{jump}
@code{--gcc-non-local-gotos}.
@item @samp{fast}
@code{--gcc-global-registers --gcc-non-local-gotos}.
@item @samp{asm_jump}
@code{--gcc-non-local-gotos --asm-labels}.
@item @samp{asm_fast}
@code{--gcc-global-registers --gcc-non-local-gotos --asm-labels}.
@item Any of the above followed by @samp{.gc}
As above, plus @code{--conservative-gc}.
@item Any of the above followed by @samp{.prof}
As above, plus @code{--profiling}.
@end table
@sp 1
Reminder: the following options should not be set individually;
instead, you should use the @samp{--grade} option to select
the appropriate combination of settings.
@sp 1
@item @code{--gcc-global-registers} (grades: reg, fast, asm_fast)
@itemx @code{--no-gcc-global-registers} (grades: debug, none, jump, asm_jump)
Specify whether or not to use GNU C's global register variables extension.
@sp 1
@item @code{--gcc-non-local-gotos} (grades: jump, fast, asm_jump, asm_fast)
@itemx @code{--no-gcc-non-local-gotos} (grades: debug, none, reg)
Specify whether or not to use GNU C's ``labels as values'' extension.
@sp 1
@item @code{--asm-labels} (grades: asm_jump, asm_fast)
@itemx @code{--no-asm-labels} (grades: debug, none, reg, jump, fast)
Specify whether or not to use GNU C's asm extensions for inline assembler
labels.
@sp 1
@item @code{--gc @{none, conservative, accurate@}}
@itemx @code{--garbage-collection @{none, conservative, accurate@}}
Specify which method of garbage collection to use.
@samp{.gc} grades use @samp{--gc conservative}, other grades use
@samp{--gc none}.
@samp{accurate} is not yet implemented.
@sp 1
@item @code{--tags @{none, low, high@}}
(This option is not intended for general use.)
Specify whether to use the low bits or the high bits of
each word as tag bits (default: low).
@sp 1
@item @code{--num-tag-bits @var{n}}
(This option is not intended for general use.)
Use @var{n} tag bits. This option is required if you specify
@samp{--tags high}.
With @samp{--tags low}, the default number of tag bits to use
is determined by the auto-configuration script.
@sp 1
@item @code{--num-real-regs @var{n}}
(This option is not intended for general use.)
Assume r1 up to r@var{n} are real machine registers.
@sp 1
@item @code{--profiling} (grades: any grade ending in @samp{.prof})
Enable profiling. Insert profiling hooks in the
generated code, and also output some profiling
information (the static call graph) to the file
@samp{@var{module}.prof}. @xref{Profiling}.
@item @code{--debug} (grades: debug)
Enable debugging.
Pass the @samp{-g} flag to the C compiler, instead of @samp{-DSPEED},
to enable debugging of the generated C code.
This option also implies @samp{--no-c-optimize}.
Debugging support is currently extremely primitive -
this option is probably not useful to anyone except the Mercury implementors.
We recommend that you instead use @samp{mnp} or @samp{msp}.
@xref{Using Prolog} for details.
@sp 1
@item @code{--args @{simple, compact@}}
@itemx @code{--arg-convention @{simple, compact@}}
(This option is not intended for general use.)
Use the specified argument passing convention
in the generated low-level C code.
With the @samp{simple} convention,
the @var{n}th argument is passed in or out using register r@var{n}.
With the @samp{compact} convention,
the @var{n}th input argument is passed using register r@var{n}
and the @var{n}th output argument is returned using register r@var{n}.
The @samp{compact} convention generally leads to more efficient code.
However, currently only the @samp{simple} convention is supported.
@end table
@node Code generation options
@section Code generation options
@table @code
@item --no-trad-passes
The default @samp{--trad-passes} completely processes each predicate
before going on to the next predicate.
This option tells the compiler
to complete each phase of code generation on all predicates
before going on to the next phase on all predicates.
@c @sp 1
@c @item --no-polymorphism
@c Don't handle polymorphic types.
@c (Generates slightly more efficient code, but stops
@c polymorphism from working except in special cases.)
@sp 1
@item --no-reclaim-heap-on-nondet-failure
Don't reclaim heap on backtracking in nondet code.
@sp 1
@item --no-reclaim-heap-on-semidet-failure
Don't reclaim heap on backtracking in semidet code.
@sp 1
@item --use-macro-for-redo-fail
Emit the fail or redo macro instead of a branch
to the fail or redo code in the runtime system.
@sp 1
@item --cc @var{compiler-name}
Specify which C compiler to use.
@sp 1
@item --c-include-directory @var{dir}
Specify the directory containing the Mercury C header files.
@sp 1
@item --cflags @var{options}
Specify options to be passed to the C compiler.
@end table
@node Optimization options
@section Optimization options
@menu
* Selecting an optimization level::
* High-level (HLDS->HLDS) optimization options::
* Medium-level (HLDS->LLDS) optimization options::
* Low-level (LLDS->LLDS) optimization options::
* Output-level (LLDS->C) optimization options::
@end menu
@node Selecting an optimization level
@subsection Selecting an optimization level
@table @code
@item -O @var{n}
@itemx --opt-level @var{n}
@itemx --optimization-level @var{n}
Set optimization level to @var{n}.
Optimization level 0 means no optimization,
while optimization level 5 means full optimization.
@item --opt-space
@itemx --optimize-space
Turn on optimizations that reduce code size
and turn off optimizations that significantly increase code size.
@end table
@node High-level (HLDS->HLDS) optimization options
@subsection High-level (HLDS->HLDS) optimization options
These optimizations are high-level transformations on our HLDS (high-level
data structure).
@table @code
@item --no-inlining
Disable the inlining of simple procedures.
@sp 1
@item --no-common-struct
Disable optimization of common term structures.
@sp 1
@item --no-common-goal
Disable optimization of common goals.
At the moment this optimization
detects only common deconstruction unifications.
Disabling this optimization reduces the class of predicates
that the compiler considers to be deterministic.
@c @item --constraint-propagation
@c Enable the constraint propagation transformation.
@c @sp 1
@c @item --prev-code
@c Migrate into the start of branched goals.
@sp 1
@item --no-follow-code
Don't migrate builtin goals into branched goals.
@sp 1
@item --no-optimize-unused-args
Don't remove unused predicate arguments. The compiler will
generate less efficient code for polymorphic predicates.
@sp 1
@item --no-optimize-higher-order
Don't specialize calls to higher-order predicates where
the higher-order arguments are known.
@sp 1
@item --optimize-dead-procs
Enable dead predicate elimination.
@sp 1
@item --excess-assign
Remove excess assignment unifications.
@c @sp 1
@c @item --no-specialize
@c Disable the specialization of procedures.
@end table
@node Medium-level (HLDS->LLDS) optimization options
@subsection Medium-level (HLDS->LLDS) optimization options
These optimizations are applied during the process of generating
low-level intermediate code from our high-level data structure.
@table @code
@item --no-static-ground-terms
Disable the optimization of constructing constant ground terms
at compile time and storing them as static constants.
@sp 1
@item --no-smart-indexing
Generate switches as simple if-then-else chains;
disable string hashing and integer table-lookup indexing.
@sp 1
@item --dense-switch-req-density @var{percentage}
The jump table generated for an atomic switch
must have at least this percentage of full slots (default: 25).
@sp 1
@item --dense-switch-size @var{size}
The jump table generated for an atomic switch
must have at least this many entries (default: 4).
@sp 1
@item --lookup-switch-req-density @var{percentage}
The lookup tables generated for an atomic switch
in which all the outputs are constant terms
must have at least this percentage of full slots (default: 25).
@sp 1
@item --lookup-switch-size @var{size}
The lookup tables generated for an atomic switch
in which all the outputs are constant terms
must have at least this many entries (default: 4).
@sp 1
@item --string-switch-size @var{size}
The hash table generated for a string switch
must have at least this many entries (default: 8).
@sp 1
@item --tag-switch-size @var{size}
The number of alternatives in a tag switch
must be at least this number (default: 8).
@sp 1
@item --no-middle-rec
Disable the middle recursion optimization.
@sp 1
@item --no-simple-neg
Don't generate simplified code for simple negations.
@sp 1
@item --no-follow-vars
Don't optimize the assignment of registers in branched goals.
@end table
@node Low-level (LLDS->LLDS) optimization options
@subsection Low-level (LLDS->LLDS) optimization options
These optimizations are transformations that are applied to our
low-level intermediate code before emitting C code.
@table @code
@item --no-llds-optimize
Disable the low-level optimization passes.
@sp 1
@item --no-optimize-peep
Disable local peephole optimizations.
@sp 1
@item --no-optimize-jumps
Disable elimination of jumps to jumps.
@sp 1
@item --no-optimize-fulljumps
Disable elimination of jumps to ordinary code.
@sp 1
@item --no-optimize-labels
Disable elimination of dead labels and code.
@sp 1
@item --optimize-dups
Enable elimination of duplicate code.
@c @sp 1
@c @item --optimize-copyprop
@c Enable the copy propagation optimization.
@sp 1
@item --optimize-value-number
Perform value numbering on extended basic blocks.
@c @sp 1
@c @item --pred-value-number
@c Extend value numbering to whole predicates, rather than just basic blocks.
@sp 1
@item --no-optimize-frames
Disable stack frame optimizations.
@sp 1
@item --no-optimize-delay-slot
Disable branch delay slot optimizations.
@sp 1
@item --optimize-repeat @var{n}
Iterate most optimizations at most @var{n} times (default: 3).
@sp 1
@item --optimize-vnrepeat @var{n}
Iterate value numbering at most @var{n} times (default: 1).
@end table
@node Output-level (LLDS->C) optimization options
@subsection Output-level (LLDS->C) optimization options
These optimizations are applied during the process of generating
C intermediate code from our low-level data structure.
@table @code
@item --no-emit-c-loops
Use only gotos -- don't emit C loop constructs.
@sp 1
@item --procs-per-c-function @var{n}
Don't put the code for more than @var{n} Mercury
procedures in a single C function. The default
value of @var{n} is one. Increasing @var{n} can produce
slightly more efficient code, but makes compilation slower.
Setting @var{n} to the special value zero has the effect of
putting all the procedures in a single function,
which produces the most efficient code
but tends to severely stress the C compiler.
@sp 1
@item --split-c-files
Generate each C function in its own C file,
so that the linker will optimize away unused code.
This has the same effect as @samp{--optimize-dead-procs},
except that it works globally at link time, rather than
over a single module, so it does a much better job of
eliminating unused procedures.
This option significantly increases compilation time,
link time, and intermediate disk space requirements,
but in return reduces the size of the final
executable, typically by about 10-20%.
This option is only useful with @samp{--procs-per-c-function 1}.
@sp 1
@item --no-c-optimize
Don't enable the C compiler's optimizations.
@end table
@node Miscellaneous options
@section Miscellaneous options
@table @code
@item -b @var{builtin}
@itemx --builtin-module @var{builtin}
Use @var{builtin} instead of @samp{mercury_builtin} as the
module which always gets automatically imported.
@sp 1
@item -I @var{dir}
@itemx --search-directory @var{dir}
Append @var{dir} to the list of directories to be searched for
imported modules.
@sp 1
@item -?
@itemx -h
@itemx --help
Print a usage message.
@c @item -H @var{n}
@c @itemx --heap-space @var{n}
@c Pre-allocate @var{n} kilobytes of heap space.
@c This option is now obsolete. In the past it was used to avoid
@c NU-Prolog's "Panic: growing stacks has required shifting the heap"
@c message.
@end table
@node Link options
@section Link options
@table @code
@sp 1
@item -o @var{filename}
@itemx --output-file @var{filename}
Specify the name of the final executable.
(The default executable name is the same as the name of the
first module on the command line, but without the @samp{.m} extension.)
@sp 1
@item --link-flags @var{options}
Specify options to be passed to @samp{ml}, the Mercury linker.
@sp 1
@item -L @var{directory}
@itemx --library-directory @var{directory}
Append @var{directory} to the list of directories in which to search for libraries.
@sp 1
@item -l @var{library}
@itemx --library @var{library}
Link with the specified library.
@sp 1
@item --link-object @var{object}
Link with the specified object file.
@end table
@node Environment
@chapter Environment variables
The shell scripts in the Mercury compilation environment
will use the following environment variables if they are set.
There should be little need to use these, because the default
values will generally work fine.
@table @code
@item MERCURY_DEFAULT_GRADE
The default grade to use if no @samp{--grade} option is specified.
@sp 1
@item MERCURY_OPTIONS
A list of options for the Mercury runtime that gets
linked into every Mercury program.
Options are available to set the size of the different memory
areas, and to enable various debugging traces.
Set @samp{MERCURY_OPTIONS} to @samp{-h} for help.
@sp 1
@item MERCURY_C_INCL_DIR
Directory for the Mercury C header files (@file{*.h}).
@sp 1
@item MERCURY_INT_DIR
Directory for the Mercury library interface
files (@file{*.int} and @file{*.int2}).
@sp 1
@item MERCURY_NC_BUILTIN
Filename of the Mercury `nc'-compatibility file (@file{nc_builtin.nl}).
@sp 1
@item MERCURY_C_LIB_DIR
Base directory containing the Mercury libraries (@file{libmer.a} and
possibly @file{libmer.so}) for each configuration and grade.
The libraries for each configuration and grade should
be in the subdirectory @var{config}/@var{grade} of @code{$MERCURY_C_LIB_DIR}.
@sp 1
@item MERCURY_NONSHARED_LIB_DIR
For IRIX 5, this environment variable can be used to specify a
directory containing a version of libgcc.a which has been compiled with
@samp{-mno-abicalls}. See the file @samp{README.IRIX-5} in the Mercury
source distribution.
@sp 1
@item MERCURY_MOD_LIB_DIR
The directory containing the .init files in the Mercury library.
They are used to create the initialization file @file{*_init.c}.
@sp 1
@item MERCURY_MOD_LIB_MODS
The names of the .init files in the Mercury library.
@sp 1
@item MERCURY_NU_LIB_DIR
Directory for the NU-Prolog object files (@file{*.no}) for the
NU-Prolog Mercury library.
@sp 1
@item MERCURY_NU_LIB_OBJS
List of the NU-Prolog object files (@file{*.no}) for the Mercury
library.
@sp 1
@item MERCURY_COMPILER
Filename of the Mercury Compiler.
@sp 1
@item MERCURY_INTERPRETER
Filename of the Mercury Interpreter.
@sp 1
@item MERCURY_MKINIT
Filename of the program to create the @file{*_init.c} file.
@end table
@node C compilers
@chapter Using a different C compiler
The Mercury compiler takes special advantage of certain extensions
provided by GNU C to generate much more efficient code. We therefore
recommend that you use GNU C for compiling Mercury programs.
However, if for some reason you wish to use another compiler,
it is possible to do so. Here's what you need to do.
@itemize @bullet
@item You must specify the name of the new compiler.
You can do this either by setting the @samp{MERCURY_C_COMPILER}
environment variable, by adding a line
@samp{MGNUC=MERCURY_C_COMPILER=@dots{} mgnuc} to your @samp{Mmake} file,
or by using the @samp{--cc} option to @samp{mc}.
You may need to specify some option(s) to the C compiler
to ensure that it uses an ANSI preprocessor (e.g. if you
are using the DEC Alpha/OSF 3.2 C compiler, you would need to
pass @samp{--cc="cc -std"} to @samp{mc} so that it will pass the
@samp{-std} option to @samp{cc}).
@item
You must use the grade @samp{none} or @samp{none.gc}.
You can specify the grade in one of three ways: by setting the
@samp{MERCURY_DEFAULT_GRADE} environment variable, by adding a line
@samp{GRADE=@dots{}} to your @samp{Mmake} file, or by using the
@samp{--grade} option to @samp{mc}. (You will also need to install
those grades of the Mercury library, if you have not already done so.)
@item
If your compiler is particularly strict in
enforcing ANSI compliance, you may also need to compile the Mercury
code with @samp{--no-static-ground-terms}.
@end itemize
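Putting these steps together, building a single-module program with a
vendor C compiler might look like this
(the compiler name and flags are illustrative, taken from the
DEC Alpha example above):
@example
MERCURY_DEFAULT_GRADE=none.gc
export MERCURY_DEFAULT_GRADE
mc --cc="cc -std" --no-static-ground-terms @var{module}.m
@end example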
@contents
@bye