\input texinfo
@setfilename mercury_user_guide.info
@settitle The Mercury User's Guide
@dircategory The Mercury Programming Language
@direntry
* Mercury User's Guide: (mercury_user_guide). The Mercury User's Guide.
@end direntry
@c @smallbook
@c @cropmarks
@finalout
@setchapternewpage off
@c ----------------------------------------------------------------------------
@c We use the following indices in this document:
@c
@c The "@cindex" / "cp" (concept) index:
@c for general concepts (e.g. "Determinism", "Debugging", etc.)
@c The "@pindex" / "pg" (program) index:
@c for programs or shell scripts (mmc, mgnuc, etc.).
@c The "@findex" / "fn" (function) index:
@c for command-line options.
@c The "@kindex" / "ky" (keystroke) index:
@c for mdb commands.
@c The "@vindex" / "vr" (variable) index:
@c for environment variables and Mmake variables.
@c
@c That is in case we ever want to produce separate indices for the
@c different categories. Currently, however, we merge them all into
@c a single index, via the commands below.
@syncodeindex fn cp
@syncodeindex ky cp
@syncodeindex vr cp
@syncodeindex pg cp
@c ----------------------------------------------------------------------------
@ifnottex
This file documents the Mercury implementation, version <VERSION>.
Copyright (C) 1995-2013 The University of Melbourne.
Permission is granted to make and distribute verbatim copies of
this manual provided the copyright notice and this permission notice
are preserved on all copies.
@ignore
Permission is granted to process this file through TeX and print the
results, provided the printed document carries copying permission
notice identical to this one except for the removal of this paragraph
(this paragraph not being relevant to the printed manual).
@end ignore
Permission is granted to copy and distribute modified versions of this
manual under the conditions for verbatim copying, provided also that
the entire resulting derived work is distributed under the terms of a
permission notice identical to this one.
Permission is granted to copy and distribute translations of this manual
into another language, under the above conditions for modified versions.
@end ifnottex
@titlepage
@title The Mercury User's Guide
@subtitle Version <VERSION>
@author Fergus Henderson
@author Thomas Conway
@author Zoltan Somogyi
@author Peter Ross
@author Tyson Dowd
@author Mark Brown
@author Ian MacLarty
@author Paul Bone
@page
@vskip 0pt plus 1filll
Copyright @copyright{} 1995--2012 The University of Melbourne.
Permission is granted to make and distribute verbatim copies of
this manual provided the copyright notice and this permission notice
are preserved on all copies.
Permission is granted to copy and distribute modified versions of this
manual under the conditions for verbatim copying, provided also that
the entire resulting derived work is distributed under the terms of a
permission notice identical to this one.
Permission is granted to copy and distribute translations of this manual
into another language, under the above conditions for modified versions.
@end titlepage
@c ----------------------------------------------------------------------------
@contents
@page
@c ----------------------------------------------------------------------------
@ifnottex
@node Top,,, (mercury)
@top The Mercury User's Guide, version <VERSION>
This guide describes the compilation environment of Mercury ---
how to build and debug Mercury programs.
@menu
* Introduction:: General overview.
* Filenames:: File naming conventions.
* Using mmc:: Compiling and linking programs with the Mercury compiler.
* Running:: Execution of programs built with the Mercury compiler.
* Using Mmake:: ``Mercury Make'', a tool for building Mercury programs.
* Libraries:: Creating and using libraries of Mercury modules.
* Debugging:: The Mercury debugger @samp{mdb}.
* Profiling:: Analyzing the performance of Mercury programs.
* Invocation:: List of options for the Mercury compiler.
* Environment:: Environment variables used by the compiler and utilities.
* C compilers:: How to use a C compiler other than GNU C.
* Foreign language interface:: Interfacing to other programming
                        languages from Mercury.
* Stand-alone Interfaces:: Calling procedures in Mercury libraries from
                        programs written in other languages.
* Index::
@end menu
@end ifnottex
@c ----------------------------------------------------------------------------
@node Introduction
@chapter Introduction
This document describes the compilation environment of Mercury.
It describes how to use @samp{mmc}, the Mercury compiler;
how to use @samp{mmc --make}, the build tool integrated into the compiler;
how to use @samp{mmake}, an older tool built on top of ordinary or GNU make
that simplifies the handling of Mercury programs;
how to use @samp{mdb}, the Mercury debugger;
and how to use @samp{mprof}, the Mercury profiler.
We strongly recommend that programmers use @samp{mmc --make} rather
than invoking @samp{mmc} directly, because @samp{mmc --make} is generally
easier to use and avoids unnecessary recompilation.
@c ----------------------------------------------------------------------------
@node Filenames
@chapter File naming conventions
@cindex File extensions
@cindex File names
Mercury source files must be named @file{*.m}.
Each Mercury source file should contain a single Mercury module
whose module name should be the same as the filename without
the @samp{.m} extension.
The Mercury implementation uses a variety of intermediate files, which
are described below. But all you really need to know is how to name
source files. For historical reasons, the default behaviour is for
intermediate files to be created in the current directory, but if you
use the @samp{--use-subdirs} option to @samp{mmc} or @samp{mmake}, all
@findex --use-subdirs
these intermediate files will be created in a @file{Mercury}
subdirectory, where you can happily ignore them.
Thus you may wish to skip the rest of this chapter.
In cases where the source file name and module name don't match,
the names for intermediate files are based on the name of the
module from which they are derived, not on the source file name.
Files ending in @file{.int}, @file{.int0}, @file{.int2} and @file{.int3}
are interface files; these are generated automatically by the compiler,
using the @samp{--make-interface} (or @samp{--make-int}),
@samp{--make-private-interface} (or @samp{--make-priv-int}),
@samp{--make-short-interface} (or @samp{--make-short-int}) options.
@findex --make-int
@findex --make-interface
@findex --make-short-int
@findex --make-short-interface
@findex --make-priv-int
@findex --make-private-interface
@findex --make-optimization-interface
@findex --make-transitive-optimization-interface
@findex --make-trans-opt-int
Files ending in @file{.opt} are
interface files used in inter-module optimization,
and are created using the @samp{--make-optimization-interface}
(or @samp{--make-opt-int}) option.
Similarly, files ending in @file{.trans_opt} are interface files used in
transitive inter-module optimization, and are created using the
@samp{--make-transitive-optimization-interface}
(or @samp{--make-trans-opt-int}) option.
Since the interface of a module changes less often than its implementation,
the @file{.int}, @file{.int0}, @file{.int2}, @file{.int3}, @file{.opt},
and @file{.trans_opt} files will remain unchanged on many compilations.
To avoid unnecessary recompilations of the clients of the module,
the timestamps on these files are updated only if their contents change.
@file{.date}, @file{.date0}, @file{.date3}, @file{.optdate},
and @file{.trans_opt_date}
files associated with the module are used as timestamp files;
they are used when deciding whether the interface files need to be regenerated.
@file{.c_date}, @file{.il_date}, @file{.cs_date},
@file{.java_date}, @file{.erl_date},
@file{.s_date} and @file{.pic_s_date} files
perform a similar function for @file{.c}, @file{.il}, @file{.cs},
@file{.java}, @file{.erl},
@file{.s} and @file{.pic_s} files respectively. When smart recompilation
(@pxref{Auxiliary output options}) works out that a module
does not need to be recompiled, the timestamp file for the
target file is updated, and the timestamp of the target file
is left unchanged.
@findex --smart-recompilation
@file{.used} files contain dependency information for smart recompilation
(@pxref{Auxiliary output options}).
@findex --smart-recompilation
Files ending in @file{.d} are automatically-generated Makefile fragments
which contain the dependencies for a module.
Files ending in @file{.dep} are automatically-generated Makefile fragments
which contain the rules for an entire program.
Files ending in @file{.dv} are automatically-generated Makefile fragments
which contain variable definitions for an entire program.
As usual, @file{.c} files are C source code,
and @file{.o} files are object code.
In addition, @file{.pic_o} files are object code files
that contain position-independent code (PIC).
@file{.lpic_o} files are object code files that can be
linked with shared libraries, but don't necessarily
contain position-independent code themselves.
@file{.mh} and @file{.mih} files are C header files generated
by the Mercury compiler. The non-standard extensions are necessary
to avoid conflicts with system header files.
@file{.s} files and @file{.pic_s} files are assembly language.
@file{.java}, @file{.class} and @file{.jar} files are Java source code,
Java bytecode and Java archives respectively.
@file{.il} files are Intermediate Language (IL) files
for the .NET Common Language Runtime.
@file{.cs} files are C# source code.
@c XXX mention .dll and .exe?
@file{.erl} and @file{.beam} files are Erlang source code and
bytecode (object) files respectively.
@file{.beams} directories are collections of @file{.beam} files which
act like a library archive.
@c ----------------------------------------------------------------------------
@node Using mmc
@chapter Using the Mercury compiler
Following a long Unix tradition,
the Mercury compiler is called @samp{mmc}
(for ``Melbourne Mercury Compiler'').
@pindex mmc
Some of its options (e.g.@: @samp{-c}, @samp{-o}, and @samp{-I})
have meanings similar to those of the corresponding options
in other Unix compilers.
Arguments to @samp{mmc} may be either file names (ending in @samp{.m}),
or module names, with @samp{.} (rather than @samp{__} or @samp{:})
as the module qualifier. For a module name such as @samp{foo.bar.baz},
the compiler will look for the source in files @file{foo.bar.baz.m},
@file{bar.baz.m}, and @file{baz.m}, in that order.
Note that if the file name does not include all the module
qualifiers (e.g.@: if it is @file{bar.baz.m} or @file{baz.m}
rather than @file{foo.bar.baz.m}), then the module name in the
@samp{:- module} declaration for that module must be fully qualified.
To make the compiler look in another file for a module, use
@samp{mmc -f @var{source-files}} to generate a mapping from module name
to file name, where @var{source-files} is the list of source files in
the directory (@pxref{Output options}).
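For example, assuming that all the source files in the current directory
belong to the one program or library, the mapping
(which is stored in the file @file{Mercury.modules})
would typically be generated with a command such as:
@example
mmc -f *.m
@end example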
Arguments to @samp{mmc} may also be given in a file,
using an argument of the form @samp{@@file}.
Each such argument is replaced with the contents of the named file,
split into separate arguments, one per line.
This argument processing is done recursively:
the file may itself contain further @samp{@@file} arguments.
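For example, assuming a file named @file{args}
(the name is only illustrative)
whose two lines are @samp{--make} and @samp{myprogram}, the command
@example
mmc @@args
@end example
@noindent
would be equivalent to @samp{mmc --make myprogram}.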
To compile a program which consists of just a single source file,
use the command
@example
mmc @var{filename}.m
@end example
Unlike traditional Unix compilers, however,
@samp{mmc} will put the executable into a file called @file{@var{filename}},
not @file{a.out}.
For programs that consist of more than one source file, you can use Mmake
(@pxref{Using Mmake}) or the @samp{--make} option to @samp{mmc}. Currently, the
use of @samp{mmc --make} is recommended:
@example
mmc --make @var{filename}
@end example
@noindent
If you use Mmake or @samp{mmc --make}, then you don't need to understand the
details
of how the Mercury implementation goes about building programs.
Thus you may wish to skip the rest of this chapter.
To compile a source file to object code without creating an executable,
use the command
@example
mmc -c @var{filename}.m
@end example
@samp{mmc} will put the object code into a file called @file{@var{module}.o},
where @var{module} is the name of the Mercury module defined in
@file{@var{filename}.m}.
It also will leave the intermediate C code in a file called
@file{@var{module}.c}.
If the source file contains nested modules, then each sub-module will get
compiled to separate C and object files.
Before you can compile a module,
you must make the interface files
for the modules that it imports (directly or indirectly).
You can create the interface files for one or more source files
using the following commands:
@example
mmc --make-short-int @var{filename1}.m @var{filename2}.m ...
mmc --make-priv-int @var{filename1}.m @var{filename2}.m ...
mmc --make-int @var{filename1}.m @var{filename2}.m ...
@end example
@findex --make-short-int
@findex --make-priv-int
@findex --make-int
If you are going to compile with @samp{--intermodule-optimization} enabled,
then you also need to create the optimization interface files.
@example
mmc --make-opt-int @var{filename1}.m @var{filename2}.m ...
@end example
@findex --make-opt-int
If you are going to compile with @samp{--transitive-intermodule-optimization}
enabled, then you also need to create the transitive optimization files.
@findex --transitive-intermodule-optimization
@example
mmc --make-trans-opt @var{filename1}.m @var{filename2}.m ...
@end example
@findex --make-trans-opt
Given that you have made all the interface files,
one way to create an executable for a multi-module program
is to compile all the modules at the same time
using the command
@example
mmc @var{filename1}.m @var{filename2}.m ...
@end example
This will by default put the resulting executable in @file{@var{filename1}},
but you can use the @samp{-o @var{filename}} option to specify a different
name for the output file, if you so desire.
@findex -o
The other way to create an executable for a multi-module program
is to compile each module separately using @samp{mmc -c},
@findex -c
and then link the resulting object files together.
The linking is a two stage process.
First, you must create and compile an @emph{initialization file},
which is a C source file
containing calls to automatically generated initialization functions
contained in the C code of the modules of the program:
@example
c2init @var{module1}.c @var{module2}.c ... > @var{main-module}_init.c
mgnuc -c @var{main-module}_init.c
@end example
@pindex c2init
@pindex mgnuc
The @samp{c2init} command line must contain
the name of the C file of every module in the program.
The order of the arguments is not important.
The @samp{mgnuc} command is the Mercury GNU C compiler;
it is a shell script that invokes the GNU C compiler @samp{gcc}
@c (or some other C compiler if GNU C is not available)
with the options appropriate for compiling
the C programs generated by Mercury.
You then link the object code of each module
with the object code of the initialization file to yield the executable:
@example
ml -o @var{main-module} @var{module1}.o @var{module2}.o ... @var{main-module}_init.o
@end example
@pindex ml
@samp{ml}, the Mercury linker, is another shell script
that invokes a C compiler with options appropriate for Mercury,
this time for linking. @samp{ml} also pipes any error messages
from the linker through @samp{mdemangle}, the Mercury symbol demangler,
so that error messages refer to predicate and function names from
the Mercury source code rather than to the names used in the intermediate
C code.
The above command puts the executable in the file @file{@var{main-module}}.
The same command line without the @samp{-o} option
would put the executable into the file @file{a.out}.
@samp{mmc} and @samp{ml} both accept a @samp{-v} (verbose) option.
You can use that option to see what is actually going on.
For the full set of options of @samp{mmc}, see @ref{Invocation}.
@c ----------------------------------------------------------------------------
@node Running
@chapter Running programs
Once you have created an executable for a Mercury program,
you can go ahead and execute it. You may however wish to specify
certain options to the Mercury runtime system.
The Mercury runtime accepts
options via the @samp{MERCURY_OPTIONS} environment variable.
@vindex MERCURY_OPTIONS
The most useful of these are the options that set the size of the stacks.
(For the full list of available options, see @ref{Environment}.)
@c XXX FIXME This is wrong for the case when --high-level-code is enabled.
@c Note: The definition for these defaults is in runtime/mercury_wrapper.c
The det stack and the nondet stack
are allocated fixed sizes at program start-up.
The default size is 4096k times the word size (in bytes) for the det stack
and 64k times the word size (in bytes) for the nondet stack,
but these can be overridden with the
@samp{--detstack-size} and @samp{--nondetstack-size} options,
@findex --detstack-size
@findex --nondetstack-size
whose arguments are the desired sizes of the det and nondet stacks
respectively, in units of kilobytes.
On operating systems that provide the appropriate support,
the Mercury runtime will ensure that stack overflow
is trapped by the virtual memory system.
@cindex Stack size
@cindex Stack overflow
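For example, to run a program with an 8192k det stack and a 128k nondet
stack, you could use something like the following
(sh/bash syntax; the sizes shown are arbitrary):
@example
MERCURY_OPTIONS="--detstack-size 8192 --nondetstack-size 128" ./myprogram
@end example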
With conservative garbage collection (the default),
the heap will start out with a zero size,
and will be dynamically expanded as needed.
When not using conservative garbage collection,
the heap has a fixed size like the stacks.
The default size is 8Mb times the word size (in bytes),
but this can be overridden with the @samp{--heap-size} option.
@cindex Heap size
@cindex Heap overflow
@c ----------------------------------------------------------------------------
@node Using Mmake
@chapter Using Mmake
@pindex mmake
@pindex make --- see Mmake
@cindex Building programs
@cindex Recompiling
Mmake, short for ``Mercury Make'',
is a tool for building Mercury programs.
The same functionality is now provided in @samp{mmc} directly by using the
@samp{--make} option:
@example
mmc --make @var{main-module}
@end example
@noindent
The use of Mmake is now discouraged;
we recommend using @samp{mmc --make} instead.
Mmake is built on top of ordinary or GNU Make@footnote{
We might eventually add support for ordinary ``Make'' programs,
but currently only GNU Make is supported.}.
With Mmake, building even a complicated Mercury program
consisting of a number of modules is as simple as
@example
mmc -f @var{source-files}
mmake @var{main-module}.depend
mmake @var{main-module}
@end example
Mmake only recompiles those files that need to be recompiled,
based on automatically generated dependency information.
Most of the dependencies are stored in @file{.d} files that are
automatically recomputed every time you recompile,
so they are never out-of-date.
A little bit of the dependency information is stored in @file{.dep}
and @file{.dv} files which are more expensive to recompute.
The @samp{mmake @var{main-module}.depend} command which recreates the
@file{@var{main-module}.dep} and @file{@var{main-module}.dv} files needs
to be repeated only when you add or remove a module from your program,
and there is no danger of getting an inconsistent executable if you forget
this step --- instead you will get a compile or link error.
The @samp{mmc -f} step above is only required if there are any source
files for which the file name does not match the module name.
@samp{mmc -f} generates a file @file{Mercury.modules} containing
a mapping from module name to source file. The @file{Mercury.modules}
file must be updated when a source file for which the file name does
not match the module name is added to or removed from the directory.
@samp{mmake} allows you to build more than one program in the same directory.
Each program must have its own @file{.dep} and @file{.dv} files,
and therefore you must run @samp{mmake @var{program}.depend}
for each program. The @samp{Mercury.modules} file is used for
all programs in the directory.
If there is a file called @samp{Mmake} or @samp{Mmakefile} in the
current directory,
Mmake will include that file in its automatically-generated Makefile.
The @samp{Mmake} file can override the default values of
various variables used by Mmake's builtin rules,
or it can add additional rules, dependencies, and actions.
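For example, an @file{Mmakefile} might contain something like the following
(the file and program names here are purely illustrative):
@example
# Pass these options to the Mercury compiler for every module.
MCFLAGS = --intermodule-optimization

# An additional rule: regenerate a Mercury module from a data file.
tables.m: tables.dat
        ./generate_tables tables.dat > tables.m
@end example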
Mmake's builtin rules are defined by the file
@file{@var{prefix}/lib/mercury/mmake/Mmake.rules}
(where @var{prefix} is @file{/usr/local/mercury-@var{version}} by default,
and @var{version} is the version number, e.g.@: @samp{0.6}),
as well as the rules and variables in the automatically-generated
@file{.dep} and @file{.dv} files.
These rules define the following targets:
@table @file
@item @var{main-module}.depend
Creates the files @file{@var{main-module}.dep} and
@file{@var{main-module}.dv} from @file{@var{main-module}.m}
and the modules it imports.
This step must be performed first.
It is also required whenever you wish to change the level of
inter-module optimization performed (@pxref{Overall optimization options}).
@item @var{main-module}.all_ints
Ensure that the interface files for @var{main-module}
and its imported modules are up-to-date.
(If the underlying @samp{make} program does not handle transitive dependencies,
this step may be necessary before
attempting to make @file{@var{main-module}} or @file{@var{main-module}.check};
if the underlying @samp{make} is GNU Make, this step should not be necessary.)
@item @var{main-module}.check
Perform semantic checking on @var{main-module} and its imported modules.
Error messages are placed in @file{.err} files.
@item @var{main-module}
Compiles and links @var{main-module} using the Mercury compiler.
Error messages are placed in @file{.err} files.
@c XXX mention .dlls and .exe targets?
@item lib@var{main-module}
Builds a library whose top-level module is @var{main-module}.
This will build a static object library, a shared object library
(for platforms that support it), and the necessary interface files.
For more information, see @ref{Libraries}.
@item lib@var{main-module}.install
Builds and installs a library whose top-level module is @var{main-module}.
This target will build and install a static object library and
(for platforms that support it) a shared object library,
for the default grade and also for the additional grades specified
in the @code{LIBGRADES} variable. It will also build and install the
necessary interface files. The variable @code{INSTALL} specifies
the name of the command to use to install each file, by default
@samp{cp}. The variable @code{INSTALL_MKDIR} specifies the command to use
to create directories, by default @samp{mkdir -p}.
@vindex LIBGRADES
@vindex LIB_LINKAGES
@vindex INSTALL
@vindex INSTALL_MKDIR
@item @var{main-module}.clean
Removes the automatically generated files
that contain the compiled code of the program
and the error messages produced by the compiler.
Specifically, this will remove all the @samp{.c}, @samp{.s}, @samp{.o},
@samp{.pic_o}, @samp{.prof}, @samp{.no}, @samp{.ql}, @samp{.used},
@samp{.mih},
and @samp{.err} files
belonging to the named @var{main-module} or its imported modules.
Use this target whenever you wish to change the compilation model
(@pxref{Compilation model options}).
Using this target is also recommended, in addition to the mandatory
@var{main-module}.depend step, whenever you wish to change the level
of inter-module optimization performed (@pxref{Overall optimization options}).
@item @var{main-module}.realclean
Removes all the automatically generated files.
In addition to the files removed by @var{main-module}.clean, this
removes the @samp{.int}, @samp{.int0}, @samp{.int2},
@samp{.int3}, @samp{.opt}, @samp{.trans_opt},
@samp{.date}, @samp{.date0}, @samp{.date3}, @samp{.optdate},
@samp{.trans_opt_date},
@samp{.mh} and @samp{.d} files
belonging to one of the modules of the program,
and also the various possible executables, libraries and dependency files
for the program as a whole ---
@samp{@var{main-module}},
@samp{lib@var{main-module}.a},
@samp{lib@var{main-module}.so},
@samp{@var{main-module}.init},
@samp{@var{main-module}.dep}
and
@samp{@var{main-module}.dv}.
@item clean
This makes @samp{@var{main-module}.clean} for every @var{main-module}
for which there is a @file{@var{main-module}.dep} file in the current
directory, as well as deleting the profiling files
@samp{Prof.CallPair},
@samp{Prof.Counts},
@samp{Prof.Decl},
@samp{Prof.MemWords}
and
@samp{Prof.MemCells}.
@item realclean
This makes @samp{@var{main-module}.realclean} for every @var{main-module}
for which there is a @file{@var{main-module}.dep} file in the current
directory, as well as deleting the profiling files as per the @samp{clean}
target.
@end table
@cindex Variables, Mmake
@cindex Mmake variables
The variables used by the builtin rules (and their default values) are
defined in the file @file{@var{prefix}/lib/mercury/mmake/Mmake.vars};
however, these may be overridden by user @samp{Mmake} files.
Some of the more useful variables are:
@table @code
@item MAIN_TARGET
@vindex MAIN_TARGET
The name of the default target to create if @samp{mmake} is invoked with
no target explicitly named on the command line.
@item MC
@vindex MC
The executable that invokes the Mercury compiler.
@item GRADEFLAGS and EXTRA_GRADEFLAGS
@vindex GRADEFLAGS
@vindex EXTRA_GRADEFLAGS
Compilation model options (@pxref{Compilation model options})
to pass to the Mercury compiler, linker, and other tools
(in particular @code{mmc}, @code{mgnuc}, @code{ml}, and @code{c2init}).
@item MCFLAGS and EXTRA_MCFLAGS
@vindex MCFLAGS
@vindex EXTRA_MCFLAGS
Options to pass to the Mercury compiler.
(Note that compilation model options should be
specified in @code{GRADEFLAGS}, not in @code{MCFLAGS}.)
@item MGNUC
@vindex MGNUC
The executable that invokes the C compiler.
@item MGNUCFLAGS and EXTRA_MGNUCFLAGS
@vindex MGNUCFLAGS
@vindex EXTRA_MGNUCFLAGS
Options to pass to the mgnuc script.
@item CFLAGS and EXTRA_CFLAGS
@vindex CFLAGS
@vindex EXTRA_CFLAGS
Options to pass to the C compiler.
@item JAVACFLAGS and EXTRA_JAVACFLAGS
@vindex JAVACFLAGS
@vindex EXTRA_JAVACFLAGS
Options to pass to the Java compiler (if you are using it).
@item ERLANG_FLAGS and EXTRA_ERLANG_FLAGS
@vindex ERLANG_FLAGS
@vindex EXTRA_ERLANG_FLAGS
Options to pass to the Erlang compiler (if you are using it).
@item ML
@vindex ML
The executable that invokes the linker.
@item LINKAGE
@vindex LINKAGE
Can be set to either @samp{shared} to link with shared libraries,
or @samp{static} to always link statically. The default is @samp{shared}.
This variable only has an effect with @samp{mmc --make}.
@item MERCURY_LINKAGE
@vindex MERCURY_LINKAGE
Can be set to either @samp{shared} to link with shared Mercury libraries,
or @samp{static} to always link with the static versions of Mercury libraries.
The default is system dependent.
This variable only has an effect with @samp{mmc --make}.
@xref{Using installed libraries with mmc --make}.
@item MLFLAGS and EXTRA_MLFLAGS
@vindex MLFLAGS
@vindex EXTRA_MLFLAGS
Options to pass to the ml and c2init scripts.
(Note that compilation model options should be
specified in @code{GRADEFLAGS}, not in @code{MLFLAGS}.)
These variables have no effect with @samp{mmc --make}.
@item LDFLAGS and EXTRA_LDFLAGS
@vindex LDFLAGS
@vindex EXTRA_LDFLAGS
Options to pass to the command used by the ml script to link
executables (use @code{ml --print-link-command} to find out
what command is used, usually the C compiler).
@item LD_LIBFLAGS and EXTRA_LD_LIBFLAGS
@vindex LD_LIBFLAGS
@vindex EXTRA_LD_LIBFLAGS
Options to pass to the command used by the ml script to link
shared libraries (use @code{ml --print-shared-lib-link-command}
to find out what command is used, usually the C compiler
or the system linker, depending on the platform).
@item MLLIBS and EXTRA_MLLIBS
@vindex MLLIBS
@vindex EXTRA_MLLIBS
A list of @samp{-l} options specifying libraries used by the program
(or library) that you are building. @xref{Using libraries with Mmake}.
@xref{Using installed libraries with mmc --make}.
@item MLOBJS and EXTRA_MLOBJS
@vindex MLOBJS
@vindex EXTRA_MLOBJS
A list of extra object files to link into the program (or library)
that you are building.
@item C2INITFLAGS and EXTRA_C2INITFLAGS
@vindex C2INITFLAGS
@vindex EXTRA_C2INITFLAGS
Options to pass to the linker and the c2init program.
@code{C2INITFLAGS} and @code{EXTRA_C2INITFLAGS} are obsolete synonyms
for @code{MLFLAGS} and @code{EXTRA_MLFLAGS} (@code{ml} and @code{c2init}
take the same set of options).
(Note that compilation model options and extra files to be processed by
c2init should not be specified in @code{C2INITFLAGS} --- they should be
specified in @code{GRADEFLAGS} and @code{C2INITARGS}, respectively.)
@item C2INITARGS and EXTRA_C2INITARGS
@vindex C2INITARGS
@vindex EXTRA_C2INITARGS
Extra files to be processed by c2init. These variables should not be
used for specifying flags to c2init (those should be specified in
@code{MLFLAGS}) since they are also used to derive extra dependency
information.
@item EXTRA_LIBRARIES
@vindex EXTRA_LIBRARIES
A list of extra Mercury libraries to link into any programs or libraries
that you are building.
Libraries should be specified using their base name; that is, without any
@samp{lib} prefix or extension.
For example, the library comprising the files @file{libfoo.a} and
@file{foo.init} would be referred to as just @samp{foo}.
@xref{Using libraries with Mmake}.
@xref{Using installed libraries with mmc --make}.
@item EXTRA_LIB_DIRS
@vindex EXTRA_LIB_DIRS
A list of extra Mercury library directory hierarchies to search when
looking for extra libraries. @xref{Using libraries with Mmake}.
@xref{Using installed libraries with mmc --make}.
@item INSTALL_PREFIX
@vindex INSTALL_PREFIX
The path to the root of the directory hierarchy where the libraries,
etc.@: you are building should be installed. The default is to install in
the same location as the Mercury compiler being used to do the install.
@item INSTALL
@vindex INSTALL
The command used to install each file in a library. The command should
take a list of files to install and the location to install them.
The default command is @samp{cp}.
@item INSTALL_MKDIR
@vindex INSTALL_MKDIR
The command used to create each directory in the directory hierarchy
where the libraries are to be installed. The default command is
@samp{mkdir -p}.
@item LIBGRADES
@vindex LIBGRADES
A list of additional grades which should be built when installing libraries.
The default is to install the Mercury compiler's default set of grades.
Note that this may not be the set of grades in which the standard libraries
were actually installed.
@vindex GRADEFLAGS
Note also that any @code{GRADEFLAGS} settings will also be applied when
the library is built in each of the listed grades, so you may not get what
you expect if those options are not subsumed by each of the grades listed.
@item LIB_LINKAGES
@vindex LIB_LINKAGES
A list of linkage styles (@samp{shared} or @samp{static}) for which libraries
should be built and installed. The default is to install libraries for both
static and shared linking. This variable only has an effect with
@samp{mmc --make}.
@end table
Other variables also exist --- see
@file{@var{prefix}/lib/mercury/mmake/Mmake.vars} for a complete list.
If you wish to temporarily change the flags passed to an executable,
rather than setting the various @samp{FLAGS} variables directly, you can
set an @samp{EXTRA_} variable. This is particularly intended for
use where a shell script needs to call mmake and add an extra parameter,
without interfering with the flag settings in the @samp{Mmakefile}.
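For example, a wrapper script might invoke something like the following,
relying on the underlying Make's usual handling of command-line variable
assignments (the option and program name shown are only illustrative):
@example
mmake EXTRA_MCFLAGS=--inhibit-warnings myprogram
@end example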
For each of the variables for which there is a version with an @samp{EXTRA_}
prefix, there is also a version with an @samp{ALL_} prefix that
is defined to include both the ordinary and the @samp{EXTRA_} version.
If you wish to @emph{use} the values of any of these variables
in your Mmakefile (as opposed to @emph{setting} the values),
then you should use the @samp{ALL_} version.
It is also possible to override these variables on a per-file basis.
For example, if you have a module called, say, @file{bad_style.m}
which triggers lots of compiler warnings, and you want to disable
the warnings just for that file, but keep them for all the other modules,
then you can override @code{MCFLAGS} just for that file. This is done by
setting the variable @samp{MCFLAGS-bad_style}, as shown here:
@example
MCFLAGS-bad_style = --inhibit-warnings
@end example
Mmake has a few options, including @samp{--use-subdirs}, @samp{--use-mmc-make},
@samp{--save-makefile}, @samp{--verbose}, and @samp{--no-warn-undefined-vars}.
For details about these options, see the man page or type @samp{mmake --help}.
Finally, since Mmake is built on top of Make or GNU Make, you can also
make use of the features and options supported by the underlying Make.
In particular, GNU Make has support for running jobs in parallel, which
is very useful if you have a machine with more than one CPU.
As an alternative to Mmake, the Mercury compiler now contains a
significant part of the functionality of Mmake, using @samp{mmc}'s
@samp{--make} option.
@findex --make
The advantages of @samp{mmc --make} over Mmake are that there
is no @samp{mmake depend} step and that the dependencies are more accurate.
Note that @samp{--use-subdirs} is automatically enabled if you specify
@samp{mmc --make}.
@cindex Options files
@cindex Mercury.options
The Mmake variables above can be used by @samp{mmc --make} if they
are set in a file called @file{Mercury.options}. The @file{Mercury.options}
file has the same syntax as an Mmakefile, but only variable assignments and
@samp{include} directives are allowed.
All variables in @file{Mercury.options} are treated as if they are
assigned using @samp{:=}.
Variables may also be set in the environment, overriding settings in
options files.
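For example, a @file{Mercury.options} file might look like this
(the module and library names are illustrative):
@example
# Options for every module in this directory.
MCFLAGS = --intermodule-optimization

# Options for one module only.
MCFLAGS-bad_style = --inhibit-warnings

# Link against an installed Mercury library.
EXTRA_LIBRARIES = mypackage
@end example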
@samp{mmc --make} can be used in conjunction with Mmake. This is useful
for projects which include source code written in languages other than
Mercury. The @samp{--use-mmc-make} Mmake option disables Mmake's
Mercury-specific rules. Mmake will then process source files written in
other languages, but all Mercury compilation will be done by
@samp{mmc --make}. The following variables can be set in the Mmakefile
to control the use of @samp{mmc --make}.
@table @code
@item MERCURY_MAIN_MODULES
@vindex MERCURY_MAIN_MODULES
The top-level modules of the programs or libraries being built in
the directory. This must be set to tell Mmake to use @samp{mmc --make}
to rebuild the targets for the main modules even if those files already
exist.
@item MC_BUILD_FILES
@vindex MC_BUILD_FILES
Other files which should be built with @samp{mmc --make}.
This should only be necessary for header files generated by the
Mercury compiler which are included by the user's C source files.
@item MC_MAKE_FLAGS and EXTRA_MC_MAKE_FLAGS
@vindex MC_MAKE_FLAGS
@vindex EXTRA_MC_MAKE_FLAGS
Options to pass to the Mercury compiler only when using @samp{mmc --make}.
@end table
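For example, an @file{Mmakefile} for a project that mixes Mercury and
hand-written C code might contain something like the following
(all of the names are illustrative), and would then be used by invoking
@samp{mmake --use-mmc-make @var{target}}:
@example
MERCURY_MAIN_MODULES = myprogram

# A header file generated by the Mercury compiler that is
# #included by our hand-written C code.
MC_BUILD_FILES = mymodule.mh

MC_MAKE_FLAGS = --intermodule-optimization
@end example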
The following variables can also appear in options files but are
@emph{only} supported by @samp{mmc --make}.
@table @code
@item GCC_FLAGS
@vindex GCC_FLAGS
Options to pass to the C compiler, but only if the C compiler is GCC.
If the C compiler is not GCC then this variable is ignored.
These options will be passed @emph{after} any options given by the
@samp{CFLAGS} variable.
@item CLANG_FLAGS
@vindex CLANG_FLAGS
Options to pass to the C compiler, but only if the C compiler is clang.
If the C compiler is not clang then this variable is ignored.
These options will be passed @emph{after} any options given by the
@samp{CFLAGS} variable.
@item MSVC_FLAGS
@vindex MSVC_FLAGS
Options to pass to the C compiler, but only if the C compiler is
Microsoft Visual C.
If the C compiler is not Visual C then this variable is ignored.
These options will be passed @emph{after} any options given by the
@samp{CFLAGS} variable.
@end table
@c ----------------------------------------------------------------------------
@node Libraries
@chapter Libraries
@cindex Libraries
Often you will want to use a particular set of Mercury modules
in more than one program. The Mercury implementation
includes support for developing libraries, i.e.@: sets of Mercury modules
intended for reuse. It allows separate compilation of libraries
and, on many platforms, it supports shared object libraries.
@menu
* Writing libraries::
* Building with mmc --make::
* Building with Mmake::
* Libraries and the Java grade::
* Libraries and the Erlang grade::
@end menu
@node Writing libraries
@section Writing libraries
A Mercury library is identified by a top-level module,
which should contain all of the modules in that library as sub-modules.
It may be as simple as this @file{mypackage.m} file:
@example
:- module mypackage.
:- interface.
:- include_module foo, bar, baz.
@end example
@noindent
This defines a module @samp{mypackage} containing
sub-modules @samp{mypackage.foo}, @samp{mypackage.bar},
and @samp{mypackage.baz}.
It is also possible to build libraries of unrelated
modules, so long as the top-level module imports all
the necessary modules. For example:
@example
:- module blah.
:- import_module fee, fie, foe, fum.
@end example
@noindent
This example defines a module @samp{blah}, which has
no functionality of its own, and which is just used
for grouping the unrelated modules @samp{fee},
@samp{fie}, @samp{foe}, and @samp{fum}.
Generally it is better style for each library to consist
of a single module which encapsulates its sub-modules,
as in the first example, rather than just a group of
unrelated modules, as in the second example.
@node Building with mmc --make
@section Building with mmc --make
@menu
* Building and installing libraries with mmc --make::
* Using installed libraries with mmc --make::
* Using non-installed libraries with mmc --make::
@end menu
@node Building and installing libraries with mmc --make
@subsection Building and installing libraries with mmc --make
To build a library from the source @samp{mypackage.m} (and other included
modules), run @samp{mmc} with the following arguments:
@example
mmc --make libmypackage
@end example
@noindent
@samp{mmc} will create static (non-shared) object libraries
and, on most platforms, shared object libraries;
however, we do not yet support the creation of dynamic link
libraries (DLLs) on Windows.
Use the @samp{mmc} option @samp{--lib-linkage} to specify which versions of the
library should be created: @samp{shared} or @samp{static}. The
@samp{--lib-linkage} option can be specified multiple times.
In our example, the files @samp{libmypackage.a} and @samp{libmypackage.so}
should appear in the current directory.
Installing a library makes it easier for other programs to use it.
To install the library, issue the following command:
@example
mmc --make --install-prefix <dir> libmypackage.install
@end example
@noindent
@samp{mmc} will create the directory @samp{<dir>/lib/mercury} and install the
library there.
The library will be compiled in all valid grades,
and all of its interface files will be installed.
Because several grades are usually compiled,
installing the library can be a lengthy process.
You can specify the set of grades to install by using the option
@samp{--no-libgrade} followed by a @samp{--libgrade <grade>} option
for each grade you wish to install.
If no @samp{--install-prefix <dir>} is specified, the library will be installed
in the standard location, alongside the Mercury standard library.
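For example, to install the library in just two grades under a given
prefix, you might use something like the following
(the grade names here are only examples):
@example
mmc --make --install-prefix /my/install/dir \
        --no-libgrade --libgrade hlc.gc --libgrade java \
        libmypackage.install
@end example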
@node Using installed libraries with mmc --make
@subsection Using installed libraries with mmc --make
@cindex Libraries, linking with
@findex --mld
@findex --mercury-library-directory
@findex --ml
@findex --mercury-library
Once a library is installed, it can be used by running @samp{mmc} with the
following options:
@example
mmc ... --ml mypackage ... --ml myotherlib ... --ml my_yet_another_lib ...
@end example
@noindent
If a library was installed in a different place (using @samp{--install-prefix
<dir>}), you will also need to add this option:
@example
mmc ... --mld <dir>/lib/mercury ...
@end example
@noindent
Note that @samp{/lib/mercury} must be appended to the installation prefix
to form the directory that is searched. The @samp{--mld} option can be given
several times to add more directories to the library search path.
You can also specify whether to link executables with the shared or static
versions of Mercury libraries using @samp{--mercury-linkage shared} or
@samp{--mercury-linkage static}.
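Putting these options together, building a program against two installed
libraries might look something like this (all of the names are illustrative):
@example
mmc --make myprogram \
        --mld /my/install/dir/lib/mercury \
        --ml mypackage --ml myotherlib \
        --mercury-linkage static
@end example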
@node Using non-installed libraries with mmc --make
@subsection Using non-installed libraries with mmc --make
@cindex Libraries, linking with
@findex --search-lib-files-dir
@findex --init-file
@findex --link-object
Suppose you want to link against the library @samp{mypackage} without
installing it. Assume that the source of the library is stored in the
directory @samp{<dir>}, and that the library has been properly built using
@samp{mmc --make libmypackage}. To link against the library, the following
options have to be passed to @samp{mmc}:
@example
mmc ... --search-lib-files-dir <dir> \
        --init-file <dir>/mypackage.init \
        --link-object <dir>/libmypackage.a \
        ...
@end example
@noindent
Note that the option @samp{--ml} is not used.
You need to make sure the library @samp{libmypackage.a} and the main program
were compiled in the same grade.
If you need to experiment with more grades, be sure to build the library in all
the grades (building several times using @samp{mmc --grade <grade> --make
libmypackage}) and use the @samp{libmypackage.a} that is compatible with your
main program's grade:
@example
mmc ... --use-grade-subdirs \
        --grade <grade> \
        --search-lib-files-dir <dir> \
        --init-file <dir>/mypackage.init \
        --link-object <dir>/Mercury/<grade>/*/Mercury/lib/libmypackage.a \
        ...
@end example
@node Building with Mmake
@section Building with Mmake
@menu
* Building libraries with Mmake::
* Installing libraries with Mmake::
* Using libraries with Mmake::
@end menu
@node Building libraries with Mmake
@subsection Building libraries with Mmake
@cindex Shared objects
@cindex Shared libraries
@cindex Static libraries
@cindex Position independent code
@cindex PIC (position independent code)
Generally Mmake will do most of the work of building
libraries automatically. Here's a sample @code{Mmakefile} for
creating a library.
@example
MAIN_TARGET = libmypackage
depend: mypackage.depend
@end example
The Mmake target @samp{lib@var{foo}} is a built-in target for
creating a library whose top-level module is @samp{@var{foo}.m}.
The automatically generated Mmake rules for the target @samp{lib@var{foo}}
will create all the files needed to use the library.
(You will need to run @samp{mmake @var{foo}.depend} first
to generate the module dependency information.)
Mmake will create static (non-shared) object libraries
and, on most platforms, shared object libraries;
however, we do not yet support the creation of dynamic link
libraries (DLLs) on Windows.
Static libraries are created using the standard tools @samp{ar}
and @samp{ranlib}.
Shared libraries are created using the @samp{--make-shared-lib}
option to @samp{ml}.
The automatically-generated Make rules for @samp{libmypackage}
will look something like this:
@example
libmypackage: libmypackage.a libmypackage.so \
                $(mypackage.ints) $(mypackage.int3s) \
                $(mypackage.opts) $(mypackage.trans_opts) mypackage.init

libmypackage.a: $(mypackage.os)
        rm -f libmypackage.a
        $(AR) $(ARFLAGS) libmypackage.a $(mypackage.os) $(MLOBJS)
        $(RANLIB) $(RANLIBFLAGS) libmypackage.a

libmypackage.so: $(mypackage.pic_os)
        $(ML) $(MLFLAGS) --make-shared-lib -o libmypackage.so \
                $(mypackage.pic_os) $(MLPICOBJS) $(MLLIBS)

libmypackage.init:
        ...

clean:
        rm -f libmypackage.a libmypackage.so
@end example
@vindex AR
@vindex ARFLAGS
@vindex MLOBJS
@vindex RANLIB
@vindex RANLIBFLAGS
@vindex ML
@vindex MLFLAGS
@vindex MLPICOBJS
@vindex MLLIBS
If necessary, you can override the default definitions of the variables
such as @samp{ML}, @samp{MLFLAGS}, @samp{MLPICOBJS}, and @samp{MLLIBS}
to customize the way shared libraries are built. Similarly @samp{AR},
@samp{ARFLAGS}, @samp{MLOBJS}, @samp{RANLIB}, and @samp{RANLIBFLAGS}
control the way static libraries are built. (The @samp{MLOBJS} variable
is supposed to contain a list of additional object files to link into
the library, while the @samp{MLLIBS} variable should contain a list of
@samp{-l} options naming other libraries used by this library.
@samp{MLPICOBJS} is described below.)
Note that to use a library, as well as the shared or static object library,
you also need the interface files. That's why the
@samp{libmypackage} target builds @samp{$(mypackage.ints)} and
@samp{$(mypackage.int3s)}.
If the people using the library are going to use intermodule
optimization, you will also need the intermodule optimization interfaces.
The @samp{libmypackage} target will build @samp{$(mypackage.opts)} if
@samp{--intermodule-optimization} is specified in your @samp{MCFLAGS}
variable (this is recommended).
@findex --intermodule-optimization
Similarly, if the people using the library are going to use transitive
intermodule optimization, you will also need the transitive intermodule
optimization interfaces (@samp{$(mypackage.trans_opts)}).
These will be built if @samp{--trans-intermod-opt} is specified in your
@samp{MCFLAGS} variable.
@findex --trans-intermod-opt
In addition, with certain compilation grades, programs will need to
execute some startup code to initialize the library; the
@samp{mypackage.init} file contains information about initialization
code for the library. The @samp{libmypackage} target will build this file.
On some platforms, shared objects must be created using position independent
code (PIC), which requires passing some special options to the C compiler.
On these platforms, @code{Mmake} will create @file{.pic_o} files,
and @samp{$(mypackage.pic_os)} will contain a list of the @file{.pic_o} files
for the library whose top-level module is @samp{mypackage}.
In addition, @samp{$(MLPICOBJS)} will be set to @samp{$(MLOBJS)} with
all occurrences of @samp{.o} replaced with @samp{.pic_o}.
On other platforms, position independent code is the default,
so @samp{$(mypackage.pic_os)} will just be the same as @samp{$(mypackage.os)},
which contains a list of the @file{.o} files for that module,
and @samp{$(MLPICOBJS)} will be the same as @samp{$(MLOBJS)}.
@node Installing libraries with Mmake
@subsection Installing libraries with Mmake
@samp{mmake} has support for alternative library directory hierarchies.
These have the same structure as the @file{@var{prefix}/lib/mercury} tree,
including the different subdirectories for different grades and different
machine architectures.
In order to support the installation of a library into such a tree, you
simply need to specify (e.g.@: in your @file{Mmakefile}) the path prefix
and the list of grades to install:
@example
INSTALL_PREFIX = /my/install/dir
LIBGRADES = asm_fast asm_fast.gc.tr.debug
@end example
@vindex INSTALL_PREFIX
@vindex LIBGRADES
This specifies that libraries should be installed in
@file{/my/install/dir/lib/mercury}, in the default grade plus
@samp{asm_fast} and @samp{asm_fast.gc.tr.debug}.
If @samp{INSTALL_PREFIX} is not specified, @samp{mmake} will attempt to
install the library in the same place as the standard Mercury libraries.
If @samp{LIBGRADES} is not specified, @samp{mmake} will use the Mercury
compiler's default set of grades, which may or may not correspond to the
actual set of grades in which the standard Mercury libraries were installed.
To actually install a library @samp{lib@var{foo}}, use the @samp{mmake}
target @samp{lib@var{foo}.install}.
This also installs all the needed interface files, and (if intermodule
optimisation is enabled) the relevant intermodule optimisation files.
One can override the list of grades to install for a given library
@samp{lib@var{foo}} by setting the @samp{LIBGRADES-@var{foo}} variable,
or add to it by setting @samp{EXTRA_LIBGRADES-@var{foo}}.
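For example, to install the library @samp{libmypackage} in a restricted
set of grades (the grade names here are only examples), one might add
@example
LIBGRADES-mypackage = hlc.gc asm_fast.gc
@end example
@noindent
to the @file{Mmakefile} and then run
@example
mmake libmypackage.install
@end example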
The command used to install each file is specified by @samp{INSTALL}.
If @samp{INSTALL} is not specified, @samp{cp} will be used.
@vindex INSTALL
The command used to create directories is specified by @samp{INSTALL_MKDIR}.
If @samp{INSTALL_MKDIR} is not specified, @samp{mkdir -p} will be used.
@vindex INSTALL_MKDIR
Note that currently it is not possible to set the installation prefix
on a library-by-library basis.
@node Using libraries with Mmake
@subsection Using libraries with Mmake
@cindex Libraries, linking with
Once a library is installed, using it is easy.
Suppose the user wishes to use the library @samp{mypackage} (installed
in the tree rooted at @samp{/some/directory/mypackage}) and the library
@samp{myotherlib} (installed in the tree rooted at
@samp{/some/directory/myotherlib}).
The user need only set the following Mmake variables:
@example
EXTRA_LIB_DIRS = /some/directory/mypackage/lib/mercury \
                 /some/directory/myotherlib/lib/mercury
EXTRA_LIBRARIES = mypackage myotherlib
@end example
@vindex EXTRA_LIBRARIES
@vindex EXTRA_LIB_DIRS
@findex --intermodule-optimization
When using @samp{--intermodule-optimization} with a library which
uses the C interface, it may be necessary to add @samp{-I} options to
@samp{MGNUCFLAGS} so that the C compiler can find any header files
used by the library's C code.
Mmake will ensure that the appropriate directories are searched for
the relevant interface files, module initialisation files, compiled
libraries, etc.
Beware that the directory name that you must use in @samp{EXTRA_LIB_DIRS}
or as the argument of the @samp{--mld} option is not quite the same as
the name that was specified in the @samp{INSTALL_PREFIX} when the library
was installed --- the name needs to have @samp{/lib/mercury} appended.
One can specify extra libraries to be used on a program-by-program
basis. For instance, if the program @samp{foo} also uses the library
@samp{mylib4foo}, but the other programs governed by the Mmakefile don't,
then one can declare:
@example
EXTRA_LIBRARIES-foo = mylib4foo
@end example
@node Libraries and the Java grade
@section Libraries and the Java grade
@cindex jar files
@cindex Java libraries
Libraries are handled a little differently for the Java grade. Instead of
compiling object code into a static or shared library, the class files are
added to a jar (Java ARchive) file of the form @file{@var{library-name}.jar}.
To create or install a Java library, simply specify that you want to use the
java grade, either by setting @samp{GRADE=java} in your Mmakefile, or by
including @samp{--java} or @samp{--grade java} in your @samp{GRADEFLAGS},
and then follow the instructions above.
Java libraries are installed to the directory
@file{@var{prefix}/lib/mercury/lib/java}. To include them in a program, in
addition to the instructions above, you will need to include the installed jar
file in your @samp{CLASSPATH}, which you can set using
@samp{--java-classpath @var{jarfile}} in @samp{MCFLAGS}.
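For example, an @file{Mmakefile} for a program that uses a Java-grade
library installed under @file{/my/install/dir} might contain something
like the following (the paths and names are illustrative):
@example
GRADE = java
MCFLAGS = --java-classpath /my/install/dir/lib/mercury/lib/java/mypackage.jar
@end example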
@node Libraries and the Erlang grade
@section Libraries and the Erlang grade
@cindex .beams directories
@cindex Erlang libraries
Since the Erlang implementation does not have library files, the Mercury
compiler puts all the @file{.beam} files for a single Mercury library into a
directory named @file{lib@var{library-name}.beams}.
To create or install an Erlang ``library'', specify that you want to use
the erlang grade and use @samp{mmc --make}. Mmake does not, at present,
support Erlang targets.
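For example, assuming an Erlang grade (conventionally named @samp{erlang})
is installed, such a library might be built and installed with:
@example
mmc --make --grade erlang libmypackage
mmc --make --grade erlang libmypackage.install
@end example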
@c ----------------------------------------------------------------------------
@node Debugging
@chapter Debugging
@cindex Debugging
@cindex Tracing
@pindex mdb
@menu
* Quick overview::
* GNU Emacs interface::
* Tracing of Mercury programs::
* Preparing a program for debugging::
* Tracing optimized code::
* Mercury debugger invocation::
* Mercury debugger concepts::
* User defined events::
* I/O tabling::
* Debugger commands::
* Declarative debugging::
* Trace counts::
@end menu
@node Quick overview
@section Quick overview
This section gives a quick and simple guide to getting
started with the debugger. The remainder of this chapter
contains more detailed documentation.
To use the debugger, you must
first compile your program with debugging enabled.
You can do this by using
one of the @samp{--debug} or @samp{--decl-debug} options
when invoking @samp{mmc},
or by including @samp{GRADEFLAGS = --debug}
or @samp{GRADEFLAGS = --decl-debug}
in your @file{Mmakefile}.
@findex --debug
@example
bash$ mmc --debug hello.m
@end example
Once you've compiled with debugging enabled, you can use the @samp{mdb}
command to invoke your program under the debugger:
@pindex mdb
@example
bash$ mdb ./hello arg1 arg2 ...
@end example
Any arguments (such as @samp{arg1 arg2 ...} in this example)
that you pass after the program name will be given as arguments
to the program.
The debugger will print a start-up message
and will then show you the first trace event,
namely the call to @code{main/2}:
@example
1: 1 1 CALL pred hello:main/2-0 (det)
        hello.m:13
mdb>
@end example
By hitting enter at the @samp{mdb>} prompt, you can step through
the execution of your program to the next trace event:
@example
2: 2 2 CALL pred io:write_string/3-0 (det)
        io.m:2837 (hello.m:14)
mdb>
Hello, world
3: 2 2 EXIT pred io:write_string/3-0 (det)
        io.m:2837 (hello.m:14)
mdb>
@end example
For each trace event, the debugger prints out several pieces of information.
The three numbers at the start of the display are
the event number, the call sequence number, and the call depth.
(You don't really need to pay too much attention to those.)
They are followed by the event type (e.g.@: @samp{CALL} or @samp{EXIT}).
After that comes the identification of the procedure
in which the event occurred, consisting of the module-qualified name
of the predicate or function to which the procedure belongs,
followed by its arity, mode number and determinism.
This may sometimes be followed by a ``path''
(@pxref{Tracing of Mercury programs}).
At the end is the file name and line number of the
called procedure and (if available) also the file name
and line number of the call.
The most useful @code{mdb} commands have single-letter abbreviations.
The @samp{alias} command will show these abbreviations:
@example
mdb> alias
? => help
EMPTY => step
NUMBER => step
P => print *
b => break
c => continue
d => stack
f => finish
g => goto
h => help
p => print
r => retry
s => step
v => vars
@end example
The @samp{P} or @samp{print *} command will display the values
of any live variables in scope.
The @samp{f} or @samp{finish} command can be used if you want
to skip over a call.
The @samp{b} or @samp{break} command can be used to set break-points.
The @samp{d} or @samp{stack} command will display the call stack.
The @samp{quit} command will exit the debugger.
That should be enough to get you started.
But if you have GNU Emacs installed, you should strongly
consider using the Emacs interface to @samp{mdb} --- see
the following section.
For more information about the available commands,
use the @samp{?} or @samp{help} command, or see @ref{Debugger commands}.
@node GNU Emacs interface
@section GNU Emacs interface
@cindex GNU Emacs
@cindex Emacs
As well as the command-line debugger, mdb, there is also an Emacs
interface to this debugger. Note that the Emacs interface only
works with GNU Emacs, not with XEmacs.
With the Emacs interface, the debugger will display your source code
as you trace through it, marking the line that is currently being
executed, and allowing you to easily set breakpoints on particular
lines in your source code. You can have separate windows for
the debugger prompt, the source code being executed, and for
the output of the program being executed.
In addition, most of the mdb commands are accessible via menus.
To start the Emacs interface, you first need to put the following
text in the file @file{.emacs} in your home directory,
replacing ``/usr/local/mercury-1.0'' with the directory
that your Mercury implementation was installed in.
@example
(setq load-path (cons (expand-file-name
                        "/usr/local/mercury-1.0/lib/mercury/elisp")
                      load-path))
(autoload 'mdb "gud" "Invoke the Mercury debugger" t)
@end example
Build your program with debugging enabled, as described
in @ref{Quick overview} or @ref{Preparing a program for debugging}.
Then start up Emacs, e.g.@: using the command @samp{emacs},
and type @kbd{M-x mdb @key{RET}}. Emacs will then prompt you for
the mdb command to invoke
@example
Run mdb (like this): mdb
@end example
@noindent
and you should type in the name of the program that you want to debug
and any arguments that you want to pass to it:
@example
Run mdb (like this): mdb ./hello arg1 arg2 ...
@end example
Emacs will then create several ``buffers'': one for the debugger prompt,
one for the input and output of the program being executed, and one or more
for the source files. By default, Emacs will split the display into two
parts, called ``windows'', so that two of these buffers will be visible.
You can use the command @kbd{C-x o} to switch between windows,
and you can use the command @kbd{C-x 2} to split a window into two
windows. You can use the ``Buffers'' menu to select which buffer is
displayed in each window.
If you're using X-Windows, then it is a good idea
to set the Emacs variable @samp{pop-up-frames} to @samp{t}
before starting mdb, since this will cause each buffer to be
displayed in a new ``frame'' (i.e.@: a new X window).
You can set this variable interactively using the
@samp{set-variable} command, i.e.@:
@kbd{M-x set-variable @key{RET} pop-up-frames @key{RET} t @key{RET}}.
Or you can put @samp{(setq pop-up-frames t)} in the @file{.emacs}
file in your home directory.
For more information on buffers, windows, and frames,
see the Emacs documentation.
Another useful Emacs variable is @samp{gud-mdb-directories}.
This specifies the list of directories to search for source files.
You can use a command such as
@example
M-x set-variable @key{RET}
gud-mdb-directories @key{RET}
(list "/foo/bar" "../other" "/home/guest") @key{RET}
@end example
@noindent
to set it interactively, or you can put a command like
@example
(setq gud-mdb-directories
(list "/foo/bar" "../other" "/home/guest"))
@end example
@noindent
in your @file{.emacs} file.
At each trace event, the debugger will search for the
source file corresponding to that event, first in the
same directory as the program, and then in the directories
specified by the @samp{gud-mdb-directories} variable.
It will display the source file, with the line number
corresponding to that trace event marked by
an arrow (@samp{=>}) at the start of the line.
Several of the debugger features can be accessed by moving
the cursor to the relevant part of the source code and then
selecting a command from the menu.
You can set a break point on a line by moving the cursor to the
appropriate line in your source code (e.g.@: with the arrow keys,
or by clicking the mouse there), and then selecting
the ``Set breakpoint on line'' command from the ``Breakpoints''
sub-menu of the ``MDB'' menu. You can set a breakpoint on
a procedure by moving the cursor over the procedure name
and then selecting the ``Set breakpoint on procedure''
command from the same menu. And you can display the value of
a variable by moving the cursor over the variable name
and then selecting the ``Print variable'' command from the
``Data browsing'' sub-menu of the ``MDB'' menu.
Most of the menu commands also have keyboard short-cuts,
which are displayed on the menu.
Note that mdb's @samp{context} and @samp{user_event_context} commands
should not be used if you are using the Emacs interface;
otherwise, the Emacs interface won't be able to parse
the file names and line numbers that mdb outputs,
and so it won't be able to highlight the correct location in the source code.
@node Tracing of Mercury programs
@section Tracing of Mercury programs
@cindex Tracing
The Mercury debugger is based on a modified version of the box model
on which the four-port debuggers of most Prolog systems are based.
Such debuggers abstract the execution of a program into a sequence,
also called a @emph{trace}, of execution events of various kinds.
The four kinds of events supported by most Prolog systems (their @emph{ports})
are
@cindex debugger trace events
@cindex trace events
@cindex call (trace event)
@cindex exit (trace event)
@cindex redo (trace event)
@cindex fail (trace event)
@table @emph
@item call
A call event occurs just after a procedure has been called,
and control has just reached the start of the body of the procedure.
@item exit
An exit event occurs when a procedure call has succeeded,
and control is about to return to its caller.
@item redo
A redo event occurs when all computations
to the right of a procedure call have failed,
and control is about to return to this call
to try to find alternative solutions.
@item fail
A fail event occurs when a procedure call has run out of alternatives,
and control is about to return to the rightmost computation to its left
that still has possibly successful alternatives left.
@end table
Mercury also supports these four kinds of events,
but not all events can occur for every procedure call.
Which events can occur for a procedure call, and in what order,
depend on the determinism of the procedure.
The possible event sequences for procedures of the various determinisms
are as follows.
@table @emph
@item nondet procedures
a call event, zero or more repeats of (exit event, redo event), and a fail event
@item multi procedures
a call event, one or more repeats of (exit event, redo event), and a fail event
@item semidet and cc_nondet procedures
a call event, and either an exit event or a fail event
@item det and cc_multi procedures
a call event and an exit event
@item failure procedures
a call event and a fail event
@item erroneous procedures
a call event
@end table
In addition to these four event types,
Mercury supports @emph{exception} events.
An exception event occurs
when an exception has been thrown inside a procedure,
and control is about to propagate this exception to the caller.
An exception event can replace the final exit or fail event
in the event sequences above
or, in the case of erroneous procedures,
can come after the call event.
Besides the event types call, exit, redo, fail and exception,
which describe the @emph{interface} of a call,
Mercury also supports several types of events
that report on what is happening @emph{internal} to a call.
Each of these internal event types has an associated parameter called a path.
The internal event types are:
@cindex cond (trace event)
@cindex then (trace event)
@cindex else (trace event)
@cindex disj (trace event)
@cindex switch (trace event)
@cindex neg_enter (trace event)
@cindex neg_fail (trace event)
@cindex neg_success (trace event)
@table @emph
@item cond
A cond event occurs when execution reaches
the start of the condition of an if-then-else.
The path associated with the event specifies which if-then-else this is.
@item then
A then event occurs when execution reaches
the start of the then part of an if-then-else.
The path associated with the event specifies which if-then-else this is.
@item else
An else event occurs when execution reaches
the start of the else part of an if-then-else.
The path associated with the event specifies which if-then-else this is.
@item disj
A disj event occurs when execution reaches
the start of a disjunct in a disjunction.
The path associated with the event specifies
which disjunct of which disjunction this is.
@item switch
A switch event occurs when execution reaches
the start of one arm of a switch
(a disjunction in which each disjunct unifies a bound variable
with a different function symbol).
The path associated with the event specifies
which arm of which switch this is.
@item neg_enter
A neg_enter event occurs when execution reaches
the start of a negated goal.
The path associated with the event specifies which negation goal this is.
@item neg_fail
A neg_fail event occurs when
a goal inside a negation succeeds,
which means that its negation fails.
The path associated with the event specifies which negation goal this is.
@item neg_success
A neg_success event occurs when
a goal inside a negation fails,
which means that its negation succeeds.
The path associated with the event specifies which negation goal this is.
@c @item pragma_first
@c @item pragma_later
@end table
@cindex path
@cindex goal path
A goal path is a sequence of path components separated by semicolons.
Each path component is one of the following:
@table @code
@item c@var{num}
The @var{num}'th conjunct of a conjunction.
@item d@var{num}
The @var{num}'th disjunct of a disjunction.
@item s@var{num}
The @var{num}'th arm of a switch.
@item ?
The condition of an if-then-else.
@item t
The then part of an if-then-else.
@item e
The else part of an if-then-else.
@item ~
The goal inside a negation.
@item q!
The goal inside an existential quantification or other scope
that changes the determinism of the goal.
@item q
The goal inside an existential quantification or other scope
that doesn't change the determinism of the goal.
@end table
A goal path describes the position of a goal
inside the body of a procedure definition.
For example, if the procedure body is a disjunction
in which each disjunct is a conjunction,
then the goal path @samp{d2;c3;} denotes
the third conjunct within the second disjunct.
If the third conjunct within the second disjunct is an atomic goal
such as a call or a unification,
then this will be the only goal whose path has @samp{d2;c3;} as a prefix.
If it is a compound goal,
then its components will all have paths that have @samp{d2;c3;} as a prefix,
e.g.@: if it is an if-then-else,
then its three components will have the paths
@samp{d2;c3;?;}, @samp{d2;c3;t;} and @samp{d2;c3;e;}.
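As a concrete illustration, consider the following sketch of a procedure body
(the predicate names in it are hypothetical,
and it assumes the compiler performs no reordering or other transformation);
the comment next to each goal gives the goal path the debugger would report for it:
@example
p(X, Y, Z) :-
    (
        % first disjunct: paths starting with d1;
        q(X),    % d1;c1;
        r(Y),    % d1;c2;
        s(Z)     % d1;c3;
    ;
        % second disjunct: paths starting with d2;
        t(X),    % d2;c1;
        u(Y),    % d2;c2;
        v(Z)     % d2;c3;
    ).
@end example
With a body like this, the goal path @samp{d2;c3;} mentioned above
refers to the call @samp{v(Z)}.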
Goal paths refer to the internal form of the procedure definition.
When debugging is enabled
(and the option @samp{--trace-optimized} is not given),
the compiler will try to keep this form
as close as possible to the source form of the procedure,
in order to make event paths as useful as possible to the programmer.
Due to the compiler's flattening of terms,
and its introduction of extra unifications to implement calls in implied modes,
the number of conjuncts in a conjunction will frequently differ
between the source and internal form of a procedure.
This is rarely a problem, however, as long as you know about it.
Mode reordering can be a bit more of a problem,
but it can be avoided by writing single-mode predicates and functions
so that producers come before consumers.
The compiler transformation that
potentially causes the most trouble in the interpretation of goal paths
is the conversion of disjunctions into switches.
In most cases, a disjunction is transformed into a single switch,
and it is usually easy to guess, just from the events within a switch arm,
just which disjunct the switch arm corresponds to.
Some cases are more complex;
for example, it is possible for a single disjunction
to be transformed into several switches,
possibly with other, smaller disjunctions inside them.
In such cases, making sense of goal paths
may require a look at the internal form of the procedure.
You can ask the compiler to generate a file
with the internal forms of the procedures in a given module
by including the options @samp{-dfinal -Dpaths} on the command line
when compiling that module.
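For example, assuming the module of interest is @file{queens.m}
(the module name here is only illustrative),
a command along the following lines will produce such a file:
@example
mmc --trace deep -dfinal -Dpaths -c queens.m
@end example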
@node Preparing a program for debugging
@section Preparing a program for debugging
When you compile a Mercury program, you can specify
whether you want to be able to run the Mercury debugger on the program or not.
If you do, the compiler embeds calls to the Mercury debugging system
into the executable code of the program,
at the execution points that represent trace events.
At each event, the debugging system decides
whether to give control back to the executable immediately,
or whether to first give control to you,
allowing you to examine the state of the computation and issue commands.
Mercury supports two broad ways of preparing a program for debugging.
The simpler way is to compile a program in a debugging grade,
which you can do directly by specifying a grade
that includes the word ``debug'' or ``decldebug''
(e.g.@: @samp{asm_fast.gc.debug}, or @samp{asm_fast.gc.decldebug}),
or indirectly by specifying one of the @samp{--debug} or @samp{--decl-debug}
grade options to the compiler, linker, and other tools
(in particular @code{mmc}, @code{mgnuc}, @code{ml}, and @code{c2init}).
If you follow this way,
and accept the default settings of the various compiler options
that control the selection of trace events (which are described below),
you will be assured of being able to get control
at every execution point that represents a potential trace event,
which is very convenient.
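For example, assuming your program's top-level module is @file{hello.m}
(the module name here is only illustrative),
you could build a debugging-grade executable with a command such as
@example
mmc --grade asm_fast.gc.debug --make hello
@end example
@noindent
or by selecting an equivalent grade in your Mmakefile.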
The ``decldebug'' grades improve declarative debugging by allowing the user
to track the source of subterms (see @ref{Improving the search}).
Doing this increases the size of executables,
so these grades should only be used when you need
the subterm dependency tracking feature of the declarative debugger.
Note that declarative debugging,
with the exception of the subterm dependency tracking features,
also works in the .debug grades.
@c XXX mention ssdebug grades when ready
The two drawbacks of using a debugging grade
are the large size of the resulting executables,
and the fact that often you discover that you need to debug a big program
only after having built it in a non-debugging grade.
This is why Mercury also supports another way
to prepare a program for debugging,
one that does not require the use of a debugging grade.
With this way, you can decide, individually for each module,
which of four trace levels,
@samp{none}, @samp{shallow}, @samp{deep}, and @samp{rep}
you want to compile them with:
@cindex debugger trace level
@cindex trace level
@cindex shallow tracing
@cindex deep tracing
@table @samp
@item none
A procedure compiled with trace level @samp{none}
will never generate any events.
@item deep
A procedure compiled with trace level @samp{deep}
will always generate all the events requested by the user.
By default, this is all possible events,
but you can tell the compiler that
you are not interested in some kinds of events
via compiler options (see below).
However, declarative debugging requires all events to be generated
if it is to operate properly,
so do not disable the generation of any event types
if you want to use declarative debugging.
For more details see @ref{Declarative debugging}.
@item rep
This trace level is the same as trace level @samp{deep},
except that a representation of the module is stored in the executable
along with the usual debugging information.
The declarative debugger can use this extra information
to help it avoid asking unnecessary questions,
so this trace level has the effect of better declarative debugging
at the cost of increased executable size.
For more details see @ref{Declarative debugging}.
@item shallow
A procedure compiled with trace level @samp{shallow}
will generate interface events
if it is called from a procedure compiled with trace level @samp{deep},
but it will never generate any internal events,
and it will not generate any interface events either
if it is called from a procedure compiled with trace level @samp{shallow}.
If it is called from a procedure compiled with trace level @samp{none},
the way it will behave is dictated by whether
its nearest ancestor whose trace level is not @samp{none}
has trace level @samp{deep} or @samp{shallow}.
@end table
The intended uses of these trace levels are as follows.
@table @samp
@item deep
You should compile a module with trace level @samp{deep}
if you suspect there may be a bug in the module,
or if you think that being able to examine what happens inside that module
can help you locate a bug.
@item rep
You should compile a module with trace level @samp{rep}
if you suspect there may be a bug in the module,
you wish to use the full power of the declarative debugger,
and you are not concerned about the size of the executable.
@item shallow
You should compile a module with trace level @samp{shallow}
if you believe the code of the module is reliable and unlikely to have bugs,
but you still want to be able to get control at calls to and returns from
any predicates and functions defined in the module,
and if you want to be able to see the arguments of those calls.
@item none
You should compile a module with trace level @samp{none}
only if you are reasonably confident that the module is reliable,
and if you believe that knowing what calls other modules make to this module
would not significantly benefit you in your debugging.
@end table
In general, it is a good idea for most or all modules
that can be called from modules compiled with trace level
@samp{deep} or @samp{rep}
to be compiled with at least trace level @samp{shallow}.
You can control what trace level a module is compiled with
by giving one of the following compiler options:
@table @samp
@item --trace shallow
This always sets the trace level to @samp{shallow}.
@item --trace deep
This always sets the trace level to @samp{deep}.
@item --trace rep
This always sets the trace level to @samp{rep}.
@item --trace minimum
In debugging grades, this sets the trace level to @samp{shallow};
in non-debugging grades, it sets the trace level to @samp{none}.
@item --trace default
In debugging grades, this sets the trace level to @samp{deep};
in non-debugging grades, it sets the trace level to @samp{none}.
@end table
As the name implies, the last alternative is the default,
which is why by default you get
no debugging capability in non-debugging grades
and full debugging capability in debugging grades.
The table also shows that in a debugging grade,
no module can be compiled with trace level @samp{none}.
@strong{Important note}:
If you are not using a debugging grade, but you compile some modules with
a trace level other than none,
then you must also pass the @samp{--trace} (or @samp{-t}) option
to c2init and to the Mercury linker.
If you're using Mmake, then you can do this by including @samp{--trace}
in the @samp{MLFLAGS} variable.
If you're using Mmake, then you can also set the compilation options
for a single module named @var{Module} by setting the Mmake variable
@samp{MCFLAGS-@var{Module}}. For example, to compile the file
@file{foo.m} with deep tracing, @file{bar.m} with shallow tracing,
and everything else with no tracing, you could use the following:
@example
MLFLAGS = --trace
MCFLAGS-foo = --trace deep
MCFLAGS-bar = --trace shallow
@end example
@node Tracing optimized code
@section Tracing optimized code
@c XXX we should consider where to document --suppress-trace
@c and --stack-trace-higher-order
By default, all trace levels other than @samp{none}
turn off all compiler optimizations
that can affect the sequence of trace events generated by the program,
such as inlining.
If you are specifically interested in
how the compiler's optimizations affect the trace event sequence,
you can specify the option @samp{--trace-optimized},
which tells the compiler that it does not have to disable those optimizations.
(A small number of low-level optimizations
have not yet been enhanced to work properly in the presence of tracing,
so the compiler disables these even if @samp{--trace-optimized} is given.)
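For example, to compile the (hypothetical) module @file{foo.m}
with deep tracing while leaving these optimizations enabled,
you could put something like the following in your Mmakefile:
@example
MCFLAGS-foo = --trace deep --trace-optimized
@end example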
@node Mercury debugger invocation
@section Mercury debugger invocation
The executables of Mercury programs
by default do not invoke the Mercury debugger
even if some or all of their modules were compiled with some form of tracing,
and even if the grade of the executable is a debugging grade.
This is similar to the behaviour of executables
created by the implementations of other languages;
for example the executable of a C program compiled with @samp{-g}
does not automatically invoke gdb or dbx etc when it is executed.
Unlike those other language implementations,
when you invoke the Mercury debugger @samp{mdb},
you invoke it not just with the name of an executable
but with the command line you want to debug.
If something goes wrong when you execute the command
@example
@var{prog} @var{arg1} @var{arg2} ...
@end example
and you want to find the cause of the problem,
you must execute the command
@example
mdb [@var{mdb-options}] @var{prog} @var{arg1} @var{arg2} ...
@end example
because you do not get a chance
to specify the command line of the program later.
When the debugger starts up, as part of its initialization
it executes commands from the following three sources, in order:
@enumerate
@item
@vindex MERCURY_DEBUGGER_INIT
The file named by the @samp{MERCURY_DEBUGGER_INIT} environment variable.
Usually, @samp{mdb} sets this variable to point to a file
that provides documentation for all the debugger commands
and defines a small set of aliases.
However, if @samp{MERCURY_DEBUGGER_INIT} is already defined
when @samp{mdb} is invoked, it will leave its value unchanged.
You can use this override ability to provide alternate documentation.
If the file named by @samp{MERCURY_DEBUGGER_INIT} cannot be read,
@samp{mdb} will print a warning,
since in that case the usual online documentation will not be available.
@item
@cindex mdbrc
@cindex .mdbrc
The file named @samp{.mdbrc} in your home directory.
You can put your usual aliases and settings here.
@item
The file named @samp{.mdbrc} in the current working directory.
You can put program-specific aliases and settings here.
@end enumerate
mdb will ignore any lines starting with the character @samp{#}
in any of the above-mentioned files.
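For example, a @samp{.mdbrc} file might look something like this
(the alias names chosen here are purely illustrative):
@example
# Personal mdb settings, loaded at debugger startup.
alias pg print goal
alias st stack
@end example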
mdb accepts the following options from the command line. The options
should be given to mdb before the name of the executable to be debugged.
@table @code
@item -t @var{file-name}, --tty @var{file-name}
@findex --tty (mdb option)
Redirect all of the I/O for the debugger to the device
specified by @var{file-name}. The I/O for the program
being debugged will not be redirected.
This option allows the contents of a file to be piped to the program being
debugged and not to mdb. For example on Linux the command
@samp{mdb -t /dev/tty ./myprog < myinput} will cause the contents of myinput
to be piped to the program myprog, but mdb will read its input from
the terminal.
@sp 1
@item -w, --window, --mdb-in-window
@findex --mdb-in-window (mdb option)
@findex --window (mdb option)
Run mdb in a new window, with mdb's I/O going to that
window, but with the program's I/O going to the current
terminal. Note that this will not work on all systems.
@sp 1
@item --program-in-window
@findex --program-in-window (mdb option)
Run the program in a new window, with the program's I/O
going to that window, but with mdb's I/O going to the
current terminal. Note that input and output redirection
will not work with the @samp{--program-in-window} option.
@samp{--program-in-window} will work on most UNIX systems
running the X Window System, even those for which
@samp{--mdb-in-window} is not supported.
@sp 1
@item -c @var{window-command}, --window-command @var{window-command}
@findex --window-command (mdb option)
Specify the command used by the @samp{--program-in-window}
option for executing a command in a new window.
The default such command is @samp{xterm -e}.
@end table
@node Mercury debugger concepts
@section Mercury debugger concepts
The operation of the Mercury debugger @samp{mdb}
is based on the following concepts.
@table @emph
@item break points
@cindex debugger break points
@cindex break points
@cindex spy points
The user may associate a break point
with some events that occur inside a procedure;
the invocation condition of the break point says which events these are.
The four possible invocation conditions (also called scopes) are:
@sp 1
@itemize @bullet
@item
the call event,
@item
all interface events,
@item
all events, and
@item
the event at a specific point in the procedure.
@end itemize
@sp 1
The effect of a break point depends on the state of the break point.
@sp 1
@itemize @bullet
@item
If the state of the break point is @samp{stop},
execution will stop and user interaction will start
at any event within the procedure that matches the invocation conditions,
unless the current debugger command has specifically disabled this behaviour
(see the concept @samp{strict commands} below).
@sp 1
@item
If the state of the break point is @samp{print},
the debugger will print any event within the procedure
that matches the invocation conditions,
unless the current debugger command has specifically disabled this behaviour
(see the concept @samp{print level} below).
@end itemize
@sp 1
Neither of these will happen if the break point is disabled.
@sp 1
Every break point has a print list.
Every time execution stops at an event that matches the breakpoint,
mdb implicitly executes a print command for each element
in the breakpoint's print list.
A print list element can be the word @samp{goal},
which causes the goal to be printed as if by @samp{print goal};
it can be the word @samp{*},
which causes all the variables to be printed as if by @samp{print *};
or it can be the name or number of a variable,
possibly followed (without white space) by a term path,
which causes the specified variable or part thereof to be printed
as if the element were given as an argument to the @samp{print} command.
@sp 1
@item strict commands
@cindex strict debugger commands
When a debugger command steps over some events
without user interaction at those events,
the @emph{strictness} of the command controls whether
the debugger will stop execution and resume user interaction
at events to which a break point with state @samp{stop} applies.
By default, the debugger will stop at such events.
However, if the debugger is executing a strict command,
it will not stop at an event
just because a break point in the stop state applies to it.
@cindex debugger interrupt
@cindex interrupt, in debugger
@cindex control-C
If the debugger receives an interrupt (e.g.@: if the user presses control-C),
it will stop at the next event regardless of what command it is executing
at the time.
@sp 1
@item print level
@cindex print level
@cindex debugger print level
When a debugger command steps over some events
without user interaction at those events,
the @emph{print level} controls under what circumstances
the stepped over events will be printed.
@sp 1
@itemize @bullet
@item
When the print level is @samp{none},
none of the stepped over events will be printed.
@sp 1
@item
When the print level is @samp{all},
all the stepped over events will be printed.
@sp 1
@item
When the print level is @samp{some},
the debugger will print the event only if a break point applies to the event.
@end itemize
@sp 1
Regardless of the print level, the debugger will print
any event that causes execution to stop and user interaction to start.
@sp 1
@item default print level
The debugger maintains a default print level.
The initial value of this variable is @samp{some},
but this value can be overridden by the user.
@sp 1
@item current environment
Whenever execution stops at an event, the current environment
is reset to refer to the stack frame of the call specified by the event.
However, the @samp{up}, @samp{down} and @samp{level} commands
can set the current environment
to refer to one of the ancestors of the current call.
This will then be the current environment until another of these commands
changes the environment yet again or execution continues to another event.
@sp 1
@item paths in terms
@cindex paths in terms
@cindex term navigation
When browsing or printing a term,
you can use "^@samp{n}" to refer to the @var{n}th subterm of that term.
If the term's type has named fields,
you can use "^@samp{fname}" to refer to
the subterm of the field named @samp{fname}.
You can use several subterm specifications in a row
to refer to subterms deep within the original term.
For example, when applied to a list,
"^2" refers to the tail of the list
(the second argument of the list constructor),
"^2^2" refers to the tail of the tail of the list,
and "^2^2^1" refers to the head of the tail of the tail,
i.e. to the third element of the list.
You can think of terms as Unix directories,
with constants (function symbols of arity zero) being plain files
and function symbols of arity greater than zero being directories themselves.
Each subterm specification such as "^2" goes one level down in the hierarchy.
The exception is the subterm specification "^..",
which goes one level up, to the parent of the current directory.
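For example, if the variable @samp{Xs} is bound to the list @samp{[1, 2, 3]},
then the following commands (whose output is not shown here)
print the whole list, its tail, and its third element respectively:
@example
mdb> print Xs
mdb> print Xs^2
mdb> print Xs^2^2^1
@end example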
@sp 1
@item held variables
@cindex held variables (in mdb)
Normally, the only variables from the program accessible in the debugger
are the variables in the current environment at the current program point.
However, the user can @emph{hold} variables,
causing their values (or selected parts of their values)
to stay available for the rest of the debugger session.
@c XXX Document the relationship of this command
@c to partially instantiated data structures and solver types
@c when the debugger is extended to handle these constructs.
All the commands that accept variable names
also accept the names of held variables;
users can ask for a held variable
by prefixing the name of the held variable with a dollar sign.
@sp 1
@item user defined events
@cindex user defined events (in mdb)
Besides the built-in set of events,
the Mercury debugger also supports events defined by the user.
Each event appears in the source code of the Mercury program
as a call prefixed by the keyword @samp{event},
with each argument of the call giving the value of an event @emph{attribute}.
Users can specify the set of user defined events that can appear in a program,
and the names, types and order of the attributes
of each kind of user defined event,
by giving the name of an event set specification file to the compiler
when compiling that program.
For more details, see @ref{User defined events}.
@sp 1
@item user defined event attributes
@cindex user defined event attributes (in mdb)
Normally, the only variables from the program accessible in the debugger
are the variables in the current environment at the current program point.
However, if the current event is a user defined event,
then the attributes of that event are also available.
All the commands that accept variable names
also accept the names of attributes;
users can ask for an attribute
by prefixing the name of the attribute with an exclamation point.
@sp 1
@item procedure specification
@cindex procedure specification (in mdb)
@cindex debugger procedure specification
Some debugger commands, e.g.@: @samp{break},
require a parameter that specifies a procedure.
The procedure may or may not be a compiler-generated
unify, compare, index or init procedure of a type constructor.
If it is, the procedure specification has
the following components in the following order:
@itemize @bullet
@item
An optional prefix of the form
@samp{unif*}, @samp{comp*}, @samp{indx*} or @samp{init*},
that specifies whether the procedure belongs
to a unify, compare, index or init predicate.
@item
An optional prefix of the form @samp{@var{module}.} or @samp{@var{module}__}
that specifies the name of the module that defines the predicate or function
to which the procedure belongs.
@item
The name of the type constructor.
@item
An optional suffix of the form @samp{/@var{arity}}
that specifies the arity of the type constructor.
@item
An optional suffix of the form @samp{-@var{modenum}}
that specifies the mode number of the procedure
within the predicate or function to which the procedure belongs.
@end itemize
For other procedures, the procedure specification has
the following components in the following order:
@itemize @bullet
@item
An optional prefix of the form @samp{pred*} or @samp{func*}
that specifies whether the procedure belongs to a predicate or a function.
@item
An optional prefix of the form @samp{@var{module}:}, @samp{@var{module}.}
or @samp{@var{module}__} that specifies the name of the module that defines
the predicate or function to which the procedure belongs.
@item
The name of the predicate or function to which the procedure belongs.
@item
An optional suffix of the form @samp{/@var{arity}}
that specifies the arity of the predicate or function
to which the procedure belongs.
@item
An optional suffix of the form @samp{-@var{modenum}}
that specifies the mode number of the procedure
within the predicate or function to which the procedure belongs.
@end itemize
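For example, the following @samp{break} commands all use
valid procedure specifications
(the module, predicate and type names are merely illustrative):
@example
mdb> break queens.safe/2
mdb> break pred*list.append/3-0
mdb> break unif*map.map/2
@end example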
@end table
@node User defined events
@section User defined events
Besides the built-in set of events,
the Mercury debugger also supports events defined by the user.
The intention is that users can define one kind of event
for each semantically important event in the program
that isn't captured by the standard builtin events,
and can then generate those events at the appropriate point in the source code.
Each event appears in the source code
as a call prefixed by the keyword @samp{event},
with each argument of the call giving the value of an event @emph{attribute}.
Users can specify the set of user defined events that can appear in a program,
and the names, types and order of the attributes
of each kind of user defined event,
by giving the name of an event set specification file to the compiler
when compiling that program
as the argument of the @samp{--event-set-file-name} option.
This file should contain a header giving the event set's name,
followed by a sequence of one or more event specifications,
like this:
@c XXX replace with more realistic example
@example
event set queens
event nodiag_fail(
test_failed: string,
arg_b: int,
arg_d: int,
arg_list_len: int synthesized by list_len_func(sorted_list),
sorted_list: list(int) synthesized by list_sort_func(arg_list),
list_len_func: function,
list_sort_func: function,
arg_list: list(int)
)
event safe_test(
test_list: list(int)
)
event noargs
@end example
The header consists of the keywords @samp{event set}
and an identifier giving the event set name.
Each event specification consists of the keyword @samp{event},
the name of the event, and,
if the event has any attributes, a parenthesized list of those attributes.
Each attribute's specification consists of
a name, a colon and information about the attribute.
There are three kinds of attributes.
@itemize
@item
For ordinary attributes, like @samp{arg_b},
the information about the attribute is the Mercury type of that attribute.
@item
For function attributes, like @samp{list_sort_func},
the information about the attribute is just the keyword @samp{function}.
@item
For synthesized attributes, like @samp{sorted_list},
the information about the attribute is the type of the attribute,
the keywords @samp{synthesized by},
and a description of the Mercury function call
required to synthesize the value of the attribute.
The synthesis call consists of the name of a function attribute
and a list of the names of one or more argument attributes.
Argument attributes cannot be function attributes;
they may be either ordinary attributes, or previously synthesized attributes.
A synthesized attribute is not allowed
to depend on itself directly or indirectly,
but there are no restrictions on the positions of synthesized attributes
compared to the positions of the function attributes computing them
or of the argument attributes of the synthesis functions.
@end itemize
The result types of function attributes
are given by the types of the synthesized attributes they compute.
The argument types of function attributes (and the number of those arguments)
are given by the types of the arguments they are applied to.
Each function attribute must be used
to compute at least one synthesized attribute,
otherwise there would be no way to compute its type.
If it is used to compute more than one synthesized attribute,
the result and argument types must be consistent.
Each event goal in the program must use
the name of one of the events defined here as the predicate name of the call,
and the call's arguments must match
the types of that event's non-synthesized attributes.
Given that B and N are integers and L is a list of integers,
these event goals are fine,
@example
event nodiag_fail("N - B", B, N, list.length, list.sort, [N | L]),
event safe_test([1, 2, 3])
@end example
but these goals
@example
event nodiag_fail("N - B", B, N, list.sort, list.length, [N | L]),
event nodiag_fail("N - B", B, list.length, N, list.sort, [N | L]),
event safe_test([1], [2])
event safe_test(42)
event nonexistent_event(42)
@end example
will all generate errors.
The attributes of event calls are always input,
and the event goal is always @samp{det}.
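To make this concrete, here is a sketch of how an event goal
from the @samp{queens} event set above might appear in a program
(the predicate and its body are hypothetical,
and assume that the @samp{list} and @samp{int} modules are imported):
@example
:- pred check_placement(list(int)::in) is semidet.

check_placement(Placement) :-
    % Report the candidate placement to the debugger.
    event safe_test(Placement),
    list.length(Placement) =< 8.
@end example
Assuming the event set specification is stored in a file named
@file{queens_events} (again, the file name is only illustrative),
that file would be passed to the compiler with a command along these lines:
@example
mmc --event-set-file-name queens_events --make queens
@end example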
@node I/O tabling
@section I/O tabling
In Mercury, predicates that want to do I/O
must take a di/uo pair of I/O state arguments.
Some of these predicates call other predicates to do I/O for them,
but some are @emph{I/O primitives}, i.e. they perform the I/O themselves.
The Mercury standard library provides a large set of these primitives,
and programmers can write their own through the foreign language interface.
An I/O action is the execution of one call to an I/O primitive.
In debugging grades, the Mercury implementation has the ability
to automatically record, for every I/O action,
the identity of the I/O primitive involved in the action
and the values of all its arguments.
The size of the table storing this information
is proportional to the number of @emph{tabled} I/O actions,
which are the I/O actions whose details are entered into the table.
Therefore the tabling of I/O actions is never turned on automatically;
instead, users must ask for I/O tabling to start
with the @samp{table_io start} command in mdb.
The purpose of I/O tabling is to enable transparent retries across I/O actions.
(The mdb @samp{retry} command
restores the computation to a state it had earlier,
allowing the programmer to explore code that the program has already executed;
see its documentation in the @ref{Debugger commands} section below.)
In the absence of I/O tabling,
retries across I/O actions can have bad consequences.
Retry of a goal that reads some input requires that input to be provided twice;
retry of a goal that writes some output generates duplicate output.
Retry of a goal that opens a file leads to a file descriptor leak;
retry of a goal that closes a file can lead to errors
(duplicate closes, reads from and writes to closed files).
I/O tabling avoids these problems by making I/O primitives @emph{idempotent}.
This means that they will generate their desired effect
when they are first executed,
but reexecuting them after a retry won't have any further effect.
The Mercury implementation achieves this
by looking up the action (which is identified by an I/O action number)
in the table and returning the output arguments stored in the table
for the given action @emph{without} executing the code of the primitive.
Starting I/O tabling when the program starts execution
and leaving it enabled for the entire program run
will work well for program runs that don't do lots of I/O.
For program runs that @emph{do} lots of I/O,
the table can fill up all available memory.
In such cases, the programmer may enable I/O tabling with @samp{table_io start}
just before the program enters the part they wish to debug
and in which they wish to be able to perform
transparent retries across I/O actions,
and turn it off with @samp{table_io stop} after execution leaves that part.
The commands @samp{table_io start} and @samp{table_io stop}
can each be given only once during an mdb session.
They divide the execution of the program into three phases:
before @samp{table_io start},
between @samp{table_io start} and @samp{table_io stop},
and after @samp{table_io stop}.
Retries across I/O will be transparent only in the middle phase.
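The following sketch shows the shape of such a session;
the event numbers are illustrative, and mdb's output is omitted:
@example
mdb> goto 250000
...
mdb> table_io start
mdb> step 1000
...
mdb> retry 2
...
mdb> table_io stop
@end example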
@node Debugger commands
@section Debugger commands
When the debugger (as opposed to the program being debugged) is interacting
with the user, the debugger prints a prompt and reads in a line of text,
which it will interpret as its next command line.
A command line consists of a single command,
or several commands separated by semicolons.
Each command consists of several words separated by white space.
The first word is the name of the command,
while any other words give options and/or parameters to the command.
A word may itself contain semicolons or whitespace if it is
enclosed in single quotes (').
This is useful for commands that have other commands as parameters,
for example @samp{view -w 'xterm -e'}.
Characters that have special meaning to @samp{mdb} will be treated like
ordinary characters if they are escaped with a backslash (\).
It is possible to escape single quotes, whitespace, semicolons, newlines
and the escape character itself.
Some commands take a number as their first parameter.
For such commands, users can type `@var{number} @var{command}'
as well as `@var{command} @var{number}'.
The debugger will treat the former as the latter,
even if the number and the command are not separated by white space.
@menu
* Interactive query commands::
* Forward movement commands::
* Backward movement commands::
* Browsing commands::
* Breakpoint commands::
* I/O tabling commands::
* Parameter commands::
* Help commands::
* Declarative debugging mdb commands::
* Miscellaneous commands::
* Experimental commands::
* Developer commands::
@end menu
@node Interactive query commands
@subsection Interactive query commands
@table @code
@item query @var{module1} @var{module2} @dots{}
@itemx cc_query @var{module1} @var{module2} @dots{}
@itemx io_query @var{module1} @var{module2} @dots{}
@kindex query (mdb command)
@kindex cc_query (mdb command)
@kindex io_query (mdb command)
@c This documentation has been duplicated in
@c Opium-M/source/interactive_queries.op so please tell me (jahier@irisa.fr)
@c to update my documentation on interactive queries if you update this
@c subsection.
These commands allow you to type in queries (goals) interactively
in the debugger. When you use one of these commands, the debugger
will respond with a query prompt (@samp{?-} or @samp{run <--}),
at which you can type in a goal; the debugger will then compile
and execute the goal and display the answer(s).
You can return from the query prompt to the @samp{mdb>} prompt
by typing the end-of-file indicator (typically control-D or control-Z),
or by typing @samp{quit.}.
@sp 1
The module names @var{module1}, @var{module2}, @dots{} specify
which modules will be imported. Note that you can also
add new modules to the list of imports directly at the query prompt,
by using a command of the form @samp{[@var{module}]}, e.g.@: @samp{[int]}.
You need to import all the modules that define symbols used in your query.
Queries can only use symbols that are exported from a module;
entities which are declared in a module's implementation section
only cannot be used.
@sp 1
The three variants differ in what kind of goals they allow.
For goals which perform I/O, you need to use @samp{io_query};
this lets you type in the goal using DCG syntax.
For goals which don't do I/O, but which have determinism
@samp{cc_nondet} or @samp{cc_multi}, you need to use @samp{cc_query};
this finds only one solution to the specified goal.
For all other goals, you can use plain @samp{query}, which
finds all the solutions to the goal.
@sp 1
For @samp{query} and @samp{cc_query}, the debugger will print
out all the variables in the goal using @samp{io.write}.
The goal must bind all of its variables to ground terms,
otherwise you will get a mode error.
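For example, a session using @samp{query} might look something like this
(the exact prompts and the layout of the answers may differ):
@example
mdb> query list int
?- append([1, 2], Rest, [1, 2, 3]).
Rest = [3]
?- quit.
mdb>
@end example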
@sp 1
The current implementation works by compiling the queries on-the-fly
and then dynamically linking them into the program being debugged.
Thus it may take a little while for your query to be executed.
Each query will be written to a file named @file{mdb_query.m} in the current
directory, so make sure you don't name your source file @file{mdb_query.m}.
Note that dynamic linking may not be supported on some systems;
if you are using a system for which dynamic linking is not supported,
you will get an error message when you try to run these commands.
@sp 1
You may also need to build your program using shared libraries
for interactive queries to work.
With Linux on the Intel x86 architecture, the default is for
executables to be statically linked, which means that dynamic
linking won't work, and hence interactive queries won't work either
(the error message is rather obscure: the dynamic linker complains
about the symbol @samp{__data_start} being undefined).
To build with shared libraries, you can use
@samp{MGNUCFLAGS=--pic-reg} and @samp{MLFLAGS=--shared} in your
Mmakefile. See the @file{README.Linux} file in the Mercury
distribution for more details.
@end table
@sp 1
@node Forward movement commands
@subsection Forward movement commands
@sp 1
@table @code
@item step [-NSans] [@var{num}]
@kindex step (mdb command)
Steps forward @var{num} events.
If this command is given at event @var{cur}, continues execution until
event @var{cur} + @var{num}. The default value of @var{num} is 1.
@sp 1
The options @samp{-n} or @samp{--none}, @samp{-s} or @samp{--some},
@samp{-a} or @samp{--all} specify the print level to use
for the duration of the command,
while the options @samp{-S} or @samp{--strict}
and @samp{-N} or @samp{--nostrict} specify
the strictness of the command.
@sp 1
By default, this command is not strict, and it uses the default print level.
@sp 1
A command line containing only a number @var{num} is interpreted as
if it were `step @var{num}'.
@sp 1
An empty command line is interpreted as `step 1'.
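For example, either of the following command lines steps forward five events;
the first also prints each stepped-over event because of the @samp{-a} option:
@example
mdb> step -a 5
mdb> 5
@end example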
@sp 1
@item goto [-NSans] @var{num}
@c The @var{generatorname} option is enabled
@c only in own stack minimal model grades.
@c @item goto [-NSans] @var{num} [@var{generatorname}]
@kindex goto (mdb command)
Continues execution until the program reaches event number @var{num}.
If the current event number is larger than @var{num}, it reports an error.
@sp 1
The options @samp{-n} or @samp{--none}, @samp{-s} or @samp{--some},
@samp{-a} or @samp{--all} specify the print level to use
for the duration of the command,
while the options @samp{-S} or @samp{--strict}
and @samp{-N} or @samp{--nostrict} specify
the strictness of the command.
@c @sp 1
@c If given, @var{generatorname} specifies
@c the generator in which to reach the given event number.
@c In this case, the command does not check
@c whether the event has already passed.
@sp 1
By default, this command is strict, and it uses the default print level.
@sp 1
@item next [-NSans] [@var{num}]
@kindex next (mdb command)
Continues execution until it reaches the next event of
the @var{num}'th ancestor of the call to which the current event refers.
The default value of @var{num} is zero,
which means skipping to the next event of the current call.
Reports an error if execution is already at the end of the specified call.
@sp 1
The options @samp{-n} or @samp{--none}, @samp{-s} or @samp{--some},
@samp{-a} or @samp{--all} specify the print level to use
for the duration of the command,
while the options @samp{-S} or @samp{--strict}
and @samp{-N} or @samp{--nostrict} specify
the strictness of the command.
@sp 1
By default, this command is strict, and it uses the default print level.
@sp 1
@item finish [-NSans]
@itemx finish [-NSans] @var{num}
@itemx finish [-NSans] (@samp{clentry}|@samp{clique})
@itemx finish [-NSans] @samp{clparent}
@kindex finish (mdb command)
If invoked without arguments,
continues execution until it reaches a final (EXIT, FAIL or EXCP) port
of the current call.
If invoked with the number @var{num} as argument,
continues execution until it reaches a final port
of the @var{num}'th ancestor of the call to which the current event refers.
If invoked with the argument @samp{clentry} or @samp{clique},
continues execution until it reaches a final port of the call
that first entered into the clique of recursive calls
of which the current call is a part.
(If the current call is not recursive or mutually recursive
with any other currently active call,
it will skip to the end of the current call.)
If the command is given the argument @samp{clparent},
it skips to the end of the first call outside the current call's clique.
This will be the parent of the call that @samp{finish clentry} would finish.
@sp 1
If invoked as
@samp{finish clentry}, @samp{finish clique} or @samp{finish clparent},
this command will report an error
unless we have stack trace information
about all of the current call's ancestors.
@sp 1
Also reports an error if execution is already at the desired port.
@sp 1
The options @samp{-n} or @samp{--none}, @samp{-s} or @samp{--some},
@samp{-a} or @samp{--all} specify the print level to use
for the duration of the command,
while the options @samp{-S} or @samp{--strict}
and @samp{-N} or @samp{--nostrict} specify
the strictness of the command.
@sp 1
By default, this command is strict, and it uses the default print level.
@c The documentation of fail is commented out, because the implementation does
@c not yet do this right thing when the procedure call we want to get to the
@c FAIL port of is inside a commit.
@c @sp 1
@c @item fail [-NSans] [@var{num}]
@c Continues execution until it reaches a FAIL or EXCP port
@c of the @var{num}'th ancestor of the call to which the current event refers.
@c The default value of @var{num} is zero,
@c which means skipping to the end of the current call.
@c Reports an error if execution is already at the desired port,
@c or if the determinism of the selected call
@c does not guarantee that it will eventually fail.
@c @sp 1
@c The options @samp{-n} or @samp{--none}, @samp{-s} or @samp{--some},
@c @samp{-a} or @samp{--all} specify the print level to use
@c for the duration of the command,
@c while the options @samp{-S} or @samp{--strict}
@c and @samp{-N} or @samp{--nostrict} specify
@c the strictness of the command.
@c @sp 1
@c By default, this command is strict, and it uses the default print level.
@sp 1
@item exception [-NSans]
@kindex exception (mdb command)
Continues the program until execution reaches an exception event.
Reports an error if the current event is already an exception event.
@sp 1
The options @samp{-n} or @samp{--none}, @samp{-s} or @samp{--some},
@samp{-a} or @samp{--all} specify the print level to use
for the duration of the command,
while the options @samp{-S} or @samp{--strict}
and @samp{-N} or @samp{--nostrict} specify
the strictness of the command.
@sp 1
By default, this command is strict, and it uses the default print level.
@sp 1
@item return [-NSans]
@kindex return (mdb command)
Continues the program until the program finishes returning,
i.e.@: until it reaches a port other than EXIT.
Reports an error if the current event already refers to such a port.
@sp 1
The options @samp{-n} or @samp{--none}, @samp{-s} or @samp{--some},
@samp{-a} or @samp{--all} specify the print level to use
for the duration of the command,
while the options @samp{-S} or @samp{--strict}
and @samp{-N} or @samp{--nostrict} specify
the strictness of the command.
@sp 1
By default, this command is strict, and it uses the default print level.
@sp 1
@item user [-NSans]
@kindex user (mdb command)
Continues the program until the next user defined event.
@sp 1
The options @samp{-n} or @samp{--none}, @samp{-s} or @samp{--some},
@samp{-a} or @samp{--all} specify the print level to use
for the duration of the command,
while the options @samp{-S} or @samp{--strict}
and @samp{-N} or @samp{--nostrict} specify
the strictness of the command.
@sp 1
By default, this command is strict, and it uses the default print level.
@sp 1
@item forward [-NSans]
@kindex forward (mdb command)
Continues the program until the program resumes forward execution,
i.e.@: until it reaches a port other than REDO or FAIL.
Reports an error if the current event already refers to such a port.
@sp 1
The options @samp{-n} or @samp{--none}, @samp{-s} or @samp{--some},
@samp{-a} or @samp{--all} specify the print level to use
for the duration of the command,
while the options @samp{-S} or @samp{--strict}
and @samp{-N} or @samp{--nostrict} specify
the strictness of the command.
@sp 1
By default, this command is strict, and it uses the default print level.
@sp 1
@item mindepth [-NSans] @var{depth}
@kindex mindepth (mdb command)
Continues the program until the program reaches an event
whose depth is at least @var{depth}.
Reports an error if the current event already has such a depth.
@sp 1
The options @samp{-n} or @samp{--none}, @samp{-s} or @samp{--some},
@samp{-a} or @samp{--all} specify the print level to use
for the duration of the command,
while the options @samp{-S} or @samp{--strict}
and @samp{-N} or @samp{--nostrict} specify
the strictness of the command.
@sp 1
By default, this command is strict, and it uses the default print level.
@sp 1
@item maxdepth [-NSans] @var{depth}
@kindex maxdepth (mdb command)
Continues the program until the program reaches an event
whose depth is at most @var{depth}.
Reports an error if the current event already has such a depth.
@sp 1
The options @samp{-n} or @samp{--none}, @samp{-s} or @samp{--some},
@samp{-a} or @samp{--all} specify the print level to use
for the duration of the command,
while the options @samp{-S} or @samp{--strict}
and @samp{-N} or @samp{--nostrict} specify
the strictness of the command.
@sp 1
By default, this command is strict, and it uses the default print level.
@sp 1
@item continue [-NSans]
@kindex continue (mdb command)
Continues execution until it reaches the end of the program.
@sp 1
The options @samp{-n} or @samp{--none}, @samp{-s} or @samp{--some},
@samp{-a} or @samp{--all} specify the print level to use
for the duration of the command,
while the options @samp{-S} or @samp{--strict}
and @samp{-N} or @samp{--nostrict} specify
the strictness of the command.
@sp 1
By default, this command is not strict. The print level used
by the command by default depends on the final strictness level:
if the command is strict, it is @samp{none}, otherwise it is @samp{some}.
@end table
@sp 1
@node Backward movement commands
@subsection Backward movement commands
@sp 1
@table @code
@item retry [-fio]
@itemx retry [-fio] @var{num}
@itemx retry [-fio] (@samp{clentry}|@samp{clique})
@itemx retry [-fio] @samp{clparent}
@c @item retry [-afio] [@var{num}]
@kindex retry (mdb command)
If the command is given no arguments,
restarts execution at the call port
of the call corresponding to the current event.
If the command is given the number @var{num} as argument,
restarts execution at the call port of the call corresponding to
the @var{num}'th ancestor of the call to which the current event belongs.
For example, if @var{num} is 1, it restarts the parent of the current call.
If the command is given the argument @samp{clentry} or @samp{clique},
restarts execution at the call port of the call
that first entered into the clique of recursive calls
of which the current call is a part.
(If the current call is not (mutually) recursive
with any other currently active call,
the restarted call will be the current call.)
If the command is given the argument @samp{clparent},
restarts execution at the call port
of the first call outside the current call's clique.
This will be the parent of the call that @samp{retry clentry} would restart.
@sp 1
If invoked as
@samp{retry clentry}, @samp{retry clique} or @samp{retry clparent},
this command will report an error
unless we have stack trace information
about all of the current call's ancestors.
@sp 1
The command will also report an error unless
the values of all the input arguments of the selected call are available
at the return site at which control would reenter the selected call.
(The compiler will keep the values
of the input arguments of traced predicates as long as possible,
but it cannot keep them beyond the point where they are destructively updated.)
The exception is values of type `io.state';
the debugger can perform a retry if the only missing value is of
type `io.state' (there can be only one io.state at any given time).
@sp 1
Retries over I/O actions are guaranteed to be safe
only if the events at which the retry starts and ends
are both within the I/O tabled region of the program's execution.
If the retry is not guaranteed to be safe,
the debugger will normally ask the user if they really want to do this.
The option @samp{-f} or @samp{--force} suppresses the question,
telling the debugger that retrying over I/O is OK;
the option @samp{-o} or @samp{--only-if-safe} suppresses the question,
telling the debugger that retrying over I/O is not OK;
the option @samp{-i} or @samp{--interactive} restores the question
if a previous option suppressed it.
@c The --assume-all-io-is-tabled option is for developers only. Specifying it
@c makes an assertion, and if the assertion is incorrect, the resulting
@c behaviour would be hard for non-developers to understand. The option is
@c therefore deliberately not documented.
@c
@c @sp 1
@c A retry in which the values of all input arguments are available
@c works fine, provided that the predicates defined in C code that are
@c called inside the repeated computation do not pose any problems.
@c A retry in which a value of type `io.state' is missing has the
@c following effects:
@c @sp 1
@c @itemize @bullet
@c @item
@c Any input and/or output actions in the repeated code will be repeated.
@c @item
@c Any file close actions in the repeated code
@c for which the corresponding file open action is not also in the repeated code
@c may cause later I/O actions referring to the file to fail.
@c @item
@c Any file open actions in the repeated code
@c for which the corresponding file close action
@c is not also in the repeated code
@c may cause later file open actions to fail due to file descriptor leak.
@c @end itemize
@c @sp 1
@c XXX the following limitation applies only in minimal model grades,
@c which are not officially supported:
@c The debugger can perform a retry only from an exit or fail port;
@c only at these ports does the debugger have enough information
@c to figure out how to reset the stacks.
@c If the debugger is not at such a port when a retry command is given,
@c the debugger will continue forward execution
@c until it reaches an exit or fail port of the call to be retried
@c before it performs the retry.
@c This may require a noticeable amount of time,
@c and may result in the execution of I/O and/or other side-effects.
@c If the predicate being retried does I/O, the indirect retry will fail.
@end table
@sp 1
@table @code
@item track @var{num} [@var{termpath}]
@kindex track (mdb command)
Goes to the EXIT event of the procedure in which the subterm in argument
@var{num} at term path @var{termpath} was bound,
and displays information about where the term was bound.
@sp 1
Note that this command just invokes a script that is equivalent to running
the following sequence of commands:
@example
dd
browse @var{num}
cd @var{termpath}
track
info
pd
@end example
@end table
@sp 1
@node Browsing commands
@subsection Browsing commands
@sp 1
@table @code
@item vars
@kindex vars (mdb command)
Prints the names of all the known variables in the current environment,
together with an ordinal number for each variable.
@item held_vars
@kindex held_vars (mdb command)
Prints the names of all the held variables.
@sp 1
@item print [-fpv] @var{name}[@var{termpath}]
@itemx print [-fpv] @var{num}[@var{termpath}]
@kindex print (mdb command)
Prints the value of the variable in the current environment
with the given name, or with the given ordinal number.
If the name or number is followed by a term path such as "^2",
then only the specified subterm of the given variable is printed.
This is a non-interactive version of the @samp{browse} command (see below).
Various settings which affect the way that terms are printed out
(including e.g.@: the maximum term depth)
can be set using the @samp{format_param} command.
@sp 1
The options @samp{-f} or @samp{--flat}, @samp{-p} or @samp{--pretty},
and @samp{-v} or @samp{--verbose} specify the format to use for printing.
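@sp 1
For example, assuming the current environment contains
a variable named @samp{List} (an illustrative name),
the following hypothetical commands print that variable,
its second argument,
and the variable with ordinal number 3 in verbose format, respectively:
@example
print List
print List^2
print -v 3
@end example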
@sp 1
@item print [-fpv] *
Prints the values of all the known variables in the current environment.
@sp 1
The options @samp{-f} or @samp{--flat}, @samp{-p} or @samp{--pretty},
and @samp{-v} or @samp{--verbose} specify the format to use for printing.
@sp 1
@item print [-fpv]
@item print [-fpv] goal
Prints the goal of the current call in its present state of instantiation.
@sp 1
The options @samp{-f} or @samp{--flat}, @samp{-p} or @samp{--pretty},
and @samp{-v} or @samp{--verbose} specify the format to use for printing.
@sp 1
@item print [-fpv] exception
Prints the value of the exception at an EXCP port.
Reports an error if the current event does not refer to such a port.
@sp 1
The options @samp{-f} or @samp{--flat}, @samp{-p} or @samp{--pretty},
and @samp{-v} or @samp{--verbose} specify the format to use for printing.
@sp 1
@item print [-fpv] action @var{num}
Prints a representation
of the @var{num}'th I/O action executed by the program.
@sp 1
The options @samp{-f} or @samp{--flat}, @samp{-p} or @samp{--pretty},
and @samp{-v} or @samp{--verbose} specify the format to use for printing.
@c @sp 1
@c @item print [-fpv] proc_body
@c Prints a representation of the body of the current procedure,
@c if it is available.
@c @sp 1
@c The options @samp{-f} or @samp{--flat}, @samp{-p} or @samp{--pretty},
@c and @samp{-v} or @samp{--verbose} specify the format to use for printing.
@sp 1
@item browse [-fpvx] @var{name}[@var{termpath}]
@itemx browse [-fpvx] @var{num}[@var{termpath}]
@kindex browse (mdb command)
Invokes an interactive term browser to browse
the value of the variable in the current environment
with the given ordinal number or with the given name.
If the name or number is followed by a term path such as "^2",
then only the specified subterm of the given variable is given to the browser.
@sp 1
The interactive term browser allows you
to selectively examine particular subterms.
The depth and size of printed terms may be controlled.
The displayed terms may also be clipped to fit within a single screen.
@sp 1
The options @samp{-f} or @samp{--flat}, @samp{-p} or @samp{--pretty},
and @samp{-v} or @samp{--verbose} specify the format to use for browsing.
The @samp{-x} or @samp{--xml} option tells mdb to dump the value of the
variable to an XML file and then invoke an XML browser on the file.
The XML filename as well as the command to invoke the XML browser can
be set using the @samp{set} command. See the documentation for @samp{set}
for more details.
@sp 1
For further documentation on the interactive term browser,
invoke the @samp{browse} command from within @samp{mdb} and then
type @samp{help} at the @samp{browser>} prompt.
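@sp 1
For example, assuming the current environment contains
a variable named @samp{Tree} (an illustrative name),
the following hypothetical commands browse that variable,
browse only its first argument,
and dump the variable to an XML file for an external XML browser, respectively:
@example
browse Tree
browse Tree^1
browse -x Tree
@end example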
@sp 1
@item browse [-fpvx]
@itemx browse [-fpvx] goal
Invokes the interactive term browser to browse
the goal of the current call in its present state of instantiation.
@sp 1
The options @samp{-f} or @samp{--flat}, @samp{-p} or @samp{--pretty},
and @samp{-v} or @samp{--verbose} specify the format to use for browsing.
The @samp{-x} or @samp{--xml} option tells mdb to dump the goal to an XML file
and then invoke an XML browser on the file. The XML filename as well as the
command to invoke the XML browser can be set using the @samp{set} command. See
the documentation for @samp{set} for more details.
@sp 1
@item browse [-fpvx] exception
Invokes the interactive term browser to browse
the value of the exception at an EXCP port.
Reports an error if the current event does not refer to such a port.
@sp 1
The options @samp{-f} or @samp{--flat}, @samp{-p} or @samp{--pretty},
and @samp{-v} or @samp{--verbose} specify the format to use for browsing.
The @samp{-x} or @samp{--xml} option tells mdb to dump the exception to an
XML file and then invoke an XML browser on the file. The XML filename as well
as the command to invoke the XML browser can be set using the @samp{set}
command. See the documentation for @samp{set} for more details.
@sp 1
@item browse [-fpvx] action @var{num}
Invokes an interactive term browser to browse a representation
of the @var{num}'th I/O action executed by the program.
@sp 1
The options @samp{-f} or @samp{--flat}, @samp{-p} or @samp{--pretty},
and @samp{-v} or @samp{--verbose} specify the format to use for browsing.
The @samp{-x} or @samp{--xml} option tells mdb to dump the I/O action
representation to an XML file and then invoke an XML browser on the file. The
XML filename as well as the command to invoke the XML browser can be set using
the @samp{set} command. See the documentation for @samp{set} for more details.
@c @sp 1
@c @item browse [-fpvx] proc_body
@c Invokes an interactive term browser to browse a representation
@c of the body of the current procedure, if it is available.
@c @sp 1
@c The options @samp{-f} or @samp{--flat}, @samp{-p} or @samp{--pretty},
@c and @samp{-v} or @samp{--verbose} specify the format to use for browsing.
@c The @samp{-x} or @samp{--xml} option tells mdb to dump the procedure
@c body representation to an XML file and then invoke an XML browser on the
@c file. The XML filename as well as the command to invoke the XML
@c browser can be set using the @samp{set} command. See the documentation
@c for @samp{set} for more details.
@sp 1
@item stack [-a] [-d] [-c@var{cliquelines}] [-f@var{numframes}] [@var{numlines}]
@kindex stack (mdb command)
Prints the names of the ancestors of the call
specified by the current event.
If two or more consecutive ancestor calls are for the same procedure,
the procedure identification will be printed once
with the appropriate multiplicity annotation.
@sp 1
The option @samp{-d} or @samp{--detailed}
specifies that for each ancestor call,
the call's event number, sequence number and depth should also be printed
if the call is to a procedure that is being execution traced.
@sp 1
The @samp{-f} option, if present, specifies that
only the topmost @var{numframes} stack frames should be printed.
@sp 1
The optional number @var{numlines}, if present,
specifies that only the topmost @var{numlines} lines should be printed.
The default value is 100;
the special value 0 asks for all the lines to be printed.
@sp 1
By default, this command will look for cliques of mutually recursive ancestors.
It will identify them as such in the output,
and it will print at most 10 lines from any clique.
The @samp{-c} option can be used to specify
the maximum number of lines to print for a clique,
with the special value 0 asking for all of them to be printed.
The option @samp{-a} asks for all lines to be printed
@emph{without} cliques being detected or marked.
@sp 1
This command will report an error if there is no stack trace
information available about any ancestor.
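@sp 1
For example, the following hypothetical commands print
a detailed stack dump,
a dump of only the topmost 20 stack frames,
and a dump with no limit on either the total number of lines
or the number of lines per clique, respectively:
@example
stack -d
stack -f20
stack -c0 0
@end example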
@sp 1
@item up [-d] [@var{num}]
@kindex up (mdb command)
Sets the current environment to the stack frame
of the @var{num}'th level ancestor of the current environment
(the immediate caller is the first-level ancestor).
@sp 1
If @var{num} is not specified, the default value is one.
@sp 1
This command will report an error
if the current environment doesn't have the required number of ancestors,
or if there is no execution trace information about the requested ancestor,
or if there is no stack trace information about any of the ancestors
between the current environment and the requested ancestor.
@sp 1
The option @samp{-d} or @samp{--detailed}
specifies that for each ancestor call,
the call's event number, sequence number and depth should also be printed
if the call is to a procedure that is being execution traced.
@sp 1
@item down [-d] [@var{num}]
@kindex down (mdb command)
Sets the current environment to the stack frame
of the @var{num}'th level descendant of the current environment
(the procedure called by the current environment
is the first-level descendant).
@sp 1
If @var{num} is not specified, the default value is one.
@sp 1
This command will report an error
if there is no execution trace information about the requested descendant.
@sp 1
The option @samp{-d} or @samp{--detailed}
specifies that for each ancestor call,
the call's event number, sequence number and depth should also be printed
if the call is to a procedure that is being execution traced.
@sp 1
@item level [-d]
@item level [-d] @var{num}
@item level [-d] (@samp{clentry}|@samp{clique})
@item level [-d] @samp{clparent}
@kindex level (mdb command)
If the command is given no arguments,
it sets the current environment
to the stack frame that belongs to the current event.
If invoked with the number @var{num} as argument,
it sets the current environment
to the stack frame of the @var{num}'th level ancestor
of the call to which the current event belongs.
If invoked with the argument @samp{clentry} or @samp{clique},
it sets the current environment to the stack frame of the call
that first entered into the clique of recursive calls
of which the current call is a part.
(If the current call is not (mutually) recursive
with any other currently active call,
it sets the current environment to the stack frame of the current event.)
If the command is given the argument @samp{clparent},
it sets the current environment to the stack frame of the first call
outside the current call's clique.
This will be the parent of the stack frame
that @samp{level clentry} would set the current environment to.
@sp 1
This command will report an error
if the current environment doesn't have the required number of ancestors,
or if there is no execution trace information about the requested ancestor,
or if there is no stack trace information about any of the ancestors
between the current environment and the requested ancestor.
@sp 1
The option @samp{-d} or @samp{--detailed}
specifies that for each ancestor call,
the call's event number, sequence number and depth should also be printed
if the call is to a procedure that is being execution traced.
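@sp 1
For example, the following hypothetical commands set the current environment
to the stack frame of the immediate caller,
then to the frame of the call that entered
the current clique of recursive calls,
and finally back to the frame belonging to the current event:
@example
level 1
level clentry
level
@end example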
@sp 1
@item current
@kindex current (mdb command)
Prints the current event.
This is useful if the details of the event,
which were printed when control arrived at the event,
have since scrolled off the screen.
@sp 1
@item view [-vf2] [-w @var{window-cmd}] [-s @var{server-cmd}] [-n @var{server-name}] [-t @var{timeout}]
@itemx view -c [-v] [-s @var{server-cmd}] [-n @var{server-name}]
@kindex view (mdb command)
Opens a new window displaying the source code,
at the location of the current event.
As mdb stops at new events,
the window is updated to track through the source code.
This requires X11 and a version of @samp{vim}
compiled with the client/server option enabled.
@sp 1
The debugger only updates one window at a time.
If you try to open a new source window when there is already one open,
this command aborts with an error message.
@sp 1
The variant with @samp{-c} (or @samp{--close})
does not open a new window but instead
attempts to close a currently open source window.
The attempt may fail if, for example,
the user has modified the source file without saving.
@sp 1
The option @samp{-v} (or @samp{--verbose})
prints the underlying system calls before running them,
and prints any output the calls produced.
This is useful to find out what is wrong if the server does not start.
@sp 1
The option @samp{-f} (or @samp{--force})
stops the command from aborting if there is already a window open.
Instead it attempts to close that window first.
@sp 1
The option @samp{-2} (or @samp{--split-screen})
starts the vim server with two windows,
which allows both the callee and the caller
to be displayed at interface events.
The lower window shows what would normally be seen
if the split-screen option was not used,
which at interface events is the caller.
At these events,
the upper window shows the callee definition.
At internal events,
the lower window shows the associated source,
and the view in the upper window
(which is not interesting at these events)
remains unchanged.
@sp 1
The option @samp{-w} (or @samp{--window-command}) specifies
the command to open a new window.
The default is @samp{xterm -e}.
@sp 1
The option @samp{-s} (or @samp{--server-command}) specifies
the command to start the server.
The default is @samp{vim}.
@sp 1
The option @samp{-n} (or @samp{--server-name}) specifies
the name of an existing server.
Instead of starting up a new server,
mdb will attempt to connect to the existing one.
@sp 1
The option @samp{-t} (or @samp{--timeout}) specifies
the maximum number of seconds to wait for the server to start.
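@sp 1
For example, the following hypothetical commands open a split-screen
source window, waiting at most five seconds for the vim server to start,
and later close that window again
(the timeout value is illustrative only):
@example
view -2 -t 5
view -c
@end example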
@sp 1
@item hold @var{name}[@var{termpath}] [@var{heldname}]
@kindex hold (mdb command)
Holds on to the variable @var{name} of the current event,
or the part of it specified by @var{termpath},
even after execution leaves the current event.
The held value will stay accessible via the name @var{$heldname}.
If @var{heldname} is not specified, it defaults to @var{name}.
There must not already be a held variable named @var{heldname}.
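@sp 1
For example, assuming the current event has
a variable named @samp{List} (an illustrative name),
the following hypothetical commands hold on to its second argument
under the name @samp{Tail},
and then print the held value at some later event:
@example
hold List^2 Tail
print $Tail
@end example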
@sp 1
@item diff [-s @var{start}] [-m @var{max}] @var{name1}[@var{termpath1}] @var{name2}[@var{termpath2}]
@kindex diff (mdb command)
Prints a list of some of the term paths
at which (the specified parts of) the specified terms differ.
Normally this command prints the term paths of the first 20 differences.
@sp 1
The option @samp{-s} (or @samp{--start}), if present,
specifies how many of the initial differences to skip.
@sp 1
The option @samp{-m} (or @samp{--max}), if present,
specifies how many differences to print.
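@sp 1
For example, assuming the current environment contains
variables named @samp{Expected} and @samp{Actual} (illustrative names),
the following hypothetical command prints the term paths
of up to five of their differences, skipping the first two:
@example
diff -s 2 -m 5 Expected Actual
@end example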
@sp 1
@item dump [-qx] goal @var{filename}
@kindex dump (mdb command)
Writes the goal of the current call in its present state of instantiation
to the specified file,
and outputs a message announcing this fact
unless the option @samp{-q} (or @samp{--quiet}) was given.
The option @samp{-x} (or @samp{--xml}) causes the output to be in XML.
@sp 1
@item dump [-qx] exception @var{filename}
Writes the value of the exception at an EXCP port to the specified file,
and outputs a message announcing this fact
unless the option @samp{-q} (or @samp{--quiet}) was given.
Reports an error if the current event does not refer to such a port.
The option @samp{-x} (or @samp{--xml}) causes the output to be in XML.
@sp 1
@item dump [-qx] @var{name} @var{filename}
@itemx dump [-qx] @var{num} @var{filename}
Writes the value of the variable in the current environment
with the given ordinal number or with the given name to the specified file,
and outputs a message announcing this fact
unless the option @samp{-q} (or @samp{--quiet}) was given.
The option @samp{-x} (or @samp{--xml}) causes the output to be in XML.
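@sp 1
For example, assuming the current environment contains
a variable named @samp{Result} (an illustrative name),
the following hypothetical commands write the current goal
to the file @samp{goal.txt},
and write that variable in XML form to the file @samp{result.xml}:
@example
dump goal goal.txt
dump -x Result result.xml
@end example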
@sp 1
@item open @var{term}
@kindex open (mdb command)
Saves @var{term} to a temporary file and opens the file in an editor.
The environment variable @samp{EDITOR} is consulted
to determine which editor to use;
if it is not set, @samp{vi} is used.
@var{term} may be any term that can be saved to a file with the
@samp{save_to_file} command.
@sp 1
@item grep @var{pattern} @var{term}
@kindex grep (mdb command)
Saves the given term to a temporary file
and invokes grep on the file using @var{pattern}.
@var{term} may be any term that can be saved to a file
with the @samp{save_to_file} command.
The Unix @samp{grep} command must be available from the shell
for this command to work.
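@sp 1
For example, assuming the current environment contains
a variable named @samp{Data} (an illustrative name),
the following hypothetical commands open that term in an editor,
and search the term for occurrences of @samp{42}, respectively:
@example
open Data
grep 42 Data
@end example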
@c @sp 1
@c @item dump [-qx] proc_body @var{filename}
@c Writes the representation of the body of the current procedure,
@c if it is available, to the specified file,
@c and outputs a message announcing this fact
@c unless the option @samp{-q} (or @samp{--quiet}) was given.
@c The option @samp{-x} (or @samp{--xml}) causes the output to be in XML.
@sp 1
@item list [@var{num}]
@kindex list (mdb command)
Lists the source code text for the current environment, including
@var{num} preceding and following lines. If @var{num} is not provided then
the default of two is used.
@end table
@sp 1
@node Breakpoint commands
@subsection Breakpoint commands
@cindex Breakpoints
@sp 1
@table @code
@item break [-PS] [-E@var{ignore-count}] [-I@var{ignore-count}] [-n] [-p@var{print-spec}]* @var{filename}:@var{linenumber}
@kindex break (mdb command)
Puts a break point on the specified line of the specified source file,
if there is an event or a call at that position.
If the filename is omitted,
it defaults to the filename from the context of the current event.
@sp 1
The options @samp{-P} or @samp{--print}, and @samp{-S} or @samp{--stop}
specify the action to be taken at the break point.
@sp 1
The options @samp{-E@var{ignore-count}}
and @samp{--ignore-entry @var{ignore-count}}
tell the debugger to ignore the breakpoint
until after @var{ignore-count} occurrences of a call event
that matches the breakpoint.
The options @samp{-I@var{ignore-count}}
and @samp{--ignore-interface @var{ignore-count}}
tell the debugger to ignore the breakpoint
until after @var{ignore-count} occurrences of interface events
that match the breakpoint.
@sp 1
Each occurrence of the options
@samp{-p@var{printspec}} and @samp{--print-list @var{printspec}}
tells the debugger to include the specified entity
in the breakpoint's print list.
@sp 1
Normally, if a variable with the given name or number doesn't exist
when execution reaches the breakpoint, mdb will issue a warning.
The option @samp{-n} or @samp{--no-warn}, if present, suppresses this warning.
This can be useful when, for example, the name is that of an output variable,
which of course won't be present at call events.
@sp 1
By default, the action of the break point is @samp{stop},
the ignore count is zero, and the print list is empty.
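@sp 1
For example, the following hypothetical command
(the file name, line number and variable name are illustrative only)
puts a breakpoint on line 42 of @samp{queue.m} whose action is @samp{print},
which is ignored until after two matching call events,
and which adds the variable @samp{Q} to its print list:
@example
break -P -E2 -pQ queue.m:42
@end example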
@item break [-AOPSaei] [-E@var{ignore-count}] [-I@var{ignore-count}] [-n] [-p@var{print-spec}]* @var{proc-spec}
@c <module name> <predicate name> [<arity> [<mode> [<predfunc>]]]
Puts a break point on the specified procedure.
@sp 1
The options @samp{-A} or @samp{--select-all},
and @samp{-O} or @samp{--select-one}
select the action to be taken
if the specification matches more than one procedure.
If you have specified option @samp{-A} or @samp{--select-all},
mdb will put a breakpoint on all matched procedures,
whereas if you have specified option @samp{-O} or @samp{--select-one},
mdb will report an error.
By default, mdb will ask you whether you want to put a breakpoint
on all matched procedures or just one, and if so, which one.
@sp 1
The options @samp{-P} or @samp{--print}, and @samp{-S} or @samp{--stop}
specify the action to be taken at the break point.
@sp 1
The options @samp{-a} or @samp{--all},
@samp{-e} or @samp{--entry}, and @samp{-i} or @samp{--interface}
specify the invocation conditions of the break point.
If none of these options are specified,
the default is the one indicated by the current scope
(see the @samp{scope} command below).
The initial scope is @samp{interface}.
@sp 1
The options @samp{-E@var{ignore-count}}
and @samp{--ignore-entry @var{ignore-count}}
tell the debugger to ignore the breakpoint
until after @var{ignore-count} occurrences of a call event
that matches the breakpoint.
The options @samp{-I@var{ignore-count}}
and @samp{--ignore-interface @var{ignore-count}}
tell the debugger to ignore the breakpoint
until after @var{ignore-count} occurrences of interface events
that match the breakpoint.
@sp 1
Each occurrence of the options
@samp{-p@var{printspec}} and @samp{--print-list @var{printspec}}
tells the debugger to include the specified entity
in the breakpoint's print list.
@sp 1
Normally, if a variable with the given name or number doesn't exist
when execution reaches the breakpoint, mdb will issue a warning.
The option @samp{-n} or @samp{--no-warn}, if present, suppresses this warning.
This can be useful when, for example, the name is that of an output variable,
which of course won't be present at call events.
@sp 1
By default, the action of the break point is @samp{stop},
its invocation condition is @samp{interface},
the ignore count is zero, and the print list is empty.
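@sp 1
For example, the following hypothetical commands put
a breakpoint on all events of the procedure @samp{s.mrg/3-0}
(the procedure that appears in the @samp{dice} example below),
and an entry-only breakpoint on a procedure
named @samp{mymodule.mypred} (an illustrative name):
@example
break -a s.mrg/3-0
break -e mymodule.mypred
@end example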
@sp 1
@item break [-OPS] [-E@var{ignore-count}] [-I@var{ignore-count}] [-n] [-p@var{print-spec}]* @var{proc-spec} @var{portname}
Puts a break point on
one or more events of the specified type in the specified procedure.
Port names should be specified as they are printed at events,
e.g. @samp{CALL}, @samp{EXIT}, @samp{DISJ}, @samp{SWTC}, etc.
@sp 1
The option @samp{-O} or @samp{--select-one}
selects the action to be taken
if the specification matches more than one procedure.
If you have specified option @samp{-O} or @samp{--select-one},
mdb will report an error;
otherwise, mdb will ask you which of the matched procedures you want to select.
@sp 1
If there is only one event of the given type in the specified procedure,
mdb will put the breakpoint on it;
otherwise, it will ask you whether you want to put a breakpoint
on all matched events or just one, and if so, which one.
@sp 1
The options @samp{-P} or @samp{--print}, and @samp{-S} or @samp{--stop}
specify the action to be taken at the break point.
@sp 1
The options @samp{-E@var{ignore-count}}
and @samp{--ignore-entry @var{ignore-count}}
tell the debugger to ignore the breakpoint
until after @var{ignore-count} occurrences of a call event
that matches the breakpoint.
The options @samp{-I@var{ignore-count}}
and @samp{--ignore-interface @var{ignore-count}}
tell the debugger to ignore the breakpoint
until after @var{ignore-count} occurrences of interface events
that match the breakpoint.
@sp 1
Each occurrence of the options
@samp{-p@var{printspec}} and @samp{--print-list @var{printspec}}
tells the debugger to include the specified entity
in the breakpoint's print list.
@sp 1
Normally, if a variable with the given name or number doesn't exist
when execution reaches the breakpoint, mdb will issue a warning.
The option @samp{-n} or @samp{--no-warn}, if present, suppresses this warning.
This can be useful when, for example, the name is that of an output variable,
which of course won't be present at call events.
@sp 1
By default, the action of the break point is @samp{stop},
the ignore count is zero, and the print list is empty.
@sp 1
@item break [-PS] [-E@var{ignore-count}] [-I@var{ignore-count}] [-n] [-p@var{print-spec}]* here
Puts a break point on the procedure referred to by the current event,
with the invocation condition being the event at the current location
in the procedure body.
@sp 1
The options @samp{-P} or @samp{--print}, and @samp{-S} or @samp{--stop}
specify the action to be taken at the break point.
@sp 1
The options @samp{-E@var{ignore-count}}
and @samp{--ignore-entry @var{ignore-count}}
tell the debugger to ignore the breakpoint
until after @var{ignore-count} occurrences of a call event
that matches the breakpoint.
The options @samp{-I@var{ignore-count}}
and @samp{--ignore-interface @var{ignore-count}}
tell the debugger to ignore the breakpoint
until after @var{ignore-count} occurrences of interface events
that match the breakpoint.
@sp 1
Each occurrence of the options
@samp{-p@var{printspec}} and @samp{--print-list @var{printspec}}
tells the debugger to include the specified entity
in the breakpoint's print list.
@sp 1
Normally, if a variable with the given name or number doesn't exist
when execution reaches the breakpoint, mdb will issue a warning.
The option @samp{-n} or @samp{--no-warn}, if present, suppresses this warning.
This can be useful when, for example, the name is that of an output variable,
which of course won't be present at call events.
@sp 1
By default, the action of the break point is @samp{stop},
the ignore count is zero, and the print list is empty.
@sp 1
@item break [-PS] [-X@var{ignore-count}] [-n] [-p@var{print-spec}]* user_event [@var{user-event-set}] @var{user-event-name}
Puts a break point on all user events named @var{user-event-name},
or, if @var{user-event-set} is specified as well,
on the user event named @var{user-event-name} in that event set.
@sp 1
The options @samp{-P} or @samp{--print}, and @samp{-S} or @samp{--stop}
specify the action to be taken at the break point.
@sp 1
The options @samp{-X@var{ignore-count}}
and @samp{--ignore @var{ignore-count}}
tell the debugger to ignore the breakpoint
until after @var{ignore-count} occurrences of an event
that matches the breakpoint.
@sp 1
Each occurrence of the options
@samp{-p@var{printspec}} and @samp{--print-list @var{printspec}}
tells the debugger to include the specified entity
in the breakpoint's print list.
@sp 1
Normally, if a variable with the given name or number doesn't exist
when execution reaches the breakpoint, mdb will issue a warning.
The option @samp{-n} or @samp{--no-warn}, if present, suppresses this warning.
This can be useful when, for example, the name is that of an output variable,
which of course won't be present at call events.
@sp 1
By default, the action of the break point is @samp{stop},
the ignore count is zero, and the print list is empty.
@sp 1
@item break [-PS] [-X@var{ignore-count}] [-n] [-p@var{print-spec}]* user_event_set [@var{user-event-set}]
Puts a break point either on all user events in all event sets,
or, if @var{user-event-set} is specified,
on all user events in the event set of the given name.
@sp 1
The options @samp{-P} or @samp{--print}, and @samp{-S} or @samp{--stop}
specify the action to be taken at the break point.
@sp 1
The options @samp{-X@var{ignore-count}}
and @samp{--ignore @var{ignore-count}}
tell the debugger to ignore the breakpoint
until after @var{ignore-count} occurrences of an event
that matches the breakpoint.
@sp 1
Each occurrence of the options
@samp{-p@var{printspec}} and @samp{--print-list @var{printspec}}
tells the debugger to include the specified entity
in the breakpoint's print list.
@sp 1
Normally, if a variable with the given name or number doesn't exist
when execution reaches the breakpoint, mdb will issue a warning.
The option @samp{-n} or @samp{--no-warn}, if present, suppresses this warning.
This can be useful when, for example, the name is that of an output variable,
which of course won't be present at call events.
@sp 1
By default, the action of the break point is @samp{stop},
the ignore count is zero, and the print list is empty.
@sp 1
@item break info
Lists the details, status and print lists of all break points.
@sp 1
@item condition [-b@var{break-num}] [-p] [-v] @var{varname}[@var{pathspec}] @var{op} @var{term}
@kindex condition (mdb command)
Attaches a condition to the most recent breakpoint,
or, if the @samp{-b} or @samp{--break-num} is given,
to the breakpoint whose number is given as the argument.
Execution won't stop at the breakpoint if the condition is false.
@sp 1
The condition is a match between a variable live at the breakpoint,
or a part thereof, and @var{term}.
It is OK for @var{term} to contain spaces.
The term from the program to be matched
is specified by @var{varname};
if it is followed by @var{pathspec} (without a space),
it specifies that the match is to be
against the specified part of @var{varname}.
@sp 1
There are two kinds of values allowed for @var{op}.
If @var{op} is @samp{=} or @samp{==}, the condition is true
if the term specified by @var{varname} (and @var{pathspec}, if present)
matches @var{term}.
If @var{op} is @samp{!=} or @samp{\\=}, the condition is true
if the term specified by @var{varname} (and @var{pathspec}, if present)
doesn't match @var{term}.
@var{term} may contain integers and strings
(as long as the strings don't contain double quotes),
but floats and characters aren't supported (yet),
and neither is any special syntax for operators.
Operators can be specified in prefix form
by quoting them with escaped single quotes,
as in @samp{\'+\'(1, 2)}.
Lists can be specified using the usual syntax.
@var{term} also may not contain variables, with one exception:
any occurrence of @samp{_} in @var{term} matches any term.
@sp 1
If execution reaches a breakpoint and the condition cannot be evaluated,
execution will normally stop at that breakpoint with a message to that effect.
If the @samp{-p} or @samp{--dont-require-path} option is given,
execution won't stop at breakpoints at which
the specified part of the specified variable doesn't exist.
If the @samp{-v} or @samp{--dont-require-var} option is given,
execution won't stop at breakpoints at which
the specified variable itself doesn't exist.
The @samp{-v} or @samp{--dont-require-var} option is implicitly assumed
if the specified breakpoint is on all user events.
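@sp 1
For example, the following hypothetical commands
(the variable names and values are illustrative only)
attach a condition to the most recently added breakpoint
that holds only when the second argument of @samp{List}
matches the empty list,
and attach a condition to breakpoint 1
that holds only when @samp{Count} does not match zero:
@example
condition List^2 = []
condition -b1 Count != 0
@end example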
@sp 1
@item ignore [-E@var{ignore-count}] [-I@var{ignore-count}] @var{num}
@kindex ignore (mdb command)
The options @samp{-E@var{ignore-count}}
and @samp{--ignore-entry @var{ignore-count}}
tell the debugger to ignore the breakpoint
until after @var{ignore-count} occurrences of a call event
that matches the breakpoint with the specified number.
The options @samp{-I@var{ignore-count}}
and @samp{--ignore-interface @var{ignore-count}}
tell the debugger to ignore the breakpoint
until after @var{ignore-count} occurrences of interface events
that match the breakpoint with the specified number.
If neither option is given,
the default is to ignore one call event
that matches the breakpoint with the specified number.
Reports an error if there is no break point with the specified number.
@sp 1
@item ignore [-E@var{ignore-count}] [-I@var{ignore-count}]
The options @samp{-E@var{ignore-count}}
and @samp{--ignore-entry @var{ignore-count}}
tell the debugger to ignore the breakpoint
until after @var{ignore-count} occurrences of a call event
that matches the most recently added breakpoint.
The options @samp{-I@var{ignore-count}}
and @samp{--ignore-interface @var{ignore-count}}
tell the debugger to ignore the breakpoint
until after @var{ignore-count} occurrences of interface events
that match the most recently added breakpoint.
If neither option is given,
the default is to ignore one call event
that matches the most recently added breakpoint.
Reports an error if the most recently added breakpoint has since been deleted.
@sp 1
@item break_print [-fpv] [-e] [-n] [-b @var{num}] @var{print-spec}*
@kindex break_print (mdb command)
Adds the specified print list elements (there may be more than one)
to the print list of the breakpoint numbered @var{num}
(if the @samp{-b} or @samp{--break-num} option is given),
or to the print list of the most recent breakpoint (if it is not given).
@sp 1
Normally, if a variable with the given name or number doesn't exist
when execution reaches the breakpoint, mdb will issue a warning.
The option @samp{-n} or @samp{--no-warn}, if present, suppresses this warning.
This can be useful when, for example, the name is that of an output variable,
which of course won't be present at call events.
@sp 1
Normally, the specified elements will be added
at the start of the breakpoint's print list.
The option @samp{-e} or @samp{--end}, if present,
causes them to be added at the end.
@sp 1
By default, the specified elements will be printed with format "flat".
The options @samp{-f} or @samp{--flat}, @samp{-p} or @samp{--pretty},
and @samp{-v} or @samp{--verbose}, if given,
explicitly specify the format to use.
@sp 1
@item break_print [-b @var{num}] none
@kindex break_print (mdb command)
Clears the print list of the breakpoint numbered @var{num}
(if the @samp{-b} or @samp{--break-num} option is given),
or the print list of the most recent breakpoint (if it is not given).
@sp 1
@item disable @var{num}
@kindex disable (mdb command)
Disables the break point with the given number.
Reports an error if there is no break point with that number.
@sp 1
@item disable *
Disables all break points.
@sp 1
@item disable
Disables the most recently added breakpoint.
Reports an error if the most recently added breakpoint has since been deleted.
@sp 1
@item enable @var{num}
Enables the break point with the given number.
Reports an error if there is no break point with that number.
@sp 1
@item enable *
@kindex enable (mdb command)
Enables all break points.
@sp 1
@item enable
Enables the most recently added breakpoint.
Reports an error if the most recently added breakpoint has since been deleted.
@sp 1
@item delete @var{num}
@kindex delete (mdb command)
Deletes the break point with the given number.
Reports an error if there is no break point with that number.
@sp 1
@item delete *
Deletes all break points.
@sp 1
@item delete
Deletes the most recently added breakpoint.
Reports an error if the most recently added breakpoint
has already been deleted.
@sp 1
@item modules
@kindex modules (mdb command)
Lists all the debuggable modules
(i.e.@: modules that have debugging information).
@sp 1
@item procedures @var{module}
@kindex procedures (mdb command)
Lists all the procedures in the debuggable module @var{module}.
@sp 1
@item register [-q]
@kindex register (mdb command)
Registers all debuggable modules with the debugger.
Has no effect if this registration has already been done.
The debugger will perform this registration when creating breakpoints
and when listing debuggable modules and/or procedures.
The command will print a message to this effect
unless the @samp{-q} or @samp{--quiet} option is given.
@end table
@sp 1
@node I/O tabling commands
@subsection I/O tabling commands
@sp 1
@table @code
@item table_io
@kindex table_io (mdb command)
Reports which phase of I/O tabling we are in at the moment.
@sp 1
@item table_io start
Tells the debugger to start tabling I/O actions.
@sp 1
@item table_io stop
Tells the debugger to stop tabling I/O actions.
@sp 1
@item table_io stats
Reports statistics about I/O tabling.
@c "table_io allow" is not documented because its use by a non-expert
@c can yield weird results.
@c @sp 1
@c @item table_io allow
@c Allow I/O tabling to be started, even in grades in which
@c not all I/O primitives are guaranteed to be tabled.
@end table
@sp 1
@node Parameter commands
@subsection Parameter commands
@sp 1
@table @code
@item mmc_options @var{option1} @var{option2} @dots{}
@kindex mmc_options (mdb command)
This command sets the options that will be passed to @samp{mmc}
to compile your query when you use one of the query commands:
@samp{query}, @samp{cc_query}, or @samp{io_query}.
For example, if a query results in a compile error,
it may sometimes be helpful to use @samp{mmc_options --verbose-error-messages}.
@sp 1
@item printlevel none
@kindex printlevel (mdb command)
Sets the default print level to @samp{none}.
@sp 1
@item printlevel some
Sets the default print level to @samp{some}.
@sp 1
@item printlevel all
Sets the default print level to @samp{all}.
@sp 1
@item printlevel
Reports the current default print level.
@sp 1
@item scroll on
@kindex scroll (mdb command)
Turns on user control over the scrolling of sequences of event reports.
This means that every screenful of event reports
will be followed by a @samp{--more--} prompt.
You may type an empty line, which allows the debugger
to continue to print the next screenful of event reports.
By typing a line that starts with @samp{a}, @samp{s} or @samp{n},
you can override the print level of the current command,
setting it to @samp{all}, @samp{some} or @samp{none} respectively.
By typing a line that starts with @samp{q},
you can abort the current debugger command
and get back control at the next event.
@sp 1
@item scroll off
Turns off user control over the scrolling of sequences of event reports.
@sp 1
@item scroll @var{size}
Sets the scroll window size to @var{size},
which tells scroll control to stop and print a @samp{--more--} prompt
after every @var{size} @minus{} 1 events.
The default value of @var{size}
is the value of the @samp{LINES} environment variable,
which should correspond to the number of lines available on the terminal.
@sp 1
@item scroll
Reports whether user scroll control is enabled and what the window size is.
@sp 1
@item stack_default_limit @var{size}
@kindex stack_default_limit (mdb command)
Set the default number of lines printed by
the @samp{stack} and @samp{nondet_stack} commands to @var{size}.
If @var{size} is zero, the limit is disabled.
@sp 1
@item goal_paths on
@kindex goal_paths (mdb command)
Turns on printing of goal paths at events.
@sp 1
@item goal_paths off
Turns off printing of goal paths at events.
@sp 1
@item goal_paths
Reports whether goal paths are printed at events.
@sp 1
@item scope all
@kindex scope (mdb command)
Sets the default scope of new breakpoints to ``all'',
i.e.@: by default, new breakpoints on procedures
will stop at all events in the procedure.
@sp 1
@item scope interface
Sets the default scope of new breakpoints to ``interface'',
i.e.@: by default, new breakpoints on procedures
will stop at all interface events in the procedure.
@sp 1
@item scope entry
Sets the default scope of new breakpoints to ``entry'',
i.e.@: by default, new breakpoints on procedures
will stop only at events representing calls to the procedure.
@sp 1
@item scope
Reports the current default scope of new breakpoints.
@sp 1
@item echo on
@kindex echo (mdb command)
Turns on the echoing of commands.
@sp 1
@item echo off
Turns off the echoing of commands.
@sp 1
@item echo
Reports whether commands are being echoed or not.
@sp 1
@item context none
@kindex context (mdb command)
@cindex line numbers
@cindex file names
When reporting events or ancestor levels,
does not print contexts (filename/line number pairs).
@sp 1
@item context before
When reporting events or ancestor levels,
prints contexts (filename/line number pairs)
before the identification of the event or call to which they refer,
on the same line.
With long fully qualified predicate and function names,
this may make the line wrap around.
@sp 1
@item context after
When reporting events or ancestor levels,
prints contexts (filename/line number pairs)
after the identification of the event or call to which they refer,
on the same line.
With long fully qualified predicate and function names,
this may make the line wrap around.
@sp 1
@item context prevline
When reporting events or ancestor levels,
prints contexts (filename/line number pairs) on a separate line
before the identification of the event or call to which they refer.
@sp 1
@item context nextline
When reporting events or ancestor levels,
prints contexts (filename/line number pairs) on a separate line
after the identification of the event or call to which they refer.
@sp 1
@item context
Reports where contexts are being printed.
@sp 1
@item user_event_context none
@kindex user_event_context (mdb command)
When reporting user-defined events,
does not print either filename/line number pairs or procedure ids.
@sp 1
@item user_event_context file
When reporting user-defined events,
prints only filename/line number pairs, not procedure ids.
@sp 1
@item user_event_context proc
When reporting user-defined events,
prints only procedure ids, not filename/line number pairs.
@sp 1
@item user_event_context full
When reporting user-defined events,
prints both filename/line number pairs and procedure ids.
@sp 1
@item user_event_context
Reports what parts of the context are being printed at user events.
@sp 1
@item list_context_lines @var{num}
@kindex list_context_lines (mdb command)
Sets the number of lines printed by the @samp{list} command
before and after the target context.
@sp 1
@item list_context_lines
Prints the number of lines printed by the @samp{list} command
before and after the target context.
@sp 1
@item list_path @var{dir1} @var{dir2} ...
@kindex list_path (mdb command)
The @samp{list} command searches a list of directories
when looking for a source code file.
The @samp{list_path} command sets the search path
to the given list of directories.
@sp 1
@item list_path
When invoked without arguments, the @samp{list_path} command
prints the search path consulted by the @samp{list} command.
@sp 1
@item push_list_dir @var{dir1} @var{dir2} ...
@kindex push_list_dir (mdb command)
Pushes the given directories
on to the search path consulted by the @samp{list} command.
@sp 1
@item pop_list_dir
@kindex pop_list_dir (mdb command)
Pops the leftmost (most recently pushed) directory
from the search path consulted by the @samp{list} command.
@sp 1
@item fail_trace_counts @var{filename}
@kindex fail_trace_counts (mdb command)
The declarative debugger can exploit information
about the failing and passing test cases to ask better questions.
This command tells the @samp{dice} command
that @var{filename} contains execution trace counts from failing test cases.
The @samp{dice} command will use this file
unless this is overridden with its @samp{--fail-trace-counts} option.
@sp 1
@item fail_trace_counts
Prints the name of the file containing
execution trace counts from failing test cases,
if this has already been set.
@sp 1
@item pass_trace_counts @var{filename}
@kindex pass_trace_counts (mdb command)
The declarative debugger can exploit information
about the failing and passing test cases to ask better questions.
This command tells the @samp{dice} command
that @var{filename} contains execution trace counts from passing test cases.
The @samp{dice} command will use this file
unless this is overridden with its @samp{--pass-trace-counts} option.
@sp 1
@item pass_trace_counts
Prints the name of the file containing
execution trace counts from passing test cases,
if this has already been set.
@sp 1
@item max_io_actions @var{num}
@kindex max_io_actions (mdb command)
Set the maximum number of I/O actions to print
in questions from the declarative debugger to @var{num}.
@sp 1
@item max_io_actions
Prints the maximum number of I/O actions to print
in questions from the declarative debugger.
@sp 1
@item xml_browser_cmd @var{command}
@kindex xml_browser_cmd (mdb command)
Set the shell command used to launch an XML browser to @var{command}.
If you want a stylesheet to be applied to the XML
before the browser is invoked,
then you should do that in this command using the appropriate program
(such as xsltproc, which comes with libxslt
and is available from http://xmlsoft.org/XSLT/).
By default, if xsltproc and mozilla (or firefox) are available,
xsltproc is invoked to apply the xul_tree.xsl stylesheet in
extras/xml_stylesheets, then mozilla is invoked on the resulting XUL file.
@sp 1
You can use the apostrophe character (') to quote the command string,
for example "xml_browser_cmd 'firefox file:///tmp/mdbtmp.xml'".
@sp 1
@item xml_browser_cmd
Prints the shell command used to launch an XML browser,
if this has been set.
@sp 1
@item xml_tmp_filename @var{filename}
@kindex xml_tmp_filename (mdb command)
Tells the debugger to dump XML into the named file
before invoking the XML browser.
The command named as the argument of @samp{xml_browser_cmd}
will usually refer to this file.
@sp 1
@item xml_tmp_filename
Prints the temporary filename used for XML browsing,
if this has been set.
@sp 1
@item format [-APB] @var{format}
@kindex format (mdb command)
Sets the default format of the browser to @var{format},
which should be one of @samp{flat}, @samp{pretty} and @samp{verbose}.
@sp 1
The browser maintains separate configuration parameters
for the three commands @samp{print *}, @samp{print @var{var}},
and @samp{browse @var{var}}.
A @samp{format} command applies to all three,
unless it specifies one or more of the options
@samp{-A} or @samp{--print-all},
@samp{-P} or @samp{--print},
and @samp{-B} or @samp{--browse},
in which case it will set only the selected command's default format.
@sp 1
@item format_param [-APBfpv] @var{param} @var{value}
@kindex format_param (mdb command)
@kindex depth (mdb command)
@kindex size (mdb command)
@kindex width (mdb command)
@kindex lines (mdb command)
Sets one of the parameters of the browser to the given value.
The parameter @var{param} must be one of
@samp{depth}, @samp{size}, @samp{width} and @samp{lines}.
@sp 1
@itemize @bullet
@item
@samp{depth} is the maximum depth to which subterms will be displayed.
Subterms at the depth limit may be abbreviated as functor/arity,
or (in lists) may be replaced by an ellipsis (@samp{...}).
The principal functor of any term has depth zero.
For subterms which are not lists,
the depth of any argument of the functor is one greater than the
depth of the functor.
For subterms which are lists,
the depth of each element of the list
is one greater than the depth of the list.
@sp 1
@item
@samp{size} is the suggested maximum number of functors to display.
Beyond this limit, subterms may be abbreviated as functor/arity,
or (in lists) may be replaced by an ellipsis (@samp{...}).
For the purposes of this parameter,
the size of a list is one greater than
the sum of the sizes of the elements in the list.
@sp 1
@item
@samp{width} is the width of the screen in characters.
@sp 1
@item
@samp{lines} is the preferred maximum number of lines of one term to display.
@sp 1
@end itemize
@sp 1
The browser maintains separate configuration parameters
for the three commands @samp{print *}, @samp{print @var{var}},
and @samp{browse @var{var}}.
A @samp{format_param} command applies to all three,
unless it specifies one or more of the options
@samp{-A} or @samp{--print-all},
@samp{-P} or @samp{--print},
and @samp{-B} or @samp{--browse},
in which case it will set only the selected command's parameters.
@sp 1
The browser also maintains separate configuration parameters
for the different output formats: flat, pretty and verbose.
A @samp{format_param} command applies to all of these,
unless it specifies one or more of the options
@samp{-f} or @samp{--flat},
@samp{-p} or @samp{--pretty},
and @samp{-v} or @samp{--verbose},
in which case it will set only the selected format's parameter.
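@sp 1
For example, the following hypothetical commands
limit all commands and formats to printing terms to depth 5,
and set the preferred maximum number of lines
for the verbose format of the @samp{browse @var{var}} command to 30:
@example
format_param depth 5
format_param -B -v lines 30
@end example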
@sp 1
@item alias @var{name} @var{command} [@var{command-parameter} ...]
@kindex alias (mdb command)
Introduces @var{name} as an alias
for the given command with the given parameters.
Whenever a command line has @var{name} as its first word,
the debugger will substitute the given command and parameters for this word
before executing the command line.
@sp 1
If @var{name} is the upper-case word @samp{EMPTY},
the debugger will substitute the given command and parameters
whenever the user types in an empty command line.
@sp 1
If @var{name} is the upper-case word @samp{NUMBER},
the debugger will insert the given command and parameters
before the command line
whenever the user types in a command line that consists of a single number.
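@sp 1
For example, the following hypothetical commands make @samp{s}
an alias for the @samp{step} forward movement command,
make an empty command line execute @samp{step},
and make a command line consisting of a single number @var{n}
execute @samp{step @var{n}}:
@example
alias s step
alias EMPTY step
alias NUMBER step
@end example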
@sp 1
@item unalias @var{name}
@kindex unalias (mdb command)
Removes any existing alias for @var{name}.
@end table
@sp 1
@node Help commands
@subsection Help commands
@sp 1
@table @code
@item document_category @var{slot} @var{category}
@kindex document_category (mdb command)
Create a new category of help items, named @var{category}.
The summary text for the category is given by the lines following this command,
up to but not including a line containing only the lower-case word @samp{end}.
The list of category summaries printed in response to the command @samp{help}
is ordered on the integer @var{slot} numbers of the categories involved.
@sp 1
@item document @var{category} @var{slot} @var{item}
@kindex document (mdb command)
Create a new help item named @var{item} in the help category @var{category}.
The text for the help item is given by the lines following this command,
up to but not including a line containing only the lower-case word @samp{end}.
The list of items printed in response to the command @samp{help @var{category}}
is ordered on the integer @var{slot} numbers of the items involved.
@sp 1
@item help @var{category} @var{item}
@kindex help (mdb command)
Prints help text about the item @var{item} in category @var{category}.
@sp 1
@item help @var{word}
Prints help text about @var{word},
which may be the name of a help category or a help item.
@sp 1
@item help
Prints summary information about all the available help categories.
@end table
@sp 1
@node Declarative debugging mdb commands
@subsection Declarative debugging mdb commands
@sp 1
The following commands relate to the declarative debugger. See
@ref{Declarative debugging} for details.
@sp 1
@table @code
@item dd [-r] [-R] [-n@var{nodes}] [-s@var{search-mode}] [-p@var{passfile}] [-f@var{failfile}]
@c @item dd [--assume-all-io-is-tabled] [-d@var{depth}] [-t]
@c [--debug [filename]]
@c The --assume-all-io-is-tabled option is for developers only. Specifying it
@c makes an assertion, and if the assertion is incorrect, the resulting
@c behaviour would be hard for non-developers to understand. The option is
@c therefore deliberately not documented.
@c @sp 1
@c The value of the @samp{-d} or @samp{--depth} option determines
@c how much of the annotated trace to build initially. Subsequent runs
@c will try to add @var{nodes} events to the annotated trace, but initially
@c there is not enough information available to do this. We do not document
@c this option since it requires an understanding of the internal workings of
@c the declarative debugger.
@c @sp 1
@c The @samp{-t} or @samp{--test} option causes the declarative debugger
@c to simulate a user who answers `no' to all questions, except for
@c `Is this a bug?' questions to which the simulated user answers `yes'.
@c This is useful for benchmarking the declarative debugger.
@c @sp 1
@c The @samp{--debug} option causes events generated by the declarative
@c debugger to become visible. This allows the declarative debugger to be
@c debugged.
@c If a filename is provided, the front end of the debugger is not called
@c at all. Instead a representation of the debugging tree is dumped to
@c the file.
@c @sp 1
Starts declarative debugging using the current event as the initial symptom.
@sp 1
When searching for bugs the declarative debugger needs to keep portions of the
execution trace in memory. If it requires a new portion of the trace then it
needs to rerun the program. The @samp{-n@var{nodes}} or
@samp{--nodes @var{nodes}} option tells the declarative debugger how
much of the execution trace to gather when it reruns the program. A higher
value for @var{nodes} requires more memory, but improves the
performance of the declarative debugger for long running programs since it will
not have to rerun the program as often.
@sp 1
The @samp{-s@var{search-mode}} or @samp{--search-mode @var{search-mode}}
option tells the declarative debugger which
search mode to use. Valid search modes are @samp{top_down} (or @samp{td}),
@samp{divide_and_query} (or @samp{dq}) and @samp{suspicion_divide_and_query}
(or @samp{sdq}).
@samp{top_down} is the default when this option is not given.
@sp 1
Use the @samp{-r} or @samp{--resume} option to continue your previous
declarative debugging session. If the @samp{--resume} option is given and
there were no previous declarative debugging sessions then the option will be
ignored. A @samp{dd --resume} command can be issued at any event.
The @samp{--search-mode} option may be used with the @samp{--resume} option
to change the search mode of a previously started declarative debugging
session.
@sp 1
Use the @samp{-R} or @samp{--reset-knowledge-base} option to reset the
declarative debugger's knowledge base.
The declarative debugger will forget any previous answers that
have been supplied.
It will ask previous questions again if it needs to.
This option does not affect what predicates or modules are trusted.
@sp 1
The arguments supplied to the @samp{--pass-trace-counts} (or @samp{-p}) and
@samp{--fail-trace-counts} (or @samp{-f}) options are either trace count
files or files containing a list of trace count files.
The supplied trace counts are used to assign
a suspicion to each event based on which parts of the program were executed in
the failing test case(s), but not the passing test case(s).
This is used to guide the declarative debugger when
the suspicion-divide-and-query search mode is used. If the
suspicion-divide-and-query search mode is specified then either both the
@samp{-p} and @samp{-f} options must be given, or the @samp{fail_trace_counts}
and @samp{pass_trace_counts} configuration parameters must be set (using
the @samp{set} command).
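@sp 1
For example, the first of the following hypothetical commands starts
a declarative debugging session using the divide-and-query search mode,
gathering about 50000 events of the annotated trace on each rerun
(the node count is illustrative only),
while the second, given at some later event,
resumes the previous session:
@example
dd -sdq -n50000
dd -r
@end example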
@item trust @var{module-name}|@var{proc-spec}
@kindex trust (mdb command)
Tells the declarative debugger to trust the given module, predicate or
function.
@sp 1
Individual predicates or functions can be trusted by just giving the
predicate or function name. If there is more than one predicate or function
with the given name then a list of alternatives will be shown.
@sp 1
The entire Mercury standard library is trusted by default and can be
untrusted in the usual manner using the `untrust' command. To restore trusted
status to the Mercury standard library issue the command
`trust standard library' or just `trust std lib'.
@sp 1
See also `trusted' and `untrust'.
@sp 1
@item trusted
@kindex trusted (mdb command)
Lists all the trusted modules, predicates and functions. See also `trust'
and `untrust'.
@sp 1
@item untrust @var{num}
@kindex untrust (mdb command)
Removes the object from the list of trusted objects. @var{num} should
correspond with the number shown in the list produced by issuing a `trusted'
command. See also `trust' and `trusted'.
@end table
@sp 1
@node Experimental commands
@subsection Experimental commands
@sp 1
@table @code
@item histogram_all @var{filename}
@kindex histogram_all (mdb command)
Prints (to file @var{filename})
a histogram that counts all events at various depths
since the start of the program.
This histogram is available
only in some experimental versions of the Mercury runtime system.
@sp 1
@item histogram_exp @var{filename}
@kindex histogram_exp (mdb command)
Prints (to file @var{filename})
a histogram that counts all events at various depths
since the start of the program or since the histogram was last cleared.
This histogram is available
only in some experimental versions of the Mercury runtime system.
@sp 1
@item clear_histogram
@kindex clear_histogram (mdb command)
Clears the histogram printed by @samp{histogram_exp},
i.e.@: sets the counts for all depths to zero.
@sp 1
@item dice [-p@var{filename}] [-f@var{filename}] [-n@var{num}] [-s[pPfFsS]+] [-o @var{filename}] [-m @var{module}]
@kindex dice (mdb command)
Display a program dice on the screen.
@sp 1
A dice is a comparison between
some successful test runs of the program and a failing test run.
Before using the @samp{dice} command one or more passing execution summaries
and one failing execution summary need to be generated.
This can be done by compiling the program with deep tracing enabled
(either by compiling in a .debug or .decldebug grade
or with the @samp{--trace deep} or @samp{--trace rep} compiler options)
and then running the program under mtc.
This will generate a file, with the prefix
@samp{.mercury_trace_counts} and a unique suffix,
that contains a summary of the program's execution.
This summary is called a slice.
Copy the generated slice to a new file for each test case,
to end up with a failing slice, say @samp{fail},
and some passing slices, say @samp{pass1}, @samp{pass2}, @samp{pass3}, etc.
Union the passing slices with a command such as
@samp{mtc_union -o passes pass1 pass2 pass3}.
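@sp 1
For example, assuming a program called @samp{myprog}
whose test inputs are supplied as command line arguments
(the program name, arguments and file names used here are arbitrary),
the slices could be prepared with commands such as the following,
using @samp{mtc}'s @samp{-o} option to name each slice directly:
@sp 1
@example
mtc -o fail ./myprog failing_input
mtc -o pass1 ./myprog passing_input_1
mtc -o pass2 ./myprog passing_input_2
mtc_union -o passes pass1 pass2
@end example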
@sp 1
The @samp{dice} command can use these files to display a table of statistics
comparing the passing test runs to the failing run.
Here is an example of a dice displayed in an mdb session:
@sp 1
@example
mdb> dice -f fail -p passes -s S -n 4
Procedure Path/Port File:Line Pass (3) Fail Suspicion
pred s.mrg/3-0 <s2;c2;e;> s.m:74 0 (0) 1 1.00
pred s.mrg/3-0 <s2;c2;t;> s.m:67 10 (3) 4 0.29
pred s.mrg/3-0 CALL s.m:64 18 (3) 7 0.28
pred s.mrg/3-0 EXIT s.m:64 18 (3) 7 0.28
@end example
@sp 1
This example tells us that the @samp{else} in @samp{s.m} on line 74
was executed once in the failing test run,
but never in the passing test runs,
so this would be a good place to start looking for a bug.
@sp 1
Each row in the table contains statistics
about the execution of a separate goal in the program.
Six columns are displayed:
@sp 1
@itemize @bullet
@item @samp{Procedure}:
The procedure in which the goal appears.
@item @samp{Path/Port}:
The goal path and/or port of the goal. For atomic goals, statistics about the
CALL event and the corresponding EXIT, FAIL or EXCP event are displayed on
separate rows. For other types of goals the goal path is displayed, except for
NEGE, NEGS and NEGF events where the goal path and port are displayed.
@item @samp{File:Line}:
The file name and line number of the goal. This can be used to set a
breakpoint on the goal.
@item @samp{Pass (total passing test runs)}:
The total number of times the goal was executed in all the passing test runs.
This is followed by a number in parentheses which indicates the number of test
runs the goal was executed in. The heading of this column also has a number in
parentheses which is the total number of passing test cases. In the example
above we can see that 3 passing tests were run.
@item @samp{Fail}:
The number of times the goal was executed in the failing test run.
@item @samp{Suspicion}:
A number between 0 and 1 which gives an indication of how likely a
particular goal is to be buggy. The suspicion is calculated as
Suspicion = F / (P + F) where F is the number of times the goal
was executed in the failing test run and P is the number of times the goal
was executed in passing test runs.
@end itemize
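@sp 1
For instance, in the example above,
the first row has suspicion 1 / (0 + 1) = 1.00,
since that goal was executed once in the failing test run
and never in the passing test runs,
while the CALL row has suspicion 7 / (18 + 7) = 0.28.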
@sp 1
The name of the file containing the failing slice can be specified with the
@samp{-f} or @samp{--fail-trace-counts} option or with a separate
@samp{set fail_trace_count @var{filename}} command.
@sp 1
The name of the file containing the union of the passing slices
can be given with the @samp{-p} or @samp{--pass-trace-counts} option.
Alternatively a separate @samp{set pass_trace_counts @var{filename}} command
can be given. See @ref{Trace counts} for more information about trace counts.
@sp 1
The table can be sorted on the Pass, Fail or Suspicion columns, or a
combination of these. This can be done with the @samp{-s} or @samp{--sort}
option. The argument of this option is a string made up of any combination of
the letters @samp{pPfFsS}. The letters in the string indicate how the table
should be sorted:
@sp 1
@itemize @bullet
@item @samp{p}: Pass ascending
@item @samp{P}: Pass descending
@item @samp{f}: Fail ascending
@item @samp{F}: Fail descending
@item @samp{s}: Suspicion ascending
@item @samp{S}: Suspicion descending
@end itemize
@sp 1
For example the string "SF" means sort the table by suspicion, descending, and
if any two suspicions are the same, then by number of executions in the failing
test case, descending.
@sp 1
The option @samp{-n} or @samp{--top} can be used to limit the number of lines
displayed. Only the top @var{num} lines, with respect to
the ordering specified by the @samp{-s} option, will be displayed.
By default the table is limited to 50 lines.
@sp 1
If the @samp{-o} or @samp{--output-to-file} option is given then the output
will be written to the specified file instead of being displayed on the
screen. Note that the file will be overwritten without warning if it
already exists.
@sp 1
The @samp{-m} or @samp{--module} option limits the output to the given module
and its submodules, if any.
@end table
@sp 1
@node Miscellaneous commands
@subsection Miscellaneous commands
@sp 1
@table @code
@item source [-i] @var{filename} [@var{args}]
@kindex source (mdb command)
Executes the commands in the file named @var{filename}.
Optionally a list of at most nine arguments can be given.
Occurrences of the strings "$1" to "$9" in the sourced file
will be replaced by the corresponding arguments given in the source
command before the commands in the sourced file are executed.
@sp 1
Lines that start with a hash (#) character are ignored.
Hash characters can be used to place comments in your mdb scripts.
@sp 1
The option @samp{-i} or @samp{--ignore-errors} tells @samp{mdb}
not to complain if the named file does not exist or is not readable.
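@sp 1
For example, a script file named @samp{stop_at}
(both the file name and the procedure name below are arbitrary)
containing the lines
@sp 1
@example
# set a breakpoint on the given procedure, then run to it
break $1
continue
@end example
@sp 1
could be invoked from within @samp{mdb} as @samp{source stop_at my_pred},
which would replace @samp{$1} with @samp{my_pred}
before executing the commands.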
@sp 1
@item save @var{filename}
@kindex save (mdb command)
Saves the persistent state of the debugger
(aliases, print level, scroll controls,
set of breakpoints, browser parameters,
set of objects trusted by the declarative debugger, etc)
to the specified file.
The state is saved in the form of mdb commands,
so that sourcing the file will recreate the saved state.
Note that this command does not save transient state,
such as the current event.
There is also a small part of the persistent state
(breakpoints established with a @samp{break here} command)
that cannot be saved.
@sp 1
@item quit [-y]
@kindex quit (mdb command)
Quits the debugger and aborts the execution of the program.
If the option @samp{-y} is not present, asks for confirmation first.
Any answer starting with @samp{y}, or end-of-file, is considered confirmation.
@sp 1
End-of-file on the debugger's input is considered a quit command.
@end table
@node Developer commands
@subsection Developer commands
@sp 1
The following commands are intended for use by the developers
of the Mercury implementation.
@sp 1
@table @code
@item var_details
@kindex var_details (mdb command)
Prints all the information the debugger has
about all the variables at the current program point.
@c @item term_size @var{name}
@c @itemx term_size @var{num}
@c @itemx term_size *
@c @kindex term_size (mdb command)
@c In term size profiling grades, prints the size of the term
@c bound to the specified variable(s).
@c In other grades, reports an error.
@c @sp 1
@item flag
@kindex flag (mdb command)
Prints the values of all the runtime low-level debugging flags.
@sp 1
@item flag @var{flagname}
Prints the value of the specified runtime low-level debugging flag.
@sp 1
@item flag @var{flagname} on
Sets the specified runtime low-level debugging flag to true.
@sp 1
@item flag @var{flagname} off
Sets the specified runtime low-level debugging flag to false.
@sp 1
@item subgoal @var{n}
@kindex subgoal (mdb command)
In minimal model grades,
prints the details of the specified subgoal.
In other grades, it reports an error.
@sp 1
@item consumer @var{n}
@kindex consumer (mdb command)
In minimal model grades,
prints the details of the specified consumer.
In other grades, it reports an error.
@sp 1
@item gen_stack
@kindex gen_stack (mdb command)
In minimal model grades,
prints the contents of the frames on the generator stack.
In other grades, it reports an error.
@sp 1
@item cut_stack
@kindex cut_stack (mdb command)
In minimal model grades,
prints the contents of the frames on the cut stack.
In other grades, it reports an error.
@sp 1
@item pneg_stack
@kindex pneg_stack (mdb command)
In minimal model grades,
prints the contents of the frames on the possible negated context stack.
In other grades, it reports an error.
@sp 1
@item mm_stacks
@kindex mm_stacks (mdb command)
In minimal model grades,
prints the contents of the frames on the generator stack,
the cut stack and the possible negated context stack.
In other grades, it reports an error.
@sp 1
@item nondet_stack [-d] [-f@var{numframes}] [@var{numlines}]
@kindex nondet_stack (mdb command)
Prints the contents of the frames on the nondet stack.
By default, it prints only the fixed slots in each nondet stack frame,
but if the @samp{-d} or @samp{--detailed} option is given,
it will also print the names and values of the live variables in them.
@sp 1
The @samp{-f} option, if present, specifies that
only the topmost @var{numframes} stack frames should be printed.
@sp 1
The optional number @var{numlines}, if present,
specifies that only the topmost @var{numlines} lines should be printed.
@sp 1
@item stack_regs
@kindex stack_regs (mdb command)
Prints the contents of the virtual machine registers
that point to the det and nondet stacks.
@sp 1
@item all_regs
@kindex all_regs (mdb command)
Prints the contents of all the virtual machine registers.
@sp 1
@item debug_vars
@kindex debug_vars (mdb command)
Prints the values of the variables used by the debugger
to record event numbers, call sequence numbers and call depths.
@sp 1
@item stats [-f @var{filename}] @var{subject}
@kindex stats (mdb command)
Prints statistics about the given subject to standard output,
unless the @samp{-f} or @samp{--filename} option is given,
in which case it prints the statistics to @var{filename}.
@sp 1
@var{subject} can be @samp{procs},
which asks for statistics about proc layout structures in the program.
@sp 1
@var{subject} can be @samp{labels},
which asks for statistics about label layout structures in the program.
@sp 1
@var{subject} can be @samp{var_names},
which asks for statistics about the space occupied by variable names
in the layout structures in the program.
@sp 1
@var{subject} can be @samp{io_tabling},
which asks for statistics about the number of times
each predicate appears in the I/O action table.
@sp 1
@item print_optionals
@kindex print_optionals (mdb command)
Reports whether optionally-printed values such as typeinfos
that are usually of interest only to implementors are being printed or not.
@sp 1
@item print_optionals on
Tells the debugger to print optionally-printed values.
@sp 1
@item print_optionals off
Tells the debugger not to print optionally-printed values.
@sp 1
@item unhide_events
@kindex unhide_events (mdb command)
Reports whether events that are normally hidden
(that are usually of interest only to implementors)
are being exposed or not.
@sp 1
@item unhide_events on
Tells the debugger to expose events that are normally hidden.
@sp 1
@item unhide_events off
Tells the debugger to hide events that are normally hidden.
@sp 1
@item table @var{proc} [@var{num1} ...]
@kindex table (mdb command)
Tells the debugger to print the call table of the named procedure,
together with the saved answer (if any) for each call.
Reports an error if the named procedure isn't tabled.
@sp 1
For now, this command is supported only for procedures
whose arguments are all either integers, floats or strings.
@sp 1
If the user specifies one or more integers on the command line,
the output is restricted to the entries in the call table in which
the @var{n}th argument is equal to the @var{n}th number on the command line.
@sp 1
@item type_ctor [-fr] @var{modulename} @var{typectorname} @var{arity}
@kindex type_ctor (mdb command)
Tests whether there is a type constructor defined in the given module,
with the given name, and with the given arity.
If there isn't, it prints a message to that effect.
If there is, it echoes the identity of the type constructor.
@sp 1
If the @samp{-r} or @samp{--print-rep} option is given,
it also prints the name of the type representation scheme
used by the type constructor
(known as its `type_ctor_rep' in the implementation).
@sp 1
If the @samp{-f} or @samp{--print-functors} option is given,
it also prints the names and arities
of the function symbols defined by the type constructor.
@sp 1
@item all_type_ctors [-fr] [@var{modulename}]
@kindex all_type_ctors (mdb command)
If the user specifies a module name,
lists all the type constructors defined in the given module.
If the user doesn't specify a module name,
lists all the type constructors defined in the whole program.
@sp 1
If the @samp{-r} or @samp{--print-rep} option is given,
it also prints the name of the type representation scheme
of each type constructor
(known as its `type_ctor_rep' in the implementation).
@sp 1
If the @samp{-f} or @samp{--print-functors} option is given,
it also prints the names and arities
of function symbols defined by each type constructor.
@sp 1
@item class_decl [-im] @var{modulename} @var{typeclassname} @var{arity}
@kindex class_decl (mdb command)
Tests whether there is a type class defined in the given module,
with the given name, and with the given arity.
If there isn't, it prints a message to that effect.
If there is, it echoes the identity of the type class.
@sp 1
If the @samp{-m} or @samp{--print-methods} option is given,
it also lists all the methods of the type class.
@sp 1
If the @samp{-i} or @samp{--print-instance} option is given,
it also lists all the instances of the type class.
@sp 1
@item all_class_decls [-im] [@var{modulename}]
@kindex all_class_decls (mdb command)
If the user specifies a module name,
lists all the type classes defined in the given module.
If the user doesn't specify a module name,
lists all the type classes defined in the whole program.
@sp 1
If the @samp{-m} or @samp{--print-methods} option is given,
it also lists all the methods of each type class.
@sp 1
If the @samp{-i} or @samp{--print-instance} option is given,
it also lists all the instances of each type class.
@sp 1
@item all_procedures [-su] [-m @var{modulename}] @var{filename}
@kindex all_procedures (mdb command)
In the absence of the @samp{-m} or @samp{--module} option,
puts a list of all the debuggable procedures in the program
into the named file.
In the presence of the @samp{-m} or @samp{--module} option,
puts a list of all the debuggable procedures in the named module
into the named file.
@sp 1
If the @samp{-s} or @samp{--separate} option is given,
the various components of procedure names are separated by spaces.
@sp 1
If the @samp{-u} or @samp{--uci} option is given,
the list will include the procedures of
compiler generated unify, compare, index and initialization predicates.
Normally, the list includes the procedures of only user defined predicates.
@sp 1
@item ambiguity [-o @var{filename}] [-ptf] [@var{modulename} ...]
@kindex ambiguity (mdb command)
Print ambiguous procedure, type constructor and/or function symbol names.
A procedure name is ambiguous
if a predicate or function is defined with that name
in more than one module or with more than one arity.
A type constructor name is ambiguous
if a type constructor is defined with that name
in more than one module or with more than one arity.
A function symbol name is ambiguous
if a function symbol is defined with that name
in more than one module or with more than one arity.
@sp 1
If any module names are given, then only those modules are consulted
(any ambiguities involving predicates, functions and type constructors
in non-listed modules are ignored).
The module names have to be fully qualified:
if a module @var{child} is a submodule of module @var{parent},
the module name list must include @var{parent.child};
listing just @var{child} won't work,
since that is not a fully qualified module name.
@sp 1
If the @samp{-o} or @samp{--outputfile} option is given,
the output goes to the file named as the argument of the option;
otherwise, it goes to standard output.
@sp 1
If one or more of the @samp{-p}, @samp{-t} or @samp{-f} options
(or their long equivalents, such as @samp{--types} and @samp{--functors})
are given,
this command prints ambiguities only for the indicated kinds of constructs.
The default is to print all ambiguities.
@sp 1
@item trail_details
@kindex trail_details (mdb command)
In grades that support trailing,
prints out low-level details of the state of the trail.
In other grades, it reports an error.
@end table
@node Declarative debugging
@section Declarative debugging
The debugger incorporates a declarative debugger
which can be accessed from its command line.
Starting from an event that exhibits a bug,
e.g.@: an event giving a wrong answer,
the declarative debugger can find a bug which explains that behaviour
using knowledge of the intended interpretation of the program only.
Note that this is a work in progress,
so there are some limitations in the implementation.
@menu
* Declarative debugging overview::
* Declarative debugging concepts::
* Oracle questions::
* Declarative debugging commands::
* Diagnoses::
* Search Modes::
* Improving the search::
@end menu
@node Declarative debugging overview
@subsection Overview
The declarative debugger tries to find a bug in your program by asking
questions about the correctness of calls executed in your program.
Because pure Mercury code does not have any side effects, the declarative
debugger can make inferences such as ``if a call produces incorrect output
from correct input, then there must be a bug in the code executed by one of
the descendants of the call''.
The declarative debugger is therefore able to automate much of the
`detective work' that must be done manually when using the
procedural debugger.
@node Declarative debugging concepts
@subsection Concepts
Every CALL event corresponds to an atomic goal,
the one printed by the "print" command at that event.
This atom has the actual arguments in the input argument positions
and distinct free variables in the output argument positions
(including the return value for functions).
We refer to this as the @emph{call atom} of the event.
The same view can be taken of EXIT events,
although in this case the outputs as well as the inputs will be bound.
We refer to this as the @emph{exit atom} of the event.
The exit atom is always an instance of
the call atom for the corresponding CALL event.
Using these concepts, it is possible to interpret
the events at which control leaves a procedure
as assertions about the semantics of the program.
These assertions may be true or false, depending on whether or not
the program's actual semantics are consistent with its intended semantics.
@sp 1
@table @asis
@item EXIT
The assertion corresponding to an EXIT event is that
the exit atom is valid in the intended interpretation.
In other words, the procedure generates correct outputs
for the given inputs.
@sp 1
@item FAIL
Every FAIL event has a matching CALL event,
and a (possibly empty) set of matching EXIT events
between the call and fail.
The assertion corresponding to a FAIL event is that
every instance of the call atom which is true in the intended interpretation
is an instance of one of the exit atoms.
In other words, the procedure generates the complete set of answers
for the given inputs.
(Note that this does not imply that all exit atoms represent correct answers;
some exit atoms may in fact be wrong,
but the truth of the assertion is not affected by this.)
@sp 1
@item EXCP
Every EXCP event is associated with an exception term,
and has a matching CALL event.
The assertion corresponding to an EXCP event is that
the call atom can abnormally terminate with the given exception.
In other words, the thrown exception was expected for that call.
@end table
If one of these assertions is wrong,
then we consider the event to represent incorrect behaviour of the program.
If the user encounters an event for which the assertion is wrong,
then they can request the declarative debugger to
diagnose the incorrect behaviour by giving the @samp{dd} command
to the procedural debugger at that event.
@node Oracle questions
@subsection Oracle questions
Once the @samp{dd} command has been given,
the declarative debugger asks the user
a series of questions about the truth of various assertions
in the intended interpretation.
The first question in this series will be about
the validity of the event for which the @samp{dd} command was given.
The answer to this question will nearly always be ``no'',
since the user has just implied the assertion is false
by giving the @samp{dd} command.
Later questions will be about other events
in the execution of the program,
not all of them necessarily of the same kind as the first.
The user is expected to act as an ``oracle''
and provide answers to these questions
based on their knowledge of the intended interpretation.
The debugger provides some help here:
previous answers are remembered and used where possible,
so questions are not repeated unnecessarily.
Commands are available to provide answers,
as well as to browse the arguments more closely
or to change the order in which the questions are asked.
See the next section for details of the commands that are available.
When seeking to determine the validity of
the assertion corresponding to an EXIT event,
the declarative debugger prints the exit atom
followed by the question @samp{Valid?} for the user to answer.
The atom is printed using
the same mechanism that the debugger uses to print values,
which means some arguments may be abbreviated if they are too large.
When seeking to determine the validity of
the assertion corresponding to a FAIL event,
the declarative debugger prints the call atom, prefixed by @samp{Call},
followed by each of the exit atoms
(indented, and on multiple lines if need be),
and prints the question @samp{Complete?} (or @samp{Unsatisfiable?} if there
are no solutions) for the user to answer.
Note that the user is not required to provide any missing instance
in the case that the answer is no.
(A limitation of the current implementation is that
it is difficult to browse a specific exit atom.
This will hopefully be addressed in the near future.)
When seeking to determine the validity of
the assertion corresponding to an EXCP event,
the declarative debugger prints the call atom
followed by the exception that was thrown,
and prints the question @samp{Expected?} for the user to answer.
In addition to asserting whether a call behaved correctly or not
the user may also assert that a call should never have occurred in the first
place, because its inputs violated some precondition of the call;
for example, an unsorted list may have been passed to a predicate
that is designed to work only with sorted lists.
Such calls should be deemed @emph{inadmissible} by the user.
This tells the declarative debugger that either the call was given the wrong
input by its caller or whatever generated the input is incorrect.
In some circumstances
the declarative debugger provides a default answer to the question.
If this is the case, the default answer will be shown in square brackets
immediately after the question,
and simply pressing return is equivalent to giving that answer.
@node Declarative debugging commands
@subsection Commands
At the above mentioned prompts, the following commands may be given.
Most commands can be abbreviated by their first letter.
It is also legal to press return without specifying a command.
If there is a default answer (@pxref{Oracle questions}),
pressing return is equivalent to giving that answer.
If there is no default answer,
pressing return is equivalent to the skip command.
@table @code
@item yes
Answer `yes' to the current question.
@sp 1
@item no
Answer `no' to the current question.
@sp 1
@item inadmissible
Answer that the call is inadmissible.
@sp 1
@item trust
Answer that the predicate or function the question is about does not contain
any bugs. However, predicates or functions called by this predicate/function
may contain bugs. The debugger will not ask you further questions about the
predicate or function in the current question.
@sp 1
@item trust module
Answer that the module the current question relates to does not contain any
bugs. No more questions about any predicates or functions from this module
will be asked.
@item skip
Skip this question and ask a different one if possible.
@sp 1
@item undo
Undo the most recent answer or mode change.
@sp 1
@item mode [ top-down | divide-and-query | binary ]
Change the current search mode. The search modes may be abbreviated to
@samp{td}, @samp{dq} and @samp{b} respectively.
@sp 1
@item browse [--xml] [@var{n}]
Start the interactive term browser and browse the @var{n}th argument
before answering. If the argument number
is omitted then browse the whole call as if it were a data term.
While browsing, a @samp{track} command may be issued to find the point at
which the current subterm was bound (see @ref{Improving the search}).
To return to the declarative debugger question issue a @samp{quit}
command from within the interactive term browser. For more information
on the use of the interactive term browser see the @samp{browse} command
in @ref{Browsing commands} or type @samp{help} from within the
interactive term browser.
@sp 1
Giving the @samp{--xml} or @samp{-x} option causes the term to be displayed
in an XML browser.
@sp 1
@item browse io [--xml] @var{n}
Browse the @var{n}th IO action.
@sp 1
@item print [@var{n}]
Print the @var{n}th argument of the current question. If no
argument is given then display the current question.
@sp 1
@item print io @var{n}
Print the @var{n}th IO action.
@sp 1
@item print io @var{n}-@var{m}
Print the @var{n}th to @var{m}th IO actions (inclusive).
@sp 1
@item print io limits
Print the values for which @samp{print io @var{n}} makes sense.
@sp 1
@item print io
Print some I/O actions,
starting just after the last action printed (if there was one)
or at the first available action (if there was not).
@sp 1
@item format @var{format}
Set the default format to @var{format},
which should be one of @samp{flat}, @samp{verbose} and @samp{pretty}.
@sp 1
@item depth @var{num}
Set the maximum depth to which terms are printed to @var{num}.
@sp 1
@item depth io @var{num}
Set the maximum depth to which I/O actions are printed to @var{num}.
I/O actions are printed using the browser's @samp{print *} command so the
@samp{depth io} command updates the configuration parameters for the
browser's @samp{print *} command.
@sp 1
@item size @var{num}
Set the maximum number of function symbols
to be printed in terms to @var{num}.
@sp 1
@item size io @var{num}
Set the maximum number of function symbols
to be printed in I/O actions to @var{num}.
I/O actions are printed using the browser's @samp{print *} command so the
@samp{size io} command updates the configuration parameters for the
browser's @samp{print *} command.
@sp 1
@item width @var{num}
Set the number of columns in which terms are to be printed to @var{num}.
@sp 1
@item width io @var{num}
Set the number of columns in which I/O actions are to be printed to @var{num}.
I/O actions are printed using the browser's @samp{print *} command so the
@samp{width io} command updates the configuration parameters for the
browser's @samp{print *} command.
@sp 1
@item lines @var{num}
Set the maximum number of lines in terms to be printed to @var{num}.
@sp 1
@item lines io @var{num}
Set the maximum number of lines in I/O actions to be printed to @var{num}.
I/O actions are printed using the browser's @samp{print *} command so the
@samp{lines io} command updates the configuration parameters for the
browser's @samp{print *} command.
@sp 1
@item actions @var{num}
Set the maximum number of I/O actions to be printed in questions to @var{num}.
@sp 1
@item params
Print the current values of browser parameters.
@sp 1
@item track [-a] [@var{term-path}]
The @samp{track} command can only be given from within the interactive
term browser and tells the declarative debugger to find the point at which
the current subterm was bound.
If no argument is given the current subterm is taken to be incorrect.
If a @var{term-path} is given then the subterm at @var{term-path} relative to
the current subterm will be considered incorrect.
The declarative debugger will ask about the call that bound the given subterm
next.
To find out the location of the unification that bound the subterm,
issue an @samp{info} command when asked about the call that bound the subterm.
The declarative debugger can use one of two algorithms to find the
point at which the subterm was bound.
The first algorithm uses some heuristics
to find the subterm more quickly than the second algorithm.
It is possible, though unlikely,
for the first algorithm to find the wrong call.
The first algorithm is the default.
To tell the declarative debugger to
use the second, more accurate but slower algorithm,
give the @samp{-a} or @samp{--accurate} option to the @samp{track} command.
@item mark [-a] [@var{term-path}]
The @samp{mark} command has the same effect as the @samp{track} command
except that it also asserts that the atom is inadmissible or erroneous,
depending on whether the subterm is input or output respectively.
@sp 1
@item pd
Commence procedural debugging from the current point.
This command is notionally the inverse of the @samp{dd} command
in the procedural debugger.
The session can be resumed with a @samp{dd --resume} command.
@item quit
End the declarative debugging session and return to
the event at which the @samp{dd} command was given.
The session can be resumed with a @samp{dd --resume} command.
@sp 1
@item info
List the filename and line number of the predicate the current question
is about as well as the filename and line number where the predicate
was called (if this information is available). Also print some information
about the state of the bug search, such as the current search mode,
how many events are yet to be eliminated and the reason for asking
the current question.
@sp 1
@item help [@var{command}]
Summarize the list of available commands or give help on a specific
command.
@end table
@node Diagnoses
@subsection Diagnoses
If the oracle keeps providing answers to the asked questions,
then the declarative debugger will eventually locate a bug.
A ``bug'', for our purposes,
is an assertion about some call which is false,
but for which the assertions about every child of that call are not false
(i.e.@: they are either correct or inadmissible).
There are four different classes of bugs that this debugger can diagnose,
one associated with each kind of assertion.
Assertions about EXIT events
lead to a kind of bug we call an ``incorrect contour''.
This is a contour (an execution path through the body of a clause)
which results in a wrong answer for that clause.
When the debugger diagnoses a bug of this kind, it displays the exit atoms in
the contour. The resulting incorrect exit atom is displayed last. The program
event associated with this bug, which we call the ``bug event'', is the exit
event at the end of the contour.
Assertions about FAIL events lead to a kind of bug we call
a ``partially uncovered atom''.
This is a call atom which has some instance which is valid,
but which is not covered by any of the applicable clauses.
When the debugger diagnoses a bug of this kind,
it displays the call atom;
it does not, however,
provide an actual instance that satisfies the above condition.
The bug event in this case is the fail event
reached after all the solutions were exhausted.
Assertions about EXCP events lead to a kind of bug we call
an ``unhandled exception''.
This is a contour which throws an exception
that needs to be handled but which is not handled.
When the debugger diagnoses a bug of this kind,
it displays the call atom
followed by the exception which was not handled.
The bug event in this case is the exception event
for the call in question.
If the assertion made by an EXIT, FAIL or EXCP event is false and one or
more of the children of the call that resulted in the incorrect EXIT, FAIL or
EXCP event is inadmissible, while all the other calls are correct, then an
``inadmissible call'' bug has been found. This is a call that behaved
incorrectly (by producing the incorrect output, failing or throwing an
exception) because it passed unexpected input to one of its children.
The guilty call is displayed as well as the inadmissible child.
After the diagnosis is displayed, the user is asked to confirm
that the event located by the declarative debugger
does in fact represent a bug.
The user can answer @samp{yes} or @samp{y} to confirm the bug,
@samp{no} or @samp{n} to reject the bug,
or @samp{abort} or @samp{a} to abort the diagnosis.
If the user confirms the diagnosis,
they are returned to the procedural debugger
at the event which was found to be the bug event.
This gives the user an opportunity, if they need it,
to investigate (procedurally) the events in the neighbourhood of the bug.
If the user rejects the diagnosis,
which implies that some of their earlier answers may have been mistakes,
diagnosis is resumed from some earlier point determined by the debugger.
The user may now be asked questions they have already answered,
with the previous answer they gave being the default,
or they may be asked entirely new questions.
If the user aborts the diagnosis,
they are returned to the event at which the @samp{dd} command was given.
@node Search Modes
@subsection Search Modes
The declarative debugger can operate in one of several modes when
searching for a bug.
Different search modes will result in different sequences of questions
being asked by the declarative debugger.
The user can specify which mode to use by giving the
@samp{--search-mode} option to the @samp{dd} command (see
@ref{Declarative debugging mdb commands}) or with the @samp{mode} declarative
debugger command (see @ref{Declarative debugging commands}).
@subsubsection Top-down Mode
Using this mode the declarative debugger will ask about the children of the
last question the user answered @samp{no} to. The child calls will be asked
about in the order they were executed. This makes the search more predictable
from the user's point of view as the questions will more or less follow the
program execution. The drawback of top-down search is that it may require a
lot of questions to be answered before a bug is found, especially with deeply
recursive programs.
This search mode is used by default when no other mode is specified.
@subsubsection Divide and Query Mode
With this search mode the declarative debugger attempts to halve the size
of the search space with each question. In many cases this will result in the
bug being found after O(log(N)) questions where N is the number of events
between the event where the @samp{dd} command was given and the corresponding
@samp{CALL} event. This makes the search feasible for long running programs
where top-down search would require an unreasonably large number of questions
to be answered. However, the questions may appear to come from unrelated parts
of the program which can make them harder to answer.
@subsubsection Suspicion Divide and Query Mode
In this search mode the declarative debugger assigns a suspicion level to
each event based on which parts of the program were executed in failing
test cases, but not in passing test cases. It then attempts to divide the
search space into two areas of equal suspicion with each question. This tends
to result in questions about parts of the program executed in a failing test
case, but not in passing test cases.
@subsubsection Binary Search Mode
The user may ask the declarative debugger to do a binary search along the
path in the call tree between the current question and the question that the
user last answered @samp{no} to. This is useful, for example, when a
recursive predicate is producing incorrect output, but the base case is
correct.
@node Improving the search
@subsection Improving the search
The number of questions asked by the declarative debugger before it pinpoints
the location of a bug can be reduced by giving it extra information. The kind
of extra information that can be given and how to convey this information are
explained in this section.
@subsubsection Tracking suspicious subterms
An incorrect subterm can be tracked to the call that bound the subterm
from within the interactive term browser
(see @ref{Declarative debugging commands}).
After issuing a @samp{track} command,
the next question asked by the declarative debugger will
be about the call that bound the incorrect subterm,
unless that call was
eliminated as a possible bug because of an answer to a previous
question or the call that bound the subterm was not traced.
For example consider the following fragment of a program that calculates
payments for a loan:
@example
:- type payment
---> payment(
date :: date,
amount :: float
).
:- type date ---> date(int, int, int). % date(day, month, year).
:- pred get_payment(loan::in, int::in, payment::out) is det.
get_payment(Loan, PaymentNo, Payment) :-
get_payment_amount(Loan, PaymentNo, Amount),
get_payment_date(Loan, PaymentNo, Date),
Payment = payment(Date, Amount).
@end example
Suppose that @code{get_payment} produces an incorrect result and the
declarative debugger asks:
@noindent
@example
get_payment(loan(...), 10, payment(date(9, 10, 1977), 10.000000000000)).
Valid?
@end example
Then if we know that this is the right payment amount for the given loan,
but the date is incorrect, we can track the date(...) subterm and the
debugger will then ask us about @code{get_payment_date}:
@noindent
@example
get_payment(loan(...), 10, payment(date(9, 10, 1977), 10.000000000000)).
Valid? browse
browser> cd 3/1
browser> ls
date(9, 10, 1977)
browser> track
get_payment_date(loan(...), 10, date(9, 10, 1977)).
Valid?
@end example
Thus irrelevant questions about @code{get_payment_amount} are avoided.
@noindent
If, say, the date was only wrong in the year part, then we could also have
tracked the year subterm in which case the next question would have been about
the call that constructed the year part of the date.
This feature is also useful when using the procedural debugger. For example,
suppose that you come across a @samp{CALL} event and you would like to know the
source of a particular input to the call. To find out you could first go to
the final event by issuing a @samp{finish} command. Invoke the declarative
debugger with a @samp{dd} command and then track the input term you are
interested in. The next question will be about the call that bound the term.
Issue a @samp{pd} command at this point to return to the procedural debugger.
It will now show the final event of the call that bound the term.
Note that this feature is only available if the executable is compiled
in a .decldebug grade or with the @samp{--trace rep} option. If a module
is compiled with the @samp{--trace rep} option but other modules in the
program are not then you will not be able to track subterms through those
other modules.
@subsubsection Trusting predicates, functions and modules
The declarative debugger can also be told to assume that certain predicates,
functions or entire modules do not contain any bugs. The declarative
debugger will never ask questions about trusted predicates or functions. It
is a good idea to trust standard library modules imported by a program being
debugged.
The declarative debugger can be told which predicates/functions it can trust
before the @samp{dd} command is given. This is done using the @samp{trust},
@samp{trusted} and @samp{untrust} commands at the mdb prompt (see
@ref{Declarative debugging mdb commands} for details on how to use these
commands).
Trust commands may be placed in the @samp{.mdbrc} file which contains default
settings for mdb (see @ref{Mercury debugger invocation}). Trusted
predicates will also be exported with a @samp{save} command (see
@ref{Miscellaneous commands}).
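For example, if your program makes heavy use of a well-tested library,
you could add a line such as the following to your @samp{.mdbrc} file
(the module name @samp{mylib.util} is just a placeholder):

@example
trust mylib.util
@end example

The declarative debugger would then never ask questions about
the predicates and functions defined in that module.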
During the declarative debugging session the user may tell the declarative
debugger to trust the predicate or function in the current question.
Alternatively the user may tell the declarative debugger to trust all the
predicates and functions in the same module as the predicate or function in the
current question. See the @samp{trust} command in
@ref{Declarative debugging commands}.
@subsubsection When different search modes are used
If a search mode is given when invoking the declarative debugger then that
search mode will be used, unless (a) a subterm is tracked during the session,
or (b) the user has not answered @samp{no} to any questions yet,
in which case top-down search is used until @samp{no} is answered to at least
one question.
If no search mode is specified with the @samp{dd} command then
the search mode depends on whether the @samp{--resume} option is given.
If it is, then the previous search mode will be used;
otherwise top-down search will be used.
You can check the search mode used to find a particular question by issuing
an @samp{info} command at the question prompt in the declarative debugger.
You can also change the search mode from within the declarative debugger
with the @samp{mode} command.
@node Trace counts
@section Trace counts
A program with debugging enabled may be run in a special mode
that causes it to write out to a @emph{trace count file}
a record of how many times each @emph{debugger event} in the program
was executed during that run.
Trace counts are useful for determining
what parts of a failing program are being run
and possibly causing the failure;
this is called @emph{slicing}.
Slices from failing and passing runs can be compared
to see which parts of the program are being executed during failing runs,
but not during passing runs; this is called @emph{dicing}.
@menu
* Generating trace counts::
* Combining trace counts::
* Slicing::
* Dicing::
* Coverage testing::
@end menu
@node Generating trace counts
@subsection Generating trace counts
To generate a slice for a program run,
first compile the program with deep tracing enabled
(either by using the @samp{--trace deep} option
or by compiling the program in a debugging grade).
Then invoke the program with the @samp{mtc} script,
passing any required arguments after the program name.
@sp 1
For example:
@sp 1
@example
mtc ./myprog arg1 arg2
@end example
@sp 1
The program will run as usual, except that when it terminates
it will write the number of times each debugger event was executed
to a trace count file.
@sp 1
@samp{mtc} accepts an @samp{-o} or @samp{--output-file} option.
The argument to this option is the filename to use
for the generated trace count file.
If this option is not given,
then the trace count will be written to a file
with the prefix @samp{.mercury_trace_counts} and a unique suffix.
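@sp 1
For example, the following command
(the output file name is arbitrary)
writes the trace counts for the run to the file @samp{myprog.counts}:
@sp 1
@example
mtc -o myprog.counts ./myprog arg1 arg2
@end example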
@sp 1
Ordinarily, the generated trace count file will list
only the debugger events that were actually executed during this run.
However, it will list all debugger events, even unexecuted ones,
if @samp{mtc} is given the @samp{-c} or @samp{--coverage-test} option.
@sp 1
@samp{mtc} also supports two more options intended for coverage testing:
@samp{-s} or @samp{--summary-file}, and @samp{--summary-count}.
These each set an option in the @samp{MERCURY_OPTIONS} environment variable,
@samp{--trace-count-summary-file} and @samp{--trace-count-summary-max}
respectively.
For the documentation of these @samp{mtc} options,
see the documentation of @samp{MERCURY_OPTIONS} environment variable.
@sp 1
Trace count files
can be manipulated with the @samp{mtc_union} and @samp{mtc_diff} tools,
and they can be analysed by the @samp{mslice} and @samp{mdice} tools.
They can also be used to help direct a declarative debugging search
(see @ref{Search Modes}).
@sp 1
@node Combining trace counts
@subsection Combining trace counts
The @samp{mtc_union} tool can be used
to combine several trace count files into one trace count file.
You need to use this when you have
many trace count files you wish to analyse with @samp{mslice} or @samp{mdice}.
@samp{mtc_union} is invoked by issuing a command of the form:
@sp 1
@example
mtc_union [-v] -o output_file file1 file2 ...
@end example
@sp 1
@samp{file1}, @samp{file2}, etc.
are the trace count files that should be combined.
The new trace count file will be written to @samp{output_file}.
This file will preserve
the count of the test cases that contributed to its contents,
even if some of @samp{file1}, @samp{file2}, etc.@: themselves
were created by @samp{mtc_union}.
If the @samp{-v} or @samp{--verbose} option is specified
then a progress message will be displayed
as each file is read and its contents merged into the union.
The @samp{mtc_diff} tool can be used
to subtract one trace count file from another.
@samp{mtc_diff} is invoked by issuing a command of the form:
@sp 1
@example
mtc_diff -o output_file file1 file2
@end example
@sp 1
@samp{file1} and @samp{file2} must both be trace counts files.
The output, written to @samp{output_file}, will contain
the difference between the trace counts in @samp{file1} and @samp{file2}
for every event that occurs in @samp{file1}.
Unlike @samp{mtc_union}, @samp{mtc_diff} does not preserve
the count of the test cases that contributed to its contents in any useful way.
@sp 1
@node Slicing
@subsection Slicing
Once a slice has been generated
it can be viewed in various ways using the mslice tool.
The output of the mslice tool will look something like the following:
@sp 1
@example
Procedure Path/Port File:Line Count (1)
pred mrg.merge/3-0 CALL mrg.m:60 14 (1)
pred mrg.merge/3-0 EXIT mrg.m:60 14 (1)
pred mrg.msort_n/4-0 CALL mrg.m:33 12 (1)
pred mrg.msort_n/4-0 EXIT mrg.m:33 12 (1)
pred mrg.msort_n/4-0 <?;> mrg.m:35 12 (1)
@end example
@sp 1
Each row corresponds to a label in the program.
The meanings of the columns are as follows:
@itemize @bullet
@item @samp{Procedure}:
This column displays the procedure that the label relates to.
@item @samp{Path/Port}:
For interface events this column displays the event port,
while for internal events it displays the goal path.
(See @ref{Tracing of Mercury programs}
for an explanation of interface and internal events.)
@item @samp{File:Line}:
This column displays the context of the event.
@item @samp{Count}:
This column displays how many times the event was executed.
The number in parentheses for each event row
says in how many runs the event was executed.
The number in parentheses in the heading row (after the word "Count")
indicates how many runs were represented
in the trace counts file analysed by the mslice tool.
@end itemize
@sp 1
The mslice tool is invoked using a command of the form:
@example
mslice [-s sortspec] [-l N] [-m module] [-n N] [-p N] [-f N] file
@end example
@sp 1
where @samp{file} is a trace count file,
generated either directly by a program run
or indirectly by the @samp{mtc_union} or @samp{mtc_diff} tools.
@sp 1
The @samp{-s} or @samp{--sort} option
specifies how the output should be sorted.
@samp{sortspec} should be a string made up of
any combination of the letters @samp{cCtT}.
Each letter specifies a column and direction to sort on:
@itemize @bullet
@item @samp{c}: Count ascending
@item @samp{C}: Count descending
@item @samp{t}: Number of runs ascending
@item @samp{T}: Number of runs descending
@end itemize
@sp 1
For example the option @samp{-s cT} will sort the output table
by the Count column in ascending order.
If the counts for two or more events are the same,
then those events will be sorted by number of runs in descending order.
@sp 1
The default is to sort descending on the Count column.
@sp 1
The @samp{-l} or @samp{--limit} option limits the output to @samp{N} lines.
@sp 1
The @samp{-m} or @samp{--module} option limits the output
to events only from the given module.
@sp 1
The @samp{-n} or @samp{--max-name-column} option's argument
gives the maximum width of the column containing predicate names.
If the argument is zero, there is no maximum width.
@sp 1
The @samp{-p} or @samp{--max-path-column} option's argument
gives the maximum width of the column containing ports and goal paths.
If the argument is zero, there is no maximum width.
@sp 1
The @samp{-f} or @samp{--max-file-column} option's argument
gives the maximum width of the column containing file names and line numbers.
If the argument is zero, there is no maximum width.
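@sp 1
For example, the following command
(the trace count file name @samp{passes} is arbitrary)
displays the twenty most frequently executed events
in the module @samp{mrg},
sorted by the Count column in descending order:
@sp 1
@example
mslice -s C -l 20 -m mrg passes
@end example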
@sp 1
@node Dicing
@subsection Dicing
A dice is a comparison between passing and failing runs of a program.
@sp 1
Dice are created using the @samp{mdice} tool.
To use the @samp{mdice} tool,
one must first generate a set of trace count files for passing runs
and a set of trace count files for failing runs
using the @samp{mtc} tool (@ref{Generating trace counts}).
Once this has been done,
and the union of each set computed using @samp{mtc_union},
@samp{mdice} can be used to display a table of statistics
that compares the passing runs to the failing runs.
@sp 1
Here is an example of the output of the @samp{mdice} tool:
@sp 1
@example
Procedure Path/Port File:Line Pass (3) Fail Suspicion
pred s.mrg/3-0 <s2;c2;e;> s.m:74 0 (0) 1 1.00
pred s.mrg/3-0 <s2;c2;t;> s.m:67 10 (3) 4 0.29
pred s.mrg/3-0 CALL s.m:64 18 (3) 7 0.28
pred s.mrg/3-0 EXIT s.m:64 18 (3) 7 0.28
@end example
@sp 1
This example tells us that the @samp{else} in @samp{s.m} on line 74
was executed once in the failing test run,
but never during the passing test runs,
so this would be a good place to start looking for a bug.
@sp 1
Each row corresponds to an event in the program.
The meanings of the columns are as follows:
@itemize @bullet
@item @samp{Procedure}:
This column displays the procedure the event relates to.
@item @samp{Path/Port}:
For interface events this column displays the event port,
while for internal events it displays the goal path.
(See @ref{Tracing of Mercury programs}
for an explanation of interface and internal events.)
@item @samp{File:Line}:
This column displays the context of the event.
@item @samp{Pass (total passing test runs)}:
This column displays the total number of times
the event was executed in all the passing test runs.
This is followed by a number in parentheses
which indicates the number of test runs the event was executed in.
The heading of this column also has a number in parentheses
which is the total number of passing test cases.
@item @samp{Fail}:
This column displays the number of times
the goal was executed in the failing test run(s).
@item @samp{Suspicion}:
This column displays a number between 0 and 1
which gives an indication of how likely a particular goal is to contain a bug.
The suspicion is calculated as Suspicion = F / (P + F)
where F is the number of times the goal was executed in failing runs
and P is the number of times the goal was executed in passing runs.
@end itemize
@sp 1
The @samp{mdice} tool is invoked with a command of the form:
@sp 1
@example
mdice [-s sortspec] [-l N] [-m module] [-n N] [-p N] [-f N] passfile failfile
@end example
@samp{passfile} is a trace count file,
generated either directly by a passing program run
or as the union of the trace count files of passing program runs.
@samp{failfile} is a trace count file,
generated either directly by a failing program run
or as the union of the trace count files of failing program runs.
@sp 1
The table can be sorted on the Pass, Fail or Suspicion columns,
or a combination of these.
This can be done with the @samp{-s} or @samp{--sort} option.
The argument of this option is a string
made up of any combination of the letters @samp{pPfFsS}.
The letters in the string indicate how the table should be sorted:
@sp 1
@itemize @bullet
@item @samp{p}: Pass ascending
@item @samp{P}: Pass descending
@item @samp{f}: Fail ascending
@item @samp{F}: Fail descending
@item @samp{s}: Suspicion ascending
@item @samp{S}: Suspicion descending
@end itemize
@sp 1
For example the string "SF" means
sort the table by suspicion in descending order,
and if any two suspicions are the same,
then by number of executions in the failing run(s), also in descending order.
@sp 1
The default is to sort descending on the Suspicion column.
@sp 1
The option @samp{-l} or @samp{--limit}
can be used to limit the number of lines displayed.
@sp 1
The @samp{-m} or @samp{--module} option
limits the output to the given module and any submodules.
@sp 1
The @samp{-n} or @samp{--max-name-column} option's argument
gives the maximum width of the column containing predicate names.
If the argument is zero, there is no maximum width.
@sp 1
The @samp{-p} or @samp{--max-path-column} option's argument
gives the maximum width of the column containing ports and goal paths.
If the argument is zero, there is no maximum width.
@sp 1
The @samp{-f} or @samp{--max-file-column} option's argument
gives the maximum width of the column containing file names and line numbers.
If the argument is zero, there is no maximum width.
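@sp 1
For example, the following command
(the trace count file names @samp{passes} and @samp{fail} are arbitrary)
displays the thirty most suspicious events in the module @samp{mrg}:
@sp 1
@example
mdice -s SF -l 30 -m mrg passes fail
@end example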
@sp 1
@node Coverage testing
@subsection Coverage testing
Coverage testing is the process of finding out
which parts of the code of a program
are not executed during any test case,
so that new test cases can be designed specifically to exercise those parts.
@sp 1
The first step in coverage testing a Mercury program
is compiling that program with execution tracing enabled,
either by using the @samp{--trace deep} option
or by compiling the program in a debugging grade.
The second step is to execute that program on all its test cases
with coverage testing enabled.
This can be done either by running the program with @samp{mtc --coverage-test},
or by including one of the corresponding options
(@samp{--coverage-test} or @samp{--coverage-test-if-exec=@var{programname}})
in the value of the @samp{MERCURY_OPTIONS} environment variable.
These runs generate a set of trace counts files
that can be given to the Mercury test coverage tool, the @samp{mcov} program.
@sp 1
The @samp{mcov} tool is invoked with a command of the form:
@sp 1
@example
mcov [-d] [-v] [-o output_file] tracecountfile1 ...
@end example
The arguments consist of one or more trace count files.
The output will normally be a list of all the procedures in the program
that were not executed in any of the runs
that generated these trace count files.
The output will go to standard output
unless this is overridden by the @samp{-o} or @samp{--output-file} option.
@sp 1
If the @samp{-d} or @samp{--detailed} option is specified,
then the output will list all the @emph{events} in the program
that were not executed in any of these runs.
This option can thus show the unexecuted parts of the executed procedures.
@sp 1
If the @samp{-v} or @samp{--verbose} option is specified,
then a progress message will be displayed as each file is read.
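@sp 1
For example, a simple coverage testing session for a program
called @samp{myprog} with two test cases might look like this
(the program name, arguments and file names are all arbitrary):
@sp 1
@example
mmc --trace deep --make myprog
mtc --coverage-test -o test1.counts ./myprog test_input_1
mtc --coverage-test -o test2.counts ./myprog test_input_2
mcov -d test1.counts test2.counts
@end example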
@c ----------------------------------------------------------------------------
@node Profiling
@chapter Profiling
@pindex mprof
@pindex mdprof
@cindex Profiling
@cindex Profiling memory allocation
@cindex Time profiling
@cindex Heap profiling
@cindex Memory profiling
@cindex Allocation profiling
@cindex Deep profiling
@menu
* Profiling introduction:: What is profiling useful for?
* Building profiled applications:: How to enable profiling.
* Creating profiles:: How to create profile data.
* Using mprof for time profiling:: How to analyze the time performance of a
program with mprof.
* Using mprof for profiling memory allocation::
How to analyze the memory performance of a
program with mprof.
* Using mprof for profiling memory retention::
How to analyze what memory is on the heap.
* Using mdprof:: How to analyze the time and/or memory
performance of a program with mdprof.
* Using threadscope:: How to analyse the parallel
execution of a program with threadscope.
* Profiling and shared libraries:: Profiling dynamically linked executables.
@end menu
@node Profiling introduction
@section Profiling introduction
@cindex Profiling
@cindex Measuring performance
@cindex Optimization
@cindex Efficiency
@cindex Parallel performance
To obtain the best trade-off between productivity and efficiency,
programmers should not spend too much time optimizing their code
until they know which parts of the code are really taking up most of the time.
Only once the code has been profiled should the programmer consider
making optimizations that would improve efficiency
at the expense of readability or ease of maintenance.
A good profiler is therefore a tool
that should be part of every software engineer's toolkit.
Mercury programs can be analyzed using two distinct profilers.
The Mercury profiler @samp{mprof} is a conventional call-graph profiler
(or graph profiler for short) in the style of @samp{gprof}.
The Mercury deep profiler @samp{mdprof} is a new kind of profiler
that associates a lot more context with each measurement.
@samp{mprof} can be used to profile either time or space,
but not both at the same time;
@samp{mdprof} can profile both time and space at the same time.
The parallel execution of Mercury programs can be analyzed with a third
profiler called @samp{threadscope}.
@samp{threadscope} allows programmers to visualise CPU utilisation for work,
garbage collection, and idle time.
This enables programmers to see the effect of parallelization decisions such as
task granularity.
The @samp{threadscope} tool is not included with the Melbourne Mercury
Compiler;
see @url{http://research.microsoft.com/en-us/projects/threadscope/,
Threadscope: Performance Tuning Parallel Haskell Programs}.
@node Building profiled applications
@section Building profiled applications
@cindex Building profiled applications
@pindex mprof
@pindex mdprof
@pindex threadscope
@cindex Time profiling
@cindex Heap profiling
@cindex Memory profiling
@cindex Allocation profiling
@cindex Deep profiling
@cindex Threadscope profiling
@cindex Parallel runtime profiling
@findex --parallel
@findex --threadscope
To enable profiling, your program must be built with profiling enabled.
The three different profilers require different support,
and thus you must choose which one to enable when you build your program.
@itemize @bullet
@item
To build your program with time profiling enabled for @samp{mprof},
pass the @samp{-p} (@samp{--profiling}) option to @samp{mmc}
(and also to @samp{mgnuc} and @samp{ml}, if you invoke them separately).
@item
To build your program with memory profiling enabled for @samp{mprof},
pass the @samp{--memory-profiling} option to @samp{mmc},
@samp{mgnuc} and @samp{ml}.
@item
To build your program with deep profiling enabled (for @samp{mdprof}),
pass the @samp{--deep-profiling} option to @samp{mmc},
@samp{mgnuc} and @samp{ml}.
@item
To build your program with threadscope profiling enabled (for @samp{threadscope}),
pass the @samp{--parallel} and @samp{--threadscope} options to @samp{mmc},
@samp{mgnuc} and @samp{ml}.
@end itemize
If you are using Mmake,
then you pass these options to all the relevant programs
by setting the @samp{GRADEFLAGS} variable in your Mmakefile,
e.g.@: by adding the line @samp{GRADEFLAGS=--profiling}.
(For more information about the different grades,
see @ref{Compilation model options}.)
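For example, assuming the top-level module of your program is in the file
@file{my_program.m} (a module name chosen purely for illustration),
you could build @samp{mprof}-profiled and @samp{mdprof}-profiled executables
with @samp{mmc --make} roughly as follows:
@example
mmc -p --make my_program
mmc --deep-profiling --make my_program
@end example
With Mmake, setting @samp{GRADEFLAGS} as described above
achieves the same thing.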
Enabling @samp{mprof} or @samp{mdprof} profiling has several effects.
First, it causes the compiler to generate slightly modified code,
which counts the number of times each predicate or function is called,
and for every call, records the caller and callee.
With deep profiling, there are other modifications as well,
the most important impact of which is the loss of tail-recursion
for groups of mutually tail-recursive predicates
(self-tail-recursive predicates stay tail-recursive).
Second, your program will be linked with versions of the library and runtime
that were compiled with the same kind of profiling enabled.
Third, if you enable graph profiling,
the compiler will generate for each source file
the static call graph for that file in @samp{@var{module}.prof}.
Enabling @samp{threadscope} profiling causes the compiler to build the program
against a different runtime system.
This runtime system logs events relevant to parallel execution.
@samp{threadscope} support is not compatible with all processors;
see @file{README.ThreadScope} for more information.
@node Creating profiles
@section Creating profiles
@cindex Profiling
@cindex Creating profiles
@pindex mprof
@pindex mdprof
@cindex Time profiling
@cindex Heap profiling
@cindex Memory profiling
@cindex Allocation profiling
@cindex Deep profiling
Once you have created a profiled executable,
you can gather profiling information by running the profiled executable
on some test data that is representative of the intended uses of the program.
The profiling version of your program
will collect profiling information during execution,
and save this information at the end of execution,
provided execution terminates normally and not via an abort.
Executables compiled with @samp{--profiling}
save profiling data in the files
@file{Prof.Counts}, @file{Prof.Decls}, and @file{Prof.CallPair}.
(@file{Prof.Decls} contains the names
of the procedures and their associated addresses,
@file{Prof.CallPair} records the number of times
each procedure was called by each different caller,
and @file{Prof.Counts} records the number of times
that execution was in each procedure when a profiling interrupt occurred.)
Executables compiled with @samp{--memory-profiling}
will use two of those files (@file{Prof.Decls} and @file{Prof.CallPair})
and two others: @file{Prof.MemoryWords} and @file{Prof.MemoryCells}.
Executables compiled with @samp{--deep-profiling}
save profiling data in a single file, @file{Deep.data}.
Executables compiled with the @samp{--threadscope} option write profiling data
to a file whose name is that of the program being profiled with the extension
@samp{.eventlog}.
For example, the profile for the program @samp{my_program} would be written to
the file @file{my_program.eventlog}.
It is also possible to combine @samp{mprof} profiling results
from multiple runs of your program.
You can do this by running your program several times,
and typing @samp{mprof_merge_counts} after each run.
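For example, assuming an @samp{mprof}-profiled executable named
@file{my_program} (the program name and test inputs here are illustrative),
the counts from two runs can be combined as follows:
@example
./my_program < test_case_1
mprof_merge_counts
./my_program < test_case_2
mprof_merge_counts
mprof -c > mprof.out
@end example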
It is not (yet) possible to combine @samp{mdprof} profiling results
from multiple runs of your program.
Due to a known timing-related bug in our code,
you may occasionally get segmentation violations
when running your program with @samp{mprof} profiling enabled.
If this happens, just run it again --- the problem occurs only very rarely.
The same vulnerability does not occur with @samp{mdprof} profiling.
With the @samp{mprof} and @samp{mdprof} profilers,
you can control whether time profiling measures
real (elapsed) time, user time plus system time, or user time only,
by including the options @samp{-Tr}, @samp{-Tp}, or @samp{-Tv} respectively
in the environment variable MERCURY_OPTIONS
when you run the program to be profiled.
@c (See the environment variables section below.)
Currently, only the @samp{-Tr} option works on Cygwin; on that
platform it is the default.
@c the above sentence is duplicated below
The default is user time plus system time,
which counts all time spent executing the process,
including time spent by the operating system working on behalf of the process,
but not including time that the process was suspended
(e.g.@: due to time slicing, or while waiting for input).
When measuring real time,
profiling counts even periods during which the process was suspended.
When measuring user time only,
profiling does not count time inside the operating system at all.
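For example, with an sh-style shell, the following runs the profiled program
(named @file{my_program} for illustration) while measuring real (elapsed) time:
@example
MERCURY_OPTIONS="-Tr" ./my_program
@end example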
@node Using mprof for time profiling
@section Using mprof for time profiling
@pindex mprof
@cindex Time profiling
To display the graph profile information
gathered from one or more profiling runs,
just type @samp{mprof} or @samp{mprof -c}.
(For programs built with @samp{--high-level-code},
you also need to pass the @samp{--no-demangle} option to @samp{mprof}.)
@findex --high-level-code
@findex --demangle
@findex --no-demangle
Note that @samp{mprof} can take quite a while to execute
(especially with @samp{-c}),
and will usually produce quite a lot of output,
so you will usually want to redirect the output into a file
with a command such as @samp{mprof > mprof.out}.
The output of @samp{mprof -c} consists of three major sections.
These are named the call graph profile,
the flat profile and the alphabetic listing.
The output of @samp{mprof} contains
the flat profile and the alphabetic listing only.
@cindex Call graph profile
The call graph profile presents the local call graph of each procedure.
For each procedure it shows
the parents (callers) and children (callees) of that procedure,
and shows the execution time and call counts for each parent and child.
It is sorted on the total amount of time spent
in the procedure and all of its descendents
(i.e.@: all of the procedures that it calls, directly or indirectly.)
@cindex Flat profile
The flat profile presents just the execution time spent in each procedure.
It does not count the time spent in descendents of a procedure.
The alphabetic listing just lists the procedures in alphabetical order,
along with their index number in the call graph profile,
so that you can quickly find the entry for a particular procedure
in the call graph profile.
@cindex Profiling interrupts
The profiler works by interrupting the program at frequent intervals,
and each time recording the currently active procedure and its caller.
It uses these counts to determine
the proportion of the total time spent in each procedure.
This means that the figures calculated for these times
are only a statistical approximation to the real values,
and so they should be treated with some caution.
In particular, if the profiler's assumption
that calls to a procedure from different callers have roughly similar costs
is not true,
the graph profile can be quite misleading.
The time spent in a procedure and its descendents is calculated by
propagating the times up the call graph,
assuming that each call to a procedure from a particular caller
takes the same amount of time.
This assumption is usually reasonable,
but again the results should be treated with caution.
(The deep profiler does not make such an assumption,
and hence its output is significantly more reliable.)
@cindex Garbage collection, profiling
Note that any time spent in a C function
(e.g.@: time spent in @samp{GC_malloc()},
which does memory allocation and garbage collection)
is credited to the Mercury procedure that called that C function.
Here is a small portion of the call graph profile from an example program.
@example
called/total parents
index %time self descendents called+self name index
called/total children
<spontaneous>
[1] 100.0 0.00 0.75 0 call_engine_label [1]
0.00 0.75 1/1 do_interpreter [3]
-----------------------------------------------
0.00 0.75 1/1 do_interpreter [3]
[2] 100.0 0.00 0.75 1 io.run/0(0) [2]
0.00 0.00 1/1 io.init_state/2(0) [11]
0.00 0.74 1/1 main/2(0) [4]
-----------------------------------------------
0.00 0.75 1/1 call_engine_label [1]
[3] 100.0 0.00 0.75 1 do_interpreter [3]
0.00 0.75 1/1 io.run/0(0) [2]
-----------------------------------------------
0.00 0.74 1/1 io.run/0(0) [2]
[4] 99.9 0.00 0.74 1 main/2(0) [4]
0.00 0.74 1/1 sort/2(0) [5]
0.00 0.00 1/1 print_list/3(0) [16]
0.00 0.00 1/10 io.write_string/3(0) [18]
-----------------------------------------------
0.00 0.74 1/1 main/2(0) [4]
[5] 99.9 0.00 0.74 1 sort/2(0) [5]
0.05 0.65 1/1 list.perm/2(0) [6]
0.00 0.09 40320/40320 sorted/1(0) [10]
-----------------------------------------------
8 list.perm/2(0) [6]
0.05 0.65 1/1 sort/2(0) [5]
[6] 86.6 0.05 0.65 1+8 list.perm/2(0) [6]
0.00 0.60 5914/5914 list.insert/3(2) [7]
8 list.perm/2(0) [6]
-----------------------------------------------
0.00 0.60 5914/5914 list.perm/2(0) [6]
[7] 80.0 0.00 0.60 5914 list.insert/3(2) [7]
0.60 0.60 5914/5914 list.delete/3(3) [8]
-----------------------------------------------
40319 list.delete/3(3) [8]
0.60 0.60 5914/5914 list.insert/3(2) [7]
[8] 80.0 0.60 0.60 5914+40319 list.delete/3(3) [8]
40319 list.delete/3(3) [8]
-----------------------------------------------
0.00 0.00 3/69283 tree234.set/4(0) [15]
0.09 0.09 69280/69283 sorted/1(0) [10]
[9] 13.3 0.10 0.10 69283 compare/3(0) [9]
0.00 0.00 3/3 __Compare___io__stream/0(0) [20]
0.00 0.00 69280/69280 builtin_compare_int/3(0) [27]
-----------------------------------------------
0.00 0.09 40320/40320 sort/2(0) [5]
[10] 13.3 0.00 0.09 40320 sorted/1(0) [10]
0.09 0.09 69280/69283 compare/3(0) [9]
-----------------------------------------------
@end example
The first entry is @samp{call_engine_label} and its parent is
@samp{<spontaneous>}, meaning that it is the root of the call graph.
(The first three entries, @samp{call_engine_label}, @samp{do_interpreter},
and @samp{io.run/0} are all part of the Mercury runtime;
@samp{main/2} is the entry point to the user's program.)
Each entry of the call graph profile consists of three sections: the parent
procedures, the current procedure and the child procedures.
Reading across from the left, for the current procedure the fields are:
@itemize @bullet
@item
The unique index number for the current procedure.
(The index numbers are used only to make it easier to find
a particular entry in the call graph.)
@item
The percentage of total execution time spent in the current procedure
and all its descendents.
As noted above, this is only a statistical approximation.
@item
The ``self'' time: the time spent executing code that is
part of the current procedure.
As noted above, this is only a statistical approximation.
@item
The descendent time: the time spent in the
current procedure and all its descendents.
As noted above, this is only a statistical approximation.
@item
The number of times a procedure is called.
If a procedure is (directly) recursive, this column
will contain the number of calls from other procedures,
a plus sign, and then the number of recursive calls.
These numbers are exact, not approximate.
@item
The name of the procedure followed by its index number.
@end itemize
The predicate or function names are not just followed by their arity but
also by their mode in brackets. A mode of zero corresponds to the first mode
declaration of that predicate in the source code. For example,
@samp{list.delete/3(3)} corresponds to the @samp{(out, out, in)} mode
of @samp{list.delete/3}.
For the parent and child procedures,
the self and descendent times have slightly different meanings.
For a parent procedure, the self and descendent times represent
the proportion of the current procedure's self and descendent time
that is due to calls from that parent.
These times are obtained under the assumption that
each call contributes equally to the total time of the current procedure.
@node Using mprof for profiling memory allocation
@section Using mprof for profiling memory allocation
@pindex mprof
@cindex Memory profiling
@cindex Allocation profiling
@cindex Profiling memory allocation
To create a profile of memory allocations, you can invoke @samp{mprof}
with the @samp{-m} (@samp{--profile memory-words}) option.
This will profile the amount of memory allocated, measured in units of words.
(A word is 4 bytes on a 32-bit architecture,
and 8 bytes on a 64-bit architecture.)
Alternatively, you can use @samp{mprof}'s @samp{-M}
(@samp{--profile memory-cells}) option.
This will profile memory in units of ``cells''.
A cell is a group of words allocated together in a single allocation,
to hold a single object.
Selecting this option will therefore profile
the number of memory allocations,
while ignoring the size of each memory allocation.
With memory profiling, just as with time profiling,
you can use the @samp{-c} (@samp{--call-graph}) option to display
call graph profiles in addition to flat profiles.
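For example, the following commands (with the output redirected to files,
since it can be quite long; the output file names are illustrative)
produce a call graph profile of the words allocated by each procedure,
and a flat profile of the number of allocations, respectively:
@example
mprof -m -c > mprof.words.out
mprof -M > mprof.cells.out
@end example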
When invoked with the @samp{-m} option, @samp{mprof} only reports
allocations, not deallocations (garbage collection).
It can tell you how much memory was allocated by each procedure,
but it won't tell you how long the memory was live for,
or how much of that memory was garbage-collected.
This is also true for @samp{mdprof}.
The memory retention profiling tool described in the next section can tell
you which memory cells remain on the heap.
@node Using mprof for profiling memory retention
@section Using mprof for profiling memory retention
@pindex mprof
@cindex Memory attribution
@cindex Memory retention
@cindex Heap profiling
When a program is built with memory profiling enabled and uses the Boehm
garbage collector, i.e. a grade with @samp{.memprof.gc} modifiers,
each memory cell is ``attributed'' with information about its origin
and type. This information can be
collated to tell you what kinds of objects are being retained when
the program executes.
To do this, you must instrument the program by adding calls to
@code{benchmarking.report_memory_attribution/1} or
@code{benchmarking.report_memory_attribution/3}
at points of interest.
The first argument of the @code{report_memory_attribution} predicates is a
string that is used to label the memory retention data corresponding to that
call in the profiling output.
You may want to call them from within @samp{trace} goals:
@example
trace [run_time(env("SNAPSHOTS")), io(!IO)] (
benchmarking.report_memory_attribution("Phase 2", !IO)
)
@end example
If a program operates in distinct phases
you may want to add a call in between the phases.
The @samp{report_memory_attribution} predicates do nothing in other grades,
so are safe to leave in the program.
Next, build the program in a @samp{.memprof.gc} grade.
After the program has finished executing, it will generate a file
called @samp{Prof.Snapshots} in the current directory.
Run @samp{mprof -s} to view the profile.
You will see the memory cells which were on the heap at each time
that @samp{report_memory_attribution} was called: the origin of the cells, and
their type constructors.
Passing the option @samp{-T} will group the profile first by
type constructors, then by procedure. The @samp{-b} option produces a brief
profile by hiding the secondary level of information.
Memory cells allocated by the Mercury runtime system
itself are normally excluded from the profile; they can be viewed by passing
the @samp{-r} option.
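As a sketch of the whole work flow
(the program name is illustrative, and the exact grade chosen depends on
your installation's default grade):
@example
mmc --memory-profiling --make my_program
SNAPSHOTS=1 ./my_program
mprof -s -b > snapshots.out
@end example
The second command sets the @samp{SNAPSHOTS} environment variable
so that @samp{trace} goals like the one shown above are executed.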
Note that Mercury values which are dead may in fact still be reachable from the
various execution stacks. This is particularly noticeable on the high-level C
back-end, as the C compiler does not take conservative garbage collection into
account and Mercury values may linger on the C stack for longer than necessary.
The low-level C grades should suffer to a lesser extent.
The attribution requires an extra word of memory per cell, which
is then rounded up by the memory allocator.
This is accounted for in @samp{mprof} output, but the memory usage
of the program may be significantly higher than in non-memory profiling grades.
@node Using mdprof
@section Using mdprof
@pindex mdprof
@cindex Deep profiling
The user interface of the deep profiler is a browser.
To display the information contained in a deep profiling data file
(which will be called @file{Deep.data} unless you renamed it),
start up your browser and give it a URL of the form
@file{http://server.domain.name/cgi-bin/mdprof_cgi?/full/path/name/Deep.data}.
The @file{server.domain.name} part should be the name of a machine
with the following qualifications:
it should have a web server running on it,
and it should have the @samp{mdprof_cgi} program installed in
the web server's CGI program directory.
(On many Linux systems, this directory is @file{/usr/lib/cgi-bin}.)
The @file{/full/path/name/Deep.data} part
should be the full path name of the deep profiling data file
whose data you wish to explore.
The name of this file must not have percent signs in it,
and it must end in the suffix @file{.data}.
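For instance, if the web server ran on a (hypothetical) machine called
@samp{www.example.com} and the data file were
@file{/home/user/project/Deep.data}, the URL would be
@example
http://www.example.com/cgi-bin/mdprof_cgi?/home/user/project/Deep.data
@end example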
When you start up @samp{mdprof} using the command above,
you will see a list of the usual places
where you may want to start looking at the profile.
Each place is represented by a link.
Clicking on and following that link will give you a web page
that contains both the profile information you asked for
and other links,
some of which present the same information in a different form
and some of which lead to further information.
You explore the profile
by clicking on links and looking at the resulting pages.
The deep profiler can generate several kinds of pages.
@table @asis
@item The menu page
The menu page gives summary information about the profile,
and the usual starting points for exploration.
@item Clique pages
Clique pages are the most fundamental pages of the deep profiler.
Each clique page presents performance information about a clique,
which is either a single procedure or a group of mutually recursive procedures,
in a given ancestor context,
which in turn is a list of other cliques
starting with the caller of the entry point of the clique
and ending with the clique of the @samp{main} predicate.
Each clique page lists the closest ancestor cliques,
and then the procedures of the clique.
It gives the cost of each call site in each procedure,
as well as the cost of each procedure in total.
These costs will be just those incurred in the given ancestor context;
the costs incurred by these call sites
and procedures in other ancestor contexts
will be shown on other clique pages.
@item Procedure pages
Procedure pages give the total cost of a procedure and its call sites
in all ancestor contexts.
@item Module pages
Module pages give the total cost of all the procedures of a module.
@item Module getters and setters pages
These pages identify the getter and setter procedures in a module.
Getters and setters are simply predicates and functions
that contain @samp{_get_} and @samp{_set_} respectively in their names;
they are usually used to access fields of data structures.
@item Program modules page
The program modules page gives the list of the program's modules.
@item Top procedure pages
Top procedure pages identify the procedures that are
most expensive as measured by various criteria.
@item Procedure caller pages
A procedure caller page lists the call sites, procedures, modules or cliques
that call the given procedure.
@end table
When exploring a procedure's callers,
you often want only the ancestors
that are at or above a certain level of abstraction.
Effectively you want to draw a line through the procedures of the program,
such that you are interested in the procedures on or above the line
but not in those below the line.
Since we want to exclude procedures below the line
from procedure caller pages,
we call this line an @emph{exclusion contour}.
You can tell the deep profiler where you want to draw this line
by giving it an @samp{exclusion contour file}.
The name of this file should be the same
as the name of the deep profiling data file,
but with the suffix @samp{.data} replaced with @samp{.contour}.
This file should consist of a sequence of lines,
and each line should contain two words.
The first word should be either @samp{all} or @samp{internal};
the second should be the name of a module.
If the first word is @samp{all}, then
all procedures in the named module are below the exclusion contour;
if the first word is @samp{internal}, then
all internal (non-exported) procedures in the named module
are below the exclusion contour.
Here is an example of an exclusion contour file.
@example
all bag
all list
all map
internal set
@end example
@node Using threadscope
@section Using threadscope
@pindex threadscope
@pindex show-ghc-events
@cindex ThreadScope profiling
@cindex Parallel execution profiling
The ThreadScope tools are not distributed with Mercury.
For information about how to install them please see the
@file{README.ThreadScope} file included in the Mercury distribution.
ThreadScope provides two programs that can be used to view profiles in
@file{.eventlog} files.
The first, @samp{show-ghc-events}, lists the ThreadScope events sorted from the
earliest to the latest,
while the second, @samp{threadscope} provides a graphical display for browsing
the profile.
Both programs accept the name of a @file{.eventlog} file on the command
line.
The @samp{threadscope} program also provides a menu from which users can choose
a file to open.
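For example, given a profile written to @file{my_program.eventlog}
(the program name is illustrative), either of the following can be used:
@example
show-ghc-events my_program.eventlog
threadscope my_program.eventlog
@end example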
@node Profiling and shared libraries
@section Profiling and shared libraries
@pindex mprof
@cindex Shared libraries and profiling
@cindex Profiling and shared libraries
@vindex LD_BIND_NOW
On some operating systems,
Mercury's profiling doesn't work properly with shared libraries.
The symptom is errors (@samp{map.lookup failed}) or warnings from @samp{mprof}.
On some systems, the problem occurs because the C implementation
fails to conform to the semantics specified by the ISO C standard
for programs that use shared libraries.
For other systems, we have not been able to analyze the cause of the failure
(but we suspect that the cause may be the same as on those systems
where we have been able to analyze it).
If you get errors or warnings from @samp{mprof},
and your program is dynamically linked,
try rebuilding your application statically linked,
e.g.@: by using @samp{MLFLAGS=--static} in your Mmakefile.
Another work-around that sometimes works is to set the environment variable
@samp{LD_BIND_NOW} to a non-null value before running the program.
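For example, with an sh-style shell, the second work-around can be applied to
a single run of a (hypothetically named) program as follows:
@example
LD_BIND_NOW=1 ./my_program
@end example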
@c ----------------------------------------------------------------------------
@node Invocation
@chapter Invocation
This section contains a brief description of all the options
available for @samp{mmc}, the Mercury compiler.
Sometimes this list is a little out-of-date;
use @samp{mmc --help} to get the most up-to-date list.
@findex --help
@menu
* Invocation overview::
* Verbosity options::
* Warning options::
* Output options::
* Auxiliary output options::
* Language semantics options::
* Termination analysis options::
* Compilation model options::
* Code generation options::
* Optimization options::
* Target code compilation options::
* Link options::
* Build system options::
* Miscellaneous options::
@end menu
@node Invocation overview
@section Invocation overview
@code{mmc} is invoked as
@example
mmc [@var{options}] @var{arguments}
@end example
Arguments can be either module names or file names.
Arguments ending in @samp{.m} are assumed to be file names,
while other arguments are assumed to be module names, with
@samp{.} (rather than @samp{__} or @samp{:}) as module qualifier.
If you specify a module name such as @samp{foo.bar.baz},
the compiler will look for the source in files @file{foo.bar.baz.m},
@file{bar.baz.m}, and @file{baz.m}, in that order.
Options are either short (single-letter) options preceded by a single @samp{-},
or long options preceded by @samp{--}.
Options are case-sensitive.
We call options that do not take arguments @dfn{flags}.
Single-letter flags may be grouped with a single @samp{-}, e.g.@: @samp{-vVc}.
Single-letter flags may be negated
by appending another trailing @samp{-}, e.g.@: @samp{-v-}.
Long flags may be negated by preceding them with @samp{no-},
e.g.@: @samp{--no-verbose}.
@findex --no-
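For example, the following two commands both compile @file{hello.m}
(an illustrative file name) with verbose progress messages
but without linking; the second groups the two short flags:
@example
mmc -v -c hello.m
mmc -vc hello.m
@end example
Long flags may likewise be negated,
e.g.@: @samp{mmc --no-verbose -c hello.m}.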
@node Warning options
@section Warning options
@cindex Warning options
@table @code
@item -w
@itemx --inhibit-warnings
@findex -w
@findex --inhibit-warnings
Disable all warning messages.
@sp 1
@item --halt-at-warn
@findex --halt-at-warn
This option causes the compiler to treat all
warnings as if they were errors. This means that
if any warning is issued, the compiler will not
generate code --- instead, it will return a
non-zero exit status.
@sp 1
@item --halt-at-syntax-error
@findex --halt-at-syntax-error
This option causes the compiler to halt immediately
after syntax checking and not do any semantic checking
if it finds any syntax errors in the program.
@c @sp 1
@c @item --halt-at-auto-parallel-failure
@c @findex --halt-at-auto-parallel-failure
@c @cindex Automatic parallelism
@c @cindex Profiler feedback
@c This option causes the compiler to halt if it cannot perform
@c an auto-parallelization requested by a feedback file.
@sp 1
@item --inhibit-accumulator-warnings
@findex --inhibit-accumulator-warnings
Don't warn about argument order rearrangement caused by
@samp{--introduce-accumulators}.
@sp 1
@item --no-warn-singleton-variables
@findex --no-warn-singleton-variables
@findex --warn-singleton-variables
Don't warn about variables which only occur once.
@sp 1
@item --no-warn-missing-det-decls
@findex --no-warn-missing-det-decls
@findex --warn-missing-det-decls
For predicates that are local to a module (those that
are not exported), don't issue a warning if the @samp{pred}
or @samp{mode} declaration does not have a determinism annotation.
Use this option if you want the compiler to perform automatic
determinism inference for non-exported predicates.
@sp 1
@item --no-warn-det-decls-too-lax
@findex --no-warn-det-decls-too-lax
@findex --warn-det-decls-too-lax
Don't warn about determinism declarations
which could have been stricter.
@sp 1
@item --no-warn-inferred-erroneous
@findex --no-warn-inferred-erroneous
@findex --warn-inferred-erroneous
Don't warn about procedures whose determinism is inferred erroneous
but whose determinism declarations are laxer.
@sp 1
@item --no-warn-insts-without-matching-type
@findex --no-warn-insts-without-matching-type
@findex --warn-insts-without-matching-type
Don't warn about insts that are not consistent with any
types in scope.
@sp 1
@c @item --no-warn-unused-imports
@item --warn-unused-imports
@findex --no-warn-unused-imports
@findex --warn-unused-imports
Warn about modules that are imported but not used.
@c Don't warn about modules that are imported but not used.
@sp 1
@item --no-warn-nothing-exported
@findex --no-warn-nothing-exported
@findex --warn-nothing-exported
Don't warn about modules whose interface sections have no
exported predicates, functions, insts, modes or types.
@sp 1
@item --warn-unused-args
@findex --warn-unused-args
Warn about predicate or function arguments which are not used.
@sp 1
@item --warn-interface-imports
@findex --warn-interface-imports
Warn about modules imported in the interface which are not
used in the interface.
@sp 1
@item --warn-missing-opt-files
@findex --warn-missing-opt-files
Warn about @samp{.opt} files that cannot be opened.
@sp 1
@item --warn-missing-trans-opt-files
@findex --warn-missing-trans-opt-files
Warn about @samp{.trans_opt} files that cannot be opened.
@sp 1
@item --no-warn-non-contiguous-clauses
@findex --no-warn-non-contiguous-clauses
Do not generate a warning if the clauses of a predicate or function
are not contiguous.
@sp 1
@item --warn-non-contiguous-foreign-procs
@findex --warn-non-contiguous-foreign-procs
Generate a warning if the clauses and foreign_procs of a predicate
or function are not contiguous.
@sp 1
@item --warn-non-stratification
@findex --warn-non-stratification
Warn about possible non-stratification of the predicates and/or functions
in the module.
Non-stratification occurs when a predicate or function can call itself
negatively through some path along its call graph.
@sp 1
@item --no-warn-simple-code
@findex --no-warn-simple-code
@findex --warn-simple-code
Disable warnings about constructs which are so
simple that they are likely to be programming errors.
@sp 1
@item --warn-duplicate-calls
@findex --warn-duplicate-calls
Warn about multiple calls to a predicate with the same
input arguments.
@sp 1
@item --no-warn-missing-module-name
@findex --no-warn-missing-module-name
@findex --warn-missing-module-name
Disable warnings for modules that do not start with a
@samp{:- module} declaration.
@sp 1
@item --no-warn-wrong-module-name
@findex --no-warn-wrong-module-name
@findex --warn-wrong-module-name
Disable warnings for modules whose @samp{:- module} declaration
does not match the module's file name.
@sp 1
@item --no-warn-smart-recompilation
@findex --warn-smart-recompilation
@findex --no-warn-smart-recompilation
Disable warnings from the smart recompilation system.
@sp 1
@item --no-warn-undefined-options-variables
@findex --no-warn-undefined-options-variables
@findex --make
Do not warn about references to undefined variables in
options files with @samp{--make}.
@sp 1
@item --warn-non-tail-recursion
@findex --warn-non-tail-recursion
Warn about any directly recursive calls that are not tail recursive.
@sp 1
@item --no-warn-target-code
@findex --no-warn-target-code
Disable warnings from the compiler used to process the
target code (e.g. gcc).
@sp 1
@item --no-warn-up-to-date
@findex --no-warn-up-to-date
@findex --warn-up-to-date
Don't warn if targets specified on the command line
with @samp{--make} are already up to date.
@sp 1
@item --no-warn-stubs
@findex --no-warn-stubs
@findex --warn-stubs
Disable warnings about procedures for which there are no
clauses. Note that this option only has any effect if
the @samp{--allow-stubs} option (@pxref{Language semantics options})
is enabled.
@sp 1
@item --warn-dead-procs
@findex --warn-dead-procs
Warn about procedures which are never called.
@sp 1
@item --no-warn-table-with-inline
@findex --no-warn-table-with-inline
Disable warnings about tabled procedures that also have
a `pragma inline' declaration.
@sp 1
@item --no-warn-non-term-special-preds
@findex --no-warn-non-term-special-preds
Do not warn about types that have user-defined equality or
comparison predicates that cannot be proved to terminate.
This option is only enabled when termination analysis is enabled.
(See @ref{Termination analysis options} for further details).
@sp 1
@item --no-warn-known-bad-format-calls
@findex --no-warn-known-bad-format-calls
Do not warn about calls to string.format or io.format that
the compiler knows for sure contain mismatches between the format
string and the supplied values.
@sp 1
@item --warn-unknown-format-calls
@findex --warn-unknown-format-calls
Warn about calls to string.format or io.format for which
the compiler cannot tell whether there are any mismatches between
the format string and the supplied values.
@sp 1
@item --no-warn-obsolete
@findex --no-warn-obsolete
Do not warn about calls to predicates or functions that have been
marked as obsolete.
@sp 1
@item --inform-ite-instead-of-switch
@findex --inform-ite-instead-of-switch
Generate informational messages for if-then-elses that could be
replaced by switches.
@sp 1
@item --no-warn-unresolved-polymorphism
@findex --no-warn-unresolved-polymorphism
Do not warn about unresolved polymorphism.
@sp 1
@item --warn-suspicious-foreign-procs
@findex --warn-suspicious-foreign-procs
Warn about possible errors in the bodies of foreign procedures.
When enabled, the compiler attempts to determine whether the success
indicator for a foreign procedure is correctly set, and whether
the foreign procedure body contains operations that are not allowed
(for example, @code{return} statements in a C foreign procedure).
Note that since the compiler's ability to parse foreign language code
is limited some warnings reported by this option may be spurious and
some actual errors may not be detected at all.
@sp 1
@item --no-warn-state-var-shadowing
@findex --no-warn-state-var-shadowing
Do not warn about one state variable shadowing another.
@sp 1
@item --no-inform-inferred
@findex --no-inform-inferred
Do not generate messages about inferred types or modes.
@sp 1
@item --no-inform-inferred-types
@findex --no-inform-inferred-types
Do not generate messages about inferred types.
@sp 1
@item --no-inform-inferred-modes
@findex --no-inform-inferred-modes
Do not generate messages about inferred modes.
@end table
@node Verbosity options
@section Verbosity options
@cindex Verbosity options
@table @code
@item -v
@itemx --verbose
@findex -v
@findex --verbose
Output progress messages at each stage in the compilation.
@sp 1
@item -V
@itemx --very-verbose
@findex -V
@findex --very-verbose
Output very verbose progress messages.
@sp 1
@item -E
@itemx --verbose-error-messages
@findex -E
@findex --verbose-error-messages
Explain error messages. Asks the compiler to give you a more
detailed explanation of any errors it finds in your program.
@sp 1
@item --no-verbose-make
@findex --no-verbose-make
@findex --make
Disable messages about the progress of builds using
the @samp{--make} option.
@sp 1
@item --verbose-commands
@findex --verbose-commands
Output each external command before it is run.
Note that some commands will only be printed
with @samp{--verbose}.
@sp 1
@item --verbose-recompilation
@findex --verbose-recompilation
When using @samp{--smart-recompilation}, output messages
explaining why a module needs to be recompiled.
@sp 1
@item --find-all-recompilation-reasons
@findex --find-all-recompilation-reasons
Find all the reasons why a module needs to be recompiled,
not just the first. Implies @samp{--verbose-recompilation}.
@sp 1
@item --output-compile-error-lines @var{n}
@findex --output-compile-error-lines
@findex --make
With @samp{--make}, output the first @var{n} lines of the @samp{.err}
file after compiling a module (default: 15).
@sp 1
@item --report-cmd-line-args
@findex --report-cmd-line-args
Report the command line arguments.
@sp 1
@item --report-cmd-line-args-in-doterr
@findex --report-cmd-line-args-in-doterr
Report the command line arguments for compilations whose output
mmake normally redirects to a .err file.
@sp 1
@item -S
@itemx --statistics
@findex -S
@findex --statistics
Output messages about the compiler's time/space usage.
At the moment this option implies @samp{--no-trad-passes},
so you get information at the boundaries between phases of the compiler.
@findex --no-trad-passes
@findex --trad-passes
@sp 1
@item --proc-size-statistics @var{filename}
@findex --proc-size-statistics
Append information about the size of each procedure in the module
in terms of goals and variables to the end of the named file.
@c @sp 1
@c @item -T
@c @itemx --debug-types
@c @findex -T
@c @findex --debug-types
@c Output detailed debugging traces of the type checking.
@sp 1
@item -N
@itemx --debug-modes
@findex -N
@findex --debug-modes
Output debugging traces of the mode checking.
@sp 1
@item --debug-modes-verbose
@findex --debug-modes-verbose
Output detailed debugging traces of the mode checking.
@sp 1
@item --debug-modes-pred-id @var{predid}
@findex --debug-modes-pred-id
With @samp{--debug-modes}, restrict the debugging traces
to the mode checking of the predicate or function with the specified pred id.
@sp 1
@item --debug-det
@itemx --debug-determinism
@findex --debug-det
@findex --debug-determinism
Output detailed debugging traces of determinism analysis.
@sp 1
@item --debug-opt
@findex --debug-opt
Output detailed debugging traces of the optimization process.
@sp 1
@item --debug-opt-pred-id @var{predid}
@findex --debug-opt-pred-id
Output detailed debugging traces of the optimization process
only for the predicate/function with the specified pred id.
May be given more than once.
@sp 1
@item --debug-opt-pred-name @var{name}
@findex --debug-opt-pred-name
Output detailed debugging traces of the optimization process
only for the predicate/function with the specified name.
May be given more than once.
@sp 1
@item --debug-pd
@findex --debug-pd
Output detailed debugging traces of the partial
deduction and deforestation process.
@sp 1
@item --debug-liveness @var{n}
@findex --debug-liveness
Output detailed debugging traces of the liveness analysis
of the predicate with the given predicate id.
@sp 1
@item --debug-make
@findex --debug-make
@findex --make
Output detailed debugging traces of the `--make' option.
@c XXX This can be uncommented when the documentation for
@c `--analyse-closures' is uncommented.
@c @sp 1
@c @item --debug-closures
@c @findex --debug-closures
@c Output detailed debugging traces of the `--analyse-closures' option.
@sp 1
@item --debug-intermodule-analysis
@findex --debug-intermodule-analysis
Output detailed debugging traces of the `--intermodule-analysis' option.
@sp 1
@item --debug-indirect-reuse
@findex --debug-indirect-reuse
Output detailed debugging traces of the indirect reuse pass of
`--structure-reuse' option.
@sp 1
@item --debug-type-rep
@findex --debug-type-rep
Output debugging traces of type representation choices.
@end table
@node Output options
@section Output options
These options are mutually exclusive.
If more than one of these options is specified, only the first in
this list will apply.
If none of these options are specified, the default action is to
compile and link the modules named on the command line to produce
an executable.
@table @code
@item -f
@itemx --generate-source-file-mapping
@findex --generate-source-file-mapping
Output the module name to file name mapping for the list
of source files given as non-option arguments to @samp{mmc}
to @file{Mercury.modules}. This must be done before
@samp{mmc --generate-dependencies} if there are any modules
for which the file name does not match the module name.
If there are no such modules the mapping need not be
generated.
@sp 1
@item -M
@itemx --generate-dependencies
@findex -M
@findex --generate-dependencies
@cindex dependencies
Output ``Make''-style dependencies for the module and all of its
dependencies to @file{@var{module}.dep}, @file{@var{module}.dv} and the
relevant @samp{.d} files.
@sp 1
@item --generate-dependency-file
@findex --generate-dependency-file
Output ``Make''-style dependencies for the module to @file{@var{module}.d}.
@sp 1
@item --generate-module-order
@findex --generate-module-order
Output the strongly connected components of the module
dependency graph in top-down order to @file{@var{module}.order}.
Implies @samp{--generate-dependencies}.
@sp 1
@item --generate-standalone-interface @var{basename}
@findex --generate-standalone-interface @var{basename}
Output a stand-alone interface. @var{basename} is used as
the basename of any files generated for the stand-alone interface.
(See @ref{Stand-alone Interfaces} for further details.)
@sp 1
@item --generate-mmc-deps
@itemx --generate-mmc-make-module-dependencies
@findex --generate-mmc-deps
@findex --generate-mmc-make-module-dependencies
@findex --make
Generate dependencies for use by @samp{mmc --make} even
when using Mmake. This is recommended when building a
library for installation.
@sp 1
@item -i
@itemx --make-int
@itemx --make-interface
@findex -i
@findex --make-int
@findex --make-interface
Write the module interface to @file{@var{module}.int}.
Also write the short interface to @file{@var{module}.int2}.
@sp 1
@item --make-short-int
@itemx --make-short-interface
@findex --make-short-int
@findex --make-short-interface
Write the unqualified version of the short interface to
@file{@var{module}.int3}.
@sp 1
@item --make-priv-int
@itemx --make-private-interface
@findex --make-priv-int
@findex --make-private-interface
Write the module's private interface (used for compiling
nested sub-modules) to @file{@var{module}.int0}.
@sp 1
@item --make-opt-int
@itemx --make-optimization-interface
@findex --make-opt-int
@findex --make-optimization-interface
Write information used for inter-module optimization to
@file{@var{module}.opt}.
@sp 1
@item --make-trans-opt
@itemx --make-transitive-optimization-interface
@findex --make-trans-opt
@findex --make-transitive-optimization-interface
Write the @file{@var{module}.trans_opt} file. This file is used to store
information used for inter-module optimization. The information is read
in when the compiler is invoked with the
@samp{--transitive-intermodule-optimization} option.
The file is called the ``transitive'' optimization interface file
because a @samp{.trans_opt} file may depend on other
@samp{.trans_opt} and @samp{.opt} files. In contrast,
a @samp{.opt} file can only hold information derived directly
from the corresponding @samp{.m} file.
@sp 1
@item --make-xml-documentation
@findex --make-xml-documentation
Output an XML representation of all the declarations in the module
into the @file{@var{module}.xml} file.
This XML file can then be transformed via an XSL transform into
another documentation format.
@sp 1
@item -P
@itemx --pretty-print
@itemx --convert-to-mercury
@findex -P
@findex --pretty-print
@findex --convert-to-mercury
Convert to Mercury. Output to file @file{@var{module}.ugly}.
This option acts as a Mercury ugly-printer.
(It would be a pretty-printer, except that comments are stripped
and nested if-then-elses are indented too much --- so the result
is rather ugly.)
@sp 1
@item --typecheck-only
@findex --typecheck-only
Just check the syntax and type-correctness of the code.
Don't invoke the mode analysis and later passes of the compiler.
When converting Prolog code to Mercury,
it can sometimes be useful to get the types right first
and worry about modes second;
this option supports that approach.
@sp 1
@item -e
@itemx --errorcheck-only
@findex -e
@findex --errorcheck-only
Check the module for errors, but do not generate any code.
@sp 1
@item -C
@itemx --target-code-only
@findex -C
@findex --target-code-only
Generate target code (i.e.@: C in @file{@var{module}.c},
assembler in @file{@var{module}.s} or @file{@var{module}.pic_s},
IL in @file{@var{module}.il}, C# in @file{@var{module}.cs},
Java in @file{@var{module}.java}
or Erlang in @file{@var{module}.erl}),
but not object code.
@sp 1
@item -c
@itemx --compile-only
@findex -c
@findex --compile-only
Generate C code in @file{@var{module}.c}
and object code in @file{@var{module}.o}
but do not attempt to link the named modules.
@sp 1
@item --output-grade-string
@findex --output-grade-string
Compute from the rest of the option settings the canonical grade
string and print it on the standard output.
@sp 1
@item --output-link-command
@findex --output-link-command
Print the command used to link executables to the
standard output.
@sp 1
@item --output-shared-lib-link-command
@findex --output-shared-lib-link-command
Print the command used to link shared libraries to the
standard output.
@sp 1
@item --output-libgrades
@findex --output-libgrades
Print the list of compilation grades in which a library to
be installed should be built to the standard output.
@sp 1
@item --output-cc
@findex --output-cc
Print the command used to invoke the C compiler to the standard output.
@sp 1
@item --output-cc-type
@itemx --output-c-compiler-type
@findex --output-cc-type
@findex --output-c-compiler-type
Print the C compiler type to the standard output.
@sp 1
@item --output-cflags
@findex --output-cflags
Print the flags with which the C compiler will be invoked
to the standard output.
@sp 1
@item --output-csharp-compiler-type
@findex --output-csharp-compiler-type
Print the C# compiler type to the standard output.
@sp 1
@item --output-library-link-flags
@findex --output-library-link-flags
Print the flags that are passed to the linker in order to link
against the current set of libraries. This includes the standard
library as well as any other libraries specified via the
@samp{--ml} option. The flags are printed to the standard output.
@sp 1
@item --output-grade-defines
@findex --output-grade-defines
Print the flags that are passed to the C compiler to define the macros used to
specify the compilation grade.
The flags are printed to the standard output.
@sp 1
@item --output-c-include-dir-flags
@itemx --output-c-include-directory-flags
@findex --output-c-include-dir-flags
@findex --output-c-include-directory-flags
Print the flags that are passed to the C compiler to specify which directories
to search for C header files.
This includes the C header files from the standard library.
The flags are printed to the standard output.
@end table
@node Auxiliary output options
@section Auxiliary output options
@table @code
@item --smart-recompilation
@findex --smart-recompilation
When compiling, write program dependency information
to be used to avoid unnecessary recompilations if an
imported module's interface changes in a way which does
not invalidate the compiled code. @samp{--smart-recompilation} does
not yet work with @samp{--intermodule-optimization}.
@item --no-assume-gmake
@findex --no-assume-gmake
@findex --assume-gmake
When generating @file{.d}, @file{.dep} and @file{.dv} files,
generate Makefile fragments that use only the features of standard make;
do not assume the availability of GNU Make extensions.
This can make these files significantly larger.
@item --trace-level @var{level}
@findex --trace-level @var{level}
Generate code that includes the specified level of execution tracing.
The @var{level} should be one of
@samp{none}, @samp{shallow}, @samp{deep}, @samp{rep} and @samp{default}.
@xref{Debugging}.
@item --trace-optimized
@findex --trace-optimized
Do not disable optimizations that can change the trace.
@c This is a developer only option:
@c @item --force-disable-trace
@c findex --force-disable-trace
@c Force tracing to be set to trace level none.
@c This overrides all other tracing/grade options.
@c Its main use is to turn off tracing in the browser directory
@c even for .debug and .decldebug grades.
@item --profile-optimized
@findex --profile-optimized
Do not disable optimizations that can distort deep profiles.
@item --no-delay-death
@findex --no-delay-death
@findex --delay-death
When the trace level is `deep', the compiler normally
preserves the values of variables as long as possible, even
beyond the point of their last use, in order to make them
accessible from as many debugger events as possible.
However, it will not do this if this option is given.
@item --delay-death-max-vars @var{N}
@findex --delay-death-max-vars
Delay the deaths of variables only when the number of variables
in the procedure is no more than @var{N}. The default value is 1000.
@item --stack-trace-higher-order
@findex --stack-trace-higher-order
Enable stack traces through predicates and functions with
higher-order arguments, even if stack tracing is not
supported in general.
@c @item --no-tabling-via-extra-args
@c @findex --no-tabling-via-extra-args
@c Make the tabling transformation emit each primitive operation
@c as a separate call to a predicate,
@c instead of consolidating sequences of primitives
@c into a single piece of foreign language code
@c and passing the required data as extra arguments.
@c @item --allow-table-reset
@c @findex --allow-table-reset
@c Generate C code for resetting tabling data structures.
@item --generate-bytecode
@findex --generate-bytecode
@c Output a bytecode version of the module
@c into the @file{@var{module}.bytecode} file,
@c and a human-readable version of the bytecode
@c into the @file{@var{module}.bytedebug} file.
@c The bytecode is for an experimental debugger.
Output a bytecode form of the module for use
by an experimental debugger.
@item --auto-comments
@findex --auto-comments
Output comments in the @file{@var{module}.c} file.
This is primarily useful for trying to understand
how the generated C code relates to the source code,
e.g.@: in order to debug the compiler.
The code may be easier to understand if you also use the
@samp{--no-llds-optimize} option.
@findex --no-llds-optimize
@findex --llds-optimize
@sp 1
@item -n-
@itemx --no-line-numbers
@findex -n-
@findex --no-line-numbers
@findex --line-numbers
Do not put source line numbers in the generated code.
The generated code may be in C (the usual case)
or in Mercury (with @samp{--convert-to-mercury}).
@sp 1
@item --max-error-line-width @var{N}
@findex --max-error-line-width
Set the maximum width of an error message line to @var{N} characters
(unless a long single word forces the line over this limit).
@sp 1
@item --show-dependency-graph
@findex --show-dependency-graph
Write out the dependency graph to @var{module}.dependency_graph.
@sp 1
@item --imports-graph
@findex --imports-graph
Write out the imports graph to @var{module}.imports_graph.
The imports graph is the directed graph in which an edge from module A
to module B indicates that module A imports module B.
The resulting file can be processed by the graphviz tools.
@c @sp 1
@c @item -d @var{stage}
@c @itemx --dump-trace-counts @var{stage}
@c @findex --dump-trace-counts
@c tIf the compiler was compiled with debugging enabled
@c and is being run with trace counting enabled,
@c write out the trace counts file after the specified stage to
@c @file{@var{module}.trace_counts.@var{num}-@var{name}}.
@c Stage numbers range from 1 to 599; not all stage numbers are valid.
@c If a stage number is followed by a plus sign,
@c all stages after the given stage will be dumped as well.
@c The special stage name @samp{all} causes the dumping of all stages.
@c Multiple dump options accumulate.
@sp 1
@item -d @var{stage}
@itemx --dump-hlds @var{stage}
@findex -d
@findex --dump-hlds
Dump the HLDS (a high-level intermediate representation) after
the specified stage number or stage name to
@file{@var{module}.hlds_dump.@var{num}-@var{name}}.
Stage numbers range from 1 to 599; not all stage numbers are valid.
If a stage number is followed by a plus sign,
all stages after the given stage will be dumped as well.
The special stage name @samp{all} causes the dumping of all stages.
Multiple dump options accumulate.
@sp 1
@item --dump-hlds-options @var{options}
@findex --dump-hlds-options
With @samp{--dump-hlds}, include extra detail in the dump.
Each type of detail is included in the dump
if its corresponding letter occurs in the option argument.
These details are:
a - argument modes in unifications,
b - builtin flags on calls,
c - contexts of goals and types,
d - determinism of goals,
e - created, removed, carried, allocated into, and used regions,
f - follow_vars sets of goals,
g - goal feature lists,
i - variables whose instantiation changes,
l - pred/mode ids and unify contexts of called predicates,
m - mode information about clauses,
n - nonlocal variables of goals,
p - pre-birth, post-birth, pre-death and post-death sets of goals,
r - resume points of goals,
s - store maps of goals,
t - results of termination analysis,
u - unification categories and other implementation details of unifications,
v - variable numbers in variable names,
x - predicate type information,
y - structured insts in the arg-modes of unification,
z - purity annotations on impure and semipure goals,
A - argument passing information,
B - mode constraint information,
C - clause information,
D - instmap deltas of goals (meaningful only with i),
E - deep profiling information,
G - compile-time garbage collection information,
I - imported predicates,
M - mode and inst information,
P - goal id and path information,
R - live forward use, live backward use and reuse possibilities,
S - information about structure sharing,
T - type and typeclass information,
U - unify and compare predicates,
X - constant structures,
Z - information about globals structs representing call and answer tables.
@sp 1
@item --dump-hlds-pred-id @var{predid}
@findex --dump-hlds-pred-id
With @samp{--dump-hlds}, restrict the output
to the HLDS of the predicate or function with the specified pred id.
May be given more than once.
@sp 1
@item --dump-hlds-pred-name @var{name}
@findex --dump-hlds-pred-name
With @samp{--dump-hlds}, restrict the output
to the HLDS of the predicate or function with the specified name.
May be given more than once.
@sp 1
@item --dump-hlds-inst-limit @var{N}
@findex --dump-hlds-inst-limit
Dump at most @var{N} insts in each inst table.
@sp 1
@item --dump-hlds-file-suffix
@findex --dump-hlds-file-suffix
Append the given suffix to the names of the files created by the
@samp{--dump-hlds} option.
@sp 1
@item --dump-same-hlds
@findex --dump-same-hlds
Create a file for a HLDS stage even if the file notes only that
this stage is identical to the previously dumped HLDS stage.
@sp 1
@item --dump-mlds @var{stage}
@findex --dump-mlds
Dump the MLDS (a C-like intermediate representation) after
the specified stage number or stage name.
The MLDS is converted to a C source file/header file pair,
which is dumped to @file{@var{module}.c_dump.@var{num}-@var{name}}
and @file{@var{module}.h_dump.@var{num}-@var{name}}.
Stage numbers range from 1 to 99; not all stage numbers are valid.
The special stage name @samp{all} causes the dumping of all stages.
Multiple dump options accumulate.
@sp 1
@item --verbose-dump-mlds @var{stage}
@findex --verbose-dump-mlds
Dump the internal compiler representation of the MLDS,
after the specified stage number or stage name, to
@file{@var{module}.mlds_dump.@var{num}-@var{name}}.
@sp 1
@item --mode-constraints
@findex --mode-constraints
Perform constraint based mode analysis on the given modules.
At the moment, the only effect of this
is to include more information in HLDS dumps,
to allow the constraint based mode analysis algorithm to be debugged.
@sp 1
@item --simple-mode-constraints
@findex --simple-mode-constraints
Ask for the simplified variant of constraint based mode analysis,
in which there is only one constraint variable per program variable,
rather than one constraint variable
per node in the inst graph of a program variable.
This option is ignored unless --mode-constraints is also given.
@sp 1
@item --benchmark-modes
@findex --benchmark-modes
Output information about the performance
of the constraint based mode analysis algorithm.
@sp 1
@item --benchmark-modes-repeat @var{num}
@findex --benchmark-modes-repeat @var{num}
Specifies the number of times the mode analysis algorithm should run.
More repetitions may smooth out fluctuations
due to background load or clock granularity.
This option is ignored unless --benchmark-modes is also given.
@end table
@node Language semantics options
@section Language semantics options
@cindex Language semantics options
@cindex Semantics options
@cindex Order of execution
@cindex Reordering
@cindex Optimization
See the Mercury language reference manual for detailed explanations
of these options.
@table @code
@item --no-reorder-conj
@findex --no-reorder-conj
@findex --reorder-conj
Execute conjunctions left-to-right except where the modes imply
that reordering is unavoidable.
@sp 1
@item --no-reorder-disj
@findex --no-reorder-disj
@findex --reorder-disj
Execute disjunctions strictly left-to-right.
@sp 1
@item --no-fully-strict
@findex --no-fully-strict
Allow infinite loops or goals with determinism erroneous to be optimised
away.
@sp 1
@item --allow-stubs
@findex --allow-stubs
@cindex Stubs
@cindex Procedures with no clauses
@cindex No clauses, procedures with
@cindex Clauses, procedures without
Allow procedures to have no clauses.
Any calls to such procedures will raise an exception at run-time.
This option is sometimes useful during program development.
(See also the documentation for the @samp{--warn-stubs} option
in @ref{Warning options}.)
@sp 1
@item --infer-all
@findex --infer-all
@cindex Inference
An abbreviation for @samp{--infer-types --infer-modes --infer-det}.
@sp 1
@item --infer-types
@findex --infer-types
@cindex Inference of types
If there is no type declaration for a predicate or function,
try to infer the type, rather than just reporting an error.
@sp 1
@item --infer-modes
@findex --infer-modes
@cindex Inference of modes
If there is no mode declaration for a predicate,
try to infer the modes, rather than just reporting an error.
@sp 1
@item --no-infer-det
@itemx --no-infer-determinism
@findex --no-infer-det
@findex --no-infer-determinism
@findex --infer-det
@findex --infer-determinism
@cindex Determinism inference
@cindex Inference of determinism
If there is no determinism declaration for a procedure,
don't try to infer the determinism, just report an error.
@sp 1
@item --type-inference-iteration-limit @var{n}
@findex --type-inference-iteration-limit
@cindex Inference of types
@cindex Type inference
Perform at most @var{n} passes of type inference (default: 60).
@sp 1
@item --mode-inference-iteration-limit @var{n}
@findex --mode-inference-iteration-limit
@cindex Inference of modes
@cindex Mode inference
Perform at most @var{n} passes of mode inference (default: 30).
@end table
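As a small illustration of these inference options, the following
(hypothetical) function definition has no type, mode or determinism
declarations; assuming the @samp{int} module is imported, compiling with
@samp{--infer-all} should infer declarations equivalent to those shown
in the comment.
@example
    % Inferred: `:- func double(int) = int.'
    % and `:- mode double(in) = out is det.'
double(X) = 2 * X.
@end example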
@node Termination analysis options
@section Termination analysis options
@cindex Termination analysis options
For detailed explanations, see the ``Termination analysis'' section
of the ``Implementation-dependent extensions'' chapter in the Mercury
Language Reference Manual.
@table @code
@item --enable-term
@itemx --enable-termination
@findex --enable-term
@findex --enable-termination
Enable termination analysis. Termination analysis analyses each mode of
each predicate to see whether it terminates. The @samp{terminates},
@samp{does_not_terminate} and @samp{check_termination}
pragmas have no effect unless termination analysis is enabled.
When using termination analysis, @samp{--intermodule-optimization}
should also be enabled, as it greatly improves the accuracy of the analysis.
(An example command line appears at the end of this section.)
@sp 1
@item --chk-term
@itemx --check-term
@itemx --check-termination
@findex --chk-term
@findex --check-term
@findex --check-termination
Enable termination analysis, and emit warnings for some predicates or
functions that cannot be proved to terminate. In many cases in which the
compiler is unable to prove termination, the problem is either a lack of
information about the termination properties of other predicates, or the
fact that the program uses language constructs (such as higher-order
calls) which cannot be analyzed. In these cases the compiler does
not emit a warning of non-termination, as it would likely be spurious.
@sp 1
@item --verb-chk-term
@itemx --verb-check-term
@itemx --verbose-check-termination
@findex --verb-chk-term
@findex --verb-check-term
@findex --verbose-check-termination
Enable termination analysis, and emit warnings for all predicates or
functions that cannot be proved to terminate.
@sp 1
@item --term-single-arg @var{limit}
@itemx --termination-single-argument-analysis @var{limit}
@findex --term-single-arg @var{limit}
@findex --termination-single-argument-analysis
When performing termination analysis, try analyzing
recursion on single arguments in strongly connected
components of the call graph that have up to @var{limit} procedures.
Setting this limit to zero disables single argument analysis.
@sp 1
@item --termination-norm @var{norm}
@findex --termination-norm
The norm defines how termination analysis measures the size
of a memory cell. The @samp{simple} norm says that size is always one.
The @samp{total} norm says that it is the number of words in the cell.
The @samp{num-data-elems} norm says that it is the number of words in
the cell that contain something other than pointers to cells of
the same type.
@sp 1
@item --term-err-limit @var{limit}
@itemx --termination-error-limit @var{limit}
@findex --term-err-limit
@findex --termination-error-limit
Print at most @var{limit} reasons for any single termination error.
@sp 1
@item --term-path-limit @var{limit}
@itemx --termination-path-limit @var{limit}
@findex --term-path-limit
@findex --termination-path-limit
Perform termination analysis only on predicates with at most @var{limit} paths.
@end table
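For example, to check a whole program for possible nontermination when
building with @samp{mmc --make} (the program name here is only an
illustration), one might run:
@example
mmc --check-termination --intermodule-optimization --make my_program
@end example
Enabling @samp{--intermodule-optimization} follows the advice above,
since it gives termination analysis more information about predicates
imported from other modules.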
@node Compilation model options
@section Compilation model options
@cindex Link errors
@cindex Undefined symbol
@cindex Compilation model options
@cindex Compilation models
@cindex Compilation grades
@cindex Grades
@cindex ABI (Application Binary Interface)
@cindex Application Binary Interface (ABI)
The following compilation options affect the generated
code in such a way that the entire program must be
compiled with the same setting of these options,
and it must be linked to a version of the Mercury
library which has been compiled with the same setting.
(Attempting to link object files compiled with different
settings of these options will generally result in an error at
link time, typically of the form @samp{undefined symbol MR_grade_@dots{}}
or @samp{symbol MR_runtime_grade multiply defined}.)
The options below must be passed to @samp{mgnuc}, @samp{c2init}
and @samp{ml} as well as to @samp{mmc}.
If you are using Mmake, then you should specify
these options in the @samp{GRADEFLAGS} variable rather than specifying
them in @samp{MCFLAGS}, @samp{MGNUCFLAGS} and @samp{MLFLAGS}.
@vindex GRADEFLAGS
@vindex MCFLAGS
@vindex MGNUCFLAGS
@vindex MLFLAGS
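For example, an Mmakefile fragment selecting a debugging grade for the
whole program might look like this (the grade named here is only an
illustration; any installed grade may be used):
@example
# Build everything in a debugging grade.
GRADEFLAGS = --grade asm_fast.gc.debug
@end example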
@menu
* Grades and grade components:: Setting the compilation model
* Target options:: Choosing a target language
* Optional features compilation model options:: Debugging, Profiling, etc.
* LLDS back-end compilation model options:: For the original back-end
* MLDS back-end compilation model options:: For the new high-level back-end
* Developer compilation model options:: Not for general use
@end menu
@node Grades and grade components
@subsection Grades and grade components
@cindex Grades and grade components
@table @asis
@item @code{-s @var{grade}}
@itemx @code{--grade @var{grade}}
@findex -s
@findex --grade
Select the compilation model.
The @var{grade} should be a @samp{.}-separated list of the
grade options to set; the grade options may be given in any order.
(An example appears after the tables below.)
Each available option belongs to a set of mutually
exclusive alternatives governing a single aspect of the compilation model.
The aspects and their alternatives are:
@cindex none (compilation grade)
@cindex reg (compilation grade)
@cindex jump (compilation grade)
@cindex asm_jump (compilation grade)
@cindex fast (compilation grade)
@cindex asm_fast (compilation grade)
@cindex hl (compilation grade)
@cindex hlc (compilation grade)
@cindex il (compilation grade)
@cindex csharp (compilation grade)
@cindex java (compilation grade)
@cindex erlang (compilation grade)
@cindex .prof (grade modifier)
@cindex .memprof (grade modifier)
@cindex .profdeep (grade modifier)
@cindex .tr (grade modifier)
@cindex .trseg (grade modifier)
@cindex .gc (grade modifier)
@cindex .mps (grade modifier)
@cindex .agc (grade modifier)
@cindex .spf (grade modifier)
@cindex .stseg (grade modifier)
@cindex .debug (grade modifier)
@cindex .decldebug (grade modifier)
@c @cindex .ssdebug (grade modifier)
@cindex .par (grade modifier)
@cindex .threadscope (grade modifier)
@cindex prof (grade modifier)
@cindex memprof (grade modifier)
@cindex profdeep (grade modifier)
@cindex tr (grade modifier)
@cindex trseg (grade modifier)
@cindex gc (grade modifier)
@cindex agc (grade modifier)
@cindex spf (grade modifier)
@cindex stseg (grade modifier)
@cindex debug (grade modifier)
@cindex decldebug (grade modifier)
@c @cindex ssdebug (grade modifier)
@cindex par (grade modifier)
@cindex threadscope (grade modifier)
@table @asis
@item What target language to use, what data representation to use, and (for C) what combination of GNU C extensions to use:
@samp{none}, @samp{reg}, @samp{jump}, @samp{asm_jump},
@samp{fast}, @samp{asm_fast}, @samp{hl}, @samp{hlc}, @samp{il}, @samp{csharp},
@samp{java} and
@samp{erlang}
(the default is system dependent).
@item What garbage collection strategy to use:
@samp{gc}, and @samp{agc} (the default is no garbage collection).
@item What kind of profiling to use:
@samp{prof},
@c @samp{proftime}, @samp{profcalls},
@samp{memprof}, and @samp{profdeep}
(the default is no profiling).
@item Whether to enable the trail:
@samp{tr} and @samp{trseg} (the default is no trailing).
@item Whether to use a single-precision representation of floating point values:
@samp{spf} (the default is to use double-precision floats).
@item Whether to use dynamically sized stacks that are composed of
small segments: @samp{stseg} (the default is to use fixed-size stacks).
@item What debugging features to enable:
@samp{debug} and @samp{decldebug} (the default is no debugging features).
@c XXX and @samp{ssdebug}
@item Whether to use a thread-safe version of the runtime environment:
@samp{par} (the default is a non-thread-safe environment).
@item Whether to include support for profiling of parallel programs:
@samp{threadscope} (the default is no support for profiling parallel programs).
@c See also the @samp{--profile-parallel-execution} runtime option.
@end table
The default grade is system-dependent; it is chosen at installation time
by @samp{configure}, the auto-configuration script, but can be overridden
with the environment variable @samp{MERCURY_DEFAULT_GRADE} if desired.
@vindex MERCURY_DEFAULT_GRADE
Depending on your particular installation, only a subset
of these possible grades will have been installed.
Attempting to use a grade which has not been installed
will result in an error at link time.
(The error message will typically be something like
@samp{ld: can't find library for -lmercury}.)
The tables below show the options that are selected by each base grade
and grade modifier; they are followed by descriptions of those options.
@table @asis
@item @var{Grade}
@var{Options implied}.
@findex --gcc-global-registers
@findex --no-gcc-global-registers
@findex --gcc-nonlocal-gotos
@findex --no-gcc-nonlocal-gotos
@findex --asm-labels
@findex --no-asm-labels
@findex --high-level-code
@findex --no-high-level-code
@findex --target
@findex --il
@findex --csharp
@findex --java
@findex --erlang
@findex --gc
@findex --profiling
@findex --memory-profiling
@findex --deep-profiling
@findex --use-trail
@findex --trail-segments
@findex --no-trail-segments
@findex --record-term-sizes-as-words
@findex --record-term-sizes-as-cells
@findex --single-prec-float
@findex --stack-segments
@item @samp{none}
@code{--target c --no-gcc-global-registers --no-gcc-nonlocal-gotos --no-asm-labels}.
@item @samp{reg}
@code{--target c --gcc-global-registers --no-gcc-nonlocal-gotos --no-asm-labels}.
@item @samp{jump}
@code{--target c --no-gcc-global-registers --gcc-nonlocal-gotos --no-asm-labels}.
@item @samp{fast}
@code{--target c --gcc-global-registers --gcc-nonlocal-gotos --no-asm-labels}.
@item @samp{asm_jump}
@code{--target c --no-gcc-global-registers --gcc-nonlocal-gotos --asm-labels}.
@item @samp{asm_fast}
@code{--target c --gcc-global-registers --gcc-nonlocal-gotos --asm-labels}.
@item @samp{hlc}
@code{--target c --high-level-code}.
@item @samp{hl}
@code{--target c --high-level-code --high-level-data}.
@item @samp{il}
@code{--target il --high-level-code --high-level-data}.
@item @samp{csharp}
@code{--target csharp --high-level-code --high-level-data}.
@item @samp{java}
@code{--target java --high-level-code --high-level-data}.
@item @samp{erlang}
@code{--target erlang}.
@item @samp{.gc}
@code{--gc boehm}.
@item @samp{.mps}
@code{--gc mps}.
@item @samp{.agc}
@code{--gc accurate}.
@item @samp{.prof}
@code{--profiling}.
@item @samp{.memprof}
@code{--memory-profiling}.
@item @samp{.profdeep}
@code{--deep-profiling}.
@c The following are undocumented because
@c they are basically useless... documenting
@c them would just confuse people.
@c
@c @item @samp{.profall}
@c @code{--profile-calls --profile-time --profile-memory}.
@c (not recommended because --profile-memory interferes with
@c --profile-time)
@c
@c @item @samp{.proftime}
@c @code{--profile-time}.
@c
@c @item @samp{.profcalls}
@c @code{--profile-calls}.
@c
@item @samp{.tr}
@code{--use-trail --no-trail-segments}.
@item @samp{.trseg}
@code{--use-trail --trail-segments}.
@item @samp{.tsw}
@code{--record-term-sizes-as-words}.
@item @samp{.tsc}
@code{--record-term-sizes-as-cells}.
@item @samp{.spf}
@code{--single-prec-float}.
@item @samp{.stseg}
@code{--stack-segments}.
@item @samp{.debug}
@code{--debug}.
@item @samp{.decldebug}
@code{--decl-debug}.
@c @item @samp{.ssdebug}
@c @code{--ss-debug}.
@item @samp{.par}
@code{--parallel}.
@item @samp{.threadscope}
@code{--threadscope}.
@end table
@end table
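As an illustrative example, the following command selects a high-level C
grade with conservative garbage collection by combining the @samp{hlc}
base grade with the @samp{.gc} grade modifier (the program name is
hypothetical, and the grade must of course be one that is installed):
@example
mmc --grade hlc.gc --make hello
@end example
For an Mmake-based build, the same effect can be had by setting
@samp{GRADEFLAGS = --grade hlc.gc}, as described above.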
@node Target options
@subsection Target options
@cindex Target options
@table @asis
@item @code{--target c} (grades: none, reg, jump, fast, asm_jump, asm_fast, hl, hlc)
@itemx @code{--target asm} (grades: hlc)
@itemx @code{--il}, @code{--target il} (grades: il)
@itemx @code{--csharp}, @code{--target csharp} (grades: csharp)
@itemx @code{--java}, @code{--target java} (grades: java)
@itemx @code{--erlang}, @code{--target erlang} (grades: erlang)
Specify the target language used for compilation: C, assembler, IL, C#, Java
or Erlang.
C means ANSI/ISO C, optionally with GNU C extensions (see below).
IL means the Intermediate Language of the .NET Common Language Runtime.
(IL is sometimes also known as ``CIL'' or ``MSIL''.)
Targets other than C and Erlang imply @samp{--high-level-code}.
@c @sp 1
@c @item @code{--il-only}
@c @findex --il-only
@c An abbreviation for @samp{--target il --target-code-only}.
@c Generate IL assembler code in @file{@var{module}.il}, but do not invoke
@c ilasm to produce IL object code.
@c
@c @sp 1
@c @item @code{--dotnet-library-version @var{version-number}}
@c @findex --dotnet-library-version
@c The version number for the mscorlib assembly distributed with the
@c Microsoft .NET SDK.
@c
@c @sp 1
@c @item @code{--no-support-ms-clr}
@c @findex --no-support-ms-clr
@c Don't use Microsoft CLR specific workarounds in the generated code.
@c
@c @sp 1
@c @item @code{--support-rotor-clr}
@c @findex --support-rotor-clr
@c Use ROTOR CLR specific workarounds in the generated code.
@sp 1
@item @code{--compile-to-c}
@itemx @code{--compile-to-C}
@findex --compile-to-c
An abbreviation for @samp{--target c --target-code-only}.
Generate C code in @file{@var{module}.c}, but do not invoke the
C compiler to generate object code.
(Examples appear after this table.)
@sp 1
@item @code{--csharp-only}
@findex --csharp-only
An abbreviation for @samp{--target csharp --target-code-only}.
Generate C# code in @file{@var{module}.cs}, but do not invoke
the C# compiler to produce CIL bytecode.
@sp 1
@item @code{--java-only}
@findex --java-only
An abbreviation for @samp{--target java --target-code-only}.
Generate Java code in @file{@var{module}.java}, but do not invoke
the Java compiler to produce Java bytecode.
@sp 1
@item @code{--erlang-only}
@findex --erlang-only
An abbreviation for @samp{--target erlang --target-code-only}.
Generate Erlang code in @file{@var{module}.erl}, but do not invoke
the Erlang compiler to produce Erlang bytecode.
@end table
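For instance, the following (illustrative) commands generate target code
for a module @file{hello.m} without invoking the target language
compiler:
@example
mmc --compile-to-c hello.m     # writes hello.c
mmc --java-only hello.m        # writes hello.java
@end example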
@node LLDS back-end compilation model options
@subsection LLDS back-end compilation model options
@cindex LLDS back-end compilation model options
@table @asis
@sp 1
@item @code{--gcc-global-registers} (grades: reg, fast, asm_fast)
@itemx @code{--no-gcc-global-registers} (grades: none, jump, asm_jump)
@findex --gcc-global-registers
@findex --no-gcc-global-registers
Specify whether or not to use GNU C's global register variables extension.
This option is ignored if the @samp{--high-level-code} option is enabled.
@sp 1
@item @code{--gcc-non-local-gotos} (grades: jump, fast, asm_jump, asm_fast)
@itemx @code{--no-gcc-non-local-gotos} (grades: none, reg)
@findex --gcc-non-local-gotos
@findex --no-gcc-non-local-gotos
Specify whether or not to use GNU C's ``labels as values'' extension.
This option is ignored if the @samp{--high-level-code} option is enabled.
@sp 1
@item @code{--asm-labels} (grades: asm_jump, asm_fast)
@itemx @code{--no-asm-labels} (grades: none, reg, jump, fast)
@findex --asm-labels
@findex --no-asm-labels
Specify whether or not to use GNU C's asm extensions
for inline assembler labels.
This option is ignored if the @samp{--high-level-code} option is enabled.
@sp 1
@item @code{--pic-reg} (grades: any grade containing `.pic_reg')
@findex --pic-reg
@findex -fpic
@cindex Position independent code
@cindex PIC (position independent code)
@cindex Shared libraries
[For Unix with Intel x86 architecture only.]
Select a register usage convention that is compatible
with position-independent code (gcc's `-fpic' option).
This is necessary when using shared libraries on Intel x86 systems
running Unix. On other systems it has no effect.
This option is also ignored if the @samp{--high-level-code} option is enabled.
@sp 1
@item @code{--stack-segments} (grades: any grade containing `.stseg')
@findex --stack-segments
@cindex Stack segments
Specify whether to use dynamically sized stacks that are composed of small
segments.
This can help to avoid stack exhaustion at the cost of increased
execution time.
This option is ignored if the @samp{--high-level-code} option is enabled.
@end table
@node MLDS back-end compilation model options
@subsection MLDS back-end compilation model options
@cindex MLDS back-end compilation model options
@table @asis
@item @code{-H}, @code{--high-level-code} (grades: hl, hlc, il, csharp, java)
@findex -H
@findex --high-level-code
Use an alternative back-end that generates high-level code
rather than the very low-level code that is generated by our
original back-end.
@item @code{--high-level-data} (grades: hl, il, csharp, java)
@findex --high-level-data
Use an alternative, higher-level data representation that uses structs
or classes, rather than treating all objects as arrays.
@end table
@node Optional features compilation model options
@subsection Optional features compilation model options
@cindex Optional features compilation model options
@table @asis
@sp 1
@item @code{--debug} (grades: any grade containing @samp{.debug})
@findex --debug
@cindex Debugging
Enables the inclusion in the executable of code and data structures
that allow the program to be debugged with @samp{mdb} (@pxref{Debugging}).
This option is only supported by the low-level C back-end.
@sp 1
@item @code{--decl-debug} (grades: any grade containing @samp{.decldebug})
@findex --decl-debug
@cindex Debugging
Enables the inclusion in the executable of code and data structures
that allow subterm dependency tracking in the declarative debugger.
This option is only supported by the low-level C back-end.
@c @sp 1
@c @item @code{--ss-debug} (grades: any grade containing @samp{.ssdebug})
@c @findex --ss-debug
@c @cindex Debugging
@c Enables the source-to-source debugging transform.
@sp 1
@item @code{--profiling}, @code{--time-profiling} (grades: any grade containing @samp{.prof})
@cindex Profiling
@cindex Time profiling
@findex --profiling
@findex --time-profiling
Enable time profiling. Insert profiling hooks in the
generated code, and also output some profiling
information (the static call graph) to the file
@samp{@var{module}.prof}. @xref{Profiling}.
This option is only supported by the C back-ends.
@sp 1
@item @code{--memory-profiling} (grades: any grade containing @samp{.memprof})
@findex --memory-profiling
@cindex Profiling
@cindex Memory profiling
@cindex Heap profiling
@cindex Allocation profiling
Enable memory profiling. Insert memory profiling hooks in the
generated code, and also output some profiling
information (the static call graph) to the file
@samp{@var{module}.prof}.
@xref{Using mprof for profiling memory allocation}.
@xref{Using mprof for profiling memory retention}.
This option is only supported by the C back-ends.
@sp 1
@item @code{--deep-profiling} (grades: any grade containing @samp{.profdeep})
@findex --deep-profiling
@cindex Deep profiling
Enable deep profiling by inserting the appropriate hooks in the generated code.
This option is only supported by the low-level C back-end.
@sp 1
@item @code{--no-coverage-profiling}
@findex --no-coverage-profiling
@cindex Coverage Profiling
Disable coverage profiling.
Coverage profiling is part of deep profiling and is used only in deep
profiling grades; it inserts coverage points, which are used to measure
how execution moves through a procedure.
@sp 1
@item @code{--profile-for-feedback}
@findex --profile-for-implicit-parallelism
@findex --profile-for-feedback
@cindex Coverage profiling
@cindex Profiler feedback
@cindex Automatic parallelism
Enable options in the deep profiler that generate profiles suitable for use
with profile-directed feedback analysis.
Currently, the only such feedback analysis is the one used for automatic
parallelism.
This option only affects programs built in deep profiling grades.
@ignore
Coverage profiling is experimental, some options below tune coverage
profiling however they are intended for developers.
@sp 1
Switches to effect coverage profiling (part of deep profiling). they enable
different types of coverage points.
@sp 1
@item @code{--no-profile-deep-coverage-after-goals}
@findex --no-profile-deep-coverage-after-goals
Disable insertion of coverage points after goals where coverage is unknown.
@sp 1
@item @code{--no-profile-deep-coverage-branch-ite}
@findex --no-profile-deep-coverage-branch-if
Disable coverage points at the beginning of then and else branches.
@sp 1
@item @code{--no-profile-deep-coverage-branch-switch}
@findex --no-profile-deep-coverage-branch-switch
Disable coverage points at the beginning of switch branches.
@sp 1
@item @code{--no-profile-deep-coverage-branch-disj}
@findex --no-profile-deep-coverage-branch-disj
Disable coverage points at the beginning of disjunction branches.
@sp 1
Switches to tune the coverage profiling pass, useful for
debugging.
@sp 1
@item @code{--no-profile-deep-coverage-use-portcounts}
@findex --no-profile-deep-coverage-use-portcounts
Turn off usage of port counts in the deep profiler to provide some coverage
information.
@sp
@item @code{--no-profile-deep-coverage-use-trivial}
@findex --no-profile-deep-coverage-use-trivial
Turn off usage of trivial goal information
@end ignore
@ignore
The following are basically useless, hence undocumented.
@sp 1
@item @code{--profile-calls} (grades: any grade containing @samp{.profcalls})
@findex --profile-calls
Similar to @samp{--profiling}, except that this option only gathers
call counts, not timing information. Useful on systems where time
profiling is not supported --- but not as useful as @samp{--memory-profiling}.
@sp 1
@item @code{--profile-time} (grades: any grade containing @samp{.proftime})
@findex --profile-time
Similar to @samp{--profiling}, except that this option only gathers
timing information, not call counts. For the results to be useful,
call counts for an identical run of your program need to be gathered
using @samp{--profiling} or @samp{--profile-calls}.
XXX this doesn't work, because the code addresses change.
The only advantage of using @samp{--profile-time} and @samp{--profile-calls}
to gather timing information and call counts in separate runs,
rather than just using @samp{--profiling} to gather them both at once,
is that the former method can give slightly more accurate timing results.
because with the latter method the code inserted to record call counts
has a small effect on the execution speed.
@end ignore
@sp 1
@item @code{--record-term-sizes-as-words} (grades: any grade containing @samp{.tsw})
@findex --record-term-sizes-as-words
Record the sizes of terms, using one word as the unit of memory.
@sp 1
@item @code{--record-term-sizes-as-cells} (grades: any grade containing @samp{.tsc})
@findex --record-term-sizes-as-cells
Record the sizes of terms, using one cell as the unit of memory.
@sp 1
@item @code{--experimental-complexity @var{filename}}
@findex --experimental-complexity
Enable experimental complexity analysis
for the predicates listed in the given file.
This option is supported for the C back-end, with @samp{--no-high-level-code}.
For now, this option itself is for developers only.
@sp 1
@item @code{--gc @{none, boehm, mps, accurate, automatic@}}
@itemx @code{--garbage-collection @{none, boehm, mps, accurate, automatic@}}
@cindex Garbage collection
@cindex Conservative garbage collection
@cindex Boehm (et al) conservative garbage collector
@cindex MPS conservative garbage collector
@cindex Memory Pool System conservative garbage collector
@cindex Accurate garbage collection
@cindex Automatic garbage collection
@findex --gc
@findex --garbage-collection
Specify which method of garbage collection to use.
Grades containing @samp{csharp}, @samp{java}, @samp{il} or @samp{erlang} use
@samp{--gc automatic};
grades containing @samp{.gc} use @samp{--gc boehm};
grades containing @samp{.mps} use @samp{--gc mps};
all other grades use @samp{--gc none}.
@samp{conservative} or @samp{boehm} is Hans Boehm et al's conservative
garbage collector.
@samp{accurate} is our own type-accurate copying collector.
It requires @samp{--high-level-code}.
@samp{mps} is another conservative collector based on Ravenbrook Limited's
MPS (Memory Pool System) kit.
@samp{automatic} means the target language provides it.
This is the case for the IL, C#, Java and Erlang back-ends, which always use
the underlying implementation's garbage collector.
@sp 1
@item @code{--use-trail} (grades: any grade containing @samp{.tr} or @samp{.trseg})
@findex --use-trail
@cindex Trailing
@cindex Constraint solving
@cindex Backtrackable destructive update
@cindex Destructive update, backtrackable
Enable use of a trail.
This is necessary for interfacing with constraint solvers,
or for backtrackable destructive update.
This option is only supported by the C back-ends.
@item @code{--trail-segments} (grades: any grade containing @samp{.trseg})
@findex --trail-segments
@cindex Trailing
@cindex Constraint solving
@cindex Backtrackable destructive update
@cindex Destructive update, backtrackable
Enable the use of a dynamically sized trail that is composed of small segments.
This can help to avoid trail exhaustion at the cost of increased execution
time.
This option is only supported by the C back-ends.
@sp 1
@item @code{--parallel}
@findex --parallel
@cindex Parallel execution
In low-level C grades this enables support for parallel execution.
Parallel execution can be achieved by using either the parallel conjunction
operator or the concurrency support in the @samp{thread} module of the
standard library.
@xref{Goals, parallel conjunction, Goals, mercury_ref, The Mercury
Language Reference Manual}, and
@ref{thread, the thread module, thread, mercury_library, The Mercury
Library Reference Manual}.
In high-level C grades this option enables support for concurrency, which is
accessible via the @samp{thread} module in the standard library.
The runtime uses POSIX threads to achieve this, and therefore it may also
support parallel execution of threads.
The Java, C#, IL and Erlang grades support concurrency without this option.
Parallel execution may also be available depending on the target's runtime.
@sp 1
@item @code{--threadscope}
@findex --threadscope
@cindex Threadscope profiling
Enable support for threadscope profiling.
This enables runtime support for profiling the parallel execution of
programs (@pxref{Using threadscope}).
@sp 1
@item @code{--maybe-thread-safe @{yes, no@}}
@findex --maybe-thread-safe
Specify how to treat the @samp{maybe_thread_safe} foreign code
attribute. @samp{yes} means that a foreign procedure with the
@samp{maybe_thread_safe} attribute is treated as though it has a
@samp{thread_safe} attribute. @samp{no} means that the foreign
procedure is treated as though it has a @samp{not_thread_safe}
attribute. The default is @samp{no}.
(An example appears at the end of this subsection.)
@sp 1
@item @code{--single-prec-float} (grades: any grade containing @samp{.spf})
@findex --single-prec-float
@cindex Data representation
Use single-precision floats so that, on 32-bit machines,
floating point values don't need to be boxed. Double-precision
floats are used by default.
This option is not yet supported for the IL, C# or Java back-ends.
This option will not be supported for the Erlang back-end.
@c RBMM is undocumented since it is still experimental.
@c @sp 1
@c @item @code{--use-regions} (grades: any grade containing @samp{.rbmm})
@c Enable support for region-based memory management."
@end table
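As a (hypothetical) illustration of the @samp{maybe_thread_safe}
attribute mentioned above, the foreign procedure below is declared
@samp{maybe_thread_safe}; whether it is treated as @samp{thread_safe} or
@samp{not_thread_safe} is then controlled by the
@samp{--maybe-thread-safe} option:
@example
:- pred twice(int::in, int::out) is det.

:- pragma foreign_proc("C",
    twice(X::in, Y::out),
    [will_not_call_mercury, promise_pure, maybe_thread_safe],
"
    Y = 2 * X;
").
@end example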
@node Developer compilation model options
@subsection Developer compilation model options
@cindex Cross-compiling
Of the options listed below, the @samp{--num-tag-bits} option
may be useful for cross-compilation, but apart from that
these options are all experimental and are intended for
use by developers of the Mercury implementation rather than by
ordinary Mercury programmers.
@table @asis
@sp 1
@item @code{--tags @{none, low, high@}}
@findex --tags
@cindex Tags
@cindex Data representation
(This option is not intended for general use.)@*
Specify whether to use the low bits or the high bits of
each word as tag bits (default: low).
@sp 1
@item @code{--num-tag-bits @var{n}}
@findex --num-tag-bits
@cindex Tags
@cindex Data representation
(This option is not intended for general use.)@*
Use @var{n} tag bits. This option is required if you specify
@samp{--tags high}.
With @samp{--tags low}, the default number of tag bits to use
is determined by the auto-configuration script.
@sp 1
@item @code{--num-reserved-addresses @var{n}}
@findex --num-reserved-addresses
@cindex Reserved addresses
@cindex Addresses, reserved
@cindex Data representation
(This option is not intended for general use.)@*
Treat the integer values from 0 up to @var{n} @minus{} 1 as reserved
addresses that can be used to represent nullary constructors
(constants) of discriminated union types.
@sp 1
@item @code{--num-reserved-objects @var{n}}
@findex --num-reserved-objects
@cindex Reserved addresses
@cindex Reserved objects
@cindex Addresses, reserved
@cindex Objects, reserved
@cindex Data representation
(This option is not intended for general use.)@*
Allocate up to @var{n} @minus{} 1 global objects for representing nullary
constructors (constants) of discriminated union types.
Note that reserved objects will only be used if reserved addresses
(see @code{--num-reserved-addresses}) are not available, since the
latter are more efficient.
@sp 1
@item @code{--no-type-layout}
@findex --no-type-layout
@findex --type-layout
(This option is not intended for general use.)@*
Don't output base_type_layout structures or references to them.
This option will generate smaller executables, but will not allow the
use of code that uses the layout information (e.g.@: @samp{functor},
@samp{arg}). Using such code will result in undefined behaviour at
runtime. The C code also needs to be compiled with
@samp{-DNO_TYPE_LAYOUT}.
@end table
@node Code generation options
@section Code generation options
@cindex Code generation options
@table @code
@item --low-level-debug
@findex --low-level-debug
Enables various low-level debugging facilities that were used in the
distant past to debug the Mercury compiler's low-level code generation.
This option is not likely to be useful to anyone except the Mercury
implementors. It causes the generated code to become very big and very
inefficient, and it slows down compilation considerably.
@sp 1
@item --pic
@findex --pic
@cindex Position independent code
@cindex PIC (position independent code)
Generate position independent code.
This option is only used by the @samp{--target asm} back-end.
The generated assembler will be written to @samp{@var{module}.pic_s}
rather than to @samp{@var{module}.s}.
@sp 1
@item --no-trad-passes
@findex --no-trad-passes
@findex --trad-passes
The default @samp{--trad-passes} completely processes each predicate
before going on to the next predicate.
This option tells the compiler
to complete each phase of code generation on all predicates
before going on to the next phase on all predicates.
@c @sp 1
@c @item --parallel-liveness
@c @item --no-parallel-liveness
@c @item --parallel-liveness
@c Use multiple threads when computing liveness.
@c At the moment this option implies @samp{--no-trad-passes},
@c and requires the compiler to be built in a
@c low-level parallel grade and running with multiple engines.
@c @sp 1
@c @item --parallel-code-gen
@c @item --no-parallel-code-gen
@c @item --parallel-code-gen
@c Use multiple threads when generating code.
@c At the moment this option implies @samp{--no-trad-passes},
@c and requires the compiler to be built in a
@c low-level parallel grade and running with multiple engines.
@c @sp 1
@c @item --no-polymorphism
@c Don't handle polymorphic types.
@c (Generates slightly more efficient code, but stops
@c polymorphism from working except in special cases.)
@sp 1
@item --no-reclaim-heap-on-nondet-failure
@findex --no-reclaim-heap-on-nondet-failure
@findex --reclaim-heap-on-nondet-failure
Don't reclaim heap on backtracking in nondet code.
@sp 1
@item --no-reclaim-heap-on-semidet-failure
@findex --no-reclaim-heap-on-semidet-failure
@findex --reclaim-heap-on-semidet-failure
Don't reclaim heap on backtracking in semidet code.
@sp 1
@item --no-reclaim-heap-on-failure
@findex --no-reclaim-heap-on-failure
@findex --reclaim-heap-on-failure
Combines the effect of the two options above.
@sp 1
@item --max-jump-table-size @var{n}
@findex --max-jump-table-size @var{n}
The maximum number of entries a jump table can have.
The special value 0 indicates the table size is unlimited.
This option can be useful to avoid exceeding fixed limits
imposed by some C compilers.
@c @sp 1
@c @item --compare-specialization @var{n}
@c @findex --compare-specialization @var{n}
@c Generate quadratic instead of linear compare predicates for types
@c with up to n function symbols. Higher values of n lead to faster
@c but also bigger compare predicates.
@c @sp 1
@c @item --no-should-pretest-equality
@c @findex --no-should-pretest-equality
@c @findex --should-pretest-equality
@c If specified, do not add a test for the two values being equal as words
@c to the starts of potentially expensive unify and compare predicates.
@sp 1
@item --fact-table-max-array-size @var{size}
@findex --fact-table-max-array-size @var{size}
@cindex Fact tables
@cindex pragma fact_table
Specify the maximum number of elements in a single
@samp{pragma fact_table} data array (default: 1024).
The data for fact tables is placed into multiple C arrays, each with a
maximum size given by this option. The reason for doing this is that
most C compilers have trouble compiling very large arrays.
@sp 1
@item --fact-table-hash-percent-full @var{percentage}
@findex --fact-table-hash-percent-full
@cindex Fact tables
@cindex pragma fact_table
Specify how full the @samp{pragma fact_table} hash tables should be
allowed to get. Given as an integer percentage (valid range: 1 to 100,
default: 90). A lower value means that the compiler will use
larger tables, but there will generally be fewer hash collisions,
so lookups may be faster.
@end table
@menu
* Code generation target options::
@end menu
@node Code generation target options
@subsection Code generation target options
@cindex Target options
@cindex Cross-compiling
The following options allow the Mercury compiler to optimize the generated
C code based on the characteristics of the expected target architecture.
The default values of these options will be whatever is appropriate
for the host architecture that the Mercury compiler was installed on,
so normally there is no need to set these options manually. They might
come in handy if you are cross-compiling. But even when cross-compiling,
it's probably not worth bothering to set these unless efficiency is
absolutely paramount.
@table @asis
@item @code{--have-delay-slot}
@findex --have-delay-slot
(This option is not intended for general use.)@*
Assume that branch instructions have a delay slot.
@sp 1
@item @code{--num-real-r-regs @var{n}}
@findex --num-real-r-regs
(This option is not intended for general use.)@*
Assume r1 up to r@var{n} are real general purpose registers.
@sp 1
@item @code{--num-real-f-regs @var{n}}
@findex --num-real-f-regs
(This option is not intended for general use.)@*
Assume f1 up to f@var{n} are real floating point registers.
@sp 1
@item @code{--num-real-r-temps @var{n}}
@findex --num-real-r-temps
(This option is not intended for general use.)@*
Assume that @var{n} non-float temporaries will fit into real machine registers.
@sp 1
@item @code{--num-real-f-temps @var{n}}
@findex --num-real-f-temps
(This option is not intended for general use.)@*
Assume that @var{n} float temporaries will fit into real machine registers.
@end table
@node Optimization options
@section Optimization options
@cindex Optimization options
@menu
* Overall optimization options::
* High-level (HLDS -> HLDS) optimization options::
* MLDS backend (MLDS -> MLDS) optimization options::
* Medium-level (HLDS -> LLDS) optimization options::
* Erlang (HLDS -> ELDS) optimization options::
* Low-level (LLDS -> LLDS) optimization options::
* Output-level (LLDS -> C) optimization options::
@end menu
@node Overall optimization options
@subsection Overall optimization options
@table @code
@item -O @var{n}
@itemx --opt-level @var{n}
@itemx --optimization-level @var{n}
@findex -O
@findex --opt-level
@findex --optimization-level
@cindex Optimization levels
@cindex Compilation speed
@cindex Intermodule optimization
@cindex Cross-module optimization
Set optimization level to @var{n}.
Optimization levels range from -1 to 6.
Optimization level -1 disables all optimizations,
while optimization level 6 enables all optimizations
except for the cross-module optimizations listed below.
Some experimental optimizations (for example constraint
propagation) are not enabled at any optimization level.
In general, there is a trade-off between compilation speed and the
speed of the generated code. When developing, you should normally use
optimization level 0, which aims to minimize compilation time. It
enables only those optimizations that in fact usually @emph{reduce}
compilation time. The default optimization level is level 2, which
delivers reasonably good optimization in reasonable time. Optimization
levels higher than that give better optimization, but take longer,
and are subject to the law of diminishing returns. The difference in
the quality of the generated code between optimization level 5 and
optimization level 6 is very small, but using level 6 may increase
compilation time and memory requirements dramatically.
Note that if you want the compiler to perform cross-module
optimizations, then you must enable them separately;
the cross-module optimizations are not enabled by any @samp{-O}
level, because they affect the compilation process in ways
that require special treatment by @samp{mmake}.
(See the example at the end of this subsection.)
@sp 1
@item --opt-space
@itemx --optimize-space
@findex --opt-space
@findex --optimize-space
@findex Optimizing space
@findex Optimizing code size
Turn on optimizations that reduce code size
and turn off optimizations that significantly increase code size.
@end table
@table @code
@item --intermod-opt
@itemx --intermodule-optimization
@findex --intermod-opt
@findex --intermodule-optimization
@cindex Intermodule optimization
@cindex Cross-module optimization
@cindex Inlining
Perform inlining and higher-order specialization of the code for
predicates or functions imported from other modules.
@sp 1
@item --trans-intermod-opt
@itemx --transitive-intermodule-optimization
@findex --trans-intermod-opt
@findex --transitive-intermodule-optimization
@cindex Transitive intermodule optimization
@cindex Intermodule optimization, transitive
@cindex Cross-module optimization, transitive
@cindex Termination analysis
Use the information stored in @file{@var{module}.trans_opt} files
to make intermodule optimizations. The @file{@var{module}.trans_opt} files
are different from the @file{@var{module}.opt} files in that @samp{.trans_opt}
files may depend on other @samp{.trans_opt} files, whereas each
@samp{.opt} file may only depend on the corresponding @samp{.m} file.
Note that @samp{--transitive-intermodule-optimization} does not
work with @samp{mmc --make}.
@sp 1
@item --no-read-opt-files-transitively
@findex --no-read-opt-files-transitively
Only read the inter-module optimization information
for directly imported modules, not the transitive
closure of the imports.
@item --use-opt-files
@findex --use-opt-files
Perform inter-module optimization using any @samp{.opt} files which are
already built, e.g.@: those for the standard library, but do not build any
others.
@item --use-trans-opt-files
@findex --use-trans-opt-files
Perform inter-module optimization using any @samp{.trans_opt} files which are
already built, e.g.@: those for the standard library, but do not build any
others.
@item --intermodule-analysis
@findex --intermodule-analysis
Perform analyses such as termination analysis and
unused argument elimination across module boundaries.
This option is not yet fully implemented.
@item --analysis-repeat @var{n}
@findex --analysis-repeat
The maximum number of times to repeat analyses of suboptimal modules with
@samp{--intermodule-analysis} (default: 0). This option only works with
@samp{mmc --make}.
@c This feature is still experimental.
@c @item --analysis-file-cache
@c @findex --analysis-file-cache
@c Enable caching of parsed analysis files. This may
@c improve compile times with @samp{--intermodule-analysis}.
@end table
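As an illustration, an Mmakefile for a release build might combine a
higher optimization level with cross-module optimization as follows
(the particular settings are only an example):
@example
MCFLAGS = -O5 --intermodule-optimization
@end example
With @samp{mmc --make}, the same options can instead be passed directly
on the command line.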
@node High-level (HLDS -> HLDS) optimization options
@subsection High-level (HLDS -> HLDS) optimization options
@cindex HLDS
These optimizations are high-level transformations on our HLDS (high-level
data structure).
@table @code
@c @item --no-allow-inlining
@c This option is meant to used only inside the compiler;
@c its effect is the same as plain --no-inlining
@item --no-inlining
@findex --no-inlining
@findex --inlining
@cindex Inlining
Disable all forms of inlining.
@item --no-inline-simple
@findex --no-inline-simple
@findex --inline-simple
Disable the inlining of simple procedures.
@sp 1
@item --no-inline-builtins
@findex --no-inline-builtins
Generate builtins (e.g.@: arithmetic operators) as calls to
out-of-line procedures. This is done by default when debugging,
as without this option the execution of builtins is not traced.
@item --no-inline-single-use
@findex --no-inline-single-use
@findex --inline-single-use
Disable the inlining of procedures called only once.
@item --inline-compound-threshold @var{threshold}
@findex --inline-compound-threshold
Inline a procedure if its size
(measured roughly in terms of the number of connectives in its internal form),
multiplied by the number of times it is called,
is below the given threshold.
@item --inline-simple-threshold @var{threshold}
@findex --inline-simple-threshold
Inline a procedure if its size is less than the given threshold.
@item --intermod-inline-simple-threshold @var{threshold}
@findex --intermod-inline-simple-threshold
Similar to @samp{--inline-simple-threshold}, except used to determine which
predicates should be included in @samp{.opt} files. Note that changing this
between writing the @samp{.opt} file and compiling to C may cause link errors,
and too high a value may result in reduced performance.
@item --inline-vars-threshold @var{threshold}
@findex --inline-vars-threshold
Don't inline a call if it would result in a procedure
containing more than @var{threshold} variables. Procedures
containing large numbers of variables can cause
slow compilation.
@item --loop-invariants
@findex --loop-invariants
Optimize loop invariants by moving computations that yield the same result
on every iteration of a loop outside the loop, so that they are calculated
only once. (This is a separate optimization from
@samp{--optimize-rl-invariants}.)
@sp 1
@item --no-common-struct
@findex --no-common-struct
@findex --common-struct
Disable optimization of common term structures.
@c @sp 1
@c @item --no-common-goal
@c @findex --no-common-goal
@c @findex --common-goal
@c Disable optimization of common goals.
@c At the moment this optimization
@c detects only common deconstruction unifications.
@c Disabling this optimization reduces the class of predicates
@c that the compiler considers to be deterministic.
@item --constraint-propagation
@findex --constraint-propagation
Enable the constraint propagation transformation,
which attempts to transform the code so that goals
which can fail are executed as early as possible.
@item --local-constraint-propagation
@findex --local-constraint-propagation
Enable the constraint propagation transformation,
but only rearrange goals within each procedure.
Specialized versions of procedures will not be created.
@c @sp 1
@c @item --prev-code
@c @findex --prev-code
@c Migrate into the start of branched goals.
@sp 1
@item --no-follow-code
@findex --no-follow-code
@findex --follow-code
Don't migrate builtin goals into branched goals.
@sp 1
@item --optimize-unused-args
@findex --optimize-unused-args
@cindex Unused arguments
Remove unused predicate arguments. This allows the compiler to
generate more efficient code for polymorphic predicates.
@sp 1
@item --intermod-unused-args
@findex --intermod-unused-args
@cindex Unused arguments
Perform unused argument removal across module boundaries.
This option implies @samp{--optimize-unused-args} and
@samp{--intermodule-optimization}.
@sp 1
@item --unneeded-code
@findex --unneeded-code
@cindex Unused outputs
Remove goals from computation paths where their outputs are not needed,
provided the language semantics options allow the deletion or movement
of the goal.
@sp 1
@item --unneeded-code-copy-limit @var{limit}
@findex --unneeded-code-copy-limit
Gives the maximum number of places to which a goal may be copied
when removing it from computation paths on which its outputs are not needed.
A value of zero forbids goal movement and allows only goal deletion;
a value of one prevents any increase in the size of the code.
@sp 1
@item --optimize-higher-order
@findex --optimize-higher-order
@cindex Higher-order specialization
@cindex Specialization of higher-order calls
Specialize calls to higher-order predicates where
the higher-order arguments are known.
@sp 1
@item --type-specialization
@findex --type-specialization
@cindex Type specialization
Specialize calls to polymorphic predicates where
the polymorphic types are known.
@sp 1
@item --user-guided-type-specialization
@findex --user-guided-type-specialization
@cindex Type specialization, user guided
Enable specialization of polymorphic predicates for which
there are `:- pragma type_spec' declarations.
See the ``Type specialization'' section in the ``Pragmas''
chapter of the Mercury Language Reference Manual for more details.
(A small example appears at the end of this subsection.)
@sp 1
@item --higher-order-size-limit @var{limit}
@findex --higher-order-size-limit
Set the maximum goal size of specialized versions created by
@samp{--optimize-higher-order} and @samp{--type-specialization}.
Goal size is measured as the number of calls, unifications
and branched goals.
@sp 1
@item --higher-order-arg-limit @var{limit}
@findex --higher-order-arg-limit
Set the maximum size of higher-order arguments to
be specialized by @samp{--optimize-higher-order} and
@samp{--type-specialization}.
@sp 1
@item --optimize-constant-propagation
@findex --optimize-constant-propagation
Evaluate constant expressions at compile time.
@sp 1
@item --introduce-accumulators
@findex --introduce-accumulators
@cindex Accumulator introduction
@cindex Tail recursion optimization
@cindex Last call optimization
Attempt to introduce accumulating variables into
procedures, so as to make the procedure tail recursive.
@sp 1
@item --optimize-constructor-last-call
@findex --optimize-constructor-last-call
@cindex Tail recursion optimization
@cindex Last call optimization
Enable the optimization of ``last'' calls that are followed by
constructor application.
@c @sp 1
@c @item --optimize-constructor-last-call-null
@c @findex --optimize-constructor-last-call-null
@c When --optimize-constructor-last-call is enabled,
@c put NULL in uninitialized fields (to prevent the garbage collector
@c from looking at and following a random bit pattern).
@sp 1
@item --optimize-dead-procs
@findex --optimize-dead-procs
@cindex Dead procedure elimination
@cindex Dead predicate elimination
@cindex Dead function elimination
@cindex Unused procedure elimination
Enable dead procedure elimination.
@sp 1
@item --excess-assign
@findex --excess-assign
Remove excess assignment unifications.
@sp 1
@item --no-optimize-format-calls
@findex --no-optimize-format-calls
Do not attempt to interpret the format string in calls to
string.format and related predicates at compile time;
always leave this to be done at runtime.
@sp 1
@item --optimize-duplicate-calls
@findex --optimize-duplicate-calls
@cindex Duplicate call optimization
Optimize away multiple calls to a predicate with the same input arguments.
@sp 1
@item --delay-constructs
@findex --delay-constructs
Reorder goals to move construction unifications after
primitive goals that can fail.
@sp 1
@item --optimize-saved-vars
@findex --optimize-saved-vars
Minimize the number of variables that have to be saved across calls.
@sp 1
@item --deforestation
@findex --deforestation
Enable deforestation. Deforestation is a program transformation whose aim
is to avoid the construction of intermediate data structures and to avoid
repeated traversals over data structures within a conjunction.
@sp 1
@item --deforestation-depth-limit @var{limit}
@findex --deforestation-depth-limit
Specify a depth limit to prevent infinite loops in the deforestation algorithm.
A value of -1 specifies no depth limit. The default is 4.
@sp 1
@item --deforestation-vars-threshold @var{threshold}
@findex --deforestation-vars-threshold
Specify a rough limit on the number of variables
in a procedure created by deforestation.
A value of -1 specifies no limit. The default is 200.
@sp 1
@item --deforestation-size-threshold @var{threshold}
@findex --deforestation-size-threshold
Specify a rough limit on the size of a goal
to be optimized by deforestation.
A value of -1 specifies no limit. The default is 15.
@sp 1
@item --analyse-exceptions
@findex --analyse-exceptions
Try to identify those procedures that cannot throw an exception.
This information can be used by some optimization passes.
@c XXX The documentation `--analyse-closures' can be uncommented
@c when we actually have something that makes use of it.
@c @sp1
@c @item --analyse-closures
@c @findex --analyse-closures
@c Enable closure analysis. Try to identify the possible
@c values that higher-order valued variables can take.
@c Some optimizations can make use of this information.
@sp 1
@item --analyse-trail-usage
@findex --analyse-trail-usage
Enable trail usage analysis.
Identify those procedures that will not modify the trail.
This information is used to reduce the overhead of trailing.
@sp 1
@item --analyse-mm-tabling
@findex --analyse-mm-tabling
Identify those goals that do not call procedures
that are evaluated using minimal model tabling.
This information is used to reduce the overhead of minimal model tabling.
@c @sp 1
@c @item --untuple
@c @findex --untuple
@c Expand out procedure arguments when the argument type
@c is a tuple or a type with exactly one functor.
@c Note: this is almost always a pessimization.
@c @sp 1
@c @item --tuple
@c @findex --tuple
@c Try to find opportunities for procedures to pass some
@c arguments to each other as a tuple rather than as
@c individual arguments. It requires the option
@c @samp{--tuple-trace-counts-file}, and can be tuned with
@c the options @samp{--osv-cvload-cost}, @samp{--osv-cvstore-cost},
@c @samp{--osv-fvload-cost}, @samp{--osv-fvstore-cost},
@c @samp{--tuple-costs-ratio} and @samp{--tuple-min-args}.
@c Note: so far this has mostly a detrimental effect.
@c @sp 1
@c @item --tuple-trace-counts-file
@c @findex --tuple-trace-counts-file
@c Supply a trace counts summary file for the tupling
@c transformation. The summary should be made from a sample
@c run of the program you are compiling, compiled without
@c optimizations.
@c @sp 1
@c @item --tuple-costs-ratio
@c @findex --tuple-costs-ratio
@c A value of 110 for this parameter means the tupling
@c transformation will transform a procedure if it thinks
@c that procedure would be 10% worse, on average, than
@c whatever transformed version of the procedure it has in
@c mind. The default is 100.
@c @sp 1
@c @item --tuple-min-args
@c @findex --tuple-min-args
@c The minimum number of input arguments that the tupling
@c transformation will consider passing together as a
@c tuple. This is mostly to speed up the compilation
@c process by not pursuing (presumably) unfruitful searches.
@c @sp 1
@c @item --always-specialize-in-dep-par-conjs
@c @findex --always-specialize-in-dep-par-conjs
@c When the transformation for handling dependent parallel conjunctions
@c adds waits and/or signals around a call, create a specialized
@c version of the called procedure, even if this is not profitable.
@end table
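For instance, a (hypothetical) declaration such as the following asks for
a version of a polymorphic predicate specialized to integer keys;
@samp{--user-guided-type-specialization} controls whether such
declarations are acted upon (this sketch assumes the @samp{assoc_list}
and @samp{int} modules are imported):
@example
:- pred lookup_key(assoc_list(K, V)::in, K::in, V::out) is semidet.
:- pragma type_spec(lookup_key/3, K = int).
@end example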
@node MLDS backend (MLDS -> MLDS) optimization options
@subsection MLDS backend (MLDS -> MLDS) optimization options
@cindex MLDS
These optimizations are applied to the medium level
intermediate code.
@table @code
@item --no-mlds-optimize
@findex --no-mlds-optimize
@findex --mlds-optimize
Disable the MLDS -> MLDS optimization passes.
@item --no-mlds-peephole
@findex --no-mlds-peephole
@findex --mlds-peephole
Do not perform peephole optimization of the MLDS.
@sp 1
@item --no-optimize-tailcalls
@findex --no-optimize-tailcalls
@findex --optimize-tailcalls
Treat tailcalls as ordinary calls, rather than optimizing them
by turning self-tailcalls into loops.
@item --no-optimize-initializations
@findex --no-optimize-initializations
@findex --optimize-initializations
Leave initializations of local variables as assignment statements,
rather than converting such assignment statements into initializers.
@item --eliminate-local-variables
@findex --no-eliminate-local-variables
@findex --eliminate-local-variables
Eliminate local variables with known values, where possible,
by replacing occurrences of such variables with their values.
@item --no-generate-trail-ops-inline
@findex --no-generate-trail-ops-inline
@findex --generate-trail-ops-inline
Do not generate trailing operations inline,
but instead insert calls to the versions of these operations
in the standard library.
@end table
@node Erlang (HLDS -> ELDS) optimization options
@subsection Erlang (HLDS -> ELDS) optimization options
@cindex HLDS
@cindex ELDS
@table @code
@item --erlang-switch-on-strings-as-atoms
@findex --no-erlang-switch-on-strings-as-atoms
@findex --erlang-switch-on-strings-as-atoms
Enable a workaround for slow HiPE compilation of large string
switches by converting the string to an atom at runtime and
switching on atoms. Do not enable this option if the string being
switched on could be longer than 255 characters at runtime.
@end table
@node Medium-level (HLDS -> LLDS) optimization options
@subsection Medium-level (HLDS -> LLDS) optimization options
@cindex HLDS
@cindex LLDS
These optimizations are applied during the process of generating
low-level intermediate code from our high-level data structure.
@table @code
@item --no-static-ground-terms
@findex --no-static-ground-terms
@findex --static-ground-terms
Disable the optimization of constructing constant ground terms
at compile time and storing them as static constants.
Note that auxiliary data structures created by the compiler
for purposes such as debugging
will still be created as static constants.
@sp 1
@item --no-smart-indexing
@findex --no-smart-indexing
@findex --smart-indexing
Generate switches as simple if-then-else chains;
disable string hashing and integer table-lookup indexing.
@sp 1
@item --dense-switch-req-density @var{percentage}
@findex --dense-switch-req-density
The jump table generated for an atomic switch
must have at least this percentage of full slots (default: 25).
@sp 1
@item --dense-switch-size @var{size}
@findex --dense-switch-size
The jump table generated for an atomic switch
must have at least this many entries (default: 4).
@sp 1
@item --lookup-switch-req-density @var{percentage}
@findex --lookup-switch-req-density
The lookup tables generated for an atomic switch
in which all the outputs are constant terms
must have at least this percentage of full slots (default: 25).
@sp 1
@item --lookup-switch-size @var{size}
@findex --lookup-switch-size
The lookup tables generated for an atomic switch
in which all the outputs are constant terms
must have at least this many entries (default: 4).
@sp 1
@item --string-hash-switch-size @var{size}
@findex --string-hash-switch-size
The hash table generated for a string switch
must have at least this many entries (default: 8).
@sp 1
@item --string-binary-switch-size @var{size}
@findex --string-binary-switch-size
The binary search table generated for a string switch
must have at least this many entries (default: 4).
@sp 1
@item --tag-switch-size @var{size}
@findex --tag-switch-size
The number of alternatives in a tag switch
must be at least this number (default: 3).
@sp 1
@item --try-switch-size @var{size}
@findex --try-switch-size
The number of alternatives in a try-chain switch
must be at least this number (default: 3).
@sp 1
@item --binary-switch-size @var{size}
@findex --binary-switch-size
The number of alternatives in a binary search switch
must be at least this number (default: 4).
@sp 1
@item --no-use-atomic-cells
@findex --no-use-atomic-cells
@findex --use-atomic-cells
Don't use the atomic variants of the Boehm gc allocator calls,
even when this would otherwise be possible.
@sp 1
@item --no-middle-rec
@findex --no-middle-rec
@findex --middle-rec
Disable the middle recursion optimization.
@sp 1
@item --no-simple-neg
@findex --no-simple-neg
@findex --simple-neg
Don't generate simplified code for simple negations.
@end table
@node Low-level (LLDS -> LLDS) optimization options
@subsection Low-level (LLDS -> LLDS) optimization options
@cindex LLDS
These optimizations are transformations that are applied to our
low-level intermediate code before emitting C code.
@table @code
@item --no-common-data
@findex --no-common-data
@findex --common-data
Disable optimization of common data structures.
@item --no-layout-common-data
@findex --no-layout-common-data
@findex --layout-common-data
Disable optimization of common subsequences in layout structures.
@item --no-llds-optimize
@findex --no-llds-optimize
@findex --llds-optimize
Disable the low-level optimization passes.
@sp 1
@item --no-optimize-peep
@findex --no-optimize-peep
@findex --optimize-peep
Disable local peephole optimizations.
@sp 1
@item --no-optimize-jumps
@findex --no-optimize-jumps
@findex --optimize-jumps
Disable elimination of jumps to jumps.
@sp 1
@item --no-optimize-fulljumps
@findex --no-optimize-fulljumps
@findex --optimize-fulljumps
Disable elimination of jumps to ordinary code.
@sp 1
@item --pessimize-tailcalls
@findex --pessimize-tailcalls
Disable the optimization of tailcalls.
@sp 1
@item --checked-nondet-tailcalls
@findex --checked-nondet-tailcalls
Convert nondet calls into tail calls whenever possible, even
when this requires a runtime check. This option tries to
minimize stack consumption, possibly at the expense of speed.
@sp 1
@item --no-use-local-vars
@findex --use-local-vars
Disable the transformation to use local variables in C code
blocks wherever possible.
@sp 1
@item --no-optimize-labels
@findex --no-optimize-labels
@findex --optimize-labels
Disable elimination of dead labels and code.
@sp 1
@item --optimize-dups
@findex --optimize-dups
Enable elimination of duplicate code within procedures.
@sp 1
@item --optimize-proc-dups
@findex --optimize-proc-dups
Enable elimination of duplicate procedures.
@c @sp 1
@c @item --optimize-copyprop
@c Enable the copy propagation optimization.
@sp 1
@item --no-optimize-frames
@findex --no-optimize-frames
@findex --optimize-frames
Disable stack frame optimizations.
@sp 1
@item --no-optimize-delay-slot
@findex --no-optimize-delay-slot
@findex --optimize-delay-slot
Disable branch delay slot optimizations.
@sp 1
@item --optimize-reassign
@findex --optimize-reassign
Optimize away assignments to locations that already hold the assigned value.
@sp 1
@item --optimize-repeat @var{n}
@findex --optimize-repeat
Iterate most optimizations at most @var{n} times (default: 3).
@sp 1
@item --layout-compression-limit @var{n}
@findex --layout-compression-limit
Attempt to compress the layout structures used by the debugger
only as long as the arrays involved have at most @var{n} elements
(default: 4000).
@end table
@node Output-level (LLDS -> C) optimization options
@subsection Output-level (LLDS -> C) optimization options
These optimizations are applied during the process of generating
C code from our low-level data structure.
@table @code
@item --no-emit-c-loops
@findex --no-emit-c-loops
Use only gotos --- don't emit C loop constructs.
@sp 1
@item --use-macro-for-redo-fail
@findex --use-macro-for-redo-fail
Emit the fail or redo macro instead of a branch
to the fail or redo code in the runtime system.
@sp 1
@item --procs-per-c-function @var{n}
@findex --procs-per-c-function
Don't put the code for more than @var{n} Mercury
procedures in a single C function. The default
value of @var{n} is one. Increasing @var{n} can produce
slightly more efficient code, but makes compilation slower.
Setting @var{n} to the special value zero has the effect of
putting all the procedures in a single function,
which produces the most efficient code
but tends to severely stress the C compiler.
@sp 1
@item --no-local-thread-engine-base
@findex --no-local-thread-engine-base
Don't copy the thread-local Mercury engine base address
into local variables. This option only affects low-level
parallel grades not using the global register variables
extension.
@end table
@node Build system options
@section Build system options
@table @code
@item -m
@itemx --make
@findex -m
@findex --make
Treat the non-option arguments to @samp{mmc} as files to
make, rather than source files. Create the specified files,
if they are not already up-to-date.
(Note that this option also enables @samp{--use-subdirs}.)
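For example, assuming a program whose top-level module is in the file
@file{hello.m}, the following command should build (or bring up to date)
the executable @file{hello}:
@example
mmc --make hello
@end example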
@sp 1
@item -r
@itemx --rebuild
@findex -r
@findex --rebuild
Same as @samp{--make}, but always rebuild the target files
even if they are up to date.
@sp 1
@item -k
@itemx --keep-going
@findex -k
@findex --keep-going
With @samp{--make}, keep going as far as
possible even if an error is detected.
@sp 1
@item -j @var{n}
@itemx --jobs @var{n}
@findex -j
@findex --jobs
With @samp{--make}, attempt to perform up to @var{n} jobs
concurrently for some tasks.
In low-level C grades with parallelism support,
the number of threads is also limited by
the @samp{-P} option in the @samp{MERCURY_OPTIONS} environment variable
(see @ref{Environment}).
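For example, @samp{mmc --make -j 4 hello} would allow up to four tasks
to be performed concurrently while building the (hypothetical) program
@file{hello}.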
@sp 1
@item --track-flags
@findex --track-flags
With @samp{--make}, keep track of the options used when compiling
each module. If an option for a module is added or removed,
@samp{mmc --make} will then know to recompile the module even if the
timestamp on the file itself has not changed. Warning,
verbosity and build system options are not tracked.
@sp 1
@item --pre-link-command @var{command}
@findex --pre-link-command
Specify a command to run before linking with @samp{mmc --make}.
This can be used to compile C source files which rely on
header files generated by the Mercury compiler.
The command will be passed the names of all of the source files in
the program or library, with the source file containing the main
module given first.
@sp 1
@item --extra-init-command @var{command}
@findex --extra-init-command
Specify a command to produce extra entries in the @file{.init}
file for a library.
The command will be passed the names of all of the source files in
the program or library, with the source file containing the main
module given first.
@sp 1
@item --install-prefix @var{dir}
@findex --install-prefix
Specify the directory under which to install Mercury libraries.
@sp 1
@item --install-command @var{command}
@findex --install-command
Specify the command to use to install the files in Mercury
libraries. The given command will be invoked as
@code{@var{command} @var{source} @var{target}}
to install each file in a Mercury library.
The default command is @samp{cp}.
@sp 1
@item --install-command-dir-option @var{option}
@findex --install-command-dir-option
Specify the flag to pass to the install command to install
a directory. The given command will be invoked as
@code{@var{command} @var{option} @var{source} @var{target}}
to install each directory. The default option is @samp{-r}.
@sp 1
@item --libgrade @var{grade}
@findex --libgrade
Add @var{grade} to the list of compilation grades in
which a library to be installed should be built.
@sp 1
@item --no-libgrade
@findex --no-libgrade
Clear the list of compilation grades in which a library
to be installed should be built. The main use of this is to avoid
building and installing the default set of grades.
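For example, a command along the following lines (using the hypothetical
library name @samp{mylib}) should build and install that library under
@file{/usr/local/mercury-libs} in just the two listed grades:
@example
mmc --make libmylib.install --install-prefix /usr/local/mercury-libs \
    --no-libgrade --libgrade hlc.gc --libgrade asm_fast.gc
@end example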
@sp 1
@item --libgrades-include-component @var{component}
@itemx --libgrades-include @var{component}
@findex --libgrades-include-component
@findex --libgrades-include
Remove grades that do not contain the specified component from the
set of library grades to be installed.
(This option does not work with Mmake, only @samp{mmc --make}.)
@sp 1
@item --libgrades-exclude-component @var{component}
@itemx --libgrade-exclude @var{component}
@findex --libgrades-exclude-component
@findex --libgrade-exclude
Remove grades that contain the specified component from the set of
library grades to be installed.
(This option does not work with Mmake, only @samp{mmc --make}.)
@sp 1
@item --lib-linkage @{shared,static@}
@findex --lib-linkage
Specify whether libraries should be installed for shared
or static linking. This option can be specified multiple
times. By default libraries will be installed for
both shared and static linking.
@sp 1
@item --flags @var{file}
@itemx --flags-file @var{file}
@findex --flags
@findex --flags-file
Take options from the specified file, and handle them
as if they were specified on the command line.
@sp 1
@item --options-file @var{file}
@findex --options-file
@cindex Options files
@cindex Mercury.options
Add @var{file} to the list of options files to be processed.
If @var{file} is @samp{-}, an options file will be read from the
standard input. By default the file @file{Mercury.options}
in the current directory will be read.
See @ref{Using Mmake} for a description of the syntax of options files.
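As an illustration, an options file along the following lines
(the module name @samp{parser} is purely hypothetical) sets @samp{-O3}
for the compilation of every module and @samp{--intermodule-optimization}
for the module @samp{parser}; the exact syntax is described in
@ref{Using Mmake}:
@example
MCFLAGS = -O3
MCFLAGS-parser = --intermodule-optimization
@end example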
@item --config-file @var{file}
@findex --config-file
@cindex Options files
Read the Mercury compiler's configuration information from @var{file}.
If the @samp{--config-file} option is not set, a default configuration
will be used, unless @samp{--no-mercury-stdlib-dir} is passed to @samp{mmc}.
The configuration file is just an options file (@pxref{Using Mmake}).
@sp 1
@item --options-search-directory @var{dir}
@findex --options-search-directory
Add @var{dir} to the list of directories to be searched for
options files.
@sp 1
@item --mercury-configuration-directory @var{dir}
@itemx --mercury-config-dir @var{dir}
@findex --mercury-configuration-directory
@findex --mercury-config-dir
Search @var{dir} for the Mercury system's configuration files.
@sp 1
@item -I @var{dir}
@itemx --search-directory @var{dir}
@findex -I
@findex --search-directory
@cindex Directories
@cindex Search path
Append @var{dir} to the list of directories to be searched for
imported modules.
@sp 1
@item --intermod-directory @var{dir}
@findex --intermod-directory
@cindex Directories
@cindex Search path
Append @var{dir} to the list of directories to be searched for
@samp{.opt} files.
@sp 1
@item --use-search-directories-for-intermod
@findex --use-search-directories-for-intermod
@cindex Directories
@cindex Search path
Append the arguments of all -I options to the list of directories
to be searched for @samp{.opt} files.
@sp 1
@item --no-libgrade-install-check
@findex --no-libgrade-install-check
Do not check that libraries have been installed before attempting
to use them. (This option is only meaningful with @samp{mmc --make}.)
@sp 1
@item --use-subdirs
@findex --use-subdirs
@cindex File names
@cindex Directories
@cindex Subdirectories
@cindex @file{Mercury} subdirectory
Create intermediate files in a @file{Mercury} subdirectory,
rather than in the current directory.
@sp 1
@item --use-grade-subdirs
@findex --use-grade-subdirs
@cindex File names
@cindex Directories
@cindex Subdirectories
@cindex @file{Mercury} subdirectory
@cindex Grades
Generate intermediate files in a @file{Mercury} subdirectory,
laid out so that multiple grades can be built simultaneously.
Executables and libraries will be symlinked or copied into the
current directory.
@samp{--use-grade-subdirs} does not work with Mmake (it does
work with @samp{mmc --make}).
@sp 1
@item --order-make-by-timestamp
@findex --order-make-by-timestamp
Make @samp{mmc --make} compile more recently modified source files first.
@sp 1
@item --show-make-times
@findex --show-make-times
Report run times for commands executed by @samp{mmc --make}.
@sp 1
@item --extra-library-header @var{file}
@itemx --extra-lib-header @var{file}
@findex --extra-library-header
@findex --extra-lib-header
Install the specified C header file along with a Mercury library.
(This option is only supported by @samp{mmc --make}.)
@sp 1
@item --restricted-command-line
@findex --restricted-command-line
Enable this option if your shell doesn't support long command lines.
This option uses temporary files to pass arguments to sub-commands.
(This option is only supported by @samp{mmc --make}.)
@sp 1
@item --env-type @var{type}
@findex --env-type
Specify the environment type in which the compiler and generated
programs will be invoked.
The environment type controls how the compiler and generated programs
interact with the shell and other system tools.
The @var{type} should be one of @samp{posix}, @samp{cygwin}, @samp{msys},
or @samp{windows}.
This option is equivalent to setting both @samp{--host-env-type} and
@samp{--target-env-type} to @var{type}.
@sp 1
@item --host-env-type @var{type}
@findex --host-env-type
Specify the environment type in which the compiler will be invoked.
(See above for a list of supported environment types.)
@sp 1
@item --target-env-type @var{type}
@findex --target-env-type
Specify the environment type in which generated programs will be invoked.
(See above for a list of supported environment types.)
@end table
@node Miscellaneous options
@section Miscellaneous options
@table @code
@sp 1
@item -?
@itemx -h
@itemx --help
@findex -?
@findex -h
@findex --help
@cindex Help option
Print a usage message.
@sp 1
@item --filenames-from-stdin
@findex --filenames-from-stdin
Read then compile a newline terminated module name or file name from the
standard input. Repeat this until EOF is reached. (This allows a program
or user to interactively compile several modules without the overhead of
process creation for each one.)
@sp 1
@item --typecheck-ambiguity-warn-limit @var{n}
@findex --typecheck-ambiguity-warn-limit
Set the number of type assignments required to generate a warning
about highly ambiguous overloading to @var{n}.
@sp 1
@item --typecheck-ambiguity-error-limit @var{n}
@findex --typecheck-ambiguity-error-limit
Set the number of type assignments required to generate an error
about excessively ambiguous overloading to @var{n}.
If this limit is reached,
the typechecker will not process the predicate or function any further.
@c @sp 1
@c @item --ignore-par-conjunctions
@c @findex --ignore-par-conjunctions
@c Replace parallel conjunctions with plain ones.
@c This can help developers benchmark their code.
@c This does not affect implicit parallelism.
@sp 1
@item --control-granularity
@findex --control-granularity
Don't try to generate more parallelism than the machine can handle;
the machine's capacity may be specified at runtime or detected automatically
(see the @samp{-P} option in the @samp{MERCURY_OPTIONS} environment variable).
@sp 1
@item --distance-granularity @var{distance_value}
@findex --distance-granularity
Control the granularity of parallel execution
using the specified distance value.
@c Maybe this options *should* exist, but at the moment, it doesn't.
@c @sp 1
@c @item --parallelism-target @var{num_cpus}
@c @findex --parallelism-target
@c Specifies the number of CPUs of the target machine,
@c for use by --control-granularity option.
@sp 1
@item --implicit-parallelism
@findex --implicit-parallelism
@cindex Automatic parallelism
@cindex Profiler feedback
Introduce parallel conjunctions where they could be worthwhile (implicit
parallelism), using deep profiling feedback information generated by
@samp{mdprof_create_feedback}. The profiling feedback file can be specified
using the @samp{--feedback-file} option.
@sp 1
@item --feedback-file @var{file}
@findex --feedback-file
@cindex Automatic parallelism
@cindex Profiler feedback
Use the specified profiling feedback file,
which may currently only be used for automatic parallelism.
@c @sp 1
@c @item --loop-control
@c @findex --loop-control
@c Enable the loop control transformation for parallel conjunctions.
@c This causes right-recursive parallel conjunctions to use fewer contexts while
@c maintaining parallelism.
@c This transformation is under development, when it is ready it will
@c probably be enabled by default.
@c
@c @sp 1
@c @item --no-loop-control-preserve-tail-recursion
@c @findex --no-loop-control-preserve-tail-recursion
@c Do not attempt to preserve tail recursion in the loop control transformation.
@c This option causes all code spawned off using loop control to access it's
@c parent stack frame through the parent stack pointer.
@c Rather than copying (parts) of the stack frame into the child's stack frame and
@c reading it from there.
@c This allows us to compare the cost of copying the stack frame with the cost of
@c non tail recursive code.
@c It is intended for developers only.
@end table
@node Target code compilation options
@section Target code compilation options
@cindex Target code compilation options
If you are using Mmake, you need to pass these options
to the target code compiler (e.g.@: @samp{mgnuc}) rather
than to @samp{mmc}.
@table @code
@sp 1
@item --target-debug
@findex --target-debug
@findex --c-debug
@findex /debug
Enable debugging of the generated target code.
If the target language is C, this has the same effect as
@samp{--c-debug} (see below).
If the target language is IL or C#, this causes the compiler to
pass @samp{/debug} to the IL assembler or C# compiler.
@sp 1
@item --cc @var{compiler-name}
@findex --cc
@cindex C compiler
Specify which C compiler to use.
@sp 1
@item --c-include-directory @var{dir}
@itemx --c-include-dir @var{dir}
@findex --c-include-directory
@findex --c-include-dir
@cindex Include directories
@cindex Directories
Append @var{dir} to the list of directories to be searched for
C header files. Note that if you want to override this list, rather than
append to it, then you can set the @samp{MERCURY_MC_ALL_C_INCL_DIRS}
environment variable to a sequence of @samp{--c-include-directory} options.
@sp 1
@item --c-debug
@findex --c-debug
@cindex C debugging
@cindex Debugging the generated C code
Pass the @samp{-g} flag to the C compiler, to enable debugging
of the generated C code, and also disable stripping of C debugging
information from the executable.
Since the generated C code is very low-level, this option is not likely
to be useful to anyone except the Mercury implementors, except perhaps
for debugging code that uses Mercury's C interface extensively.
@sp 1
@item --no-c-optimize
@findex --no-c-optimize
Don't enable the C compiler's optimizations.
@sp 1
@item --no-ansi-c
@findex --no-ansi-c
Don't specify to the C compiler that the ANSI dialect
of C should be used. Use the full contents of system
headers, rather than the ANSI subset.
@sp 1
@item --inline-alloc
@findex --inline-alloc
Inline calls to @samp{GC_malloc()}.
This can improve performance a fair bit,
but may significantly increase code size.
This option has no effect if @samp{--gc boehm}
is not set or if the C compiler is not GNU C.
@sp 1
@item --cflags @var{options}
@itemx --cflag @var{option}
@findex --cflags
@findex --cflag
@cindex C compiler options
Specify options to be passed to the C compiler.
@samp{--cflag} should be used for single words which need
to be quoted when passed to the shell.
@sp 1
@item --javac @var{compiler-name}
@itemx --java-compiler @var{compiler-name}
@findex --javac
@findex --java-compiler
@cindex Java compiler
Specify which Java compiler to use. The default is @samp{javac}.
@sp 1
@item --java-interpreter @var{interpreter-name}
@findex --java-interpreter
@cindex Java interpreter
Specify which Java interpreter to use. The default is @samp{java}.
@sp 1
@item --java-flags @var{options}
@itemx --java-flag @var{option}
@findex --java-flags
@findex --java-flag
@cindex Java compiler options
Specify options to be passed to the Java compiler.
@samp{--java-flag} should be used for single words which need
to be quoted when passed to the shell.
@sp 1
@item --java-classpath @var{dir}
@findex --java-classpath
@cindex classpath
@cindex Directories
Set the classpath for the Java compiler.
@sp 1
@item --java-object-file-extension @var{extension}
@findex --java-object-file-extension
@cindex File extensions
Specify an extension for Java object (bytecode) files. By default this
is @samp{.class}.
@sp 1
@item --csharp-compiler @var{compiler-name}
@findex --csharp-compiler
@cindex C# compiler
Specify which C# compiler to use. The default is @samp{csc}.
@sp 1
@item --csharp-flags @var{options}
@itemx --csharp-flag @var{option}
@findex --csharp-flags
@findex --csharp-flag
@cindex C# compiler options
Specify options to be passed to the C# compiler.
@samp{--csharp-flag} should be used for single words which need
to be quoted when passed to the shell.
@sp 1
@item --cil-interpreter @var{interpreter-name}
@findex --cil-interpreter
@cindex CIL interpreter
Specify the program that implements the Common Language
Infrastructure (CLI) execution environment, e.g.@: @samp{mono}.
@sp 1
@item --erlang-compiler @var{compiler-name}
@findex --erlang-compiler
@cindex Erlang compiler
Specify which Erlang compiler to use. The default is @samp{erlc}.
@sp 1
@item --erlang-interpreter @var{interpreter-name}
@findex --erlang-interpreter
@cindex Erlang interpreter
Specify which Erlang interpreter to use. The default is @samp{erl}.
@sp 1
@item --erlang-flags @var{options}
@itemx --erlang-flag @var{option}
@findex --erlang-flags
@findex --erlang-flag
@cindex Erlang compiler options
Specify options to be passed to the Erlang compiler.
@samp{--erlang-flag} should be used for single words which need
to be quoted when passed to the shell.
@sp 1
@item --erlang-include-directory @var{dir}
@itemx --erlang-include-dir @var{dir}
@findex --erlang-include-directory
@findex --erlang-include-dir
@cindex Include directories
@cindex Directories
Append @var{dir} to the list of directories to be searched for
Erlang header files (.hrl).
@sp 1
@item --erlang-native-code
@findex --erlang-native-code
@cindex Erlang compiler options
Add @samp{+native} to the start of flags passed to the Erlang compiler.
Because it can be cancelled out by @samp{--no-erlang-native-code},
this option is useful when you wish to enable native code generation
for all modules except a select few.
@sp 1
@item --no-erlang-inhibit-trivial-warnings
@findex --no-erlang-inhibit-trivial-warnings
@cindex Erlang compiler options
Do not add @samp{+nowarn_unused_vars +nowarn_unused_function} to the
list of flags passed to the Erlang compiler.
@c This option is not fully implemented and not very useful.
@c @sp 1
@c @item --erlang-object-file-extension @var{extension}
@c @findex --erlang-object-file-extension
@c @cindex File extensions
@c Specify an extension for Erlang object (bytecode) files. By default this
@c is @samp{.beam}.
@end table
@node Link options
@section Link options
@cindex Link options
@cindex Linker options
@table @code
@sp 1
@item -o @var{filename}
@itemx --output-file @var{filename}
@findex -o
@findex --output-file
Specify the name of the final executable.
(The default executable name is the same as the name of the
first module on the command line, but without the @samp{.m} extension.)
This option is ignored by @samp{mmc --make}.
@sp 1
@item --ld-flags @var{options}
@itemx --ld-flag @var{option}
@findex --ld-flags
@findex --ld-flag
Specify options to be passed to the command
invoked by @samp{mmc} to link an executable.
Use @code{mmc --output-link-command} to find out
which command is used.
@samp{--ld-flag} should be used for single words which need
to be quoted when passed to the shell.
@sp 1
@item --ld-libflags @var{options}
@itemx --ld-libflag @var{option}
@findex --ld-libflags
@findex --ld-libflag
Specify options to be passed to the command
invoked by @samp{mmc} to link a shared library.
Use @code{mmc --output-shared-lib-link-command}
to find out which command is used.
@samp{--ld-libflag} should be used for single words which need
to be quoted when passed to the shell.
@sp 1
@item -L @var{directory}
@itemx --library-directory @var{directory}
@findex -L
@findex --library-directory
@cindex Directories for libraries
@cindex Search path for libraries
Append @var{directory} to the list of directories in which
to search for libraries.
@sp 1
@item -R @var{directory}
@itemx --runtime-library-directory @var{directory}
@findex -R
@findex --runtime-library-directory
Append @var{directory} to the list of directories in which
to search for shared libraries at runtime.
@sp 1
@item --shlib-linker-install-name-path @var{directory}
@findex --shlib-linker-install-name-path
@cindex Mac OS X, Darwin, Install name
Specify the path where a shared library will be installed.
This option is useful on systems where the runtime search
path is obtained from the shared library and not via the
-R option (such as Mac OS X).
@sp 1
@item -l @var{library}
@itemx --library @var{library}
@findex -l
@findex --library
@cindex Libraries, linking with
Link with the specified library.
@sp 1
@item --link-object @var{object}
@findex --link-object
@cindex Object files, linking with
Link with the specified object file.
@sp 1
@item --search-lib-files-dir @var{directory}
@itemx --search-library-files-directory @var{directory}
@findex --search-lib-files-dir
@findex --search-library-files-directory
@cindex Directories for libraries
@cindex Search path for libraries
Search @var{directory} for Mercury library files that have not yet been
installed. This is similar to adding @var{directory} using all of the
@samp{--search-directory}, @samp{--intermod-directory},
@samp{--library-directory}, @samp{--init-file-directory},
@samp{--c-include-directory} and @samp{--erlang-include-directory}
options, but does the right thing when the @samp{--use-subdirs} or
@samp{--use-grade-subdirs} options are used.
@sp 1
@item --mld @var{directory}
@itemx --mercury-library-directory @var{directory}
@findex --mld
@findex --mercury-library-directory
@cindex Directories for libraries
@cindex Search path for libraries
Append @var{directory} to the list of directories to
be searched for Mercury libraries.
This will add @samp{--search-directory}, @samp{--library-directory},
@samp{--init-file-directory}, @samp{--c-include-directory}
and @samp{--erlang-include-directory}
options as needed. @xref{Using installed libraries with mmc --make}.
@sp 1
@item --ml @var{library}
@itemx --mercury-library @var{library}
@findex --ml
@findex --mercury-library
@cindex Libraries, linking with
Link with the specified Mercury library.
@xref{Using installed libraries with mmc --make}.
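For example, if the (hypothetical) library @samp{mylib} was installed with
@samp{--install-prefix /usr/local/mercury-libs}, a program that uses it
could be built with a command along these lines:
@example
mmc --make myprog --ml mylib \
    --mld /usr/local/mercury-libs/lib/mercury
@end example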
@sp 1
@item --mercury-standard-library-directory @var{directory}
@itemx --mercury-stdlib-dir @var{directory}
@findex --mercury-standard-library-directory
@findex --mercury-stdlib-dir
Search @var{directory} for the Mercury standard library.
Implies @samp{--mercury-library-directory @var{directory}}
and @samp{--mercury-configuration-directory @var{directory}}.
@sp 1
@item --no-mercury-standard-library-directory
@itemx --no-mercury-stdlib-dir
@findex --no-mercury-standard-library-directory
@findex --no-mercury-stdlib-dir
Don't use the Mercury standard library.
Implies @samp{--no-mercury-configuration-directory}.
@sp 1
@item --init-file-directory @var{directory}
@findex --init-file-directory
Append @var{directory} to the list of directories to
be searched for @samp{.init} files by @samp{c2init}.
@sp 1
@item --init-file @var{file}
@findex --init-file
Append @var{file} to the list of @samp{.init} files
to be passed to @samp{c2init}.
@sp 1
@item --trace-init-file @var{file}
@findex --trace-init-file
Append @var{file} to the list of @samp{.init} files
to be passed to @samp{c2init} when tracing is enabled.
@sp 1
@item --linkage @{shared,static@}
@findex --linkage
Specify whether to use shared or static linking for executables.
Shared libraries are always linked with @samp{--linkage shared}.
@sp 1
@item --mercury-linkage @{shared,static@}
@findex --mercury-linkage
Specify whether to use shared or static linking when
linking an executable with Mercury libraries.
Shared libraries are always linked with @samp{--mercury-linkage shared}.
@sp 1
@item --no-strip
@findex --no-strip
Don't strip executables.
@sp 1
@item --no-demangle
@findex --no-demangle
Don't pipe link errors through the Mercury demangler.
@sp 1
@item --no-main
@findex --no-main
Don't generate a C main() function. The user's code must
provide a main() function.
@sp 1
@item --no-allow-undefined
@findex --no-allow-undefined
Do not allow undefined symbols in shared libraries.
@sp 1
@item --no-use-readline
@findex --no-use-readline
Disable use of the readline library in the debugger.
@sp 1
@item --runtime-flags @var{flags}
@findex --runtime-flags
Specify flags to pass to the Mercury runtime.
@sp 1
@item --extra-initialization-functions
@itemx --extra-inits
@findex --extra-initialization-functions
@findex --extra-inits
Search @samp{.c} files for extra initialization functions.
(This may be necessary if the C files contain
hand-coded C code with @samp{INIT} comments, rather than
containing only C code that was automatically generated
by the Mercury compiler.)
@sp 1
@item --link-executable-command @var{command}
@findex --link-executable-command
Specify the command used to invoke the linker when linking an executable.
@sp 1
@item --link-shared-lib-command @var{command}
@findex --link-shared-lib-command
Specify the command used to invoke the linker when linking a shared library.
@sp 1
@item --java-archive-command @var{command}
@findex --java-archive-command
Specify the command used to produce Java archive (JAR) files.
@sp 1
@item --framework @var{framework}
@findex --framework
@cindex Mac OS X, Using Frameworks
Build and link against the specified framework.
(Mac OS X only.)
@sp 1
@item -F @var{directory}
@itemx --framework-directory @var{directory}
@findex -F
@findex --framework-directory
@cindex Directories for libraries
@cindex Search path for libraries
@cindex Mac OS X, Using Frameworks
Append the specified directory to the framework search path.
(Mac OS X only.)
@end table
@c ----------------------------------------------------------------------------
@node Environment
@chapter Environment variables
@cindex Environment variables
@cindex Variables, environment
@cindex Directories
@cindex Search path
The shell scripts in the Mercury compilation environment
will use the following environment variables if they are set.
There should be little need to use these, because the default
values will generally work fine.
@table @code
@item MERCURY_DEFAULT_GRADE
@vindex MERCURY_DEFAULT_GRADE
The default grade to use if no @samp{--grade} option is specified.
@sp 1
@item MERCURY_STDLIB_DIR
@vindex MERCURY_STDLIB_DIR
The directory containing the installed Mercury standard library.
@samp{--mercury-stdlib-dir} options passed to the @samp{mmc}, @samp{ml},
@samp{mgnuc} and @samp{c2init} scripts override the setting of
the MERCURY_STDLIB_DIR environment variable.
@sp 1
@item MERCURY_NONSHARED_LIB_DIR
@vindex MERCURY_NONSHARED_LIB_DIR
For IRIX 5, this environment variable can be used to specify a
directory containing a version of libgcc.a which has been compiled with
@samp{-mno-abicalls}. See the file @samp{README.IRIX-5} in the Mercury
source distribution.
@sp 1
@item MERCURY_OPTIONS
@vindex MERCURY_OPTIONS
A list of options for the Mercury runtime system,
which gets linked into every Mercury program.
The options given in this environment variable apply to every program;
the options given in an environment variable
whose name is of the form @samp{MERCURY_OPTIONS_@var{progname}}
apply only to programs named @var{progname}.
Options may also be set for a particular executable at compile time
by passing @samp{--runtime-flags} options
to the invocations of @samp{ml} and @samp{c2init} which create that executable.
These options are processed first,
followed by those in @samp{MERCURY_OPTIONS},
with the options in @samp{MERCURY_OPTIONS_@var{progname}} being processed last.
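For example, with a Bourne-style shell, settings along the following lines
(the program name @samp{myprog} is hypothetical) give every Mercury program
an 8192 kilobyte det stack, and additionally tell @samp{myprog} to use two
Mercury engines (an option that only has an effect in parallel low-level C
grades):
@example
MERCURY_OPTIONS="--detstack-size 8192"
MERCURY_OPTIONS_myprog="-P 2"
export MERCURY_OPTIONS MERCURY_OPTIONS_myprog
@end example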
The Mercury runtime system accepts the following options.
@sp 1
@table @code
@c @item -a
@c If given force a redo when the entry point procedure succeeds;
@c this is useful for benchmarking when this procedure is model_non.
@c @item -c
@c Check how much of the space reserved for local variables
@c by mercury_engine.c was actually used.
@item -C @var{size}
@findex -C (runtime option)
Tells the runtime system
to optimize the locations of the starts of the various data areas
for a primary data cache of @var{size} kilobytes.
The optimization consists of arranging the starts of the areas
to differ as much as possible modulo this size.
@c @item -d @var{debugflag}
@c @findex -d (runtime option)
@c Sets a low-level debugging flag.
@c These flags are consulted only if
@c the runtime was compiled with the appropriate definitions;
@c most of them depend on MR_LOWLEVEL_DEBUG.
@c For the meanings of the debugflag parameters,
@c see process_options() in mercury_wrapper.c
@c and do a grep for the corresponding variable.
@sp 1
@item -D @var{debugger}
@findex -D (runtime option)
Enables execution tracing of the program,
via the internal debugger if @var{debugger} is @samp{i}
and via the external debugger if @var{debugger} is @samp{e}.
(The mdb script works by including @samp{-Di} in MERCURY_OPTIONS.)
The external debugger is not yet available.
@sp 1
@item -p
@findex -p (runtime option)
Disables profiling.
This only has an effect if the executable was built in a profiling grade.
@sp 1
@item -P @var{num}
@findex -P (runtime option)
Tells the runtime system to use @var{num} threads
if the program was built in a parallel low-level C grade.
The Mercury runtime attempts to determine this value automatically
if support for doing so is available from the operating system.
If it cannot, or if such support is unavailable, it defaults to @samp{1}.
@sp 1
@item --max-contexts-per-thread @var{num}
@findex --max-contexts-per-thread (runtime option)
Tells the runtime system to create at most @var{num} contexts per
POSIX thread.
Each context created requires a set of stacks,
so setting this value too high can consume excessive memory.
This only has an effect if the executable was built in a low-level C parallel
grade.
@c @sp 1
@c @item --num-contexts-per-lc-per-thread @var{num}
@c @findex --num-contexts-per-lc-per-thread (runtime option)
@c Tells the runtime system to use @var{num} contexts per POSIX thread to handle
@c each loop controlled loop.
@c This only has an effect if the executable was built in a low-level C parallel
@c grade.
@c
@c @sp 1
@c @item --runtime-granularity-wsdeque-length-factor @var{factor}
@c @findex --runtime-granularity-wsdeque-length-factor (runtime option)
@c Configures the runtime granularity control method not to create sparks if a
@c context's local spark wsdeque is longer than
@c @math{ @var{factor} * @var{num_engines}}.
@c @var{num_engines} is configured with the @samp{-P} runtime option.
@c
@c @sp 1
@c @item --profile-parallel-execution
@c @findex --profile-parallel-execution
@c Tells the runtime to collect and write out parallel execution profiling
@c information to a file named @file{parallel_execution_profile.txt}.
@c This only has an effect if the executable was built in a low level c,
@c parallel, threadscope grade.
@c
@c @sp 1
@c @item --threadscope-use-tsc
@c @findex --threadscope-use-tsc
@c Requests that the runtime's threadscope support use the CPU's time stamp
@c counter (TSC) to measure time rather than gettimeofday(). The TSC may
@c not always be available so the runtime may still use gettomeofday() even
@c with this option.
@sp 1
@item --thread-pinning
@findex --thread-pinning
Requests that the runtime system attempt to pin Mercury engines (POSIX threads)
to CPU cores/hardware threads.
This only has an effect if the executable was built in a parallel low-level C
grade.
This is disabled by default but may be enabled by default in the future.
@c In case this is enabled by default the following comment is relevant.
@c This is disabled by default unless @samp{-P @var{num}} is not specified and the
@c runtime system is able to detect the number of processors enabled by the
@c operating system.
@c @item -r @var{num}
@c @findex -r (runtime option)
@c Repeats execution of the entry point procedure @var{num} times,
@c to enable accurate timing.
@c @item -t
@c @findex -t (runtime option)
@c Tells the runtime system to measure the time taken by
@c the (required number of repetitions of) the program,
@c and to print the result of this time measurement.
@sp 1
@item -T @var{time-method}
@findex -T (runtime option)
If the executable was compiled in a grade that includes time profiling,
this option specifies what time is counted in the profile.
@var{time-method} must have one of the following values:
@sp 1
@table @code
@item @samp{r}
Profile real (elapsed) time (using ITIMER_REAL).
@item @samp{p}
Profile user time plus system time (using ITIMER_PROF).
This is the default.
@item @samp{v}
Profile user time (using ITIMER_VIRTUAL).
@end table
@sp 1
Currently, only the @samp{-Tr} option works on Cygwin; on that
platform it is the default.
@c the above sentence is duplicated above
@c @item -x
@c @findex -x (runtime option)
@c Tells the Boehm collector not to perform any garbage collection.
@sp 1
@item --heap-size @var{size}
@findex --heap-size (runtime option)
Sets the size of the heap to @var{size} kilobytes.
@sp 1
@item --heap-size-kwords @var{size}
@findex --heap-size-kwords (runtime option)
Sets the size of the heap to @var{size} kilobytes
multiplied by the word size in bytes.
@sp 1
@item --detstack-size @var{size}
@findex --detstack-size (runtime option)
Sets the size of the det stack to @var{size} kilobytes.
@sp 1
@item --detstack-size-kwords @var{size}
@findex --detstack-size-kwords (runtime option)
Sets the size of the det stack to @var{size} kilobytes
multiplied by the word size in bytes.
@sp 1
@item --nondetstack-size @var{size}
@findex --nondetstack-size (runtime option)
Sets the size of the nondet stack to @var{size} kilobytes.
@sp 1
@item --nondetstack-size-kwords @var{size}
@findex --nondetstack-size-kwords (runtime option)
Sets the size of the nondet stack to @var{size} kilobytes
multiplied by the word size in bytes.
@sp 1
@item --small-detstack-size @var{size}
@findex --small-detstack-size (runtime option)
Sets the size of the det stack used for executing parallel conjunctions
to @var{size} kilobytes.
The regular det stack size must be equal or greater.
@sp 1
@item --small-detstack-size-kwords @var{size}
@findex --small-detstack-size-kwords (runtime option)
Sets the size of the det stack used for executing parallel conjunctions
to @var{size} kilobytes
multiplied by the word size in bytes.
The regular det stack size must be equal or greater.
@sp 1
@item --small-nondetstack-size @var{size}
@findex --small-nondetstack-size (runtime option)
Sets the size of the nondet stack for executing parallel computations
to @var{size} kilobytes.
The regular nondet stack size must be equal or greater.
@sp 1
@item --small-nondetstack-size-kwords @var{size}
@findex --small-nondetstack-size-kwords (runtime option)
Sets the size of the nondet stack for executing parallel computations
to @var{size} kilobytes
multiplied by the word size in bytes.
The regular nondet stack size must be equal or greater.
@sp 1
@item --solutions-heap-size @var{size}
@findex --solutions-heap-size (runtime option)
Sets the size of the solutions heap to @var{size} kilobytes.
@sp 1
@item --solutions-heap-size-kwords @var{size}
@findex --solutions-heap-size-kwords (runtime option)
Sets the size of the solutions heap to @var{size} kilobytes
multiplied by the word size in bytes.
@sp 1
@item --trail-size @var{size}
@findex --trail-size
@cindex Trail size
Sets the size of the trail to @var{size} kilobytes.
This option is ignored in grades that use trail segments.
@sp 1
@item --trail-size-kwords @var{size}
@findex --trail-size-kwords
@cindex Trail size
Sets the size of the trail to @var{size} kilobytes
multiplied by the word size in bytes.
This option is ignored in grades that use trail segments.
@sp 1
@item --trail-segment-size @var{size}
@findex --trail-segment-size
@cindex Trail size
Sets the size of each trail segment to be @var{size} kilobytes.
This option is ignored in grades that do not use trail segments.
@sp 1
@item --trail-segment-size-kwords @var{size}
@findex --trail-segment-size-kwords
@cindex Trail size
Sets the size of each trail segment to be @var{size} kilobytes
multiplied by the word size in bytes.
This option is ignored in grades that do not use trail segments.
@sp 1
@item --genstack-size @var{size}
@findex --genstack-size
@cindex Generator stack size
Sets the size of the generator stack to @var{size} kilobytes.
@sp 1
@item --genstack-size-kwords @var{size}
@findex --genstack-size-kwords
@cindex Generator stack size
Sets the size of the generator stack to @var{size} kilobytes
multiplied by the word size in bytes.
@sp 1
@item --cutstack-size @var{size}
@findex --cutstack-size
@cindex Cut stack size
Sets the size of the cut stack to @var{size} kilobytes.
@sp 1
@item --cutstack-size-kwords @var{size}
@findex --cutstack-size-kwords
@cindex Cut stack size
Sets the size of the cut stack to @var{size} kilobytes
multiplied by the word size in bytes.
@sp 1
@item --pnegstack-size @var{size}
@findex --pnegstack-size
@cindex Pneg stack size
Sets the size of the pneg stack to @var{size} kilobytes.
@sp 1
@item --pnegstack-size-kwords @var{size}
@findex --pnegstack-size-kwords
@cindex Pneg stack size
Sets the size of the pneg stack to @var{size} kilobytes
multiplied by the word size in bytes.
@c @sp 1
@c @item --heap-redzone-size @var{size}
@c @findex --heap-redzone-size (runtime option)
@c Sets the size of the redzone on the heap to @var{size} kilobytes.
@c @sp 1
@c @item --heap-redzone-size-kwords @var{size}
@c @findex --heap-redzone-size-kwords (runtime option)
@c Sets the size of the redzone on the heap to @var{size} kilobytes
@c multiplied by the word size in bytes.
@c @sp 1
@c @item --detstack-redzone-size @var{size}
@c @findex --detstack-redzone-size (runtime option)
@c Sets the size of the redzone on the det stack to @var{size} kilobytes.
@c @sp 1
@c @item --detstack-redzone-size-kwords @var{size}
@c @findex --detstack-redzone-size-kwords (runtime option)
@c Sets the size of the redzone on the det stack to @var{size} kilobytes
@c multiplied by the word size in bytes.
@c @sp 1
@c @item --nondetstack-redzone-size @var{size}
@c @findex --nondetstack-redzone-size (runtime option)
@c Sets the size of the redzone on the nondet stack to @var{size} kilobytes.
@c @sp 1
@c @item --nondetstack-redzone-size-kwords @var{size}
@c @findex --nondetstack-redzone-size-kwords (runtime option)
@c Sets the size of the redzone on the nondet stack to @var{size} kilobytes
@c multiplied by the word size in bytes.
@c @sp 1
@c @item --trail-redzone-size @var{size}
@c @findex --trail-redzone-size (runtime option)
@c Sets the size of the redzone on the trail to @var{size} kilobytes.
@c @sp 1
@c @item --trail-redzone-size-kwords @var{size}
@c @findex --trail-redzone-size-kwords (runtime option)
@c Sets the size of the redzone on the trail to @var{size} kilobytes
@c multiplied by the word size in bytes.
@sp 1
@item -i @var{filename}
@itemx --mdb-in @var{filename}
@findex -i (runtime option)
@findex --mdb-in (runtime option)
Read debugger input from the file or device specified by @var{filename},
rather than from standard input.
@sp 1
@item -o @var{filename}
@itemx --mdb-out @var{filename}
@findex -o (runtime option)
@findex --mdb-out (runtime option)
Print debugger output to the file or device specified by @var{filename},
rather than to standard output.
@sp 1
@item -e @var{filename}
@itemx --mdb-err @var{filename}
@findex -e (runtime option)
@findex --mdb-err (runtime option)
Print debugger error messages to the file or device specified by @var{filename},
rather than to standard error.
@sp 1
@item -m @var{filename}
@itemx --mdb-tty @var{filename}
@findex -m (runtime option)
@findex --mdb-tty (runtime option)
Redirect all three debugger I/O streams
--- input, output, and error messages ---
to the file or device specified by @var{filename}.
@c --mdb-in-window is for use only by the mdb script, so it's
@c not documented here.
@c The documentation of --mdb-benchmark-silent is commented out because
@c this option is intended only for implementors.
@c @sp 1
@c @item --mdb-benchmark-silent
@c @findex --mdb-benchmark-silent (runtime option)
@c Redirect all output, including error messages, to /dev/null.
@c This is useful for benchmarking.
@sp 1
@item --debug-threads
@findex --debug-threads (runtime option)
@cindex Debugging Threads
@cindex Threads, Debugging
Output information to the standard error stream about the locking and
unlocking occurring in each module which has been compiled with the C macro
symbol @samp{MR_DEBUG_THREADS} defined.
@sp 1
@item --tabling-statistics
@findex --tabling-statistics
Prints statistics about tabling when the program terminates.
@sp 1
@item --mem-usage-report @var{prefix}
@findex --mem-usage-report @var{prefix}
Print a report about the memory usage of the program
when the program terminates.
The report is printed to a new file named @file{.mem_usage_report@var{N}}
for the lowest value of @var{N} (up to 99)
which doesn't overwrite an existing file.
Note that this capability is not supported on all operating systems.
@sp 1
@item --trace-count
@findex --trace-count
When the program terminates, generate a trace counts file
listing all the debugger events the program executed,
if the program actually executed any debugger events.
If MERCURY_OPTIONS includes
the --trace-count-output-file=@var{filename} option,
then the trace counts are put into the file @var{filename}.
If MERCURY_OPTIONS includes
the --trace-count-summary-file=@var{basename} option,
then the trace counts are put
either in the file @var{basename} (if it does not exist),
or in @var{basename.N} for the lowest value of the integer @var{N}
which doesn't overwrite an existing file.
(There is a limit on the value of @var{N};
see the option --trace-count-summary-max below.)
If neither option is specified,
then the output will be written to a file
with the prefix @samp{.mercury_trace_counts} and a unique suffix.
Specifying both options is an error.
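For example, assuming the (hypothetical) program @file{myprog} was compiled
with execution tracing enabled, the following command runs it and writes the
trace counts to the file @file{myprog.counts}:
@example
MERCURY_OPTIONS="--trace-count --trace-count-output-file=myprog.counts" ./myprog
@end example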
@sp 1
@item --coverage-test
@findex --coverage-test
Act as the --trace-count option, except
include @emph{all} debugger events in the output,
even the ones that were not executed.
@sp 1
@item --trace-count-if-exec=@var{prog}
@findex --trace-count-if-exec=@var{prog}
Act as the --trace-count option,
but only if the executable is named @var{prog}.
This is to allow
the collection of trace count information from only one Mercury program
even if several Mercury programs
are executed with the same setting of MERCURY_OPTIONS.
@sp 1
@item --coverage-test-if-exec=@var{prog}
@findex --coverage-test-if-exec=@var{prog}
Act as the --coverage-test option,
but only if the executable is named @var{prog}.
This is to allow
the collection of coverage test information from only one Mercury program
even if several Mercury programs
are executed with the same setting of MERCURY_OPTIONS.
@sp 1
@item --trace-count-output-file=@var{filename}
@findex --trace-count-output-file=@var{filename}
Documented alongside the --trace-count option.
@sp 1
@item --trace-count-summary-file=@var{basename}
@findex --trace-count-summary-file=@var{basename}
Documented alongside the --trace-count option.
@sp 1
@item --trace-count-summary-max=@var{N}
@findex --trace-count-summary-max=@var{N}
If MERCURY_OPTIONS includes both
the --trace-count option (or one of the other options that imply --trace-count)
and the --trace-count-summary-file=@var{basename} option,
then the generated program will put the generated trace counts
either in @var{basename} (if it does not exist),
or in @var{basename.N} for the lowest value of the integer @var{N}
which doesn't overwrite an existing file.
The --trace-count-summary-max option
specifies the maximum value of this @var{N}.
When this maximum is reached,
the program will invoke the @samp{mtc_union} program
to summarize @var{basename}, @var{basename.1}, ... @var{basename.N}
into a single file @var{basename}, and delete the rest.
By imposing a limit on the total number
(and hence indirectly on the total size) of these trace count files,
this mechanism allows the gathering of trace count information
from a large number of program runs.
The default maximum value of @var{N} is 20.
@c @sp 1
@c @item --trace-count-summary-cmd=@var{cmd}
@c @findex --trace-count-summary-cmd=@var{cmd}
@c This option specifies the command to use instead of mtc_union
@c for computing trace counts summaries, as shown for the above option.
@c This documentation is commented out
@c because the option is for implementors only.
@c The intended use is to override the installed version of mtc_union
@c with the version in a specific workspace, which may be more recent.
@c @sp 1
@c @item --deep-procrep-file
@c #findex --deep-procrep-file
@c This option, which is meaningful only in deep profiling grades,
@c asks that every Deep.data file being generated should be accompanied by
@c a Deep.procrep file that contains a representation of the program that
@c generated it.
@c This documentation is commented out
@c because procedure representations are not yet used.
@c @sp 1
@c @item --deep-random-write=@var{N}
@c #findex --deep-random-write=@var{N}
@c This option, which is meaningful only in deep profiling grades,
@c asks that Deep.data files (and Deep.procrep files) should be generated
@c only if by processes whose process id is evenly divisible by @var{N}.
@c This documentation is commented out because it is only for use by the
@c bootcheck script (to reduce the time taken for a bootcheck while still
@c testing the code writing out deep profiling data).
@sp 1
@item --boehm-gc-free-space-divisor=@var{N}
@findex --boehm-gc-free-space-divisor=@var{N}
This option sets the value of the free space divisor in the Boehm garbage
collector to @var{N}.
It is meaningful only when using the Boehm garbage collector.
The default value is 3. Increasing its value will reduce
heap space but increase collection time.
See the Boehm GC documentation for details.
@sp 1
@item --boehm-gc-calc-time
@findex --boehm-gc-calc-time
This option enables code in the Boehm garbage collector to calculate
the time spent doing garbage collection so far.
Its only effect is to enable the @samp{report_stats} predicate
in the @samp{benchmarking} module of the standard library
to report this information.
@sp 1
@item --fp-rounding-mode @var{mode}
@findex --fp-rounding-mode @var{mode}
Set the rounding mode for floating point operations to @var{mode}.
Recognised modes are @samp{downward}, @samp{upward}, @samp{to_nearest}
and @samp{toward_zero}. Exactly which modes are available,
and even whether it is possible to set the rounding mode at all,
is system dependent.
@end table
@sp 1
@item MERCURY_COMPILER
@vindex MERCURY_COMPILER
Filename of the Mercury Compiler.
@sp 1
@item MERCURY_MKINIT
@vindex MERCURY_MKINIT
Filename of the program to create the @file{*_init.c} file.
@sp 1
@item MERCURY_DEBUGGER_INIT
@vindex MERCURY_DEBUGGER_INIT
Name of a file that contains startup commands for the Mercury debugger.
This file should contain documentation for the debugger command set,
and possibly a set of default aliases.
@end table
@c ----------------------------------------------------------------------------
@node C compilers
@chapter Using a different C compiler
@cindex C compilers
@cindex Using a different C compiler
@cindex GNU C
@findex --cc
The Mercury compiler takes special advantage of certain extensions
provided by GNU C to generate much more efficient code. We therefore
recommend that you use GNU C for compiling Mercury programs.
However, if for some reason you wish to use another compiler,
it is possible to do so. Here's what you need to do.
@itemize @bullet
@item Create a new configuration for the Mercury system using the
@samp{mercury_config} script, specifying the different C compiler, e.g.@:
@samp{mercury_config --output-prefix=/usr/local/mercury-cc --with-cc=cc}.
@item Add the @samp{bin} directory of the new configuration to the beginning
of your PATH.
@item
You must use a grade beginning with @samp{none}, @samp{hlc} or @samp{hl}
(e.g.@: @samp{hlc.gc}).
You can specify the grade in one of three ways: by setting the
@samp{MERCURY_DEFAULT_GRADE} environment variable, by adding a line
@vindex MERCURY_DEFAULT_GRADE
@samp{GRADE=@dots{}} to your @samp{Mmake} file, or by using the
@samp{--grade} option to @samp{mmc}. (You will also need to install
those grades of the Mercury library, if you have not already done so.)
@item
If your compiler is particularly strict in
enforcing ANSI compliance, you may also need to compile the Mercury
code with @samp{--no-static-ground-terms}.
@end itemize
@c ----------------------------------------------------------------------------
@node Foreign language interface
@chapter Foreign language interface
The Mercury foreign language interface allows @samp{pragma foreign_proc} to
specify multiple implementations (in different foreign programming
languages) for a procedure.
If the compiler generates code for a procedure using a back-end for which
there are multiple applicable foreign languages, it will choose the
foreign language to use for each procedure according to a builtin ordering.
If the language specified in a foreign_proc is not available for a
particular backend, it will be ignored.
If there are no suitable foreign_proc clauses for a particular
procedure but there are Mercury clauses, they will be used instead.
@table @asis
@item @samp{C}
This is the default foreign language on all backends which compile to C
or assembler.
Only available on backends that compile to C or assembler.
@item @samp{C#}
Only available on backends that compile to IL or C#.
This is the second preferred foreign language for IL code generation.
@item @samp{Erlang}
This is the only foreign language for backends which compile to Erlang.
@item @samp{IL}
IL is the intermediate language of the .NET Common Language
Runtime (sometimes also known as CIL or MSIL).
Only available on backends that compile to IL.
This is the preferred foreign language for IL code generation.
@item @samp{Java}
This is the only foreign language for backends which compile to Java.
@end table
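As an illustration, the following sketch (the predicate @samp{bitwise_xor}
and its implementations are purely illustrative, and assume that the
@samp{int} module of the standard library has been imported) provides C and
Java implementations of the same procedure, together with a Mercury clause
that will be used on any backend for which neither foreign language is
available:
@example
:- pred bitwise_xor(int::in, int::in, int::out) is det.

:- pragma foreign_proc("C",
    bitwise_xor(X::in, Y::in, Z::out),
    [will_not_call_mercury, promise_pure, thread_safe],
"
    Z = X ^ Y;
").

:- pragma foreign_proc("Java",
    bitwise_xor(X::in, Y::in, Z::out),
    [will_not_call_mercury, promise_pure, thread_safe],
"
    Z = X ^ Y;
").

    % Used on backends with no suitable foreign_proc above.
bitwise_xor(X, Y, Z) :-
    Z = xor(X, Y).
@end example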
@c ----------------------------------------------------------------------------
@node Stand-alone Interfaces
@chapter Stand-alone Interfaces
Programs written in a language other than Mercury should not make calls
to foreign exported Mercury procedures unless the Mercury runtime has been
initialised.
(In the case where the Mercury runtime has not been initialised the
behaviour of these calls is undefined.)
Such programs must also ensure that any module specific initialisation is
performed before calling foreign exported procedures in Mercury modules.
Likewise, module specific finalisation may need to be performed after
all calls to Mercury procedures have been made.
A stand-alone interface provides a mechanism by which non-Mercury programs
may initialise (and shut down) the Mercury runtime plus a specified set
of Mercury libraries.
A stand-alone interface is created by invoking the compiler with the
@samp{--generate-standalone-interface} option.
The set of Mercury libraries to be included in the stand-alone interface
is given via one of the usual mechanisms for specifying what libraries to link
against, e.g. the @samp{--ml} and @samp{--mld} options.
(@pxref{Libraries}).
The Mercury standard library is always included in this set.
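As an illustration, a command along the following lines (the names
@samp{mylib} and @samp{my_si} are hypothetical) would generate a stand-alone
interface covering the standard library and the library @samp{mylib},
using the basename @samp{my_si}:
@example
mmc --ml mylib --generate-standalone-interface my_si
@end example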
In C grades the @samp{--generate-standalone-interface} option causes
the compiler to generate an object file that should be linked into
the executable.
This object file contains two functions:
@code{mercury_init()} and @code{mercury_terminate()}.
The compiler also generates a C header file that contains
the prototypes of these functions.
(This header file may be included in C++ programs.)
The roles of the two functions are described below.
@table @b
@item @bullet{} @code{mercury_init()}
Prototype:
@example
void mercury_init(int @var{argc}, char **@var{argv}, void *@var{stackbottom});
@end example
Initialise the Mercury runtime, standard library and any other Mercury
libraries that were specified when the stand-alone interface was generated.
@var{argc} and @var{argv} are the argument count and argument vector,
as would be passed to the function @code{main()} in a C program.
@var{stackbottom} is the address of the base of the stack.
In grades that use conservative garbage collection this is used to
tell the collector where to begin tracing.
This function must be called before any Mercury procedures
and must only be called once.
It is recommended that the value of @var{stackbottom} be set by passing
the address of a local variable in the @code{main()} function of a program,
for example:
@example
int main(int argc, char **argv) @{
void *dummy;
mercury_init(argc, argv, &dummy);
@dots{}
@}
@end example
Note that the address of the stack base should be word aligned as
some garbage collectors rely upon this.
(This is why the type of the dummy variable in the above example is
@code{void *}.)
If the value of @var{stackbottom} is @code{NULL} then the collector will attempt
to determine the address of the base of the stack itself.
Note that modifying the argument vector, @var{argv}, after the Mercury runtime
has been initialised will result in undefined behaviour since the runtime
maintains a reference into @var{argv}.
@sp 1
@item @bullet{} @code{mercury_terminate()}
Prototype:
@example
int mercury_terminate(void);
@end example
Shut down the Mercury runtime.
The value returned by this function is Mercury's exit status
(as set by the predicate io.set_exit_status/3).
This function will also invoke any finalisers contained in the set
of libraries for which the stand-alone interface was generated.
@end table
The basename of the object and header files is provided as
the argument of the @samp{--generate-standalone-interface} option.
@c XXX Mention that stand-alone interfaces work with debugging or
@c (deep) profiling?
Stand-alone interfaces are not currently supported for target languages
other than C.
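As a minimal sketch, a C program that uses a stand-alone interface generated
with the basename @samp{my_si} might look like the following (the module name
@samp{mylib} and the foreign exported procedure @code{mylib_process()} are
hypothetical):
@example
#include "my_si.h"      /* mercury_init() and mercury_terminate() */
#include "mylib.mh"     /* declarations of foreign exported procedures */

int main(int argc, char **argv)
@{
    void *stack_bottom;
    int exit_status;

    mercury_init(argc, argv, &stack_bottom);
    mylib_process();    /* call into the Mercury library */
    exit_status = mercury_terminate();
    return exit_status;
@}
@end example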
For an example of using a stand-alone interface see the
@samp{samples/c_interface/standalone_c} directory in the Mercury distribution.
@c ----------------------------------------------------------------------------
@node Index
@unnumbered Index
@printindex cp
@c ----------------------------------------------------------------------------
@bye