Estimated hours taken: 400
Branches: main
This diff implements stack slot optimization for the LLDS back end based on
the idea that after a unification such as A = f(B, C, D), saving the
variable A on the stack indirectly also saves the values of B, C and D.
Figuring out what subset of {B,C,D} to access via A and what subset to access
via their own stack slots is a tricky optimization problem. The algorithm we
use to solve it is described in the paper "Using the heap to eliminate stack
accesses" by Zoltan Somogyi and Peter Stuckey, available in ~zs/rep/stackslot.
That paper also describes (and has examples of) the source-to-source
transformation that implements the optimization.
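As an illustration (the type and predicate names below are invented for this
example, not taken from the compiler or the paper), given code such as

    :- type t ---> f(int, int, int).

    p(B, C, D, X) :-
        A = f(B, C, D),
        expensive(A, Y),
        X = B + C + D + Y.

all of B, C and D are live across the call to expensive/2, so each would
normally need its own stack slot. The transformation rewrites the body to
roughly

    p(B, C, D, X) :-
        A = f(B, C, D),
        expensive(A, Y),
        A = f(B1, C1, D1),
        X = B1 + C1 + D1 + Y.

so that only A needs to be saved across the call; B, C and D are re-fetched
from A's heap cell by the extra deconstruction after the call returns.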
The optimization needs to know what variables are flushed at call sites
and at program points that establish resume points (e.g. entries to
disjunctions and if-then-elses). We already had code to compute this
information in live_vars.m, but this code was being invoked too late.
This diff modifies live_vars.m to allow it to be invoked both by the stack
slot optimization transformation and by the code generator, and allows its
function to be tailored to the requirements of each invocation.
The information computed by live_vars.m is specific to the LLDS back end,
since the MLDS back ends do not (yet) have the same control over stack
frame layout. We therefore store this information in a new back end specific
field in goal_infos. For uniformity, we make all the other existing back end
specific fields in goal_infos, as well as the similarly back end specific
store map field of goal_exprs, subfields of this new field. This happens
to significantly reduce the sizes of goal_infos.
To allow a more meaningful comparison of the gains produced by the new
optimization, do not save any variables across erroneous calls even if
the new optimization is not enabled.
compiler/stack_opt.m:
New module containing the code that performs the transformation
to optimize stack slot usage.
compiler/matching.m:
New module containing an algorithm for maximal matching in bipartite
graphs, specialized for the graphs needed by stack_opt.m.
compiler/mercury_compile.m:
Invoke the new optimization if the options ask for it.
compiler/stack_alloc.m:
New module containing code that is shared between the old,
non-optimizing stack slot allocation system and the new, optimizing
stack slot allocation system, and the code for actually allocating
stack slots in the absence of optimization.
Live_vars.m used to have two tasks: finding out what variables need to
be saved on the stack, and allocating those variables to stack slots.
Live_vars.m now does only the first task; stack_alloc.m now does
the second, using code that used to be in live_vars.m.
compiler/trace_params.m:
Add a new function to test the trace level, which returns yes if we
want to preserve the values of the input headvars.
compiler/notes/compiler_design.html:
Document the new modules (as well as trace_params.m, which wasn't
documented earlier).
compiler/live_vars.m:
Delete the code that is now in stack_alloc.m and graph_colour.m.
Separate out the kinds of stack uses due to nondeterminism: the stack
slots used by nondet calls, and the stack slots used by resumption
points, in order to allow the reuse of stack slots used by resumption
points after execution has left their scope. This should allow the
same stack slots to be used by different variables in the resumption
point at the start of an else branch and nondet calls in the then
branch, since the resumption point of the else branch is not in effect
when the then branch is executed.
If the new option --opt-no-return-calls is set, then say that we do not
need to save any values across erroneous calls.
Use type classes to allow the information generated by this module
to be recorded in the way required by its invoker.
Package up the read-only data structures being passed around into a
single tuple.
compiler/store_alloc.m:
Allow this module to be invoked by stack_opt.m without invoking the
follow_vars transformation, since applying follow_vars before the HLDS
code has otherwise reached its final form can be a pessimization.
Make the module_info part of the record containing the read-only data
passed around during the traversal.
compiler/common.m:
Do not delete or move around unifications created by stack_opt.m.
compiler/call_gen.m:
compiler/code_info.m:
compiler/continuation_info.m:
compiler/var_locn.m:
Allow the code generator to delete its last record of the location
of a value when generating code to make an erroneous call, if the new
--opt-no-return-calls option is set.
compiler/code_gen.m:
Use a more useful algorithm to create the messages/comments that
we put into incr_sp instructions, e.g. by distinguishing between
predicates and functions. This is to allow the new scripts in the
tools directory to gather statistics about the effect of the
optimization on stack frame sizes.
library/exception.m:
Make a hand-written incr_sp follow the new pattern.
compiler/arg_info.m:
Add predicates to figure out the set of input, output and unused
arguments of a procedure in several different circumstances.
Previously, variants of these predicates were repeated in several
places.
compiler/goal_util.m:
Export some previously private utility predicates.
compiler/handle_options.m:
Turn off stack slot optimizations when debugging, unless
--trace-optimized is set.
Add a new dump format useful for debugging --optimize-saved-vars.
compiler/hlds_llds.m:
New module for handling all the stuff specific to the LLDS back end
in HLDS goal_infos.
compiler/hlds_goal.m:
Move all the relevant stuff into the new back end specific field
in goal_infos.
compiler/notes/allocation.html:
Update the documentation of store maps to reflect their movement
into a subfield of goal_infos.
compiler/*.m:
Minor changes to accommodate the movement of all back end specific
information about goals from goal_exprs and individual fields of
goal_infos into a single new goal_info field that gathers together
all back end specific information.
compiler/use_local_vars.m:
Look for sequences in which several instructions use a fake register
or stack slot as a base register pointing to a cell, and make those
instructions use a local variable instead.
Without this, a key assumption of the stack slot optimization,
that accessing a field in a cell costs only one load or store
instruction, would be much less likely to be true. (With this
optimization, the assumption will be false only if the C compiler's
code generator runs out of registers in a basic block, which for
the code we generate should be unlikely even on x86s.)
compiler/options.m:
Make the old option --optimize-saved-vars ask for both the old stack
slot optimization (implemented by saved_vars.m) that only eliminates
the storing of constants in stack slots, and the new optimization.
Add two new options --optimize-saved-vars-{const,cell} to turn on
the two optimizations separately.
Add a bunch of options to specify the parameters of the new
optimizations, both in stack_opt.m and use_local_vars.m. These are
for implementors only; they are deliberately not documented.
Add a new option, --opt-no-return-calls, that governs whether we avoid
saving variables on the stack at calls that cannot return, either by
succeeding or by failing. This is for implementors only, and thus
deliberately documented only in comments. It is enabled by default.
compiler/optimize.m:
Transmit the value of a new option to use_local_vars.m.
doc/user_guide.texi:
Update the documentation of --optimize-saved-vars.
library/tree234.m:
Undo a previous change of mine that effectively applied this
optimization by hand. That change complicated the code, and now
the compiler can do the optimization automatically.
tools/extract_incr_sp:
A new script for extracting stack frame sizes and messages from
stack increment operations in the C code for LLDS grades.
tools/frame_sizes:
A new script that uses extract_incr_sp to extract information about
stack frame sizes from the C files saved from a stage 2 directory
by makebatch and summarizes the resulting information.
tools/avg_frame_size:
A new script that computes average stack frame sizes from the files
created by frame_sizes.
tools/compare_frame_sizes:
A new script that compares the stack frame size information
extracted from two different stage 2 directories by frame_sizes,
reporting on both average stack frame sizes and on specific procedures
that have different stack frame sizes in the two versions.
%-----------------------------------------------------------------------------%
% Copyright (C) 2000-2002 The University of Melbourne.
% This file may only be copied under the terms of the GNU General
% Public License - see the file COPYING in the Mercury distribution.
%-----------------------------------------------------------------------------%
%
% Author: fjh.
%
% This module is an HLDS-to-HLDS transformation that inserts code to
% handle heap reclamation on backtracking, by saving and restoring
% the values of the heap pointer.
% The transformation involves adding calls to impure
% predicates defined in library/private_builtin.m, which in turn call
% the MR_mark_hp() and MR_restore_hp() macros defined in
% runtime/mercury_heap.h.
%
% This pass is currently only used for the MLDS back-end.
% For some reason (perhaps efficiency?? or more likely just historical?),
% the LLDS back-end inserts the heap operations as it is generating
% LLDS code, rather than via an HLDS to HLDS transformation.
%
% This module is very similar to add_trail_ops.m.
%
%-----------------------------------------------------------------------------%

% XXX check goal_infos for correctness

%-----------------------------------------------------------------------------%
:- module ml_backend__add_heap_ops.
:- interface.
:- import_module hlds__hlds_pred, hlds__hlds_module.

:- pred add_heap_ops(proc_info::in, module_info::in, proc_info::out) is det.

%-----------------------------------------------------------------------------%

:- implementation.

:- import_module parse_tree__prog_data, parse_tree__prog_util.
:- import_module (parse_tree__inst).
:- import_module hlds__hlds_goal, hlds__hlds_data.
:- import_module hlds__goal_util, hlds__quantification, parse_tree__modules.
:- import_module check_hlds__type_util.
:- import_module hlds__instmap, backend_libs__code_model.
:- import_module ll_backend__code_util.

:- import_module bool, string.
:- import_module assoc_list, list, map, set, varset, std_util, require, term.
%
% As we traverse the goal, we add new variables to hold the
% saved values of the heap pointer.
% So we need to thread a varset and a vartypes mapping through,
% to record the names and types of the new variables.
%
% We also keep the module_info around, so that we can use
% the predicate table that it contains to lookup the pred_ids
% for the builtin procedures that we insert calls to.
% We do not update the module_info as we're traversing the goal.
%

:- type heap_ops_info --->
    heap_ops_info(
        varset      :: prog_varset,
        var_types   :: vartypes,
        module_info :: module_info
    ).

add_heap_ops(Proc0, ModuleInfo0, Proc) :-
    proc_info_goal(Proc0, Goal0),
    proc_info_varset(Proc0, VarSet0),
    proc_info_vartypes(Proc0, VarTypes0),
    TrailOpsInfo0 = heap_ops_info(VarSet0, VarTypes0, ModuleInfo0),
    goal_add_heap_ops(Goal0, Goal, TrailOpsInfo0, TrailOpsInfo),
    TrailOpsInfo = heap_ops_info(VarSet, VarTypes, _),
    proc_info_set_goal(Proc0, Goal, Proc1),
    proc_info_set_varset(Proc1, VarSet, Proc2),
    proc_info_set_vartypes(Proc2, VarTypes, Proc3),
    % The code below does not maintain the non-local variables,
    % so we need to requantify.
    % XXX it would be more efficient to maintain them
    % rather than recomputing them every time.
    requantify_proc(Proc3, Proc).
:- pred goal_add_heap_ops(hlds_goal::in, hlds_goal::out,
        heap_ops_info::in, heap_ops_info::out) is det.

goal_add_heap_ops(GoalExpr0 - GoalInfo, Goal) -->
    goal_expr_add_heap_ops(GoalExpr0, GoalInfo, Goal).

:- pred goal_expr_add_heap_ops(hlds_goal_expr::in, hlds_goal_info::in,
        hlds_goal::out, heap_ops_info::in, heap_ops_info::out) is det.

goal_expr_add_heap_ops(conj(Goals0), GI, conj(Goals) - GI) -->
    conj_add_heap_ops(Goals0, Goals).

goal_expr_add_heap_ops(par_conj(Goals0), GI, par_conj(Goals) - GI) -->
    conj_add_heap_ops(Goals0, Goals).
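
    % An empty disjunction is just `fail'; it cannot allocate any heap,
    % so we leave it unchanged.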
goal_expr_add_heap_ops(disj([]), GI, disj([]) - GI) --> [].

goal_expr_add_heap_ops(disj(Goals0), GoalInfo, Goal - GoalInfo) -->
    { Goals0 = [FirstDisjunct | _] },

    { goal_info_get_context(GoalInfo, Context) },
    { goal_info_get_code_model(GoalInfo, CodeModel) },

    %
    % If necessary, save the heap pointer so that we can
    % restore it on back-tracking.
    % We don't need to do this here if it is a model_det or model_semi
    % disjunction and the first disjunct won't allocate any heap --
    % in that case, we delay saving the heap pointer until just before
    % the first disjunct that might allocate heap.
    %
    (
        { CodeModel = model_non
        ; code_util__goal_may_allocate_heap(FirstDisjunct)
        }
    ->
        new_saved_hp_var(SavedHeapPointerVar),
        gen_mark_hp(SavedHeapPointerVar, Context, MarkHeapPointerGoal),
        disj_add_heap_ops(Goals0, yes, yes(SavedHeapPointerVar),
            GoalInfo, Goals),
        { Goal = conj([MarkHeapPointerGoal, disj(Goals) - GoalInfo]) }
    ;
        disj_add_heap_ops(Goals0, yes, no, GoalInfo, Goals),
        { Goal = disj(Goals) }
    ).

goal_expr_add_heap_ops(switch(A, B, Cases0), GI, switch(A, B, Cases) - GI) -->
    cases_add_heap_ops(Cases0, Cases).

goal_expr_add_heap_ops(not(InnerGoal), OuterGoalInfo, Goal) -->
    %
    % We handle negations by converting them into if-then-elses:
    %   not(G)  ===>  (if G then fail else true)
    %
    { goal_info_get_context(OuterGoalInfo, Context) },
    { InnerGoal = _ - InnerGoalInfo },
    { goal_info_get_determinism(InnerGoalInfo, Determinism) },
    { determinism_components(Determinism, _CanFail, NumSolns) },
    { true_goal(Context, True) },
    { fail_goal(Context, Fail) },
    ModuleInfo =^ module_info,
    { NumSolns = at_most_zero ->
        % The "then" part of the if-then-else will be unreachable,
        % but to preserve the invariants that the MLDS back-end
        % relies on, we need to make sure that it can't fail.
        % So we use a call to `private_builtin__unused' (which
        % will call error/1) rather than `fail' for the "then" part.
        generate_call("unused", [], det, no, [], ModuleInfo, Context,
            ThenGoal)
    ;
        ThenGoal = Fail
    },
    { NewOuterGoal = if_then_else([], InnerGoal, ThenGoal, True) },
    goal_expr_add_heap_ops(NewOuterGoal, OuterGoalInfo, Goal).

goal_expr_add_heap_ops(some(A, B, Goal0), GoalInfo,
        some(A, B, Goal) - GoalInfo) -->
    goal_add_heap_ops(Goal0, Goal).

goal_expr_add_heap_ops(if_then_else(A, Cond0, Then0, Else0), GoalInfo,
        Goal - GoalInfo) -->
    goal_add_heap_ops(Cond0, Cond),
    goal_add_heap_ops(Then0, Then),
    goal_add_heap_ops(Else0, Else1),
    %
    % If the condition can allocate heap space,
    % save the heap pointer so that we can
    % restore it if the condition fails.
    %
    ( { code_util__goal_may_allocate_heap(Cond0) } ->
        new_saved_hp_var(SavedHeapPointerVar),
        { goal_info_get_context(GoalInfo, Context) },
        gen_mark_hp(SavedHeapPointerVar, Context, MarkHeapPointerGoal),
        %
        % Generate code to restore the heap pointer,
        % and insert that code at the start of the Else branch.
        %
        gen_restore_hp(SavedHeapPointerVar, Context,
            RestoreHeapPointerGoal),
        { Else1 = _ - Else1GoalInfo },
        { Else = conj([RestoreHeapPointerGoal, Else1]) -
            Else1GoalInfo },
        { IfThenElse = if_then_else(A, Cond, Then, Else) - GoalInfo },
        { Goal = conj([MarkHeapPointerGoal, IfThenElse]) }
    ;
        { Goal = if_then_else(A, Cond, Then, Else1) }
    ).
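
    % Calls, generic calls and unifications are atomic as far as this
    % transformation is concerned: there is nothing inside them into which
    % we could insert heap operations, so we leave them unchanged.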
goal_expr_add_heap_ops(call(A,B,C,D,E,F), GI, call(A,B,C,D,E,F) - GI) --> [].

goal_expr_add_heap_ops(generic_call(A,B,C,D), GI, generic_call(A,B,C,D) - GI)
    --> [].

goal_expr_add_heap_ops(unify(A,B,C,D,E), GI, unify(A,B,C,D,E) - GI) --> [].

goal_expr_add_heap_ops(PragmaForeign, GoalInfo, Goal) -->
    { PragmaForeign = foreign_proc(_,_,_,_,_,_,Impl) },
    ( { Impl = nondet(_,_,_,_,_,_,_,_,_) } ->
        % XXX Implementing heap reclamation for nondet pragma
        % foreign_code via transformation is difficult,
        % because there's nowhere in the HLDS pragma_foreign_code
        % goal where we can insert the heap reclamation operations.
        % For now, we don't support this.
        % Instead, we just generate a call to a procedure which
        % will at runtime call error/1 with an appropriate
        % "Sorry, not implemented" error message.
        ModuleInfo =^ module_info,
        { goal_info_get_context(GoalInfo, Context) },
        { generate_call("reclaim_heap_nondet_pragma_foreign_code",
            [], erroneous, no, [], ModuleInfo, Context,
            SorryNotImplementedCode) },
        { Goal = SorryNotImplementedCode }
    ;
        { Goal = PragmaForeign - GoalInfo }
    ).

goal_expr_add_heap_ops(shorthand(_), _, _) -->
    % these should have been expanded out by now
    { error("goal_expr_add_heap_ops: unexpected shorthand") }.

:- pred conj_add_heap_ops(hlds_goals::in, hlds_goals::out,
        heap_ops_info::in, heap_ops_info::out) is det.

conj_add_heap_ops(Goals0, Goals) -->
    list__map_foldl(goal_add_heap_ops, Goals0, Goals).
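
    % disj_add_heap_ops(Disjuncts0, IsFirstDisjunct, MaybeSavedHeapPointerVar,
    %   DisjGoalInfo, Disjuncts):
    %
    % Transform the disjuncts of a disjunction. IsFirstDisjunct says whether
    % the first goal in Disjuncts0 is the first disjunct of the original
    % disjunction (in which case no restore is needed before it), and
    % MaybeSavedHeapPointerVar holds the variable (if any) in which the heap
    % pointer has already been saved.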
:- pred disj_add_heap_ops(hlds_goals::in, bool::in, maybe(prog_var)::in,
        hlds_goal_info::in, hlds_goals::out,
        heap_ops_info::in, heap_ops_info::out) is det.

disj_add_heap_ops([], _, _, _, []) --> [].
disj_add_heap_ops([Goal0 | Goals0], IsFirstBranch, MaybeSavedHeapPointerVar,
        DisjGoalInfo, DisjGoals) -->
    goal_add_heap_ops(Goal0, Goal1),
    { Goal1 = _ - GoalInfo },
    { goal_info_get_context(GoalInfo, Context) },
    %
    % If needed, reset the heap pointer before executing the goal,
    % to reclaim heap space allocated in earlier branches.
    %
    (
        { IsFirstBranch = no },
        { MaybeSavedHeapPointerVar = yes(SavedHeapPointerVar0) }
    ->
        gen_restore_hp(SavedHeapPointerVar0, Context,
            RestoreHeapPointerGoal),
        { conj_list_to_goal([RestoreHeapPointerGoal, Goal1], GoalInfo,
            Goal) }
    ;
        { Goal = Goal1 }
    ),

    %
    % Save the heap pointer, if we haven't already done so,
    % and if this disjunct might allocate heap space.
    %
    (
        { MaybeSavedHeapPointerVar = no },
        { code_util__goal_may_allocate_heap(Goal) }
    ->
        % Generate code to save the heap pointer
        new_saved_hp_var(SavedHeapPointerVar),
        gen_mark_hp(SavedHeapPointerVar, Context, MarkHeapPointerGoal),
        % Recursively handle the remaining disjuncts
        disj_add_heap_ops(Goals0, no, yes(SavedHeapPointerVar),
            DisjGoalInfo, Goals1),
        % Put this disjunct and the remaining disjuncts in a
        % nested disjunction, so that the heap pointer variable
        % can scope over these disjuncts
        { Disj = disj([Goal | Goals1]) - DisjGoalInfo },
        { DisjGoals = [conj([MarkHeapPointerGoal, Disj]) -
            DisjGoalInfo] }
    ;
        % Just recursively handle the remaining disjuncts
        disj_add_heap_ops(Goals0, no, MaybeSavedHeapPointerVar,
            DisjGoalInfo, Goals),
        { DisjGoals = [Goal | Goals] }
    ).

:- pred cases_add_heap_ops(list(case)::in, list(case)::out,
        heap_ops_info::in, heap_ops_info::out) is det.

cases_add_heap_ops([], []) --> [].
cases_add_heap_ops([Case0 | Cases0], [Case | Cases]) -->
    { Case0 = case(ConsId, Goal0) },
    { Case = case(ConsId, Goal) },
    goal_add_heap_ops(Goal0, Goal),
    cases_add_heap_ops(Cases0, Cases).

%-----------------------------------------------------------------------------%
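
    % Generate a goal that makes an impure call to private_builtin__mark_hp,
    % saving the current value of the heap pointer in the given variable.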
:- pred gen_mark_hp(prog_var::in, prog_context::in, hlds_goal::out,
        heap_ops_info::in, heap_ops_info::out) is det.

gen_mark_hp(SavedHeapPointerVar, Context, MarkHeapPointerGoal) -->
    ModuleInfo =^ module_info,
    { generate_call("mark_hp", [SavedHeapPointerVar],
        det, yes(impure),
        [SavedHeapPointerVar - ground_inst],
        ModuleInfo, Context, MarkHeapPointerGoal) }.
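
    % Generate a goal that makes an impure call to
    % private_builtin__restore_hp, resetting the heap pointer to the value
    % previously saved in the given variable.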
:- pred gen_restore_hp(prog_var::in, prog_context::in, hlds_goal::out,
        heap_ops_info::in, heap_ops_info::out) is det.

gen_restore_hp(SavedHeapPointerVar, Context, RestoreHeapPointerGoal) -->
    ModuleInfo =^ module_info,
    { generate_call("restore_hp", [SavedHeapPointerVar],
        det, yes(impure), [],
        ModuleInfo, Context, RestoreHeapPointerGoal) }.

:- func ground_inst = (inst).

ground_inst = ground(unique, none).

%-----------------------------------------------------------------------------%
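
    % Create a fresh variable to hold a saved value of the heap pointer.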
:- pred new_saved_hp_var(prog_var::out,
        heap_ops_info::in, heap_ops_info::out) is det.

new_saved_hp_var(Var) -->
    new_var("HeapPointer", heap_pointer_type, Var).
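
    % Create a fresh, named variable of the given type, recording its name
    % and type in the varset and vartypes we are threading through.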
:- pred new_var(string::in, (type)::in, prog_var::out,
        heap_ops_info::in, heap_ops_info::out) is det.

new_var(Name, Type, Var, TOI0, TOI) :-
    VarSet0 = TOI0 ^ varset,
    VarTypes0 = TOI0 ^ var_types,
    varset__new_named_var(VarSet0, Name, Var, VarSet),
    map__det_insert(VarTypes0, Var, Type, VarTypes),
    TOI = ((TOI0 ^ varset := VarSet)
        ^ var_types := VarTypes).

%-----------------------------------------------------------------------------%
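
    % Generate a call to the named predicate from the private_builtin module,
    % passing through the given argument variables, determinism, optional
    % goal feature and argument insts.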
:- pred generate_call(string::in, list(prog_var)::in, determinism::in,
        maybe(goal_feature)::in, assoc_list(prog_var, inst)::in,
        module_info::in, term__context::in, hlds_goal::out) is det.

generate_call(PredName, Args, Detism, MaybeFeature, InstMap, Module, Context,
        CallGoal) :-
    mercury_private_builtin_module(BuiltinModule),
    goal_util__generate_simple_call(BuiltinModule, PredName, Args, Detism,
        MaybeFeature, InstMap, Module, Context, CallGoal).

%-----------------------------------------------------------------------------%