compiler/make_hlds_passes.m:
We used to add all modules imported by an ancestor of the current module
to the set of used modules. Once upon a time, this was meant to stop
the compiler generating misleading warnings about imports being unused
when the import wasn't even done by the current module. However, since
we introduced structured representations of import- and use_module
declarations and taught unused_imports.m to use them, that has not been
an issue. Yet a bad side-effect remained: if a module A imported
a module B but did not use it, or imported module B in its interface
but did not use it in its interface, then any warning we could generate
about that import being unused was suppressed by any import of module B
in any of module A's ancestors.
(This is the "shadowing" referred to below.)
Fix the problem by adding modules imported by ancestors of the
current module NOT to the set of used modules, but to a new field
in the module_info.
compiler/hlds_module.m:
Add this new field. As it happens, it is not needed right now,
but it may be needed later.
Update some documentation.
Note an only-tangentially-related problem.
compiler/unused_imports.m:
Fix a bug that was hiding behind the shadowing, which was that the text
of the warning message we generated for an unused local import- or
use_module declaration could be affected by the presence of an import-
or use_module declaration in an ancestor module.
Improve debugging infrastructure.
Make a predicate name more descriptive.
NEWS:
Announce the bugfix.
compiler/add_pragma_tabling.m:
compiler/add_solver.m:
compiler/add_type.m:
compiler/parse_string_format.m:
compiler/recompilation.usage.m:
compiler/recompilation.used_file.m:
library/io.call_system.m:
library/io.text_read.m:
library/random.sfc32.m:
library/random.sfc64.m:
library/random.system_rng.m:
library/string.parse_runtime.m:
library/string.parse_util.m:
library/string.to_string.m:
library/thread.closeable_channel.m:
mdbcomp/feedback.automatic_parallelism.m:
Delete imports that the fixed compiler now generates unused import
warnings for.
Discussion of these changes can be found on the Mercury developers
mailing list archives from June 2018.
COPYING.LIB:
Add a special linking exception to the LGPL.
*:
Update references to COPYING.LIB.
Clean up some minor errors that have accumulated in copyright
messages.
mdbcomp/builtin_modules.m:
mdbcomp/feedback.automatic_parallelism.m:
mdbcomp/feedback.m:
mdbcomp/mdbcomp.goal_path.m:
mdbcomp/mer_mdbcomp.m:
mdbcomp/prim_data.m:
mdbcomp/program_representation.m:
mdbcomp/rtti_access.m:
mdbcomp/slice_and_dice.m:
mdbcomp/sym_name.m:
mdbcomp/trace_counts.m:
Fix inconsistencies between (a) the order in which functions and predicates
are declared, and (b) the order in which they are defined.
In most modules, either the order of the declarations or the order
of the definitions made sense, and I changed the other to match.
In some modules, neither made sense, so I changed *both* to an order
that *does* make sense (i.e. it has related predicates together).
Move one predicate that is needed in two modules from one of them
to a third module (prim_data.m), since that is the one that defines
the type for which the predicate is a utility predicate.
In some places, put dividers between groups of related
functions/predicates, to make the groups themselves more visible.
In some places, fix comments or programming style.
mdbcomp/MDBCOMP_FLAGS.in:
Since all the modules in this directory are now free from any warnings
generated by --warn-inconsistent-pred-order-clauses, specify that option
by default in this directory to keep it that way.
Fix some other issues as well that I found while fixing the spelling.
mdbcomp/feedback.automatic_parallelism.m:
deep_profiler/autopar_find_best_par.m:
deep_profiler/mdprof_create_feedback.m:
Rename the best_par_algorithm type to alg_for_finding_best_par,
since the old name was misleading. Perform the same rename for
another type based on it, and the option specifying it.
Remove the functor estimate_speedup_by_num_vars, since it hasn't
been used by anything in a long time, and won't in the future.
deep_profiler/autopar_calc_overlap.m:
deep_profiler/autopar_costs.m:
deep_profiler/autopar_reports.m:
deep_profiler/autopar_search_callgraph.m:
deep_profiler/autopar_search_goals.m:
deep_profiler/coverage.m:
deep_profiler/create_report.m:
deep_profiler/dump.m:
deep_profiler/mdprof_report_feedback.m:
deep_profiler/measurement_units.m:
deep_profiler/measurements.m:
deep_profiler/message.m:
deep_profiler/query.m:
deep_profiler/recursion_patterns.m:
deep_profiler/report.m:
deep_profiler/startup.m:
deep_profiler/var_use_analysis.m:
mdbcomp/mdbcomp.goal_path.m:
mdbcomp/program_representation.m:
Conform to the above. Fix spelling errors. In some places, improve
comments and/or variable names.
Estimated hours taken: 120
Branches: main
The algorithm that decides whether the order independent state update
transformation is applicable in a given module needs access to the list
of oisu pragmas in that module, and to information about the types
of variables in the procedures named in those pragmas. This diff
puts this information in Deep.procrep files, to make them available
to the autoparallelization feedback program, to which that algorithm
will later be added.
Compilers that have this diff will generate Deep.procrep files in a new,
slightly different format, but the deep profiler will be able to read
Deep.procrep files not just in the new format, but in the old format as well.
runtime/mercury_stack_layout.h:
Add to module layout structures the fields holding the new information
we want to put into Deep.procrep files. This means three things:
- a bytecode array in module layout structures encoding the list
of oisu pragmas in the module;
- additions to the bytecode arrays in procedure layout structures
mapping the procedure's variables to their types; and
- a bytecode array containing the encoded versions of those types
themselves in the module layout structure. This allows us to
represent each type used in the module just once.
Since there is now information in module layout structures that
is needed only for deep profiling, as well as information that is
needed only for debugging, the old arrangement that split a module's
information between two structures, MR_ModuleLayout (debug specific
info) and MR_ModuleCommonLayout (info used by both debugging and
profiling), is no longer appropriate. We could add a third structure
containing profiling-specific info, but it is simpler to move
all the info into just one structure, some of whose fields
may not be used. This wastes only a few words of memory per module,
and allows the runtime system to avoid unnecessary indirections.
runtime/mercury_types.h:
Remove the type synonym for the deleted type.
runtime/mercury_grade.h:
The change in mercury_stack_layout.h destroys binary compatibility
with previous versions of Mercury for debug and deep profiling grades,
so bump their grade-component-specific version numbers.
runtime/mercury_deep_profiling.c:
Write out the information in the new fields in module layout
structures, if they are filled in.
Since this changes the format of the Deep.procrep file, bump
its version number.
runtime/mercury_deep_profiling.h:
runtime/mercury_stack_layout.c:
Conform to the change to mercury_stack_layout.h.
mdbcomp/program_representation.m:
Add to module representations information about the oisu pragmas
defined in that module, and the type table of the module.
Optionally add to procedure representations a map mapping
the variables of the procedure to their types.
Rename the old var_table type to be the var_name_table type,
since it contains just names. Make the var to type map separate,
since it will be there only for selected procedures.
Modify the predicates reading in module and procedure representations
to allow them to read in the new representation, while still accepting
the old one. Use the version number in the Deep.procrep file to decide
which format to expect.
mdbcomp/rtti_access.m:
Add functions to encode the data representations that this module
also decodes.
Conform to the changes above.
mdbcomp/feedback.automatic_parallelism.m:
Conform to the changes above.
mdbcomp/prim_data.m:
Fix layout.
compiler/layout.m:
Update the compiler's representation of layout structures
to conform to the change to runtime/mercury_stack_layout.h.
compiler/layout_out.m:
Output the new parts of module layout structures.
compiler/opt_debug.m:
Allow the debugging of code referring to the new parts of
module layout structures.
compiler/llds_out_file.m:
Conform to the move to a single module layout structure.
compiler/prog_rep_tables.m:
This new module provides mechanisms for building the string table
and the type table components of module layouts. The string table
part is old (it was moved here from stack_layout.m); the type table
part is new.
Putting this code in a module of its own allows us to remove
a circular dependency between prog_rep.m and stack_layout.m;
instead, both now just depend on prog_rep_tables.m.
compiler/ll_backend.m:
Add the new module.
compiler/notes/compiler_design.html:
Describe the new module.
compiler/prog_rep.m:
When generating the representation of a module for deep profiling,
include the information needed by the order independent state update
analysis: the list of oisu pragmas in the module, if any, and
information about the types of variables in selected procedures.
To avoid having these additions increase the size of the bytecode
representation too much, convert some fixed 32 bit numbers in the
bytecode to use variable sized numbers, which will usually be 8 or 16
bits.
Do not use predicates from bytecode_gen.m to encode numbers,
since there is nothing keeping these in sync with the code that
reads them in mdbcomp/program_representation.m. Instead, use
new predicates in program_representation.m itself.
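The conversion to variable-sized numbers mentioned above is, in general
form, a standard technique. As an illustration only (this log does not
give the actual byte layout used by prog_rep.m and
program_representation.m), here is a sketch of the usual scheme, in which
each byte carries 7 data bits plus a continuation flag, so that small
numbers need only one or two bytes instead of a fixed four:

```python
def encode_uint(n):
    """Encode a non-negative integer using 7 data bits per byte.

    The high bit of each byte is a continuation flag, so small numbers
    (the common case) take only one or two bytes.
    """
    out = bytearray()
    while True:
        byte = n & 0x7F
        n >>= 7
        if n:
            out.append(byte | 0x80)  # more bytes follow
        else:
            out.append(byte)         # final byte
            return bytes(out)

def decode_uint(data, pos):
    """Decode an integer encoded by encode_uint; return (value, next_pos)."""
    value = 0
    shift = 0
    while True:
        byte = data[pos]
        pos += 1
        value |= (byte & 0x7F) << shift
        shift += 7
        if not (byte & 0x80):
            return value, pos

print(len(encode_uint(5)))    # prints 1: one byte instead of a fixed four
print(len(encode_uint(300)))  # prints 2
print(decode_uint(encode_uint(300), 0))  # prints (300, 2)
```

The point of keeping encoder and decoder side by side is the same one the
log makes about bytecode_gen.m: the two halves stay in sync only if they
live together.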
compiler/stack_layout.m:
Generate the new parts of module layouts.
Remove the code moved to prog_rep_tables.m.
compiler/continuation_info.m:
compiler/proc_gen.m:
Make some more information available to stack_layout.m.
compiler/prog_data.m:
Fix some formatting.
compiler/introduce_parallelism.m:
Conform to the renaming of the var_table type.
compiler/follow_code.m:
Fix the bug that used to cause the failure of the
hard_coded/mode_check_clauses test case in deep profiling grades.
deep_profiler/program_representation_utils.m:
Output the new parts of module and procedure representations,
to allow the correctness of this change to be tested.
deep_profiler/mdprof_create_feedback.m:
If we cannot read the Deep.procrep file, print a single error message
and exit, instead of continuing with an analysis that will generate
a whole bunch of error messages, one for each attempt to access
a procedure's representation.
deep_profiler/mdprof_procrep.m:
Give this program an option that specifies what file it is to
look at; do not hardwire in "Deep.procrep" in the current directory.
deep_profiler/report.m:
Add a report type that just prints the representation of a module.
It returns the same information as mdprof_procrep, but from within
the deep profiler, which can be more convenient.
deep_profiler/create_report.m:
deep_profiler/display_report.m:
Respectively create and display the new report type.
deep_profiler/query.m:
Recognize a query asking for the new report type.
deep_profiler/autopar_calc_overlap.m:
deep_profiler/autopar_find_best_par.m:
deep_profiler/autopar_reports.m:
deep_profiler/autopar_search_callgraph.m:
deep_profiler/autopar_search_goals.m:
deep_profiler/autopar_types.m:
deep_profiler/branch_and_bound.m:
deep_profiler/coverage.m:
deep_profiler/display.m:
deep_profiler/html_format.m:
deep_profiler/mdprof_test.m:
deep_profiler/measurements.m:
deep_profiler/query.m:
deep_profiler/read_profile.m:
deep_profiler/recursion_patterns.m:
deep_profiler/top_procs.m:
Conform to the changes above.
Fix layout.
tests/debugger/declarative/dependency.exp2:
Add this file as a possible expected output. It contains the new
field added to module representations.
Estimated hours taken: 1
Branches: main
compiler/polymorphism.m:
When looking up slots in typeclassinfos, we need variables to hold
the values of the indexes of the slots. If possible, do not generate
a new variable for this: instead, reuse an existing integer constant
previously generated by the polymorphism transformation.
Make all the parts of the polymorphism transformation that need
variables holding integer constants use the same mechanism to create
them.
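The reuse scheme described above amounts to caching: keep a map from
integer constant to the variable already holding it, and create a new
variable (plus the goal constructing it) only on a cache miss. A minimal
Python sketch, using hypothetical names rather than the compiler's actual
identifiers:

```python
class IntConstVars:
    """Hypothetical cache of variables holding integer constants."""

    def __init__(self):
        self.cache = {}      # int constant -> variable name
        self.counter = 0
        self.new_goals = []  # construction goals for newly made variables

    def var_for_const(self, n):
        if n in self.cache:
            return self.cache[n]          # reuse the existing variable
        self.counter += 1
        var = f"IntConst{self.counter}"
        self.cache[n] = var
        self.new_goals.append((var, n))   # record "var := n"
        return var

pool = IntConstVars()
a = pool.var_for_const(3)
b = pool.var_for_const(3)   # same slot index: no new variable or goal
assert a == b and len(pool.new_goals) == 1
```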
compiler/add_pragma.m:
compiler/analysis.file.m:
compiler/make.dependencies.m:
compiler/make.module_dep_file.m:
compiler/make.module_target.m:
compiler/recompilation.usage.m:
compiler/recompilation.version.m:
compiler/structure_reuse.direct.choose_reuse.m:
library/bit_buffer.read.m:
mdbcomp/feedback.automatic_parallelism.m:
Add a bunch of imports. They import modules that are already imported
by the relevant module's ancestors, but my compiler gives me errors
unless they are duplicated here.
Estimated hours taken: 20
Branches: main
Change the types that represent forward and reverse goal paths from being
wrappers around lists of steps, to being full discriminated union types.
This is meant to accomplish two objectives.
First, since taking the wrappers off and putting them back on is inconvenient,
code often dealt with naked lists of steps, with the meaning of those steps
sometimes being unclear.
Second, in a future change I intend to change the way the debugger represents
goal paths from being strings to being statically allocated terms of the
reverse_goal_path type. This should have two benefits. One is reduced memory
consumption, since two different goal path strings cannot share memory
but two different reverse goal paths can share the memory containing their
common tail (the goal path steps near the root). The other is that the
declarative debugger won't need to do any conversion from string to structure,
and should therefore be faster.
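The memory-sharing argument above rests on reverse goal paths being cons
lists whose tail is the parent goal's path, so all descendants of a goal
share one copy of that path. A minimal illustration of the sharing:

```python
# A reverse goal path as a cons list: (step, parent_path).
# The tail of each cell is the parent's entire path, shared by reference.

Root = None  # empty path: the procedure body

def extend(parent_path, step):
    """Build a child's reverse path; the tail is the shared parent path."""
    return (step, parent_path)

body = Root
conj1 = extend(body, "c1")   # first conjunct of the body
conj2 = extend(body, "c2")   # second conjunct
deep = extend(conj1, "d1")   # subgoal inside the first conjunct

# deep's tail IS conj1: one shared cell, not a duplicated string prefix.
assert deep[1] is conj1
```

Two goal path *strings* with a common prefix must each store that prefix;
the cons cells here store it once.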
Having the compiler generate static terms of the reverse_goal_path type into
the .c files it generates for every Mercury program being compiled with
debugging requires it to have access to the definition of that type and all
its components. The best way to do this is to put all those types into a new
builtin module in the library (a debugging equivalent of e.g.
profiling_builtin.m). We cannot put the definition of the list type into
that module without causing considerable backward incompatibilities.
mdbcomp/mdbcomp.goal_path.m:
Make the change described above.
Add some more predicates implementing abstract operations on goal
paths.
browser/declarative_tree.m:
compiler/goal_path.m:
compiler/goal_util.m:
compiler/hlds_goal.m:
compiler/introduce_parallelism.m:
compiler/mode_ordering.m:
compiler/push_goals_together.m:
compiler/rbmm.condition_renaming.m:
compiler/trace_gen.m:
compiler/tupling.m:
compiler/unneeded_code.m:
deep_profiler/autopar_costs.m:
deep_profiler/autopar_reports.m:
deep_profiler/autopar_search_callgraph.m:
deep_profiler/autopar_search_goals.m:
deep_profiler/create_report.m:
deep_profiler/message.m:
deep_profiler/program_representation_utils.m:
deep_profiler/read_profile.m:
deep_profiler/recursion_patterns.m:
deep_profiler/var_use_analysis.m:
Conform to the change in representation. In some cases, remove
predicates whose only job was to manipulate wrappers. In others,
replace concrete operations on lists of steps with abstract operations
on goal paths.
compiler/mode_constraints.m:
Comment out some code that I do not understand, which I think never
worked (not surprising, since the whole module has never been
operational).
mdbcomp/slice_and_dice.m:
Since this diff changes the types representing goal paths, it also
changes their default ordering, as implemented by builtin.compare.
When ordering slices and dices by goal paths, make the ordering
explicitly work on the forward goal path, since ordering by the
reverse goal path (the actual data being used) gives nonintuitive
results.
library/list.m:
Speed up some code.
mdbcomp/feedback.automatic_parallelism.m:
Fix some formatting.
Branches: main
Change the argument order of many of the predicates in the map, bimap, and
multi_map modules so they are more conducive to the use of state variable
notation, i.e. make the order the same as in the sv* modules.
Prepare for the deprecation of the sv{bimap,map,multi_map} modules by
removing their use throughout the system.
library/bimap.m:
library/map.m:
library/multi_map.m:
As above.
NEWS:
Announce the change.
Separate out the "highlights" from the "detailed listing" for
the post-11.01 NEWS.
Reorganise the announcement of the Unicode support.
benchmarks/*/*.m:
browser/*.m:
compiler/*.m:
deep_profiler/*.m:
extras/*/*.m:
mdbcomp/*.m:
profiler/*.m:
tests/*/*.m:
ssdb/*.m:
samples/*/*.m
slice/*.m:
Conform to the above change.
Remove any dependencies on the sv{bimap,map,multi_map} modules.
Estimated hours taken: 3
Branches: main
Fix some usability issues with mdprof_feedback.
deep_profiler/mdprof_feedback.m:
If the user does not ask for any info to be put into the feedback file,
report this as an error, instead of taking a long time to process the
Deep.data file and then generating a feedback file containing no
information.
Remove obsolete options.
Give the still-useful options names that don't take up half a line.
Their old names are still preserved for backward compatibility.
Add an option for specifying a speedup threshold: do not apply a
parallelization if its predicted speedup is less than this threshold.
Report errors by printing error messages, not by throwing exceptions.
Preface error messages with the program name, so users know where they
come from.
mdbcomp/feedback.automatic_parallelism.m:
Add a field specifying the speedup threshold to the autopar options
structure.
Change the representation of the parallelise_dep_conjs type to make
clearer what it means.
deep_profiler/autopar_search_goals.m:
Apply the speedup threshold.
deep_profiler/autopar_calc_overlap.m:
deep_profiler/autopar_find_best_par.m:
Conform to the changes above.
conjunctions. These fixes ensure that our implementation now matches the
algorithms in our paper.
Benchmarking can now begin for the paper.
deep_profiler/mdprof_fb.automatic_parallelism.m:
Remove build_candidate_par_conjunction_maps, since it is no longer called.
Fix a bug where candidate procedures were generated with no candidate
conjunctions in them.
Fix a bug where !ConjNum was not incremented in a loop; this caused
SparkDelay to be calculated incorrectly when computing the cost of a
parallel conjunction.
Account for the cost of calling signal in the right place when calculating
the cost of a parallel conjunction.
Conform to changes in measurements.m.
deep_profiler/mdprof_feedback.m:
Add the command line option for the barrier cost during parallel execution.
deep_profiler/measurements.m:
The incomplete parallel exec metrics structure now tracks dead time due
to futures explicitly; previously, it was calculated from other values.
Conform to the parallel execution time calculations in
mdprof_fb.automatic_parallelism.m: each conjunct except the first is
delayed by SparkDelay * (ConjNum - 1).
Fix signal costs: they are now stored with the conjunct that incurred
them, rather than with the one that waited on the variable. This also
prevents them from being counted more than once.
Added support for the new parallel execution overhead 'barrier cost'.
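A worked example of the delay rule quoted above, with an illustrative (not
measured) SparkDelay value; presumably each conjunct waits for the sparks
of all preceding conjuncts to be created:

```python
SPARK_DELAY = 10  # illustrative cost units, not a measured value

def conjunct_delay(conj_num):
    """Delay of the conj_num-th conjunct: SparkDelay * (ConjNum - 1)."""
    return SPARK_DELAY * (conj_num - 1)

# The first conjunct starts immediately; later ones are staggered.
assert [conjunct_delay(n) for n in (1, 2, 3)] == [0, 10, 20]
```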
mdbcomp/feedback.automatic_parallelism.m:
Added support for the new parallel execution overhead 'barrier cost'.
Modified the parallel execution metrics so that different overheads are
accounted for separately.
Changed a comment so that it clarifies how the range of goals in the
push_goal type should be interpreted.
mdbcomp/feedback.m:
Increment feedback_version.
Estimated hours taken: 2
deep_profiler/mdprof_fb.automatic_parallelism.m:
Merge sets of push goals.
mdbcomp/feedback.automatic_parallelism.m:
Add back the field that Paul deleted, since it is useful.
that the recursive call looks cheap in cases where pushing goals is required.
deep_profiler/mdprof_fb.automatic_parallelism.m:
If pushing a goal and attempting a parallelisation fails, then return
the single costly goals to our caller, so that it can attempt to push
and parallelise these goals with its own.
Whether a goal is above the call site threshold no longer depends on
the goal type. This switch has been removed.
Add marks where pushes should be merged.
Return pushes from goal_get_conjunctions_worth_parallelising and fill in
the push goals list in the candidate procedure structure.
Pretty-print the goal push list for a candidate procedure.
mdbcomp/feedback.automatic_parallelism.m:
Remove the maybe push goal field from candidate conjunctions;
there is already a list of push goals in the candidate procedure structure.
mdbcomp/feedback.m:
Increment feedback_version.
effect that, when trying to parallelise a loop, we assume the recursive
call will execute the recursive case once, followed by the base case.
If this parallelisation is optimistic, then it is optimistic to
parallelise the whole loop.
deep_profiler/mdprof_fb.automatic_parallelism.m:
As above.
Track the containing goal map for a procedure's implicit parallelism
analysis.
deep_profiler/var_use_analysis.m:
Fix the checks for module boundaries: they were placed in the wrong places.
Handle recursive var use analysis by induction.
Move the checks for unbounded recursion in this code to places that make
more sense for the new analysis by induction.
Duplicate the variable use analysis to create a specific one for computing
variable use in the recursive and base cases.
Document this module's trace flags.
deep_profiler/measurement_utils.m:
Fix the calculation of disjuncts of probabilities.
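This log does not spell out the corrected formula. For independent
disjuncts, the standard way to combine disjunct probabilities without
exceeding 1 is the complement rule; a sketch, assuming independence
(whether the profiler makes that assumption is not stated here):

```python
def prob_disjunction(p_a, p_b):
    """P(A or B) = 1 - P(not A) * P(not B), for independent A and B."""
    return 1.0 - (1.0 - p_a) * (1.0 - p_b)

assert prob_disjunction(0.5, 0.5) == 0.75
assert prob_disjunction(1.0, 0.2) == 1.0  # never exceeds 1, unlike naive addition
```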
mdbcomp/mdbcomp.goal_path.m:
Add another version of create_goal_id_array that takes a default value for
each array slot.
mdbcomp/feedback.m:
Increment feedback_version to reflect Zoltan's push goals changes.
mdbcomp/feedback.automatic_parallelism.m:
Add a note asking people to increment feedback_version if they change any
structures here.
deep_profiler/Mercury.options:
Document var_use_analysis's trace flags.
library where it can be used by the deep profiler.
Also move the goal path code from program_representation.m to the new module,
goal_path.m in mdbcomp/
mdbcomp/goal_path.m:
New module containing goal path code.
mdbcomp/program_representation.m:
Original location of goal path code.
compiler/goal_path.m:
Move some of this goal_path code into mdbcomp/goal_path.m
mdbcomp/feedback.automatic_parallelism.m:
mdbcomp/rtti_access.m:
mdbcomp/slice_and_dice.m:
mdbcomp/trace_counts.m:
browser/debugger_interface.m:
browser/declarative_execution.m:
browser/declarative_tree.m:
compiler/build_mode_constraints.m:
compiler/call_gen.m:
compiler/code_info.m:
compiler/continuation_info.m:
compiler/coverage_profiling.m:
compiler/deep_profiling.m:
compiler/format_call.m:
compiler/goal_path.m:
compiler/goal_util.m:
compiler/hlds_data.m:
compiler/hlds_goal.m:
compiler/hlds_out_goal.m:
compiler/hlds_out_pred.m:
compiler/hlds_pred.m:
compiler/interval.m:
compiler/introduce_parallelism.m:
compiler/layout_out.m:
compiler/llds.m:
compiler/mode_constraint_robdd.m:
compiler/mode_constraints.m:
compiler/mode_ordering.m:
compiler/ordering_mode_constraints.m:
compiler/polymorphism.m:
compiler/post_typecheck.m:
compiler/prog_rep.m:
compiler/prop_mode_constraints.m:
compiler/push_goals_together.m:
compiler/rbmm.condition_renaming.m:
compiler/smm_common.m:
compiler/stack_layout.m:
compiler/stack_opt.m:
compiler/trace_gen.m:
compiler/tupling.m:
compiler/type_constraints.m:
compiler/typecheck.m:
compiler/unify_gen.m:
compiler/unneeded_code.m:
deep_profiler/Mmakefile:
deep_profiler/analysis_utils.m:
deep_profiler/coverage.m:
deep_profiler/create_report.m:
deep_profiler/display_report.m:
deep_profiler/dump.m:
deep_profiler/mdprof_fb.automatic_parallelism.m:
deep_profiler/message.m:
deep_profiler/old_query.m:
deep_profiler/profile.m:
deep_profiler/program_representation_utils.m:
deep_profiler/read_profile.m:
deep_profiler/recursion_patterns.m:
deep_profiler/report.m:
deep_profiler/var_use_analysis.m:
slice/Mmakefile:
slice/mcov.m:
Conform to the move of the goal path code.
This feedback information is part of automatic parallelisation feedback. It
describes cases where goals after a branch goal but in the same conjunction
should be pushed into the branches of the branching goal. This can allow the
pushed goal to be parallelised against goals that already exist in one or more
arms of the branch goal without parallelising the whole branch goal.
This change simply creates the data structures within the feedback framework on
which this feature will be based.
mdbcomp/feedback.automatic_parallelism.m:
Introduce new push_goal structure that describes the transformation.
mdbcomp/feedback.m:
Incremented feedback format version number.
deep_profiler/mdprof_fb.automatic_parallelism.m:
compiler/implicit_parallelism.m:
Conform to changes in feedback.automatic_parallelism.m.
The code to generate or use this feedback has not been implemented;
that will come later.
Estimated hours taken: 80
Branches: main
The existing representation of goal_paths is suboptimal for several reasons.
- Sometimes we need forward goal paths (e.g. to look up goals), and sometimes
we need reverse goal paths (e.g. when computing goal paths in the first
place). We had two types for them, but
- their names, goal_path and goal_path_consable, were not expressive, and
- we could store only one of them in goal_infos.
- Testing whether goal A is a subgoal of goal B is quite error-prone using
either form of goal paths.
- Using a goal path as a key in a map, which several compiler passes want to
do, requires lots of expensive comparisons.
This diff replaces most uses of goal paths with goal ids. A goal id is an
integer, so it can be used as a key in faster maps, or even in arrays.
Every goal in the body of a procedure gets its id allocated in a depth-first
search. Since we process each goal before we dive into its descendants,
the goal representing the whole body of a procedure always gets goal id 0.
The depth-first traversal also builds up a map (the containing goal map)
that tells us the parent goal of every subgoal, with the obvious exception
of the root goal itself. From the containing goal map, one can compute
both reverse and forward goal paths. It can also serve as the basis of an
efficient test of whether the goal identified by goal id A is an ancestor
of another goal identified by goal id B. We don't yet use this test,
but I expect we will in the future.
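The scheme described above (depth-first numbering, a containing goal map,
and path/ancestor computations derived from it) can be sketched as
follows; this is an illustration, not the compiler's actual code:

```python
def number_goals(goal, containing=None, parent=None, counter=None):
    """goal is (step_from_parent, [subgoals]); return the containing goal map.

    Ids are allocated in depth-first order, so the root body gets id 0.
    """
    if containing is None:
        containing, counter = {}, [0]
    goal_id = counter[0]
    counter[0] += 1
    step, subgoals = goal
    containing[goal_id] = (parent, step)   # parent is None for the root
    for sub in subgoals:
        number_goals(sub, containing, goal_id, counter)
    return containing

def reverse_path(containing, goal_id):
    """Steps from the goal back up to the root."""
    path = []
    while containing[goal_id][0] is not None:
        parent, step = containing[goal_id]
        path.append(step)
        goal_id = parent
    return path

def is_ancestor(containing, a, b):
    """Is the goal with id a an ancestor of the goal with id b?"""
    while containing[b][0] is not None:
        b = containing[b][0]
        if b == a:
            return True
    return False

# A body that is a conjunction of two goals; the second has one subgoal.
body = (None, [("c1", []), ("c2", [("d1", [])])])
cmap = number_goals(body)
# Depth-first ids: 0 = body, 1 = c1, 2 = c2, 3 = d1.
assert reverse_path(cmap, 3) == ["d1", "c2"]
assert is_ancestor(cmap, 2, 3) and not is_ancestor(cmap, 1, 3)
```

Forward goal paths are just these reverse paths reversed, which is why one
map supports both.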
mdbcomp/program_representation.m:
Add the goal_id type.
Replace the existing goal_path and goal_path_consable types
with two new types, forward_goal_path and reverse_goal_path.
Since these now have wrappers around the list of goal path steps
that identify each kind of goal path, it is now ok to expose their
representations. This makes several compiler passes easier to code.
Update the set of operations on goal paths to work on the new data
structures.
Add a couple of step types to represent lambda and try goals.
Their previous omission would have been a bug for constraint-based
mode analysis, or for any other compiler pass that runs before the
expansion of lambda and try goals and wants to use goal paths to
identify subgoals.
browser/declarative_tree.m:
mdbcomp/rtti_access.m:
mdbcomp/slice_and_dice.m:
mdbcomp/trace_counts.m:
slice/mcov.m:
deep_profiler/*.m:
Conform to the changes in goal path representation.
compiler/hlds_goal.m:
Replace the goal_path field with a goal_id field in the goal_info,
indicating that from now on, this should be used to identify goals.
Keep a reverse_goal_path field in the goal_info for use by RBMM and
CTGC. Those analyses were too hard to convert to using goal_ids,
especially since RBMM uses goal_paths to identify goals in multi-pass
algorithms that should be one-pass and should not NEED to identify
any goals for later processing.
compiler/goal_path.m:
Add predicates to fill in goal_ids, and update the predicates
filling in the now deprecated reverse goal path fields.
Add the operations needed by the rest of the compiler
on goal ids and containing goal maps.
Remove the option to set goal paths using "mode equivalent steps".
Constraint based mode analysis now uses goal ids, and can now
do its own equivalent optimization quite simply.
Move the goal_path module from the check_hlds package to the hlds
package.
compiler/*.m:
Conform to the changes in goal path representation.
Most modules now use goal_ids to identify goals, and use a containing
goal map to convert the goal ids to goal paths when needed.
However, the ctgc and rbmm modules still use (reverse) goal paths.
library/digraph.m:
library/group.m:
library/injection.m:
library/pprint.m:
library/pretty_printer.m:
library/term_to_xml.m:
Minor style improvements.
Estimated hours taken: 2
Branches: main
Add the predicates sorry, unexpected and expect to library/error.m.
compiler/compiler_util.m:
library/error.m:
Move the predicates sorry, unexpected and expect from compiler_util
to error.
Put the predicates in error.m into the same order as their
declarations.
compiler/*.m:
Change imports as needed.
compiler/lp.m:
compiler/lp_rational.m:
Change imports as needed, and some minor cleanups.
deep_profiler/*.m:
Switch to using the new library predicates, instead of calling error
directly. Some other minor cleanups.
NEWS:
Mention the new predicates in the standard library.
a conjunction. Now (by default) the search will stop creating new choice
points if it has already created too many.
deep_profiler/mdprof_fb.automatic_parallelism.m:
Fix a large number of whitespace problems, such as trailing whitespace at
the end of lines.
Never attempt to parallelise goals that aren't det or cc_multi.
Remove the original greedy search; it is now an option in the branch and
bound search code. Note that the greedy search algorithm has changed, and
sacrifices more solutions for runtime than before.
Note that there are bugs remaining in a few cases, causing incorrect
parallel execution times to be calculated for dependent parallelisations.
deep_profiler/mdprof_feedback.m:
Conform to changes in mdbcomp/feedback.automatic_parallelism.m.
Update parsing of options for the choice of best parallelisation algorithm.
deep_profiler/branch_and_bound.m:
Allow branch and bound code to track how many 'alternatives' have been
created and alter the search in response to this.
Branch and bound code must now be impure as it may call these impure
predicates.
Flush the output stream in debugging trace goals for branch and bound.
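The alternative-counting idea described above can be sketched as follows.
This is a hedged illustration with invented names; the real code's search
and bounding are more involved:

```python
def search(children_of, score, node, limit, opened=None):
    """Return the best-scoring leaf reachable from node.

    Counts how many alternatives have been opened so far; once the count
    passes the limit, stop branching and follow only the locally best
    child (i.e. degrade to a greedy search).
    """
    if opened is None:
        opened = [0]            # mutable counter shared across the search
    kids = children_of(node)
    if not kids:
        return node
    if opened[0] >= limit:
        kids = [max(kids, key=score)]   # too many alternatives: go greedy
    opened[0] += len(kids)
    return max((search(children_of, score, k, limit, opened) for k in kids),
               key=score)

# Tiny example: binary strings of length 3, scored by their binary value.
children = lambda s: [] if len(s) == 3 else [s + "0", s + "1"]
value = lambda s: int(s, 2) if s else 0
assert search(children, value, "", limit=100) == "111"  # full search
assert search(children, value, "", limit=0) == "111"    # greedy also finds it
```

In this toy problem the greedy choice is always correct; in general the
limit trades solution quality for runtime, as the log notes.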
deep_profiler/measurements.m:
Adjust the interface to the parallelisation metrics structure, so that it
is easier to use with the new parallelisation search code.
Changes to the goal costs code:
Rename zero_goal_cost to dead_goal_cost; it is the cost of goals that
are never executed.
Modify atomic_goal_cost to take as a parameter the number of calls made to
this goal.
add_goal_costs has been renamed to add_goal_costs_seq since it computes
the cost of a sequential conjunction of goals.
The goal_cost_csq type has changed to track the number of calls made to
trivial goals.
deep_profiler/message.m:
Added a notice message to be used when the candidate parallel conjunction
is not det or cc_multi.
mdbcomp/feedback.automatic_parallelism.m:
Modify the alternatives for 'best parallelisation algorithm'.
This type now represents the new ways of selecting complete vs greedy
algorithms.
mdbcomp/program_representation.m:
Add a multi-moded detism_components/3 predicate and refactor
detism_get_solutions/1 and detism_get_can_fail/1 to call it.
Add a multi-moded detism_committed_choice/2 predicate and a
committed_choice type.
Fix whitespace errors in this file.
library/array.m:
Modify fetch_items/4 to do bounds checking. This change helped me track
down a bug.
waits for futures across module boundaries, which is usually true.
Add a new option to the feedback tool
--implicit-parallelism-intermodule-var-use. This option re-enables the old
behaviour.
Fix a number of bugs and improve the pretty-printing of candidate parallel
conjunctions.
deep_profiler/var_use_analysis.m:
Implement the new behaviour and allow it to be controlled.
Refactor some code to slightly reduce the number of arguments passed to
predicates.
deep_profiler/mdprof_feedback.m:
Implement the new command line option.
Conform to changes in feedback.automatic_parallelism.m.
deep_profiler/recursion_patterns.m:
Fixed a bug in the handling of can-fail switches.
deep_profiler/mdprof_fb.automatic_parallelism.m:
Fix a bug in the calculation of dependency graphs. All goals are
represented by vertexes and dependencies are edges. The program failed to
create a vertex for a goal that had no edges.
Fix a crash when trying to compute variable use information for a goal that
is never called. This was triggered by providing the new variable use
information in the feedback format.
Use the extra feedback information to improve the pretty-printing of
candidate parallelisations.
Conform to changes in feedback.automatic_parallelism.m.
Conform to changes in var_use_analysis.m.
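The dependency-graph bug fixed above is easy to reproduce in miniature. This
is an illustrative Python sketch of the failure mode, not the Mercury code:
building a graph only from its edges silently drops any goal that shares no
variables with the rest of the conjunction.

```python
def build_graph_buggy(deps):
    # Builds the graph only from dependency edges; a goal with no
    # dependencies never gets a vertex, which was the bug.
    graph = {}
    for producer, consumer in deps:
        graph.setdefault(producer, set()).add(consumer)
        graph.setdefault(consumer, set())
    return graph

def build_graph_fixed(goals, deps):
    # Create a vertex for every goal up front, then add the edges.
    graph = {goal: set() for goal in goals}
    for producer, consumer in deps:
        graph[producer].add(consumer)
    return graph

goals = ["g1", "g2", "g3"]   # g3 depends on nothing and nothing depends on it
deps = [("g1", "g2")]
```

With the buggy builder, "g3" simply vanishes from the graph, so any later
traversal never considers it for parallelisation.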
mdbcomp/feedback.automatic_parallelism.m:
Add the new option to control intermodule variable use analysis.
Provided more information in the candidate parallel conjunctions feedback.
The costs of the goals before and after the parallel conjunction are
now provided.
The cost of every goal is now provided (not just calls).
Variable production and consumption times of the shared variables are
provided for each goal if the analysis evaluated them.
Modified convert_candidate_par_conjunctions_proc/3 and
convert_candidate_par_conjunction/3 to pass a reference to the current
parallel conjunction to their higher order argument.
mdbcomp/feedback.m:
Increment feedback file version number.
deep_profiler/program_representation_utils.m:
Improve the pretty-printing of goal representations, in particular, their
annotations.
deep_profiler/create_report.m:
Conform to changes in var_use_analysis.m.
deep_profiler/display_report.m:
Conform to changes in program_representation_utils.m.
library/lazy.m:
Added a new predicate, read_if_val(Lazy, Value), which is true if Lazy has
already been forced and produced Value.
(No update to NEWS necessary).
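A rough Python analogue of the new read_if_val/2 predicate: it yields the
value only if the lazy cell has already been forced, and crucially never
forces the computation itself. The class and method names are illustrative;
the Mercury version is a semidet predicate on the lazy(T) type.

```python
class Lazy:
    """Minimal lazy cell: a thunk that is evaluated at most once."""

    def __init__(self, thunk):
        self._thunk = thunk
        self._forced = False
        self._value = None

    def force(self):
        # Evaluate the thunk on first use and cache the result.
        if not self._forced:
            self._value = self._thunk()
            self._forced = True
        return self._value

    def read_if_val(self):
        # Succeed (return the value) only if already forced; never
        # trigger evaluation. Models the semidet read_if_val/2.
        return (True, self._value) if self._forced else (False, None)
```

This lets analysis code peek at lazily computed results without paying for
computations that were never needed.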
mdbcomp/feedback.automatic_parallelism.m:
Remove the concept of 'partitions' from the candidate parallel conjunction
type. We no longer divide conjunctions into partitions before
parallelising them.
mdbcomp/feedback.m:
Increment the feedback format version number.
compiler/implicit_parallelism.m:
Conform to changes in mdbcomp/feedback.automatic_parallelism.m.
deep_profiler/mdprof_fb.automatic_parallelism.m:
Allow non-atomic goals to be parallelised against one another.
Modify the goal annotations used internally; many annotations previously
used only for calls are now used for any goal type.
Variable use information is now stored in a map from variable name to lazy
use data for every goal, not just for the arguments of calls.
Do not partition conjunctions before attempting to parallelise them.
Make adjust_time_for_waits more tolerant of floating point errors.
Format costs with commas and, in most cases, two decimal places.
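The cost formatting described above corresponds to something like the
following Python sketch (the function name is illustrative; the Mercury code
does its own string formatting): thousands separated by commas and, in most
cases, two decimal places.

```python
def format_cost(cost):
    # Comma as thousands separator, rounded to two decimal places.
    return "{:,.2f}".format(cost)
```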
deep_profiler/var_use_analysis.m:
Export a new predicate var_first_use that computes the first use of a
variable within a goal. This predicate uses a new typeclass to retrieve
coverage data from any goal that can implement the typeclass.
deep_profiler/measurements.m:
Added a new abstract type for measuring the cost of a goal, goal_cost_csq.
This is like cs_cost_csq except that it can represent trivial goals (which
don't have a call count).
deep_profiler/coverage.m:
Added deterministic versions of the get_coverage_* predicates.
deep_profiler/program_representation_utils.m:
Made initial_inst_map more generic in its type signature.
Add a new predicate, atomic_goal_is_call/2, which can be used instead of a
large switch on an atomic_goal_rep value.
deep_profiler/message.m:
Rename a message type to make it more general; this is required now that we
compute variable use information for arbitrary goals, not just calls.
library/list.m:
Add map3_foldl.
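A hedged sketch of what list.map3_foldl does, rendered in Python rather than
Mercury: map a function that produces three outputs over a list while
threading an accumulator, yielding three result lists plus the final
accumulator. The exact argument order of the Mercury predicate may differ;
this only illustrates the shape of the operation.

```python
def map3_foldl(f, xs, acc):
    # f(x, acc) -> (a, b, c, new_acc): three mapped outputs plus the
    # threaded accumulator.
    out1, out2, out3 = [], [], []
    for x in xs:
        a, b, c, acc = f(x, acc)
        out1.append(a)
        out2.append(b)
        out3.append(c)
    return out1, out2, out3, acc
```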
NEWS:
Announced change to list.m.
This patch fixes various problems, the most significant of which is the
calculation of variable use information. The parallelisation analysis uses
deep profiling data: profiling data attached to context information that
refers not just to a procedure but to the chain of calls leading to that
invocation of the procedure (modulo recursion). The variable use analysis
did not use deep profiling data, so comparing the time at which a variable
is produced by a call with the total time of that call was not sound, and
sometimes yielded impossible information, such as a variable being produced
or consumed after the call that produces or consumes it has exited.
This change-set updates the variable use analysis to use deep profiling data to
avoid these problems. At the same time it provides more accurate information
to the automatic parallelisation pass. This is possible because of an earlier
change that allowed the coverage data to use deep profiling data.
In its current state, the parallelisation analysis now finishes without errors
and computes meaningful results when analysing a profile of the Mercury
compiler's execution.
deep_profiler/report.m:
The proc var use report is now a call site dynamic var use report.
1) It now uses deep profiling data.
2) It makes more sense from the caller's perspective, so it is now based
around a call site rather than a proc.
Add inst subtypes to the recursion_type type.
deep_profiler/query.m:
The proc var use query is now a call site dynamic var use query, see
report.m.
deep_profiler/var_use_analysis.m:
Fix a bug here and in mdprof_fb.automatic_parallelism.m: If a
variable is consumed by a call and appears in its argument list more than
once, take the earliest consumption time rather than the one for the
earliest argument.
Variable use analysis now uses recursion_patterns.m to correctly compute
the cost of recursive calls. It also uses 'deep' profiler data.
Only measure variable use relative to the entry into a procedure, rather
than either relative to the entry or exit. This allows us to simplify a
lot of code.
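The repeated-argument fix above can be illustrated with a small Python sketch
(the function and its arguments are hypothetical, not the Mercury API): when
a variable occurs at several argument positions, its consumption time is the
minimum over all of its occurrences, not the time attached to the occurrence
at the earliest argument position.

```python
def consumption_time(var, args, use_times):
    # args: the variable name at each argument position.
    # use_times: the consumption time computed for each position.
    times = [use_times[i] for i, arg in enumerate(args) if arg == var]
    # Earliest consumption across all occurrences, regardless of position.
    return min(times)
```

The old code effectively returned the time of the first matching position,
which can be later than the variable's actual earliest consumption.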
deep_profiler/create_report.m:
The proc var use info report is now a call site dynamic var use info
report.
Move some utility code from here to the new analysis_utils.m module.
deep_profiler/display_report.m:
Conform to changes in report.m.
Improve the information displayed for variable first-use time
reports.
deep_profiler/mdprof_fb.automatic_parallelism.m:
Conform to changes in report.m.
Refactored the walk down the clique tree. This no longer uses the
clique reports from the deep profiling tool.
We now explore the same static procedure more than once. It may be best to
parallelise it in some contexts but not others; for now we assume that the
benefits in some contexts outweigh the benefit-free costs in the others.
This is better than first reaching a context where parallelisation is
undesirable and then never visiting a context where it is desirable.
Fix a bug in the calculation of how much parallelism is used by
parallelisations in a clique's parents. This used to trigger an
assertion failure.
Don't try to parallelise anything in the "exception" module.
There is probably other builtin code we should skip over here.
Removed an overzealous assertion that was too easily triggered by the
inaccuracies of IEEE-754 arithmetic.
Compute variable use information lazily for each variable in each call. I
believe that this has made our implementation much faster, as it no longer
computes information that is never used.
Refactor and move build_recursive_call_site_cost_map to the new
module analysis_utils.m where it can be used by other analyses.
Call site cost maps now use the cs_cost_csq type to store costs,
code in this module now conforms to this change.
Conform to changes in message.m.
deep_profiler/recursion_patterns.m:
Export a new predicate, recursion_type_get_maybe_avg_max_depth/2. This
retrieves the average maximum recursion depth from recursion types that know
this information.
Move code that builds a call site cost map for a procedure to
analysis_utils.m where it can be used by other analyses.
deep_profiler/analysis_utils.m:
Added a new module containing various utility predicates for profile
analysis.
deep_profiler/coverage.m:
Added an extra utility predicate get_coverage_after/2.
deep_profiler/message.m:
Each message has a location that it refers to; a new location type has
been added: call_site_dynamic.
Added a new warning that describes when the use time of a call site's
argument cannot be computed.
Added new predicates for printing out messages whose level is below a
certain threshold. These predicates can be called from io trace goals.
Message levels start at 0 and currently go to 4, more critical messages
have lower levels. The desired verbosity level is stored in a module-local
mutable.
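The message-level scheme described above can be sketched in Python as follows
(a conceptual analogue, not the Mercury code; the global variable stands in
for message.m's module-local mutable, and the output list stands in for the
I/O stream written to by the trace goals):

```python
VERBOSITY = 2  # stands in for the module-local mutable (levels run 0..4)

def maybe_print_message(level, text, out):
    # More critical messages have lower levels; a message is emitted only
    # when its level is at or below the configured verbosity threshold.
    if level <= VERBOSITY:
        out.append("level %d: %s" % (level, text))
```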
deep_profiler/mdprof_feedback.m:
Move the message printing code from here to message.m.
deep_profiler/old_html_format.m:
deep_profiler/old_query.m:
Conform to changes in query.m.
mdbcomp/feedback.automatic_parallelism.m:
Added a new function for computing the 'CPU time' of a parallel
computation.
library/lazy.m:
Moved lazy.m from extras to the standard library.
library/list.m:
Add a new predicate member_index0/3. Like member/2 except it also gives
the zero-based index of the current element within the list.
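On backtracking, the Mercury predicate enumerates each element together with
its zero-based index; a Python generator models that nondeterminism closely
(this is a sketch of the semantics, not the library implementation):

```python
def member_index0(xs):
    # Yield each (element, zero-based index) pair, mirroring the solutions
    # that list.member_index0/3 produces on backtracking.
    for i, x in enumerate(xs):
        yield x, i
```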
library/maybe.m:
Add two new insts.
maybe_yes(I) for the maybe type's yes/1 constructor.
maybe_error_ok(I) for the maybe_error type's ok/1 constructor.
library/Mercury.options:
Add a work around for compiling lazy.m with intermodule optimisations.
NEWS:
Update news file for the addition of lazy.m and the member_index0 predicate
in list.m.
deep_profiler/.cvsignore:
Ignore feedback.automatic_parallelism.m which is copied by Mmakefile from
the mdbcomp/ directory.
Move automatic parallelisation specific code to a new module
mdbcomp/feedback.automatic_parallelism.m.
mdbcomp/feedback.m:
mdbcomp/feedback.automatic_parallelism.m:
As above.
slice/Mmakefile:
deep_profiler/Mmakefile:
Copy the new file into the current working directory along with the other
mdbcomp files.
compiler/implicit_parallelism.m:
deep_profiler/mdprof_fb.automatic_parallelism.m:
deep_profiler/mdprof_feedback.m:
deep_profiler/measurements.m:
Import the new module to access code that used to be in feedback.m.
Remove unused module imports.