Commit Graph

24 Commits

Zoltan Somogyi
06f81f1cf0 Add end_module declarations ...
.. to modules which did not yet have them.
2022-01-09 10:36:15 +11:00
Zoltan Somogyi
3c07fc2121 Use explicit streams in deep_profiler/*.m.
deep_profiler/analysis_utils.m:
deep_profiler/autopar_find_best_par.m:
deep_profiler/autopar_reports.m:
deep_profiler/autopar_search_callgraph.m:
deep_profiler/autopar_search_goals.m:
deep_profiler/callgraph.m:
deep_profiler/canonical.m:
deep_profiler/cliques.m:
deep_profiler/coverage.m:
deep_profiler/dump.m:
deep_profiler/mdprof_cgi.m:
deep_profiler/mdprof_create_feedback.m:
deep_profiler/mdprof_dump.m:
deep_profiler/mdprof_procrep.m:
deep_profiler/mdprof_report_feedback.m:
deep_profiler/mdprof_test.m:
deep_profiler/profile.m:
deep_profiler/read_profile.m:
deep_profiler/recursion_patterns.m:
deep_profiler/startup.m:
deep_profiler/var_use_analysis.m:
    Replace implicit streams with explicit streams.

    In some places, simplify some code, often using constructs such as
    string.format that either did not exist or were too expensive to use
    when the original code was written.

    Consistently use the spelling StdErr over Stderr.

    In mdprof_dump.m, put the filename and the reason-for-failing-to-open-that-file
    in the right order in an error message.
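The implicit-versus-explicit-stream refactoring is language-neutral; here is a minimal Python sketch of the idea (the function names are hypothetical illustrations, not Mercury code; the Mercury analogue is changing io.write_string(Str, !IO) into io.write_string(Stream, Str, !IO)):

```python
import io

# Before: an "implicit stream" writer; the destination is hard-wired,
# so callers cannot redirect the output.
def report_implicit(msg):
    print("Error: " + msg)          # always writes to standard output

# After: the stream is an explicit argument, so the caller chooses the
# destination: a buffer, stderr, a log file, ...
def report_explicit(stream, msg):
    stream.write("Error: " + msg + "\n")

buf = io.StringIO()
report_explicit(buf, "cannot open file")
```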

deep_profiler/DEEP_FLAGS.in:
    Turn on --warn-implicit-stream-calls for the entire deep_profiler
    directory.

mdbcomp/program_representation.m:
mdbcomp/trace_counts.m:
    Replace implicit streams with explicit streams. These are the two mdbcomp
    modules that (a) used to use implicit streams, and (b) are used by the
    deep profiler.

mdbcomp/Mercury.options:
    Turn on --warn-implicit-stream-calls for these two modules.

slice/mcov.m:
slice/mtc_union.m:
    Conform to the changes in mdbcomp.
2021-03-06 18:30:50 +11:00
Zoltan Somogyi
95f8f56716 Delete unneeded $module args from calls to expect/unexpected. 2019-07-03 22:37:19 +02:00
Zoltan Somogyi
9095985aa8 Fix more warnings from --warn-inconsistent-pred-order-clauses.
deep_profiler/*.m:
    Fix inconsistencies between (a) the order in which functions and predicates
    are declared, and (b) the order in which they are defined.

    In most modules, either the order of the declarations or the order
    of the definitions made sense, and I changed the other to match.
    In some modules, neither made sense, so I changed *both* to an order
    that *does* make sense (i.e. it has related predicates together).

    In query.m, put the various commands in the same sensible order
    as the code processing them.

    In html_format.m, merge two exported functions together, since
    they can't be used separately.

    In some places, put dividers between groups of related
    functions/predicates, to make the groups themselves more visible.

    In some places, fix comments or programming style.

deep_profiler/DEEP_FLAGS.in:
    Since all the modules in this directory are now free from any warnings
    generated by --warn-inconsistent-pred-order-clauses, specify that option
    by default in this directory to keep it that way.
2017-04-30 15:48:13 +10:00
Zoltan Somogyi
270170416d Bring the style of deep_profiler/* up-to-date.
deep_profiler/*.m:
    Replace ( C -> T ; E ) if-then-elses with ( if C then T else E ).

    Replace calls to error/1 with calls to unexpected/3.

    Add some module qualifications where this makes the code easier to read.
2015-07-15 15:30:22 +02:00
Zoltan Somogyi
340c5300e6 Fix spelling in the deep profiler.
Fix some other issues as well that I found while fixing the spelling.

mdbcomp/feedback.automatic_parallelism.m:
deep_profiler/autopar_find_best_par.m:
deep_profiler/mdprof_create_feedback.m:
    Rename the best_par_algorithm type to alg_for_finding_best_par,
    since the old name was misleading. Perform the same rename for
    another type based on it, and the option specifying it.

    Remove the functor estimate_speedup_by_num_vars, since it hasn't
    been used by anything in a long time, and won't in the future.

deep_profiler/autopar_calc_overlap.m:
deep_profiler/autopar_costs.m:
deep_profiler/autopar_reports.m:
deep_profiler/autopar_search_callgraph.m:
deep_profiler/autopar_search_goals.m:
deep_profiler/coverage.m:
deep_profiler/create_report.m:
deep_profiler/dump.m:
deep_profiler/mdprof_report_feedback.m:
deep_profiler/measurement_units.m:
deep_profiler/measurements.m:
deep_profiler/message.m:
deep_profiler/query.m:
deep_profiler/recursion_patterns.m:
deep_profiler/report.m:
deep_profiler/startup.m:
deep_profiler/var_use_analysis.m:
mdbcomp/mdbcomp.goal_path.m:
mdbcomp/program_representation.m:
    Conform to the above. Fix spelling errors. In some places, improve
    comments and/or variable names.
2014-12-20 23:05:38 +11:00
Zoltan Somogyi
2d0bfc0674 The algorithm that decides whether the order independent state update
Estimated hours taken: 120
Branches: main

The algorithm that decides whether the order independent state update
transformation is applicable in a given module needs access to the list
of oisu pragmas in that module, and to information about the types
of variables in the procedures named in those pragmas. This diff
puts this information in Deep.procrep files, to make them available
to the autoparallelization feedback program, to which that algorithm
will later be added.

Compilers that have this diff will generate Deep.procrep files in a new,
slightly different format, but the deep profiler will be able to read
Deep.procrep files not just in the new format, but in the old format as well.

runtime/mercury_stack_layout.h:
	Add to module layout structures the fields holding the new information
	we want to put into Deep.procrep files. This means three things:

	- a bytecode array in module layout structures encoding the list
	  of oisu pragmas in the module;
	- additions to the bytecode arrays in procedure layout structures
	  mapping the procedure's variables to their types; and
	- a bytecode array containing the encoded versions of those types
	  themselves in the module layout structure. This allows us to
	  represent each type used in the module just once.

	Since there is now information in module layout structures that
	is needed only for deep profiling, as well as information that is
	needed only for debugging, the old arrangement that split a module's
	information between two structures, MR_ModuleLayout (debug specific
	info) and MR_ModuleCommonLayout (info used by both debugging and
	profiling), is no longer appropriate. We could add a third structure
	containing profiling-specific info, but it is simpler to move
	all the info into just one structure, some of whose fields
	may not be used. This wastes only a few words of memory per module,
	but allows the runtime system to avoid unnecessary indirections.

runtime/mercury_types.h:
	Remove the type synonym for the deleted type.

runtime/mercury_grade.h:
	The change in mercury_stack_layout.h destroys binary compatibility
	with previous versions of Mercury for debug and deep profiling grades,
	so bump their grade-component-specific version numbers.

runtime/mercury_deep_profiling.c:
	Write out the information in the new fields in module layout
	structures, if they are filled in.

	Since this changes the format of the Deep.procrep file, bump
	its version number.

runtime/mercury_deep_profiling.h:
runtime/mercury_stack_layout.c:
	Conform to the change to mercury_stack_layout.h.

mdbcomp/program_representation.m:
	Add to module representations information about the oisu pragmas
	defined in that module, and the type table of the module.
	Optionally add to procedure representations a map mapping
	the variables of the procedure to their types.

	Rename the old var_table type to be the var_name_table type,
	since it contains just names. Make the var to type map separate,
	since it will be there only for selected procedures.

	Modify the predicates reading in module and procedure representations
	to allow them to read in the new representation, while still accepting
	the old one. Use the version number in the Deep.procrep file to decide
	which format to expect.

mdbcomp/rtti_access.m:
	Add functions to encode the data representations that this module
	also decodes.

	Conform to the changes above.

mdbcomp/feedback.automatic_parallelism.m:
	Conform to the changes above.

mdbcomp/prim_data.m:
	Fix layout.

compiler/layout.m:
	Update the compiler's representation of layout structures
	to conform to the change to runtime/mercury_stack_layout.h.

compiler/layout_out.m:
	Output the new parts of module layout structures.

compiler/opt_debug.m:
	Allow the debugging of code referring to the new parts of
	module layout structures.

compiler/llds_out_file.m:
	Conform to the move to a single module layout structure.

compiler/prog_rep_tables.m:
	This new module provides mechanisms for building the string table
	and the type table components of module layouts. The string table
	part is old (it is moved here from stack_layout.m); the type table
	part is new.

	Putting this code in a module of its own allows us to remove
	a circular dependency between prog_rep.m and stack_layout.m;
	instead, both now just depend on prog_rep_tables.m.

compiler/ll_backend.m:
	Add the new module.

compiler/notes/compiler_design.html:
	Describe the new module.

compiler/prog_rep.m:
	When generating the representation of a module for deep profiling,
	include the information needed by the order independent state update
	analysis: the list of oisu pragmas in the module, if any, and
	information about the types of variables in selected procedures.

	To avoid having these additions increase the size of the bytecode
	representation too much, convert some fixed 32-bit numbers in the
	bytecode to variable-sized numbers, which will usually be 8 or 16
	bits.
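This is the standard varint trick; a Python sketch of an LEB128-style scheme (an illustration of the technique, not the actual Deep.procrep encoding):

```python
def encode_varint(n):
    """Encode a non-negative integer using 7 data bits per byte; the high
    bit of each byte says whether more bytes follow.  Small numbers take
    one or two bytes instead of a fixed four."""
    out = bytearray()
    while True:
        byte = n & 0x7F
        n >>= 7
        if n:
            out.append(byte | 0x80)   # more bytes coming
        else:
            out.append(byte)          # last byte
            return bytes(out)

def decode_varint(data, pos=0):
    """Decode one varint starting at pos; return (value, next_pos)."""
    value = shift = 0
    while True:
        byte = data[pos]
        pos += 1
        value |= (byte & 0x7F) << shift
        shift += 7
        if not byte & 0x80:
            return value, pos
```

Round-tripping any non-negative integer recovers it, and values below 128 occupy a single byte.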

	Do not use predicates from bytecode_gen.m to encode numbers,
	since there is nothing keeping these in sync with the code that
	reads them in mdbcomp/program_representation.m. Instead, use
	new predicates in program_representation.m itself.

compiler/stack_layout.m:
	Generate the new parts of module layouts.

	Remove the code moved to prog_rep_tables.m.

compiler/continuation_info.m:
compiler/proc_gen.m:
	Make some more information available to stack_layout.m.

compiler/prog_data.m:
	Fix some formatting.

compiler/introduce_parallelism.m:
	Conform to the renaming of the var_table type.

compiler/follow_code.m:
	Fix the bug that used to cause the failure of the
	hard_coded/mode_check_clauses test case in deep profiling grades.

deep_profiler/program_representation_utils.m:
	Output the new parts of module and procedure representations,
	to allow the correctness of this change to be tested.

deep_profiler/mdprof_create_feedback.m:
	If we cannot read the Deep.procrep file, print a single error message
	and exit, instead of continuing with an analysis that will generate
	a whole bunch of error messages, one for each attempt to access
	a procedure's representation.

deep_profiler/mdprof_procrep.m:
	Give this program an option that specifies what file it is to
	look at; do not hardwire in "Deep.procrep" in the current directory.

deep_profiler/report.m:
	Add a report type that just prints the representation of a module.
	It returns the same information as mdprof_procrep, but from within
	the deep profiler, which can be more convenient.

deep_profiler/create_report.m:
deep_profiler/display_report.m:
	Respectively create and display the new report type.

deep_profiler/query.m:
	Recognize a query asking for the new report type.

deep_profiler/autopar_calc_overlap.m:
deep_profiler/autopar_find_best_par.m:
deep_profiler/autopar_reports.m:
deep_profiler/autopar_search_callgraph.m:
deep_profiler/autopar_search_goals.m:
deep_profiler/autopar_types.m:
deep_profiler/branch_and_bound.m:
deep_profiler/coverage.m:
deep_profiler/display.m:
deep_profiler/html_format.m:
deep_profiler/mdprof_test.m:
deep_profiler/measurements.m:
deep_profiler/query.m:
deep_profiler/read_profile.m:
deep_profiler/recursion_patterns.m:
deep_profiler/top_procs.m:
	Conform to the changes above.

	Fix layout.

tests/debugger/declarative/dependency.exp2:
	Add this file as a possible expected output. It contains the new
	field added to module representations.
2012-10-24 04:59:55 +00:00
Paul Bone
a4048455bc Minor coverage profiling improvement.
While writing the background section of my thesis I found a case where
coverage profiling can be trivially and obviously improved.  These
changes make this small improvement.  Even though the improvement is
minor, it is obvious enough that I would rather make it than have to
avoid describing it in the thesis.

In if-then-else expressions whose condition has at most one solution,
the coverage at the beginning of the else branch can be inferred from
the coverage before the ITE and the coverage before the then branch.
We can therefore avoid creating a coverage point at the beginning of
the else branch.
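The inference is simple arithmetic; a sketch (illustrative, not the deep profiler's code):

```python
def infer_else_coverage(before_ite, before_then):
    """If the condition of an if-then-else can succeed at most once,
    every execution entering the ITE reaches exactly one of the then
    branch or the else branch, so the else-branch entry count is the
    difference of the two known counts."""
    assert 0 <= before_then <= before_ite
    return before_ite - before_then

# 1000 executions reached the ITE; the condition succeeded 640 times,
# so 360 executions must have taken the else branch.
else_count = infer_else_coverage(1000, 640)
```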

I haven't updated the deep profiling version number, since a newer
inference tool will happily read older profiles (which means I don't
have to adjust the feedback tests).  However, an older tool may not
auto-parallelise a newer profile.

deep_profiler/coverage.m:
    As above.

compiler/coverage_profiling.m:
    Conform with the updates in deep_profiler/coverage.m

    Fix trailing whitespace.
2012-05-16 06:27:09 +00:00
Julien Fischer
9f68c330f0 Change the argument order of many of the predicates in the map, bimap, and
Branches: main

Change the argument order of many of the predicates in the map, bimap, and
multi_map modules so they are more conducive to the use of state variable
notation, i.e. make the order the same as in the sv* modules.

Prepare for the deprecation of the sv{bimap,map,multi_map} modules by
removing their use throughout the system.

library/bimap.m:
library/map.m:
library/multi_map.m:
	As above.
NEWS:
	Announce the change.

	Separate out the "highlights" from the "detailed listing" for
	the post-11.01 NEWS.

	Reorganise the announcement of the Unicode support.

benchmarks/*/*.m:
browser/*.m:
compiler/*.m:
deep_profiler/*.m:
extras/*/*.m:
mdbcomp/*.m:
profiler/*.m:
tests/*/*.m:
ssdb/*.m:
samples/*/*.m
slice/*.m:
	Conform to the above change.

	Remove any dependencies on the sv{bimap,map,multi_map} modules.
2011-05-03 04:35:04 +00:00
Zoltan Somogyi
59b0edacbe New module for calculating the overlap between the conjuncts of a
Estimated hours taken: 2

deep_profiler/autopar_calc_overlap.m:
	New module for calculating the overlap between the conjuncts of a
	parallelised conjunction. Its contents are taken from the old
	autopar_search_callgraph.m.

deep_profiler/autopar_costs.m:
	New module for calculating the costs of goals. Its contents
	are taken from the old autopar_search_callgraph.m.

deep_profiler/autopar_reports.m:
	New module for creating reports. Its contents are taken from
	the old autopar_search_callgraph.m.

deep_profiler/autopar_search_goals.m:
	New module for searching goals for parallelizable conjunctions.
	Its contents are taken from the old autopar_search_callgraph.m.

deep_profiler/autopar_search_callgraph.m:
	Remove the code moved to other modules.

deep_profiler/mdprof_fb.automatic_parallelism.m:
	Add the new modules.

deep_profiler/*.m:
	Remove unnecessary imports.
	Fix copyright years on the new modules.

browser/*.m:
compiler/*.m:
mdbcomp/*.m:
	Remove unnecessary imports.

library/Mercury.options:
	Make it possible to compile a whole workspace with
	--warn-unused-imports by turning that option off for type_desc.m
	(which has a necessary import that --warn-unused-imports thinks
	is unused).
2011-01-27 08:03:54 +00:00
Paul Bone
2070f42b24 Refactor goal annotations in the deep profiler.
Goal annotations have previously been attached to goals using type-polymorphism
and in some cases type classes.  This has become clumsy as new annotations are
created.  Using the goal_id code introduced recently, this change associates
annotations with goals by storing them in an array indexed by goal ids.  Many
analyses have been updated to make use of this code.  This code should also be
faster, since less allocation is done when annotating a goal: the goal
representation does not have to be reconstructed.
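The storage scheme can be sketched as follows (a minimal Python illustration of the array-indexed-by-goal-id idea; the class name is hypothetical):

```python
class GoalAttrArray:
    """Annotations live in an array indexed by goal id, so attaching a
    new analysis result never rebuilds the goal tree itself."""

    def __init__(self, num_goals):
        self._attrs = [None] * num_goals   # one slot per goal id

    def set_attr(self, goal_id, value):
        self._attrs[goal_id] = value

    def get_attr(self, goal_id):
        return self._attrs[goal_id]

coverage = GoalAttrArray(4)
coverage.set_attr(0, 120)   # whole-body goal (goal id 0)
coverage.set_attr(2, 37)    # some subgoal
```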

mdbcomp/mdbcomp.goal_path.m:
    Add predicates for working with goal attribute arrays.  These are
    polymorphic arrays that are indexed by goal id and can be used to associate
    information with goals.

deep_profiler/report.m:
    The procrep coverage info report now stores the coverage annotations in a
    goal_attr_array.

deep_profiler/coverage.m:
    The coverage analysis now returns its result in a goal_attr_array rather
    than by annotating the goal directly.

    The interface for the coverage module has changed; it now allows
    programmers to pass a goal_rep to it directly.  This makes it easier to
    call from other analyses.

    The coverage analysis no longer uses the calls_and_exits structure.
    Instead it uses the cost_and_callees structure like many other analyses.
    This also makes it easier to perform this annotation and others using only
    a single call site map structure.

    Moved add_coverage_point_to_map/5 from create_report.m to coverage.m.

deep_profiler/analysis_utils.m:
    Made the cost_and_callees structure polymorphic so that any type can be
    used to represent the callees (allowing either static or dynamic callees
    to be used).

    Added the number of exit port counts to the cost_and_callees structure.

    Added build_static_call_site_cost_and_callees_map/4.

    Rename build_call_site_cost_and_callees_map/4 to
    build_dynamic_call_site_cost_and_callees_map/4.

deep_profiler/var_use_analysis.m:
    Update the var_use_analysis to use coverage information provided in a
    goal_attr_array.

deep_profiler/recursion_patterns.m:
    Update the recursion analysis to use coverage information provided in a
    goal_attr_array.

deep_profiler/program_representation_utils.m:
    Add label_goals/4 to label goals with goal ids and build a map of goal ids
    to goal paths.

    Update pretty printing functions to work with annotations either on the
    goals themselves or provided by a higher order value.  The higher order
    argument maps nicely to the function goal_get_attribute/3 in goal_path.m.

deep_profiler/mdprof_fb.automatic_parallelism.m:
    Modify goal_annotate_with_instmap; it now returns the instmap annotations
    in a goal_attr_array.

    Conform to changes in:
        program_representation_utils.m
        coverage.m
        var_use_analysis.m

deep_profiler/message.m:
    Updated messages to more accurately express the problems that
    mdprof_fb.automatic_parallelism.m may encounter.

deep_profiler/create_report.m:
    Conform to changes in coverage.m.

    Make use of code in analysis_utils.m to prepare call site maps for coverage
    analysis.

deep_profiler/recursion_patterns.m:
deep_profiler/var_use_analysis.m:
    Conform to changes in analysis_utils.m.

deep_profiler/display_report.m:
    Conform to changes in program_representation_utils.m.
2011-01-17 01:47:19 +00:00
Paul Bone
681e8be040 A bug fix and some other corrections for the deep profiler tools.
deep_profiler/analysis_utils.m:
    Handle the calculation of recursive call costs in the base case by
    returning an empty dictionary, since there are no such calls.  Previously,
    a dictionary containing incorrect values was returned.

    Remove an 'is det' from a pred declaration that has separate mode
    declarations.

deep_profiler/coverage.m:
    Adjust layout slightly.

    Fixed trailing whitespace.

deep_profiler/recursion_patterns.m:
    Fix an incorrect comment.

deep_profiler/measurements.m:
    Add recursion_depth_is_base_case/1.

    Check for negative depths in recursion_depth_descend/2.
2011-01-13 04:44:28 +00:00
Paul Bone
d43239d6a7 Move some of the goal path code from compiler/goal_path.m to the mdbcomp
library where it can be used by the deep profiler.

Also move the goal path code from program_representation.m to the new module,
goal_path.m, in mdbcomp/.

mdbcomp/goal_path.m:
    New module containing goal path code.

mdbcomp/program_representation.m:
    Original location of goal path code.

compiler/goal_path.m:
    Move some of this goal_path code into mdbcomp/goal_path.m

mdbcomp/feedback.automatic_parallelism.m:
mdbcomp/rtti_access.m:
mdbcomp/slice_and_dice.m:
mdbcomp/trace_counts.m:
browser/debugger_interface.m:
browser/declarative_execution.m:
browser/declarative_tree.m:
compiler/build_mode_constraints.m:
compiler/call_gen.m:
compiler/code_info.m:
compiler/continuation_info.m:
compiler/coverage_profiling.m:
compiler/deep_profiling.m:
compiler/format_call.m:
compiler/goal_path.m:
compiler/goal_util.m:
compiler/hlds_data.m:
compiler/hlds_goal.m:
compiler/hlds_out_goal.m:
compiler/hlds_out_pred.m:
compiler/hlds_pred.m:
compiler/interval.m:
compiler/introduce_parallelism.m:
compiler/layout_out.m:
compiler/llds.m:
compiler/mode_constraint_robdd.m:
compiler/mode_constraints.m:
compiler/mode_ordering.m:
compiler/ordering_mode_constraints.m:
compiler/polymorphism.m:
compiler/post_typecheck.m:
compiler/prog_rep.m:
compiler/prop_mode_constraints.m:
compiler/push_goals_together.m:
compiler/rbmm.condition_renaming.m:
compiler/smm_common.m:
compiler/stack_layout.m:
compiler/stack_opt.m:
compiler/trace_gen.m:
compiler/tupling.m:
compiler/type_constraints.m:
compiler/typecheck.m:
compiler/unify_gen.m:
compiler/unneeded_code.m:
deep_profiler/Mmakefile:
deep_profiler/analysis_utils.m:
deep_profiler/coverage.m:
deep_profiler/create_report.m:
deep_profiler/display_report.m:
deep_profiler/dump.m:
deep_profiler/mdprof_fb.automatic_parallelism.m:
deep_profiler/message.m:
deep_profiler/old_query.m:
deep_profiler/profile.m:
deep_profiler/program_representation_utils.m:
deep_profiler/read_profile.m:
deep_profiler/recursion_patterns.m:
deep_profiler/report.m:
deep_profiler/var_use_analysis.m:
slice/Mmakefile:
slice/mcov.m:
    Conform to the move of the goal path code.
2011-01-13 00:36:56 +00:00
Zoltan Somogyi
a2cd0da5b3 The existing representation of goal_paths is suboptimal for several reasons.
Estimated hours taken: 80
Branches: main

The existing representation of goal_paths is suboptimal for several reasons.

- Sometimes we need forward goal paths (e.g. to look up goals), and sometimes
  we need reverse goal paths (e.g. when computing goal paths in the first
  place). We had two types for them, but

  - their names, goal_path and goal_path_consable, were not expressive, and
  - we could store only one of them in goal_infos.

- Testing whether goal A is a subgoal of goal B is quite error-prone using
  either form of goal paths.

- Using a goal path as a key in a map, which several compiler passes want to
  do, requires lots of expensive comparisons.

This diff replaces most uses of goal paths with goal ids. A goal id is an
integer, so it can be used as a key in faster maps, or even in arrays.
Every goal in the body of a procedure gets its id allocated in a depth first
search. Since we process each goal before we dive into its descendants,
the goal representing the whole body of a procedure always gets goal id 0.
The depth first traversal also builds up a map (the containing goal map)
that tells us the parent goal of every subgoal, with the obvious exception
of the root goal itself. From the containing goal map, one can compute
both reverse and forward goal paths. It can also serve as the basis of an
efficient test of whether the goal identified by goal id A is an ancestor
of another goal identified by goal id B. We don't yet use this test,
but I expect we will in the future.
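The id allocation and the ancestor test described above can be sketched in Python (an illustration under the stated scheme; goals are modelled as nested dicts, and the function names are hypothetical):

```python
def label_goals(goal, next_id=0, parent=None, containing=None):
    """Assign goal ids in preorder (each goal before its descendants),
    so the goal for the whole procedure body always gets id 0; record
    each goal's parent id in the containing goal map."""
    if containing is None:
        containing = {}
    my_id = next_id
    containing[my_id] = parent       # root's parent is None
    next_id += 1
    for child in goal.get("children", []):
        next_id, containing = label_goals(child, next_id, my_id, containing)
    return next_id, containing

def is_ancestor(containing, a, b):
    """True iff the goal with id a is a proper ancestor of goal id b:
    just walk b's parent chain up to the root."""
    while containing[b] is not None:
        b = containing[b]
        if b == a:
            return True
    return False

# A body with two conjuncts, the first containing one subgoal.
body = {"children": [{"children": [{}]}, {}]}
_, containing = label_goals(body)
```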

mdbcomp/program_representation.m:
	Add the goal_id type.

	Replace the existing goal_path and goal_path_consable types
	with two new types, forward_goal_path and reverse_goal_path.
	Since these now have wrappers around the list of goal path steps
	that identify each kind of goal path, it is now ok to expose their
	representations. This makes several compiler passes easier to code.

	Update the set of operations on goal paths to work on the new data
	structures.

	Add a couple of step types to represent lambdas and try goals.
	Their omission prior to this would have been a bug for constraint-based
	mode analysis, or any other compiler pass prior to the expansion out
	of lambda and try goals that wanted to use goal paths to identify
	subgoals.

browser/declarative_tree.m:
mdbcomp/rtti_access.m:
mdbcomp/slice_and_dice.m:
mdbcomp/trace_counts.m:
slice/mcov.m:
deep_profiler/*.m:
	Conform to the changes in goal path representation.

compiler/hlds_goal.m:
	Replace the goal_path field with a goal_id field in the goal_info,
	indicating that from now on, this should be used to identify goals.

	Keep a reverse_goal_path field in the goal_info for use by RBMM and
	CTGC. Those analyses were too hard to convert to using goal_ids,
	especially since RBMM uses goal_paths to identify goals in multi-pass
	algorithms that should be one-pass and should not NEED to identify
	any goals for later processing.

compiler/goal_path.m:
	Add predicates to fill in goal_ids, and update the predicates
	filling in the now deprecated reverse goal path fields.

	Add the operations needed by the rest of the compiler
	on goal ids and containing goal maps.

	Remove the option to set goal paths using "mode equivalent steps".
	Constraint based mode analysis now uses goal ids, and can now
	do its own equivalent optimization quite simply.

	Move the goal_path module from the check_hlds package to the hlds
	package.

compiler/*.m:
	Conform to the changes in goal path representation.

	Most modules now use goal_ids to identify goals, and use a containing
	goal map to convert the goal ids to goal paths when needed.
	However, the ctgc and rbmm modules still use (reverse) goal paths.

library/digraph.m:
library/group.m:
library/injection.m:
library/pprint.m:
library/pretty_printer.m:
library/term_to_xml.m:
	Minor style improvements.
2010-12-20 07:47:49 +00:00
Zoltan Somogyi
8a28e40c9b Add the predicates sorry, unexpected and expect to library/error.m.
Estimated hours taken: 2
Branches: main

Add the predicates sorry, unexpected and expect to library/error.m.

compiler/compiler_util.m:
library/error.m:
	Move the predicates sorry, unexpected and expect from compiler_util
	to error.

	Put the predicates in error.m into the same order as their
	declarations.
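The contract of these predicates can be sketched in Python (hypothetical signatures sketching the intent; the real predicates live in library/error.m):

```python
def unexpected(module, message):
    """Abort with a message naming the module that detected a situation
    that should be impossible."""
    raise AssertionError("%s: Unexpected: %s" % (module, message))

def expect(condition, module, message):
    """Check an invariant; behave like unexpected when it is violated,
    and do nothing otherwise."""
    if not condition:
        unexpected(module, message)

# Passing invariants are silent; failing ones abort with context.
expect(1 + 1 == 2, "demo.m", "arithmetic is broken")
```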

compiler/*.m:
	Change imports as needed.

compiler/lp.m:
compiler/lp_rational.m:
	Change imports as needed, and some minor cleanups.

deep_profiler/*.m:
	Switch to using the new library predicates, instead of calling error
	directly. Some other minor cleanups.

NEWS:
	Mention the new predicates in the standard library.
2010-12-15 06:30:36 +00:00
Paul Bone
91e60619b0 Remove the concept of 'partitions' from the candidate parallel conjunction
mdbcomp/feedback.automatic_parallelism.m:
    Remove the concept of 'partitions' from the candidate parallel conjunction
    type.  We no longer divide conjunctions into partitions before
    parallelising them.

mdbcomp/feedback.m:
    Increment the feedback format version number.

compiler/implicit_parallelism.m:
    Conform to changes in mdbcomp/feedback.automatic_parallelism.m.

deep_profiler/mdprof_fb.automatic_parallelism.m:
    Allow the non-atomic goals to be parallelised against one-another.

    Modify the goal annotations used internally; many annotations previously
    used only for calls are now used for any goal type.

    Variable use information is now stored in a map from variable name to lazy
    use data for every goal, not just for the arguments of calls.

    Do not partition conjunctions before attempting to parallelise them.

    Make the adjust_time_for_waits tolerate floating point errors more easily.

    Format costs with commas and, in most cases, two decimal places.

deep_profiler/var_use_analysis.m:
    Export a new predicate var_first_use that computes the first use of a
    variable within a goal.  This predicate uses a new typeclass to retrieve
    coverage data from any goal that can implement the typeclass.

deep_profiler/measurements.m:
    Added a new abstract type for measuring the cost of a goal, goal_cost_csq.
    This is like cs_cost_csq except that it can represent trivial goals (which
    don't have a call count).

deep_profiler/coverage.m:
    Added deterministic versions of the get_coverage_* predicates.

deep_profiler/program_representation_utils.m:
    Made initial_inst_map more generic in its type signature.

    Add a new predicate, atomic_goal_is_call/2 which can be used instead of a
    large switch on an atomic_goal_rep value.

deep_profiler/message.m:
    Rename a message type to make it more general; this is required now that
    we compute variable use information for arbitrary goals, not just calls.

library/list.m:
    Add map3_foldl.

NEWS:
    Announced change to list.m.
2010-10-14 04:02:22 +00:00
Paul Bone
881039cfed Correct problems in the automatic parallelism analysis.
This patch fixes various problems, the most significant being the calculation
of variable use information.  The parallelisation analysis uses deep profiling
data, that is, profiling data attached to context information that refers not
just to a procedure but to the chain of calls leading to that invocation of
the procedure (modulo recursion).  The variable use analysis did not use deep
profiling data, so comparing the time at which a variable is produced by a
call with the total time of that call was not sound, and sometimes yielded
impossible information, such as a variable being produced or consumed after
the call that produces or consumes it has exited.

This change-set updates the variable use analysis to use deep profiling data to
avoid these problems.  At the same time it provides more accurate information
to the automatic parallelisation pass.  This is possible because of an earlier
change that allowed the coverage data to use deep profiling data.

In its current state, the parallelisation analysis now finishes without errors
and computes meaningful results when analysing a profile of the mercury
compiler's execution.

deep_profiler/report.m:
    The proc var use report is now a call site dynamic var use report.
       1) It now uses deep profiling data.
       2) It makes more sense from the caller's perspective, so it's now
          based around a call site rather than a proc.

    Add inst subtypes to the recursion_type type.

deep_profiler/query.m:
    The proc var use query is now a call site dynamic var use query, see
    report.m.

deep_profiler/var_use_analysis.m:
    Fix a bug here and in mdprof_fb.automatic_parallelism.m: if a
    variable is consumed by a call and appears in its argument list more than
    once, take the earliest consumption time rather than the one for the
    earliest argument.
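The fix amounts to taking a minimum over all occurrences of the variable, not the time of the first matching argument position; a sketch (illustrative names, not the analysis's actual code):

```python
def earliest_consumption(arg_use_times, var):
    """A variable may appear several times in a call's argument list;
    its consumption time is the earliest of all those uses, not the use
    time of the first argument position that happens to mention it."""
    times = [t for v, t in arg_use_times if v == var]
    return min(times)

# X appears twice: position 0 is used at t=5.0, position 2 at t=2.0.
uses = [("X", 5.0), ("Y", 1.0), ("X", 2.0)]
```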

    Variable use analysis now uses recursion_patterns.m to correctly compute
    the cost of recursive calls.  It also uses 'deep' profiler data.

    Only measure variable use relative to the entry into a procedure, rather
    than either relative to the entry or exit.  This allows us to simplify a
    lot of code.

deep_profiler/create_report.m:
    The proc var use info report is now a call site dynamic var use info
    report.

    Move some utility code from here to the new analysis_utils.m module.

deep_profiler/display_report.m:
    Conform to changes in report.m.

    Improve the information displayed for variable first-use time
    reports.

deep_profiler/mdprof_fb.automatic_parallelism.m:
    Conform to changes in report.m

    Refactored the walk down the clique tree.  This no longer uses the
    clique reports from the deep profiling tool.

    We now explore the same static procedure more than once.  It may be best
    to parallelise it in some contexts but not others; for now, we assume that
    the benefits in some contexts are worth the costs in the contexts that see
    no benefit.  This is better than first reaching a context where
    parallelisation is undesirable and then never visiting a case where it is
    desirable.

    Fix a bug in the calculation of how much parallelisation is used by
    parallelisations in a clique's parents.  This used to trigger an
    assertion.

    Don't try to parallelise anything in the "exception" module.
    There is probably other builtin code we should skip over here.

    Removed an overzealous assertion that was too easily triggered by the
    inaccuracies of IEEE-754 arithmetic.

    Compute variable use information lazily for each variable in each call.  I
    believe that this has made our implementation much faster, as it no longer
    computes information that is never used.

    Refactor and move build_recursive_call_site_cost_map to the new
    module analysis_utils.m where it can be used by other analyses.

    Call site cost maps now use the cs_cost_csq type to store costs; the code
    in this module now conforms to this change.

    Conform to changes in message.m.

deep_profiler/recursion_patterns.m:
    Export a new predicate, recursion_type_get_maybe_avg_max_depth/2.  This
    retrieves the average maximum recursion depth from recursion types that know
    this information.

    Move code that builds a call site cost map for a procedure to
    analysis_utils.m where it can be used by other analyses.

deep_profiler/analysis_utils.m:
    Added a new module containing various utility predicates for profile
    analysis.

deep_profiler/coverage.m:
    Added an extra utility predicate get_coverage_after/2.

deep_profiler/message.m:
    Each message has a location that it refers to; a new location type,
    call_site_dynamic, has been added.

    Added a new warning that can be used to describe when a call site's
    argument's use time cannot be computed.

    Added new predicates for printing out messages whose level is below a
    certain threshold.  These predicates can be called from io trace goals.
    Message levels start at 0 and currently go to 4; more critical messages
    have lower levels.  The desired verbosity level is stored in a
    module-local mutable.
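
The threshold filtering described above can be sketched as follows.  This is a
Python sketch for illustration only; the function name, the tuple
representation, and the exact comparison are assumptions, not the message.m
API.

```python
# Sketch of threshold-filtered message printing in the spirit of the new
# message.m predicates.  Levels run 0..4; a LOWER level means a MORE
# critical message, so we keep messages at or below the threshold.
def messages_at_or_below(threshold, messages):
    """Return the text of each (level, text) message whose level is at or
    below the given verbosity threshold."""
    return [text for level, text in messages if level <= threshold]
```

For example, with a threshold of 0 only the most critical messages survive,
while a threshold of 4 keeps everything.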

deep_profiler/mdprof_feedback.m:
    Move the message printing code from here to message.m.

deep_profiler/old_html_format.m:
deep_profiler/old_query.m:
    Conform to changes in query.m.

mdbcomp/feedback.automatic_parallelism.m:
    Added a new function for computing the 'cpu time' of a parallel
    computation.

library/lazy.m:
    Moved lazy.m from extras to the standard library.

library/list.m:
    Add a new predicate member_index0/3.  Like member/2 except it also gives
    the zero-based index of the current element within the list.
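
The behaviour of member_index0/3 can be pictured in Python, where backtracking
over solutions becomes a generator.  This is a sketch of the semantics, not
the library's Mercury implementation.

```python
# Sketch of list.member_index0/3: like member/2, but also yields the
# zero-based index of each matching element, one solution at a time.
def member_index0(x, xs):
    """Yield each zero-based index at which x occurs in xs."""
    for index, element in enumerate(xs):
        if element == x:
            yield index
```

Each yielded index corresponds to one solution produced on backtracking.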

library/maybe.m:
    Add two new insts.
        maybe_yes(I) for the maybe type's yes/1 constructor.
        maybe_error_ok(I) for the maybe_error type's ok/1 constructor.

library/Mercury.options:
    Add a workaround for compiling lazy.m with intermodule optimisations.

NEWS:
    Update the NEWS file for the addition of lazy.m and the member_index0
    predicate in list.m.

deep_profiler/.cvsignore:
    Ignore feedback.automatic_parallelism.m which is copied by Mmakefile from
    the mdbcomp/ directory.
2010-10-07 02:38:10 +00:00
Paul Bone
cb03062118 Fix a coverage propagation bug.
Coverage propagation relies on coverage points only for code whose coverage it
cannot infer from call site port counts alone.  However, since switching to
dynamic coverage points, I had not updated this code to use port counts from
dynamic call sites rather than static call sites.

This patch fixes this by using port counts from dynamic call sites when doing
coverage propagation with dynamic coverage points.

deep_profiler/coverage.m:
    Introduce a new type calls_and_exits which stores the number of calls and
    exits for a given call site.

    procrep_annotate_with_coverage now uses calls_and_exits rather than
    own_prof_info structures to represent call sites.

deep_profiler/create_report.m:
    Use port counts from dynamic call sites when creating dynamic coverage
    reports.

deep_profiler/mdprof_test.m:
    Add functionality that allows us to test creation of dynamic call site
2010-09-23 04:33:19 +00:00
Paul Bone
7e7d77e23f Make coverage profiling data 'deep'.
The deep profiler associates measurements with their context in the call graph
modulo recursion.  This has been true for all measurements except coverage
profiling data.  This patch allows coverage data to be associated with
ProcDynamic structures, so that it is keyed by this context, not just by the
static procedure.  This new behaviour is the default; the old option of static
coverage profiling is still available for testing, as is no coverage profiling.
Note that, as before, coverage profiling is supported by default, but coverage
points are not inserted by default.

This change will be used to measure the depth of recursion, and therefore the
average cost of recursion as well as the likely times when variables are
produced in calls for the automatic parallelisation analysis.

runtime/mercury_conf_param.h:
    Create three new preprocessor macros:
        MR_DEEP_PROFILING_COVERAGE - defined when coverage profiling is
            enabled.
        MR_DEEP_PROFILING_COVERAGE_STATIC - defined when static coverage
            profiling is being used.
        MR_DEEP_PROFILING_COVERAGE_DYNAMIC - defined when dynamic coverage
            profiling is being used.

runtime/mercury_deep_profiling.h:
    Update data structures to support dynamic coverage profiling.

    Use conditional compilation to allow us to test the deep profiler in three
    different modes, without coverage profiling, with static coverage profiling
    and with dynamic coverage profiling.

    Rename MR_PROFILING_MALLOC: since it takes a type rather than a size in
    bytes, it should be called MR_PROFILING_NEW, to conform with the existing
    malloc and new functions.

runtime/mercury_deep_profiling.c:
    Avoid a C compiler warning.

    MR_write_out_coverage_point has been removed; it is replaced with:
        MR_write_out_coverage_points_static and
        MR_write_out_coverage_points_dynamic.
    These write out more than one coverage point, and write out either static
    or dynamic coverage points.

    Write a 64-bit flags value (a bitfield) to the header of the Deep.data
    file.  This replaces the canonical byte (with a bit) and the byte that
    describes the word size.  This value also includes two bits describing
    whether no coverage data, static coverage data or dynamic coverage data is
    present in the file.  A bit is reserved to indicate whether the data is
    compressed (which is not yet supported).
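
A flags word of this shape can be sketched as below.  The bit positions and
names here are invented for illustration; they are not the actual Deep.data
layout, which lives in the C runtime.

```python
# Hypothetical layout of a 64-bit header flags word like the one described
# above.  All bit positions are assumptions for illustration only.
CANONICAL_BIT = 0    # replaces the old canonical byte
COMPRESSED_BIT = 1   # reserved; compression is not yet supported
COVERAGE_SHIFT = 2   # two bits: 0 = none, 1 = static, 2 = dynamic

def make_flags(canonical, coverage_kind, compressed=False):
    """Pack the header fields into a single 64-bit flags value."""
    flags = ((int(canonical) << CANONICAL_BIT)
             | (int(compressed) << COMPRESSED_BIT)
             | ((coverage_kind & 0x3) << COVERAGE_SHIFT))
    return flags & 0xFFFFFFFFFFFFFFFF

def coverage_kind_of(flags):
    """Extract the two coverage-kind bits from a flags word."""
    return (flags >> COVERAGE_SHIFT) & 0x3
```

The reader side inverts the packing, so a single word carries what used to
take several separate header bytes.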

    MR_write_fixed_size_int now writes out 8-byte integers; this is only used
    for some counts present at the beginning of the data file, along with the
    new flags value.  It now takes an MR_uint_least64_t integer as its
    parameter.  The assertion that tested for negative numbers has been
    removed, since this type is unsigned.
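
Encoding such an 8-byte unsigned count can be sketched with Python's struct
module.  The big-endian byte order here is purely an assumption for
illustration; the real writer is C code in mercury_deep_profiling.c.

```python
import struct

def fixed_size_int_bytes(value):
    """Encode an unsigned 64-bit count as exactly 8 bytes.

    ">Q" means big-endian unsigned 64-bit; the mask keeps the value in
    range, mirroring the unsigned MR_uint_least64_t parameter.
    """
    return struct.pack(">Q", value & 0xFFFFFFFFFFFFFFFF)
```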

    Increment the Deep.data file format version number.

compiler/layout_out.m:
    Conditionally compile the NULL pointer that represents the coverage points
    list in proc statics.  This is conditional on the
    MR_DEEP_PROFILING_COVERAGE_STATIC macro being defined.

compiler/coverage_profiling.m:
    Add support for generating dynamic coverage points.

compiler/options.m:
compiler/handle_options.m:
    Implement the new developer options for controlling coverage profiling.

library/profiling_builtin.m:
    Specialize increment_coverage_point_count for both static and dynamic
    coverage profiling.  This creates
    increment_{static,dynamic}_coverage_point_count.

deep_profiler/profile.m:
    Add an extra field to profile_stats; it tracks whether the file reader
    should try to read no coverage data, static coverage data or dynamic
    coverage data.

    Add an extra field to proc_dynamic: an array of coverage counts wrapped
    in a maybe type.  It is indexed the same as the array of coverage infos in
    proc_static.  This array is present if dynamic coverage profiling is being
    done (the default).

    Modify the coverage_points field in proc_static: there are now two
    fields, an array of coverage_point_info values, which store compile-time
    data, and an optional array of coverage points (present if static coverage
    profiling was performed).

    Updated the formatting of the proc static structure.

    Moved the coverage_point type to coverage.m.

    Created a new type, coverage_data_type which enumerates the possibilities
    for coverage profiling: none, static and dynamic.

deep_profiler/coverage.m:
    Move the coverage point type here from profile.m, as the profile data
    structure no longer refers to it directly.

    Create a predicate coverage_point_arrays_to_list/3 which merges coverage
    point information and the counts themselves into coverage points.  This can
    be used to construct a list of coverage points regardless of whether static
    or dynamic coverage points are being used.
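
The merge can be pictured as zipping two parallel arrays.  This is an
illustrative Python sketch of coverage_point_arrays_to_list/3's behaviour,
not the Mercury implementation.

```python
def coverage_point_arrays_to_list(infos, counts):
    """Pair each compile-time coverage_point_info with its execution count.

    Both arrays are indexed identically, so this works whether the counts
    came from the static array in the proc_static or the dynamic array in
    the proc_dynamic.
    """
    if len(infos) != len(counts):
        raise ValueError("coverage arrays must have equal length")
    return list(zip(infos, counts))
```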

deep_profiler/read_profile.m:
    Conform to changes in runtime/mercury_deep_profiling.c.

    Refactored the reading of the file header; a new named predicate is now
    used rather than a lambda expression.

    Incremented the Deep.data version number.

deep_profiler/report.m:
    Updated the proc dynamic dump report structure to include a list of
    coverage points.

deep_profiler/create_report.m:
deep_profiler/display_report.m:
    Conform to changes in profile.m.

    The proc dynamic dump now shows coverage information that was contained in
    that proc dynamic.

deep_profiler/canonical.m:
deep_profiler/dump.m:
    Conform to changes in profile.m.

deep_profiler/io_combinator.m:
    Add a 13-arg version of maybe_error_sequence.

deep_profiler/Mercury.options:
    Documented another trace flag.
2010-09-21 01:09:17 +00:00
Paul Bone
6aee01800e Improve the performance of coverage propagation.
Modify coverage propagation code so that it uses less memory.  This makes the
recursion frequency query roughly 8% faster.

Avoid generating call site summary reports for every call site in a procedure
when doing coverage propagation.  Coverage propagation needs only the number
of calls and exits; generating the other data was overkill and expensive.
This makes the recursion frequency query roughly 46 times faster.  It now
finishes in roughly 50 seconds rather than 36 minutes.  This was tested once
the above change had already been made.  It is possible that the above change
had more of an impact than was measured.

These speed improvements were measured using a profile of the Mercury
compiler's execution.

deep_profiler/coverage.m:
    Create special constructor symbols in our data types for cases where
    execution counts are zero because the code is never executed.  This is
    relatively common.  This uses less memory and causes fewer dereferences
    during execution.

    Change the type of the call sites map passed to the coverage propagation
    code.  It now uses values of the type own_prof_info rather than
    call_site_perf.  The own_prof_info structures are already available in the
    deep data structure, whereas the call_site_perf structures must be
    generated manually.

deep_profiler/create_report.m:
    Conform to changes in coverage.m.

    In particular build a call site map mapping goal paths to own_prof_info
    values rather than call_site_perf values.  This is where the second
    performance improvement has been made.

deep_profiler/display_report.m:
    Conform to changes in coverage.m.
2010-08-30 01:12:25 +00:00
Paul Bone
f16e8118bd Implement a linear alternative to the exponential algorithm that determines how
best to parallelise a conjunction.

Made other performance improvements.

mdbcomp/feedback.m:
    Add a field to the candidate_parallel_conjunction_params structure giving
    the preferred algorithm.

    Simplify the parallel exec metrics type here.  It is now used only to
    summarise information that has already been calculated.  The original code
    has been moved into deep_profiler/measurements.m

    Add a field to the candidate_par_conjunction structure giving the index
    within the conjunction of the first goal in the partition.  This is used
    for pretty-printing parallelisation reports.

    Incremented the feedback format version number.

deep_profiler/measurements.m:
    Move the original parallel exec metrics type and code here from
    mdbcomp/feedback.m

deep_profiler/create_report.m:
    Avoid a performance issue by memoizing create_proc_var_use_dump_report
    which is called by the analysis for the same procedure (at different
    dynamic call sites) many times.  In simple cases this more than doubled the
    execution time, in more complicated cases it should perform even better.

    Conform to changes in coverage.m

deep_profiler/mdprof_fb.automatic_parallelism.m:
    Implement the linear algorithm for parallelising a conjunction.

    Since we don't do parallelism specialisation, don't try to parallelise
    the same procedure more than once.  This should avoid some performance
    problems, but I haven't tested it.

    If it is impossible to generate an independent parallelisation, generate
    a dependent one, and then report it as something we cannot parallelise.
    This can help programmers write more independent code.

    Use directed graphs rather than lookup maps to track dependencies.  This
    simplifies some code as the digraph standard library module already has
    code to compute reverse graphs and transitive closures of the graphs.

    Since there are now two parallelisation algorithms, code common to both
    of them has been factored out.

    The objective function used by the branch and bound search has been
    modified to take into account the overheads of parallel execution.  It is:
        minimise(ParTime + ParOverheads * 2.0)
    This way, we allow the overheads to increase by 1 csc provided that this
    reduces ParTime by more than 2 csc.  (csc = call sequence counts)
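
The objective above can be sketched as a one-line scoring function.  This is
an illustrative Python rendering; the actual branch-and-bound search is
Mercury code, and the function name is invented.

```python
def parallelisation_objective(par_time, par_overheads):
    """Score a candidate parallelisation; lower is better.

    Weighting the overheads by 2.0 means one extra csc of overhead is
    accepted only when it saves more than two csc of parallel run time.
    """
    return par_time + par_overheads * 2.0
```

So a candidate with ParTime 7 and overheads 2 beats one with ParTime 12 and
no overheads, because the saved run time outweighs the doubled overhead cost.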

    When pretty-printing parallelisation reports, print each goal in the
    parallelised conjunction with its new goal path.  This makes debugging
    easier for large procedures.

    Fix a bug where the goal path of scope goals was calculated incorrectly;
    this led to a thrown exception in the coverage analysis code when it used
    the goal path to look up the call site of a call.

deep_profiler/mdprof_feedback.m:
    Support a new command line option for choosing which algorithm to use.
    Additionally, the linear algorithm will be used if the problem is above a
    certain size and the exponential algorithm was chosen.  This behaviour,
    including the fallback threshold, can be configured.

    Print the user's choice of algorithm as part of the candidate parallel
    conjunctions report.

deep_profiler/message.m:
    Add an extra log message type for exceptions thrown during auto
    parallelisation.

deep_profiler/program_representation_utils.m:
    The goal_rep pretty printer now prints the goal path for each goal.

deep_profiler/coverage.m:
    procrep_annotate_with_coverage now catches and returns exceptions in a
    maybe_error result.

deep_profiler/cliques.m:
    Copy predicates from the standard library into cliques.m to prevent the
    lack of tail recursion from blowing the stack in some cases.  (cliques.m is
    compiled with --trace minimum).

deep_profiler/callgraph.m:
    Copy list.foldl from the standard library into callgraph.m and transform
    it so that it is less likely to smash the stack in non-tail-recursive
    grades.

deep_profiler/read_profile.m:
    Transform read_nodes so that it is less likely to smash the stack in
    non-tail-recursive grades.

deep_profiler/Mercury.options:
    Removed old options that were used to work around a bug.  The bug still
    exists, but the workaround moved into the compiler long ago.
2010-08-04 02:25:02 +00:00
Paul Bone
10a612bd68 Improve the performance of the automatic parallelism analysis.
Estimated hours taken: 1.
Branches: main

Improve the performance of the automatic parallelism analysis.

Profiling this code showed that it spent 90% of its time generating some
error messages used by some sanity checks in the coverage annotation code.
By generating these error messages only when an error occurs, the performance
has been improved significantly.

deep_profiler/coverage.m:
	As above.

deep_profiler/message.m:
	Fix an incorrect capitalisation, probably caused by a typo.
2009-04-16 06:15:56 +00:00
Paul Bone
e70295415d Various changes for automatic parallelism, the two major changes are:
Estimated hours taken: 20.
Branches: main

Various changes for automatic parallelism, the two major changes are:

Refactored some of the search for parallel conjunctions to use types that
describe the cost of a call site and the cost of a clique-procedure.  These
new types make it harder for programmers to mistakenly compare values of the
two types.

Where possible, use the body of a clique to determine the cost of recursive
calls at the top level of recursion.  This improves the accuracy of this
calculation significantly.

deep_profiler/mdprof_fb.automatic_parallelism.m:
    As above.

deep_profiler/measurements.m:
    New cost data types as above.

deep_profiler/coverage.m:
    When coverage information completeness tests fail, print out the
    procedure where the coverage information is incomplete.

deep_profiler/message.m:
    Introduce a new warning used in the automatic parallelism analysis.

deep_profiler/profile.m:
    Introduce a semidet version of deep_get_progrep_det.

mdbcomp/program_representation.m:
    Introduce a predicate to return the goal_rep from inside a case_rep
    structure.  This can be used in higher-order code to turn a case list
    into a goal list, for example.

deep_profiler/Mercury.options:
    Keep a commented out MCFLAGS definition that can be used to enable
    debugging output for the automatic parallelism analysis.
2009-04-02 09:49:27 +00:00
Zoltan Somogyi
3ad840ebb2 Increase the deep profiler's cohesion by dividing the
Estimated hours taken: 1
Branches: main

Increase the deep profiler's cohesion by dividing the
program_representation_utils.m module into three files:

- coverage.m containing the coverage analysis algorithm,
- var_use_analysis.m containing the variable use analysis algorithm, and
- program_representation_utils.m itself containing generally useful predicates
  working on program representations.

The last is actually the smallest file.

There are no algorithm changes, only the movement of code and a few minor
fixes of white space and typos in comments.

deep_profiler/coverage.m:
deep_profiler/var_use_analysis.m:
	New modules, as described above.

deep_profiler/program_representation_utils.m:
	Delete the code moved to the new modules.

mdbcomp/program_representation.m:
	Move some utility types and predicates here from
	deep_profiler/program_representation_utils.m, since they may be useful
	for other tasks.

deep_profiler/report.m:
	Move the data types specific to coverage and var use analysis to the
	new modules, along with the utility predicates operating on them.

	Import the new modules as needed.

deep_profiler/create_report.m:
deep_profiler/display_report.m:
deep_profiler/mdprof_feedback.m:
	Import the new modules as needed.
2008-11-05 03:38:40 +00:00