Update to fparser 0.2 (#373)
* Support new and old style of PSyclone command line (no more nemo api etc)

* Fix mypy errors.

* Added missing tests for calling psyclone, and converting old style to new style arguments and vice versa.

* Updated comment.

* Removed mixin, use a simple regex instead.

* Added support for ifx/icx compiler as intel-llvm class.

* Added support for nvidia compiler.

* Add preliminary support for Cray compiler.

* Added Cray compiler wrapper ftn and cc.

* Follow a more consistent naming scheme for crays, even though the native compiler names are longer (crayftn-cray, craycc-cray).

* Changed names again.

* Renamed cray compiler wrapper to be CrayCcWrapper and CrayFtnWrapper, to avoid confusion with Craycc.

* Fixed incorrect name in comments.

* Additional compilers (#349)

* Moved OBJECT_ARCHIVES from constants to ArtefactSet.

* Moved PRAGMAD_C from constants to ArtefactSet.

* Turned 'all_source' into an enum.

* Allow integer as revision.

* Fixed flake8 error.

* Removed specific functions to add/get fortran source files etc.

* Removed non-existing and unnecessary collections.

* Try to fix all run_configs.

* Fixed rebase issues.

* Added replace functionality to ArtefactStore, updated test_artefacts to cover all lines in that file.

* Started to replace artefacts when files are pre-processed.

* Removed linker argument from linking step in all examples.

* Try to get jules to link.

* Fixed build_jules.

* Fixed other issues raised in reviews.

* Simplify handling of X90 files by replacing the X90 with x90, meaning only one artefact set is involved when running PSyclone.

* Make OBJECT_ARCHIVES also a dict, migrate more code to replace/add files to the default build artefact collections.

* Fixed some examples.

* Fix flake8 error.

* Fixed failing tests.

* Support empty comments.

* Fix preprocessor to not unnecessarily remove and add files that are already in the output directory.

* Allow find_source_files to be called more than once by adding files (not replacing the artefact).

* Updated lfric_common so that files created by configurator are written in build (not source).

* Use c_build_files instead of pragmad_c.

* Removed unnecessary str.

* Documented the new artefact set handling.

* Fixed typo.

* Make the PSyclone API configurable.

* Fixed formatting of documentation, properly used ArtefactSet names.

* Support .f and .F Fortran files.

* Removed setter for tool.is_available, which was only used for testing.

* #3 Fix documentation and coding style issues from review.

* Renamed Categories into Category.

* Minor coding style cleanup.

* Removed more unnecessary ().

* Re-added (invalid) grab_pre_build call.

* Fixed typo.

* Renamed set_default_vendor to set_default_compiler_suite.

* Renamed VendorTool to CompilerSuiteTool.

* Also accept a Path as exec_name specification for a tool.

* Move the check_available function into the base class.

* Fixed some types and documentation.

* Fix typing error.

* Added explanation for meta-compiler.

* Improved error handling and documentation.

* Replace mpiifort with mpifort to be a tiny bit more portable.

* Use classes to group tests for git/svn/fcm together.

* Fixed issue in get_transformation script, and moved script into lfric_common to remove code duplication.

* Code improvement as suggested by review.

* Fixed run config

* Added reference to ticket.

* Updated type information.

* More typing fixes.

* Fixed typing warnings.

* As requested by reviewer, removed is_working_copy functionality.

* Issue a warning (which can be silenced) when a tool in a toolbox is replaced.
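
A minimal sketch of that silenceable-warning pattern, using Python's standard warnings module (the class and function names here are illustrative, not fab's actual API):

import warnings

class ToolReplacedWarning(UserWarning):
    """Hypothetical warning category for replaced toolbox entries."""

def add_tool(toolbox: dict, category: str, tool: object) -> None:
    # Warn, rather than fail, if a tool for this category is already set.
    if category in toolbox:
        warnings.warn(f"Replacing tool for '{category}'.", ToolReplacedWarning)
    toolbox[category] = tool

# Callers silence it the usual way:
# warnings.simplefilter('ignore', ToolReplacedWarning)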

* Fixed flake8.

* Fixed flake8.

* Fixed failing test.

* Addressed issues raised in review.

* Removed now unnecessary operations.

* Updated some type information.

* Fixed all references to APIs to be consistent with PSyclone 2.5.

* Added api to the checksum computation.

* Fixed type information.

* Added test to verify that changing the api changes the checksum.

* Make compiler version a tuple of integers

* Update some tests to use tuple versions

* Explicitly test handling of bad version format

* Fix formatting

* Tidying up

* Make compiler raise an error for any invalid version string

Assume these compilers don't need to be hashed.
Saves dealing with empty tuples.
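
A sketch of what that version handling amounts to (a hypothetical stand-alone function; fab's real logic lives on the Compiler class):

import re
from typing import Tuple

def parse_version(version: str) -> Tuple[int, ...]:
    """Turn e.g. '11.2.0' into (11, 2, 0); raise for malformed input."""
    if not re.fullmatch(r'\d+(\.\d+)*', version):
        raise RuntimeError(f"Unexpected version format: '{version}'")
    return tuple(int(part) for part in version.split('.'))

assert parse_version('11.2.0') == (11, 2, 0)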

* Check compiler version string for compiler name

* Fix formatting

* Add compiler.get_version_string() method

Includes other cleanup from PR comments

* Add mpi and openmp settings to BuildConfig, made compiler MPI aware.
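
In outline, the new settings look like this (BuildConfigSketch is an illustrative stand-in for fab's BuildConfig, whose real constructor takes more arguments):

from dataclasses import dataclass

@dataclass
class BuildConfigSketch:
    """Stand-in for fab's BuildConfig with the flags added in this PR."""
    project_label: str
    mpi: bool = False      # later made a constructor default (see below)
    openmp: bool = False   # consulted by compilers and the linker

config = BuildConfigSketch('my_project', mpi=True, openmp=True)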

* Looks like the circular dependency has been fixed.

* Revert "Looks like the circular dependency has been fixed." ...
while it works with the tests, a real application still triggered it.

This reverts commit 150dc37.

* Don't even try to find a C compiler if no C files are to be compiled.

* Updated gitignore to ignore (recently renamed) documentation.

* Fixed failing test.

* Return from compile Fortran early if there are no files to compile. Fixed coding style.

* Add MPI-enabled wrappers for the Intel and GNU compilers.

* Fixed test.

* Automatically add openmp flag to compiler and linker based on BuildConfig.
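
For the automatic OpenMP flag, the logic is roughly as follows, using the BuildConfigSketch above (the '-fopenmp' string is the gfortran spelling; other suites use different flags):

def compile_flags(config, base_flags):
    """Add the compiler's OpenMP flag when the build config requests it."""
    flags = list(base_flags)
    if config.openmp:
        flags.append('-fopenmp')
    return flags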

* Removed enforcement of keyword parameters, which is not supported in python 3.7.

* Fixed failing test.

* Support more than one tool of a given suite by sorting them.
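
That is, when several compilers of one suite are available, the default can simply be the newest, e.g. (attribute names assumed):

def default_tool(candidates):
    """Of several same-suite tools, prefer the highest version tuple."""
    return sorted(candidates, key=lambda tool: tool.version)[-1]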

* Use a different version check for each compiler vendor with mixins

* Refactoring, remove unittest compiler class

* Fix some mypy errors

* Use 'Union' type hint to fix build checks

* Added option to add flags to a tool.

* Introduce proper compiler wrapper, used this to implement properly wrapper MPI compiler.
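
A compiler wrapper in the sense used here, in miniature (class names are illustrative; fab's real classes carry far more behaviour):

class CompilerSketch:
    def __init__(self, name: str, exec_name: str, mpi: bool = False):
        self.name = name
        self.exec_name = exec_name
        self.mpi = mpi

class CompilerWrapperSketch(CompilerSketch):
    """Wraps another compiler, e.g. mpif90 wrapping gfortran."""
    def __init__(self, name, exec_name, compiler, mpi=False):
        super().__init__(name, exec_name, mpi=mpi)
        self.compiler = compiler   # the wrapped compiler (getter added below)

mpif90 = CompilerWrapperSketch('mpif90-gfortran', 'mpif90',
                               CompilerSketch('gfortran', 'gfortran'),
                               mpi=True)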

* Fixed typo in types.

* Return run_version_command to base Compiler class

Provides default version command that can be overridden for other compilers.
Also fix some incorrect tests
Other tidying

* Add a missing type hint

* Added (somewhat stupid) 'test' to reach 100% coverage of PSyclone tool.

* Simplified MPI support in wrapper.

* More compiler wrapper coverage.

* Removed duplicated function.

* Removed debug print.

* Removed permanently changing compiler attributes, which can cause test failures later.

* More test for C compiler wrapper.

* More work on compiler wrapper tests.

* Fixed version and availability handling, added missing tests for 100% coverage.

* Fixed typing error.

* Try to fix python 3.7.

* Tried to fix failing tests.

* Remove inheritance from mixins and use protocol

* Simplify compiler inheritance

Mixins have static methods with unique names,
overrides only happen in concrete classes

* Updated wrapper and tests to handle error raised in get_version.

* Simplified regular expressions (now tests cover detection of version numbers with only a major version).
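
A regex in that spirit, accepting a bare major version as well as dotted forms (illustrative, not fab's exact pattern):

import re

_VERSION = re.compile(r'\b(\d+(?:\.\d+)*)\b')

for line in ('GNU Fortran (GCC) 12', 'ifort version 2021.4.0'):
    print(_VERSION.search(line).group(1))   # '12', then '2021.4.0'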

* Test for missing mixin.

* Use the parsing mixin from the compiler in a compiler wrapper.

* Use setattr instead of assignment to make mypy happy.

* Simplify usage of compiler-specific parsing mixins.

* Minor code cleanup.

* Updated documentation.

* Simplify usage of compiler-specific parsing mixins.

* Test for missing mixin.

* Fixed test.

* Added missing openmp_flag property to compiler wrapper.

* Don't use isinstance for consistency check, which does not work for CompilerWrappers.

* Fixed isinstance test for C compilation which doesn't work with a CompilerWrapper.

* Use a linker's compiler to determine MPI support. Removed mpi property from CompilerSuite.

* Added more tests for invalid version numbers.

* Added more test cases for invalid version number, improved regex to work as expected.

* Fixed typo in test.

* Fixed flake/mypy errors.

* Combine wrapper flags with flags from wrapped compiler.

* Made mypy happy.

* Fixed test.

* Split tests into smaller individual ones, fixed missing assert in test.

* Parameterised compiler version tests to also test wrapper.

* Added missing MPI parameter when getting the compiler.

* Fixed comments.

* Order parameters to be in same order for various compiler classes.

* Remove stray character

* Added getter for wrapped compiler.

* Fixed small error that would prevent nested compiler wrappers from being used.

* Added a cast to make mypy happy.

* Add simple getter for linker library flags

* Add getter for linker flags by library

* Fix formatting

* Add optional libs argument to link function

* Reorder and clean up linker tests

* Make sure `Linker.link()` raises for unknown lib
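
Taken together, the linker-library bullets describe roughly this shape (a sketch with assumed names; fab's real Linker wraps a compiler and does much more):

class LinkerSketch:
    def __init__(self):
        self._lib_flags = {}

    def add_lib_flags(self, lib: str, flags: list) -> None:
        self._lib_flags[lib] = list(flags)

    def get_lib_flags(self, lib: str) -> list:
        """Raise for an unknown library rather than silently dropping it."""
        try:
            return self._lib_flags[lib]
        except KeyError:
            raise RuntimeError(f"Unknown library name: '{lib}'") from None

linker = LinkerSketch()
linker.add_lib_flags('yaml', ['-L/opt/yaml/lib', '-lyaml'])
assert linker.get_lib_flags('yaml') == ['-L/opt/yaml/lib', '-lyaml']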

* Add missing type

* Fix typing error

* Add 'libs' argument to link_exe function

* Try to add documentation for the linker libs feature

* Use correct list type in link_exe hint

* Add silent replace option to linker.add_lib_flags
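
The silent-replace option then covers the overwrite case, sketched here as a free function over a plain dict:

import warnings

def add_lib_flags(lib_flags: dict, lib: str, flags: list,
                  silent_replace: bool = False) -> None:
    """Replace a library's flags, warning unless asked to do so silently."""
    if lib in lib_flags and not silent_replace:
        warnings.warn(f"Replacing existing flags for library '{lib}'.")
    lib_flags[lib] = list(flags)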

* Fixed spelling mistake in option.

* Clarified documentation.

* Removed unnecessary functions in CompilerWrapper.

* Fixed failing tests triggered by executing them in a specific order (tools then steps)

* Fixed line lengths.

* Add tests for linker LDFLAG

* Add pre- and post- lib flags to link function

* Fix syntax in built-in lib flags

* Remove netcdf as a built-in linker library

Bash-style substitution is not currently handled

* Configure pre- and post-lib flags on the Linker object

Previously they were passed into the Linker.link() function
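
With the pre- and post-lib flags held on the Linker, the final command line might be assembled like this (ordering inferred from the bullets; names assumed):

def build_link_command(objects, pre_lib_flags, lib_flags, post_lib_flags):
    """Order: object files, pre-lib flags, per-library flags, post-lib flags."""
    return list(objects) + pre_lib_flags + lib_flags + post_lib_flags

cmd = build_link_command(['a.o', 'b.o'],
                         pre_lib_flags=['-L/opt/netcdf/lib'],
                         lib_flags=['-lnetcdff', '-lnetcdf'],
                         post_lib_flags=['-lm'])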

* Use more realistic linker lib flags

* Formatting fix

* Made mpi and openmp default to False in the BuildConfig constructor.

* Removed white space.

* Support compilers that do not support OpenMP.

* Added documentation for openmp parameter.

---------

Co-authored-by: jasonjunweilyu <[email protected]>
Co-authored-by: Luke Hoffmann <[email protected]>
Co-authored-by: Luke Hoffmann <[email protected]>

* Added shell tool.

* Try to make mypy happy.

* Removed debug code.

* ToolRepository now only returns defaults that are available. Updated tests to mark tools as available.

* Fixed typos and coding style.

* Fixed failing tests.

* Updated fparser dependency to version 0.2.

* Replace old code for handling sentinels with triggering this behaviour in fparser. Require config in constructor of Analyser classes.
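
The mechanism is visible in the diff below: the hand-rolled sentinel scan is deleted and fparser's reader is asked to honour OpenMP sentinels instead. In miniature (assuming FortranStringReader accepts the same keyword as the FortranFileReader used in the diff):

from fparser.common.readfortran import FortranStringReader

# fparser >= 0.2: treat '!$' OpenMP sentinel lines as code, not comments.
reader = FortranStringReader('!$ use omp_lib',
                             include_omp_conditional_lines=True)
print(reader.next())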

* Fixed tests for latest changes.

* Removed invalid openmp continuation line - since now fparser fails when trying to parse this line.

* Added test for disabled openmp parsing. Updated test to work with new test file.

* Coding style changes.

* Fix flake issues.

* Fixed double _.

* Removed more accesses to private members.

* Added missing type hint.

* Make flake8 happy.

---------

Co-authored-by: Luke Hoffmann <[email protected]>
Co-authored-by: jasonjunweilyu <[email protected]>
Co-authored-by: Luke Hoffmann <[email protected]>
4 people authored Jan 15, 2025
1 parent 6a2b40f commit 66c1fcd
Showing 12 changed files with 140 additions and 110 deletions.
2 changes: 1 addition & 1 deletion pyproject.toml
@@ -7,7 +7,7 @@ authors = [
license = {file = 'LICENSE.txt'}
dynamic = ['version', 'readme']
requires-python = '>=3.7, <4'
dependencies = ['fparser']
dependencies = ['fparser >= 0.2']
classifiers = [
'Development Status :: 1 - Planning',
'Environment :: Console',
42 changes: 24 additions & 18 deletions source/fab/parse/c.py
@@ -11,44 +11,48 @@
from pathlib import Path
from typing import List, Optional, Union, Tuple

from fab.dep_tree import AnalysedDependent

try:
import clang # type: ignore
import clang.cindex # type: ignore
except ImportError:
clang = None

from fab.build_config import BuildConfig
from fab.dep_tree import AnalysedDependent
from fab.util import log_or_dot, file_checksum

logger = logging.getLogger(__name__)


class AnalysedC(AnalysedDependent):
"""
An analysis result for a single C file, containing symbol definitions and dependencies.
An analysis result for a single C file, containing symbol definitions and
dependencies.
Note: We don't need to worry about compile order with pure C projects; we can compile all in one go.
However, with a *Fortran -> C -> Fortran* dependency chain, we do need to ensure that one Fortran file
is compiled before another, so this class must be part of the dependency tree analysis.
Note: We don't need to worry about compile order with pure C projects; we
can compile all in one go. However, with a *Fortran -> C -> Fortran*
dependency chain, we do need to ensure that one Fortran file is
compiled before another, so this class must be part of the
dependency tree analysis.
"""
# Note: This subclass adds nothing to it's parent, which provides everything it needs.
# We'd normally remove an irrelevant class like this but we want to keep the door open
# for filtering analysis results by type, rather than suffix.
pass
# Note: This subclass adds nothing to it's parent, which provides
# everything it needs. We'd normally remove an irrelevant class
# like this but we want to keep the door open for filtering
# analysis results by type, rather than suffix.


class CAnalyser(object):
class CAnalyser:
"""
Identify symbol definitions and dependencies in a C file.
"""

def __init__(self):
def __init__(self, config: BuildConfig):

# runtime
self._config = None
self._config = config
self._include_region: List[Tuple[int, str]] = []

# todo: simplifiy by passing in the file path instead of the analysed tokens?
def _locate_include_regions(self, trans_unit) -> None:
Expand Down Expand Up @@ -100,8 +104,7 @@ def _check_for_include(self, lineno) -> Optional[str]:
include_stack.pop()
if include_stack:
return include_stack[-1]
else:
return None
return None

def run(self, fpath: Path) \
-> Union[Tuple[AnalysedC, Path], Tuple[Exception, None]]:
Expand Down Expand Up @@ -149,9 +152,11 @@ def run(self, fpath: Path) \
continue
logger.debug('Considering node: %s', node.spelling)

if node.kind in {clang.cindex.CursorKind.FUNCTION_DECL, clang.cindex.CursorKind.VAR_DECL}:
if node.kind in {clang.cindex.CursorKind.FUNCTION_DECL,
clang.cindex.CursorKind.VAR_DECL}:
self._process_symbol_declaration(analysed_file, node, usr_symbols)
elif node.kind in {clang.cindex.CursorKind.CALL_EXPR, clang.cindex.CursorKind.DECL_REF_EXPR}:
elif node.kind in {clang.cindex.CursorKind.CALL_EXPR,
clang.cindex.CursorKind.DECL_REF_EXPR}:
self._process_symbol_dependency(analysed_file, node, usr_symbols)
except Exception as err:
logger.exception(f'error walking parsed nodes {fpath}')
@@ -166,7 +171,8 @@ def _process_symbol_declaration(self, analysed_file, node, usr_symbols):
if node.is_definition():
# only global symbols can be used by other files, not static symbols
if node.linkage == clang.cindex.LinkageKind.EXTERNAL:
# This should catch function definitions which are exposed to the rest of the application
# This should catch function definitions which are exposed to
# the rest of the application
logger.debug(' * Is defined in this file')
# todo: ignore if inside user pragmas?
analysed_file.add_symbol_def(node.spelling)
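
With this change, callers construct the analyser with a config up front, roughly as follows (the BuildConfig arguments are abbreviated; the real constructor also wants things like a ToolBox):

from pathlib import Path

from fab.build_config import BuildConfig
from fab.parse.c import CAnalyser

config = BuildConfig(project_label='example')   # plus your real settings
analysed, prebuild_path = CAnalyser(config=config).run(Path('src/util.c'))
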
41 changes: 10 additions & 31 deletions source/fab/parse/fortran.py
@@ -11,7 +11,6 @@
from pathlib import Path
from typing import Union, Optional, Iterable, Dict, Any, Set

from fparser.common.readfortran import FortranStringReader # type: ignore
from fparser.two.Fortran2003 import ( # type: ignore
Entity_Decl_List, Use_Stmt, Module_Stmt, Program_Stmt, Subroutine_Stmt, Function_Stmt, Language_Binding_Spec,
Char_Literal_Constant, Interface_Block, Name, Comment, Module, Call_Stmt, Derived_Type_Def, Derived_Type_Stmt,
Expand All @@ -21,6 +20,7 @@
from fparser.two.Fortran2008 import ( # type: ignore
Type_Declaration_Stmt, Attr_Spec_List)

from fab.build_config import BuildConfig
from fab.dep_tree import AnalysedDependent
from fab.parse.fortran_common import iter_content, _has_ancestor_type, _typed_child, FortranAnalyserBase
from fab.util import file_checksum, string_checksum
@@ -167,15 +167,21 @@ class FortranAnalyser(FortranAnalyserBase):
A build step which analyses a fortran file using fparser2, creating an :class:`~fab.dep_tree.AnalysedFortran`.
"""
def __init__(self, std=None, ignore_mod_deps: Optional[Iterable[str]] = None):
def __init__(self,
config: BuildConfig,
std: Optional[str] = None,
ignore_mod_deps: Optional[Iterable[str]] = None):
"""
:param config: The BuildConfig to use.
:param std:
The Fortran standard.
:param ignore_mod_deps:
Module names to ignore in use statements.
"""
super().__init__(result_class=AnalysedFortran, std=std)
super().__init__(config=config,
result_class=AnalysedFortran,
std=std)
self.ignore_mod_deps: Iterable[str] = list(ignore_mod_deps or [])
self.depends_on_comment_found = False

@@ -295,33 +301,6 @@ def _process_comment(self, analysed_file, obj):
# without .o means a fortran symbol
else:
analysed_file.add_symbol_dep(dep)
if comment[:2] == "!$":
# Check if it is a use statement with an OpenMP sentinel:
# Use fparser's string reader to discard potential comment
# TODO #327: once fparser supports reading the sentinels,
# this can be removed.
# fparser issue: https://github.com/stfc/fparser/issues/443
reader = FortranStringReader(comment[2:])
try:
line = reader.next()
except StopIteration:
# No other item, ignore
return
try:
# match returns a 5-tuple, the third one being the module name
module_name = Use_Stmt.match(line.strline)[2]
module_name = module_name.string
except Exception:
# Not a use statement in a sentinel, ignore:
return

# Register the module name
if module_name in self.ignore_mod_deps:
logger.debug(f"ignoring use of {module_name}")
return
if module_name.lower() not in self._intrinsic_modules:
# found a dependency on fortran
analysed_file.add_module_dep(module_name)

def _process_subroutine_or_function(self, analysed_file, fpath, obj):
# binding?
Expand Down Expand Up @@ -353,7 +332,7 @@ def _process_subroutine_or_function(self, analysed_file, fpath, obj):
analysed_file.add_symbol_def(name.string)


class FortranParserWorkaround(object):
class FortranParserWorkaround():
"""
Use this class to create a workaround when the third-party Fortran parser is unable to process a valid source file.
76 changes: 51 additions & 25 deletions source/fab/parse/fortran_common.py
@@ -10,13 +10,14 @@
import logging
from abc import ABC, abstractmethod
from pathlib import Path
from typing import Union, Tuple, Type
from typing import Optional, Tuple, Type, Union

from fparser.common.readfortran import FortranFileReader # type: ignore
from fparser.two.parser import ParserFactory # type: ignore
from fparser.two.utils import FortranSyntaxError # type: ignore

from fab import FabException
from fab.build_config import BuildConfig
from fab.dep_tree import AnalysedDependent
from fab.parse import EmptySourceFile
from fab.util import log_or_dot, file_checksum
@@ -58,49 +59,61 @@ def _typed_child(parent, child_type: Type, must_exist=False):
# Look for a child of a certain type.
# Returns the child or None.
# Raises ValueError if more than one child of the given type is found.
children = list(filter(lambda child: isinstance(child, child_type), parent.children))
children = list(filter(lambda child: isinstance(child, child_type),
parent.children))
if len(children) > 1:
raise ValueError(f"too many children found of type {child_type}")

if children:
return children[0]

if must_exist:
raise FabException(f'Could not find child of type {child_type} in {parent}')
raise FabException(f'Could not find child of type {child_type} '
f'in {parent}')
return None


class FortranAnalyserBase(ABC):
"""
Base class for Fortran parse-tree analysers, e.g FortranAnalyser and X90Analyser.
Base class for Fortran parse-tree analysers, e.g FortranAnalyser and
X90Analyser.
"""
_intrinsic_modules = ['iso_fortran_env', 'iso_c_binding']

def __init__(self, result_class, std=None):
def __init__(self, config: BuildConfig,
result_class,
std: Optional[str] = None):
"""
:param config: The BuildConfig object.
:param result_class:
The type (class) of the analysis result. Defined by the subclass.
:param std:
The Fortran standard.
"""
self._config = config
self.result_class = result_class
self.f2008_parser = ParserFactory().create(std=std or "f2008")

# todo: this, and perhaps other runtime variables like it, might be better set at construction
# if we construct these objects at runtime instead...
# runtime, for child processes to read
self._config = None
@property
def config(self) -> BuildConfig:
'''Returns the BuildConfig to use.
'''
return self._config

def run(self, fpath: Path) \
-> Union[Tuple[AnalysedDependent, Path], Tuple[EmptySourceFile, None], Tuple[Exception, None]]:
-> Union[Tuple[AnalysedDependent, Path],
Tuple[EmptySourceFile, None],
Tuple[Exception, None]]:
"""
Parse the source file and record what we're interested in (subclass specific).
Parse the source file and record what we're interested in (subclass
specific).
Reloads previous analysis results if available.
Returns the analysis data and the result file where it was stored/loaded.
Returns the analysis data and the result file where it was
stored/loaded.
"""
# calculate the prebuild filename
@@ -114,9 +127,11 @@ def run(self, fpath: Path) \
# Load the result file into whatever result class we use.
loaded_result = self.result_class.load(analysis_fpath)
if loaded_result:
# This result might have been created by another user; their prebuild folder copied to ours.
# If so, the fpath in the result will *not* point to the file we eventually want to compile,
# it will point to the user's original file, somewhere else. So replace it with our own path.
# This result might have been created by another user; their
# prebuild folder copied to ours. If so, the fpath in the
# result will *not* point to the file we eventually want to
# compile, it will point to the user's original file,
# somewhere else. So replace it with our own path.
loaded_result.fpath = fpath
return loaded_result, analysis_fpath

@@ -125,43 +140,54 @@ def run(self, fpath: Path) \
# parse the file, get a node tree
node_tree = self._parse_file(fpath=fpath)
if isinstance(node_tree, Exception):
return Exception(f"error parsing file '{fpath}':\n{node_tree}"), None
return (Exception(f"error parsing file '{fpath}':\n{node_tree}"),
None)
if node_tree.content[0] is None:
logger.debug(f" empty tree found when parsing {fpath}")
# todo: If we don't save the empty result we'll keep analysing it every time!
# todo: If we don't save the empty result we'll keep analysing
# it every time!
return EmptySourceFile(fpath), None

# find things in the node tree
analysed_file = self.walk_nodes(fpath=fpath, file_hash=file_hash, node_tree=node_tree)
analysed_file = self.walk_nodes(fpath=fpath, file_hash=file_hash,
node_tree=node_tree)
analysed_file.save(analysis_fpath)

return analysed_file, analysis_fpath

def _get_analysis_fpath(self, fpath, file_hash) -> Path:
return Path(self._config.prebuild_folder / f'{fpath.stem}.{file_hash}.an')
return Path(self.config.prebuild_folder /
f'{fpath.stem}.{file_hash}.an')

def _parse_file(self, fpath):
"""Get a node tree from a fortran file."""
reader = FortranFileReader(str(fpath), ignore_comments=False)
reader.exit_on_error = False # don't call sys.exit, it messes up the multi-processing
reader = FortranFileReader(
str(fpath),
ignore_comments=False,
include_omp_conditional_lines=self.config.openmp)
# don't call sys.exit, it messes up the multi-processing
reader.exit_on_error = False

try:
tree = self.f2008_parser(reader)
return tree
except FortranSyntaxError as err:
# we can't return the FortranSyntaxError, it breaks multiprocessing!
# Don't return the FortranSyntaxError, it breaks multiprocessing!
logger.error(f"\nfparser raised a syntax error in {fpath}\n{err}")
return Exception(f"syntax error in {fpath}\n{err}")
except Exception as err:
logger.error(f"\nunhandled error '{type(err)}' in {fpath}\n{err}")
return Exception(f"unhandled error '{type(err)}' in {fpath}\n{err}")
return Exception(f"unhandled error '{type(err)}' in "
f"{fpath}\n{err}")

@abstractmethod
def walk_nodes(self, fpath, file_hash, node_tree) -> AnalysedDependent:
"""
Examine the nodes in the parse tree, recording things we're interested in.
Examine the nodes in the parse tree, recording things we're
interested in.
Return type depends on our subclass, and will be a subclass of AnalysedDependent.
Return type depends on our subclass, and will be a subclass of
AnalysedDependent.
"""
raise NotImplementedError
5 changes: 3 additions & 2 deletions source/fab/parse/x90.py
@@ -9,6 +9,7 @@
from fparser.two.Fortran2003 import Use_Stmt, Call_Stmt, Name, Only_List, Actual_Arg_Spec_List, Part_Ref # type: ignore

from fab.parse import AnalysedFile
from fab.build_config import BuildConfig
from fab.parse.fortran_common import FortranAnalyserBase, iter_content, logger, _typed_child
from fab.util import by_type

@@ -64,8 +65,8 @@ class X90Analyser(FortranAnalyserBase):
# Makes a parsable fortran version of x90.
# todo: Use hashing to reuse previous analysis results.

def __init__(self):
super().__init__(result_class=AnalysedX90)
def __init__(self, config: BuildConfig):
super().__init__(config=config, result_class=AnalysedX90)

def walk_nodes(self, fpath, file_hash, node_tree) -> AnalysedX90: # type: ignore

10 changes: 4 additions & 6 deletions source/fab/steps/analyse.py
@@ -130,8 +130,10 @@ def analyse(
unreferenced_deps = list(unreferenced_deps or [])

# todo: these seem more like functions
fortran_analyser = FortranAnalyser(std=std, ignore_mod_deps=ignore_mod_deps)
c_analyser = CAnalyser()
fortran_analyser = FortranAnalyser(config=config,
std=std,
ignore_mod_deps=ignore_mod_deps)
c_analyser = CAnalyser(config=config)

# Creates the *build_trees* artefact from the files in `self.source_getter`.

Expand All @@ -144,10 +146,6 @@ def analyse(
# - At this point we have a source tree for the entire source.
# - (Optionally) Extract a sub tree for every root symbol, if provided. For building executables.

# todo: code smell - refactor (in another PR to keep things small)
fortran_analyser._config = config
c_analyser._config = config

# parse
files: List[Path] = source_getter(config.artefact_store)
analysed_files = _parse_files(config, files=files, fortran_analyser=fortran_analyser, c_analyser=c_analyser)