diff --git a/.github/workflows/deploy.yml b/.github/workflows/deploy.yml
index 0b3f6452e..30bb55d45 100644
--- a/.github/workflows/deploy.yml
+++ b/.github/workflows/deploy.yml
@@ -49,4 +49,18 @@ jobs:
       id: deployment
       uses: actions/deploy-pages@v3
+  prettier-markdown:
+    runs-on: ubuntu-latest
+    steps:
+      - name: Check out repository
+        uses: actions/checkout@v3
+      - uses: actions/setup-node@v3
+        with:
+          node-version: 18
+          cache: yarn
+      - name: Install dependencies
+        run: yarn
+      - name: Run prettier on md and mdx
+        run: npx prettier --check --prose-wrap=always "docs/**/*.md" "docs/**/*.mdx"
+      # TODO: Deploy to some third-party place for preview?
\ No newline at end of file
diff --git a/docs/.less-developed/future-proof-package-and-import-system.mdx b/docs/.less-developed/future-proof-package-and-import-system.mdx
index 155d817c0..679eca585 100644
--- a/docs/.less-developed/future-proof-package-and-import-system.mdx
+++ b/docs/.less-developed/future-proof-package-and-import-system.mdx
@@ -3,23 +3,51 @@ title: Future Proof Package/Import System
 description: A future proof package and import system
 ---

-This is a collection of thoughts on the design of a reliable package and import system that is ready for future applications. At this stage, this page mostly represents my personal view (Christian Menard). I will also focus on the C++ target here as this is the target I know best. The C target is not a good example for these considerations as there is a fundamental design issue with the C target. Since the code generator places all code in a single generated `.c` file and does things like `#include reactor.c` to avoid the need for Makefiles, it circumvents many of the issues that come with imports that I will outline here. It simply ignores file scopes and namespaces altogether.
+This is a collection of thoughts on the design of a reliable package and import
+system that is ready for future applications. 
At this stage, this page mostly +represents my personal view (Christian Menard). I will also focus on the C++ +target here as this is the target I know best. The C target is not a good +example for these considerations as there is a fundamental design issue with the +C target. Since the code generator places all code in a single generated `.c` +file and does things like `#include reactor.c` to avoid the need for Makefiles, +it circumvents many of the issues that come with imports that I will outline +here. It simply ignores file scopes and namespaces altogether. # The status quo -The current import system is lean and simple. Write `import Bar.lf` in `Foo.lf` and every reactor defined in `Bar.lf` will be visible in the file scope `Foo.lf`. `Bar.lf` is looked up simply by scanning the directory `Foo.lf` is placed in. This works well for the simple programs and tests we have right now, but does not scale. I identify the following problems: - -1. There is no notion of separate namespaces. Every reactor that `Bar.lf` defines becomes visible in `Foo.lf`. If both files define a Reactor `Foo`, there is a name clash and the import would be ill-formed. There should be a mechanism to distinguish the two definitions of `Foo`, such as using fully qualified names: `Foo.Foo` and `Bar.Foo`. - -2. There is no concept for importing files from a directory structure. It is unclear how `Foo.lf` could import `my/lib/Bar.lf`. - -3. There is no concept for packages or libraries that can be installed on the system. How could we import Reactors from a library that someone else provided? - -These are the more obvious issues that we have talked about. However, there are more subtle ones that we haven't been discussed in depth (or at least not in the context of the import system design discussion). The open question is: What does importing a LF file actually mean? Obviously, an import should bring Reactors defined in another files into local scope. 
But what should happen with the other structures that are part of an LF file, namely target properties and preambles? That is not specified and our targets use a best practice approach. But this is far away from a good design that is scalable and future proof.
+The current import system is lean and simple. Write `import Bar.lf` in `Foo.lf`
+and every reactor defined in `Bar.lf` will be visible in the file scope
+`Foo.lf`. `Bar.lf` is looked up simply by scanning the directory `Foo.lf` is
+placed in. This works well for the simple programs and tests we have right now,
+but does not scale. I identify the following problems:
+
+1. There is no notion of separate namespaces. Every reactor that `Bar.lf`
+   defines becomes visible in `Foo.lf`. If both files define a Reactor `Foo`,
+   there is a name clash and the import would be ill-formed. There should be a
+   mechanism to distinguish the two definitions of `Foo`, such as using fully
+   qualified names: `Foo.Foo` and `Bar.Foo`.
+
+2. There is no concept for importing files from a directory structure. It is
+   unclear how `Foo.lf` could import `my/lib/Bar.lf`.
+
+3. There is no concept for packages or libraries that can be installed on the
+   system. How could we import Reactors from a library that someone else
+   provided?
+
+These are the more obvious issues that we have talked about. However, there are
+more subtle ones that haven't been discussed in depth (or at least not in the
+context of the import system design discussion). The open question is: What does
+importing an LF file actually mean? Obviously, an import should bring Reactors
+defined in other files into local scope. But what should happen with the other
+structures that are part of an LF file, namely target properties and preambles?
+That is not specified, and our targets follow a best-practice approach. But this
+is far from a good design that is scalable and future proof. 
## A quick dive into the C++ code generator

-Before I discuss the problems with preambles and target properties, I would like to give you a quick overview of how the C++ code generator works. Consider the following LF program consisting of two files `Foo.lf` and `Bar.lf`:
+Before I discuss the problems with preambles and target properties, I would like
+to give you a quick overview of how the C++ code generator works. Consider the
+following LF program consisting of two files `Foo.lf` and `Bar.lf`:

 ```
 // Bar.lf
@@ -44,7 +72,8 @@ reactor Foo {
 }
 ```

-Now let us have a look on what the C++ code generator does. It will produce a file structure like this:
+Now let us have a look at what the C++ code generator does. It will produce a
+file structure like this:

 ```
 CMakeLists.txt
@@ -57,7 +86,13 @@ Foo/
   Foo.hh
 ```

-We can ignore `CMakeLists.txt` and `main.cc` for our discussion here. The former specifies how the whole program can be build and the latter contains the `main()` function and some code that is required to get the application up and running. For each processed `.lf` file, the code generator creates a directory ``. For each reactor `` defined in `.lf`, it will create `/.cc` and `/.hh`. The header file declares a class representing the reactor like this:
+We can ignore `CMakeLists.txt` and `main.cc` for our discussion here. The former
+specifies how the whole program can be built and the latter contains the
+`main()` function and some code that is required to get the application up and
+running. For each processed `<file>.lf` file, the code generator creates a
+directory `<file>`. For each reactor `<reactor>` defined in `<file>.lf`, it will
+create `<file>/<reactor>.cc` and `<file>/<reactor>.hh`. The header file declares
+a class representing the reactor like this:

 ```
 // Bar/Bar.hh
@@ -93,7 +128,9 @@ Bar::r0_body() {
 }
 ```

-Similarly, `Foo.hh` and `Foo.cc` will be generated. However, since `Foo.lf` imports `Bar.lf` and instantiated the reactor `Bar` it must be made visible. 
This is done by an include directive in the generated code like so:
+Similarly, `Foo.hh` and `Foo.cc` will be generated. However, since `Foo.lf`
+imports `Bar.lf` and instantiates the reactor `Bar`, it must be made visible.
+This is done by an include directive in the generated code like so:

 ```
 // Foo/Foo.hh
@@ -122,7 +159,10 @@ class Foo : public reactor::Reacor {

## The problem with preambles

-The problems with preamble in the context of imports were already discussed in a [related issue](https://github.com/pulls), but I would like to summarize the problem here. While the examples above worked nicely even with imports, things get messy as soon as we introduce a preamble. Let's try this:
+The problems with preambles in the context of imports were already discussed in
+a [related issue](https://github.com/pulls), but I would like to summarize the
+problem here. While the examples above worked nicely even with imports, things
+get messy as soon as we introduce a preamble. Let's try this:

 ```
 // Bar.lf
@@ -159,7 +199,11 @@ reactor Foo
 }
 ```

-This would be expected to print `Received {32, hello}`. However, before we can even compile this program, we need to talk about what should happen with the preamble during code generation and how the import affects it. So where should the preamble go? The first thing that comes to mind, is to embed it in the header file `Bar.hh` something like this:
+This would be expected to print `Received {32, hello}`. However, before we can
+even compile this program, we need to talk about what should happen with the
+preamble during code generation and how the import affects it. So where should
+the preamble go? The first thing that comes to mind is to embed it in the
+header file `Bar.hh`, something like this:

 ```
 // Bar/Bar.hh
@@ -183,11 +227,33 @@ class Bar : public reactor::Reacor {
 };
 ```

-If we embed the preamble like this and compile the program ,then the compiler is actually happy and processes all `*.cc` files without any complaints. 
**But**, there is a huge problem while linking the binary. The linker sees multiple definitions of `bar_func` and has no idea which one to use. Why is that? Well, the definition of `bar_func` is contained in a header file. This should never be done in C/C++! Since includes translate to a plain-text replacement by the preprocessor, `Bar.cc` will contain the full definition of `bar_func`. As `Foo.cc` imports `Foo.hh` which imports `Bar.hh`, also Foo.cc will contain the full definition. And since `main.cc` also has to include `Foo.hh`, `main.cc` will also contain the full definition of `bar_func`. So we have multiple definitions of the same function and the linker rightfully reports this as an error.
-
-So what should we do? We could place the preamble in `Bar.cc` instead. This ensures that only `Bar.cc` sees the definition of `bar_func`. But then the compiler complains. Neither `Bar.hh` nor `Foo.hh` see type declaration of `bar_t`. Note that there is a dependency of `Foo.lf` on the preamble in `Bar.lf`. The import system should somehow take care of this dependency! Also note that this has not appeared as a problem in C as the code generator places everything in the same compilation unit. `Foo` will see the preamble of `Bar` as long as `Foo` is generated before `Bar`.
-
-But how to solve it for C++ where the code is split in multiple compilation units (which really should be happening in C as well)? What we do at the moment is annotating the preamble with `private` and `public` keywords. This helps to split the preamble up and decide what to place in the header and what to place in the source file. For instance:
+If we embed the preamble like this and compile the program, then the compiler
+is actually happy and processes all `*.cc` files without any complaints.
+**But**, there is a huge problem while linking the binary. The linker sees
+multiple definitions of `bar_func` and has no idea which one to use. Why is
+that? 
Well,
+the definition of `bar_func` is contained in a header file. This should never
+be done in C/C++! Since includes translate to a plain-text replacement by the
+preprocessor, `Bar.cc` will contain the full definition of `bar_func`. As
+`Foo.cc` includes `Foo.hh`, which in turn includes `Bar.hh`, `Foo.cc` will also
+contain the full definition. And since `main.cc` also has to include `Foo.hh`,
+`main.cc` will also contain the full definition of `bar_func`. So we have
+multiple definitions of the same function, and the linker rightfully reports
+this as an error.
+
+So what should we do? We could place the preamble in `Bar.cc` instead. This
+ensures that only `Bar.cc` sees the definition of `bar_func`. But then the
+compiler complains. Neither `Bar.hh` nor `Foo.hh` sees the type declaration of
+`bar_t`. Note that there is a dependency of `Foo.lf` on the preamble in
+`Bar.lf`. The import system should somehow take care of this dependency! Also
+note that this has not been a problem in C, as the code generator places
+everything in the same compilation unit. `Foo` will see the preamble of `Bar`
+as long as `Foo` is generated before `Bar`.
+
+But how can we solve it for C++, where the code is split into multiple
+compilation units (which really should be happening in C as well)? What we do
+at the moment is annotate the preamble with `private` and `public` keywords.
+This helps to split the preamble up and decide what to place in the header and
+what to place in the source file. For instance:

 ```
 // Bar.lf
@@ -211,49 +277,127 @@ reactor Bar {
 }
 ```

-This makes the type `bar_t` visible as part of the public interface of `Bar`. Both the code generated for `Bar` and the code generated for `Foo` will see the definition of `bar_t`. This is realized by placing the public preamble in `Bar.hh` The function `bar_func` is part of `Bar`'s private interface. It is only visible with the reactor definition of `Bar` and is not propagated by an import. 
This is realized by simply placing the private preamble in `Bar.cc`. This makes the compiler finally happy and when get an executable program private and public preambles provide a mechanism to define what is propagated on an import and what is not. I think this is an important distinction even in languages other than C/C++ that do not have this weird separation of source and header file.
+This makes the type `bar_t` visible as part of the public interface of `Bar`.
+Both the code generated for `Bar` and the code generated for `Foo` will see the
+definition of `bar_t`. This is realized by placing the public preamble in
+`Bar.hh`. The function `bar_func` is part of `Bar`'s private interface. It is
+only visible within the reactor definition of `Bar` and is not propagated by an
+import. This is realized by simply placing the private preamble in `Bar.cc`.
+This finally makes the compiler happy, and we get an executable program.
+Private and public preambles provide a mechanism to define what is propagated
+on an import and what is not. I think this is an important distinction even in
+languages other than C/C++ that do not have this weird separation of source and
+header files.

-I am sorry for this lengthy diversion into things that happened in the past where we actually want to talk about how things should work in the future. However, understanding this issue is important and when talking about other solutions we should not forget that it exists.
+I am sorry for this lengthy diversion into things that happened in the past
+when we actually want to talk about how things should work in the future.
+However, understanding this issue is important, and when talking about other
+solutions we should not forget that it exists.

## The problem with target properties

-It is also not well-defined what should happen with target properties when importing a `.lf` file. 
Apparently the common practice is simply ignoring the existence of other target declarations and only considering the target declaration of the `.lf` that contains the main reactor. I think this works reasonably well for our small programs. But it will cause problems when either programs become larger or we introduce new target properties where it is unclear what piece of code they reference. Let us have a look at the [existing target properties for C++](https://github.com/lf-lang/lingua-franca/wiki/Writing-Reactors-in-Cpp#the-c-target-declaration). How should those different properties be handled on an import? Which scope do they actually apply to? We haven't really talked about this. - -`fast`, `keepalive`, `threads` and `timeout` are easy. They apply to the main reactor. Since we do not import main reactors from other files, it is clear that we really want to use the properties defined in the main compilation unit. So our current strategy works in this case. Although there are still some subtleties. For instance, if a library file defines `keepalive=true` and `fast=false` because it uses physical actions, should any file importing this library file be allowed to override these properties. Probably not, because it doesn't make sense if physical actions are involved. But a careless user of the library might not be aware of that. So maybe it isn't that clear after all. - -`build-type`, `cmake-include`, `compile`, `logging` and `no-runtime-validation` influence how the application is build. They are used for generating the `CMakeLists.txt` file. So their is quite clear: they apply to the whole compilation of the given application. Again it is a simple solution to only consider the target properties of the file containing the main reactor since this can be considered the file that 'drives' the compilation. 
But what if an imported `.lf` relies on an external library and uses the `cmake-include` property to tell CMake to look this library up, make the library header files visible and link our generated code to that library (fortunately this can be done with 2 lines in CMake). Should this target property really be ignored by our import? Probably not, because it will lead to compile errors if the author of the main `.lf` file does not configure `cmake-include` properly. So there should be some kind of merging mechanism for `cmake-include`. Should this be done for the other properties as well? I am not sure and I actually don't know how the merging would work. - -So this raises a lot of questions that we currently have no answer to. I believe we need to find answers for these questions in order to create a well working import and package system. This gets only more complicated when we add more properties such as the proposed `files` directive. We should really consider what properties actually apply to and if they influence the way imports work. +It is also not well-defined what should happen with target properties when +importing a `.lf` file. Apparently the common practice is simply ignoring the +existence of other target declarations and only considering the target +declaration of the `.lf` that contains the main reactor. I think this works +reasonably well for our small programs. But it will cause problems when either +programs become larger or we introduce new target properties where it is unclear +what piece of code they reference. Let us have a look at the +[existing target properties for C++](https://github.com/lf-lang/lingua-franca/wiki/Writing-Reactors-in-Cpp#the-c-target-declaration). +How should those different properties be handled on an import? Which scope do +they actually apply to? We haven't really talked about this. + +`fast`, `keepalive`, `threads` and `timeout` are easy. They apply to the main +reactor. 
Since we do not import main reactors from other files, it is clear that
+we really want to use the properties defined in the main compilation unit. So
+our current strategy works in this case, although there are still some
+subtleties. For instance, if a library file defines `keepalive=true` and
+`fast=false` because it uses physical actions, should any file importing this
+library file be allowed to override these properties? Probably not, because it
+doesn't make sense if physical actions are involved. But a careless user of the
+library might not be aware of that. So maybe it isn't that clear after all.
+
+`build-type`, `cmake-include`, `compile`, `logging` and `no-runtime-validation`
+influence how the application is built. They are used for generating the
+`CMakeLists.txt` file. So their scope is quite clear: they apply to the whole
+compilation of the given application. Again, a simple solution is to only
+consider the target properties of the file containing the main reactor, since
+this can be considered the file that 'drives' the compilation. But what if an
+imported `.lf` relies on an external library and uses the `cmake-include`
+property to tell CMake to look this library up, make the library header files
+visible and link our generated code to that library (fortunately this can be
+done with 2 lines in CMake)? Should this target property really be ignored by
+our import? Probably not, because it will lead to compile errors if the author
+of the main `.lf` file does not configure `cmake-include` properly. So there
+should be some kind of merging mechanism for `cmake-include`. Should this be
+done for the other properties as well? I am not sure, and I actually don't know
+how the merging would work.
+
+So this raises a lot of questions that we currently have no answer to. I believe
+we need to find answers to these questions in order to create a well-working
+import and package system. 
This gets only more complicated when we add more
+properties such as the proposed `files` directive. We should really consider
+what the properties actually apply to and whether they influence the way
+imports work.

### The work in progress

-To be continued... I want to describe here what is happening on the `new_import` and the (potential) problems this brings.
+To be continued... I want to describe here what is happening on the `new_import`
+branch and the (potential) problems this brings.

### Possible solutions

-To be continued... I would like to show a few possible solutions that have come to mind and that we discussed already.
+To be continued... I would like to show a few possible solutions that have come
+to mind and that we have already discussed.

## Concrete proposal

-With the risk of overlooking some of the issues discussed above, I'd like to outline a concrete proposal. To me, at least, it is easier to reason about these issues in a context with a few more constraints. Hopefully, this can serve as a starting point that we can tweak/adjust as needed. Note: this proposal borrows from the previous proposal written by Soroush. Based on my experience with Xtext, I have confidence that what is described below is feasible to implement.
+At the risk of overlooking some of the issues discussed above, I'd like to
+outline a concrete proposal. To me, at least, it is easier to reason about these
+issues in a context with a few more constraints. Hopefully, this can serve as a
+starting point that we can tweak/adjust as needed. Note: this proposal borrows
+from the previous proposal written by Soroush. Based on my experience with
+Xtext, I am confident that what is described below is feasible to implement.

### Import/export

1. One LF file can contain multiple reactor definitions.
2. There can be at most one main reactor per file.
-3. Any reactor class defined outside of the current file has to be imported explicitly.
-4. 
The visibility of a reactor class can be limited using a modifier in the class definition. - -- _Should the default visibility be public or private? I have no strong preference either way._ - -5. An `import` statement **must** specify which reactor classes to import. This is necessary because if we populate a global scope using the `uriImport` mechanism, the local scope provider needs to know which definition to link to if there happen to exist multiple among the set of included files. We could _potentially_ relax this constraint and only report the situation where we know for a fact that there is ambiguity and needs to be resolved by making the imports explicit. We could also deprecate the use of unqualified imports (the original syntax), therefore allow it but warn that it might not work as expected. -6. An LF file in an `import` statement is specified by a path relative to the directory of the file in which the `import` statement occurs or relative to a series of directories in a wider search path. - -- _Eclipse uses `.project` files to identify the root of a project; we can look for that._ -- _We can look for our own kind of manifest files as well. These can list additional locations to search. This is compatible with the idea of developing a package system. I personally like this approach better than using an environment variable._ - -7. I personally find fully qualified names excess generality and have never felt compelled to use them in Java. IMO, they lead to code that's difficult to read and a pain to format. To keep things simple, I suggest we don't support them. Instead, we should provide a mechanism for renaming imported reactor classes to avoid naming conflicts. -8. _Open question: do we want scope modifiers for imports? It seems that extra import statements could be used to increase visibility, so it might not be needed._ +3. Any reactor class defined outside of the current file has to be imported + explicitly. +4. 
The visibility of a reactor class can be limited using a modifier in the
+   class definition.
+
+- _Should the default visibility be public or private? I have no strong
+  preference either way._
+
+5. An `import` statement **must** specify which reactor classes to import. This
+   is necessary because if we populate a global scope using the `uriImport`
+   mechanism, the local scope provider needs to know which definition to link to
+   if there happen to exist multiple among the set of included files. We could
+   _potentially_ relax this constraint and only report the situation where we
+   know for a fact that there is ambiguity that needs to be resolved by making
+   the imports explicit. We could also deprecate the use of unqualified imports
+   (the original syntax), that is, still allow it but warn that it might not
+   work as expected.
+6. An LF file in an `import` statement is specified by a path relative to the
+   directory of the file in which the `import` statement occurs or relative to a
+   series of directories in a wider search path.
+
+- _Eclipse uses `.project` files to identify the root of a project; we can look
+  for that._
+- _We can look for our own kind of manifest files as well. These can list
+  additional locations to search. This is compatible with the idea of developing
+  a package system. I personally like this approach better than using an
+  environment variable._
+
+7. I personally find fully qualified names an excess of generality and have
+   never felt compelled to use them in Java. IMO, they lead to code that's
+   difficult to read and a pain to format. To keep things simple, I suggest we
+   don't support them. Instead, we should provide a mechanism for renaming
+   imported reactor classes to avoid naming conflicts.
+8. _Open question: do we want scope modifiers for imports? It seems that extra
+   import statements could be used to increase visibility, so it might not be
+   needed._

### Syntax

@@ -263,37 +407,75 @@ Import := 'import' ? 
(',' ?)* 'from' Rename := 'as' ``` -_Note: This syntax could be extended to support packages in addition to paths. But it doesn't make much sense to have this until we have a package manager and package registry._ +_Note: This syntax could be extended to support packages in addition to paths. +But it doesn't make much sense to have this until we have a package manager and +package registry._ -_Current state of the discussion: one unifying syntax vs. different syntax for references to files and packages._ +_Current state of the discussion: one unifying syntax vs. different syntax for +references to files and packages._ ## Preambles -A preamble allows for the inclusion of verbatim target code that may be necessary for reactors to function. Currently, there are two scopes in which preambles can appear: (1) file scope and (2) reactor class scope. Moreover, there exist visibility modifiers to label preambles `private` or `public`. A `public` preamble is intended to contain code that is necessary for the use of a reactor that is in scope. A `private` preamble is intended to contain code that is necessary for the implementation of a reactor that is in scope. Only the C++ code generator can currently effectively separate these LF scope levels. It achieves this by putting each reactor class definition in its own file. LF file scope preambles are currently not supported by the C target, but this appears to be unintentional and would be easy to fix. Reactor class scope preambles are supported by the C target, but there is no isolation of scope; the preamble of one reactor is visible to the one defined after it. To fix this, I see two options: (1) follow the same approach as `CppGenerator` and output separate files, which also means that a Makefile has to be generated in order to compile the result, or (2) leverage block scope within a single file, but this will become complicated and make the generated C code even less humanly readable. 
- -_We could put aside the problem of name clashes due to the absence of scope isolation in generated C code and fix this later. For the time being, the problem can be circumvented using `.h` files._ +A preamble allows for the inclusion of verbatim target code that may be +necessary for reactors to function. Currently, there are two scopes in which +preambles can appear: (1) file scope and (2) reactor class scope. Moreover, +there exist visibility modifiers to label preambles `private` or `public`. A +`public` preamble is intended to contain code that is necessary for the use of a +reactor that is in scope. A `private` preamble is intended to contain code that +is necessary for the implementation of a reactor that is in scope. Only the C++ +code generator can currently effectively separate these LF scope levels. It +achieves this by putting each reactor class definition in its own file. LF file +scope preambles are currently not supported by the C target, but this appears to +be unintentional and would be easy to fix. Reactor class scope preambles are +supported by the C target, but there is no isolation of scope; the preamble of +one reactor is visible to the one defined after it. To fix this, I see two +options: (1) follow the same approach as `CppGenerator` and output separate +files, which also means that a Makefile has to be generated in order to compile +the result, or (2) leverage block scope within a single file, but this will +become complicated and make the generated C code even less humanly readable. + +_We could put aside the problem of name clashes due to the absence of scope +isolation in generated C code and fix this later. For the time being, the +problem can be circumvented using `.h` files._ ## Target Properties 1. Each file declares a target. -2. All code in all reactors in the same file must agree with the specified target. +2. All code in all reactors in the same file must agree with the specified + target. 3. 
Additional target properties may be specified. 4. Target properties are not inherited through imports. -5. Any property that needs to be inherited through an import (such as the requirement to link against the pthread library) must be specified as a build dependency instead. +5. Any property that needs to be inherited through an import (such as the + requirement to link against the pthread library) must be specified as a build + dependency instead. ## Build Dependencies -1. It must be possible to specify build dependencies, such as `files`, `sources`, and `protobufs`. -2. We could either allow these definitions to go directly in the `.lf` file, or we could decide to specify them in a package description (i.e., separate file). We could potentially allow both. -3. Build dependencies are inherited through imports (or from package descriptions), and they are never shadowed, always _joined_. +1. It must be possible to specify build dependencies, such as `files`, + `sources`, and `protobufs`. +2. We could either allow these definitions to go directly in the `.lf` file, or + we could decide to specify them in a package description (i.e., separate + file). We could potentially allow both. +3. Build dependencies are inherited through imports (or from package + descriptions), and they are never shadowed, always _joined_. # Unique Reactor Names -The new import system as described above ensures that reactor names within a single `.lf` file are unique. In case reactors with the same name are imported from different `.lf` files, the renaming mechanism needs to be used in order to resolve the name conflict. The same applies if the given `.lf` file defines some reactors and tries to import other reactors with the same name. For instance, consider the LF file in Example 1 below. In the scope of this file, three reactor declarations are visible: `Foo`, `Bar` and `Baz`, although the actual reactors have the same name `Foo`. 
+The new import system as described above ensures that reactor names within a
+single `.lf` file are unique. In case reactors with the same name are imported
+from different `.lf` files, the renaming mechanism needs to be used in order to
+resolve the name conflict. The same applies if the given `.lf` file defines some
+reactors and tries to import other reactors with the same name. For instance,
+consider the LF file in Example 1 below. In the scope of this file, three
+reactor declarations are visible: `Foo`, `Bar` and `Baz`, although the actual
+reactors have the same name `Foo`.

## Examples

-Throughout this section, I will be using two LF example programs. Since the markdown syntax does not provide a simple way to label and refer to code listings, I figure its easiest to place them here in a central place and refer to them later by the heading
+Throughout this section, I will be using two LF example programs. Since the
+markdown syntax does not provide a simple way to label and refer to code
+listings, I figure it's easiest to place them here in a central place and refer
+to them later by the heading.

### Example 1

@@ -337,9 +519,18 @@ main reactor Foo {

## Unique Reactor Names in Target Code

-While the mechanism above effectively ensures uniqueness in a single LF file, this uniqueness is surprisingly hard to ensure in generated target code. C has an obvious problem here as it places all generated code in a single file. 
+While the mechanism above effectively ensures uniqueness in a single LF file,
+this uniqueness is surprisingly hard to ensure in generated target code. C has
+an obvious problem here as it places all generated code in a single file. 
While +the name conflict in the above code can be solved by generating code for three +reactors named `Bar`, `Baz` and `Foo`, it breaks as soon as another file of the +same LF program uses `import Foo from "Bar.lf"`. Then there would be two +definitions of the reactor `Foo` that cannot be resolved. -Now you would probably expect that splitting the generated code into multiple files solves the issue, but unfortunately this is not true. If anything, it makes the problem more subtle. The C++ code generated from Example 1 would likely look something like this: +Now you would probably expect that splitting the generated code into multiple +files solves the issue, but unfortunately this is not true. If anything, it +makes the problem more subtle. The C++ code generated from Example 1 would +likely look something like this: ``` // Foo.hh @@ -356,9 +547,22 @@ class Foo : public reactor::Reactor { }; ``` -This will cause a compile error as there are multiple definitions of `Foo`. While renaming is possible in C++ with the `using` keyword (`typedef` works as well), the thing being renamed needs to be already visible in the scope. So there are multiple definitions of `Foo` as all the files `Bar.hh`, `Baz.hh` and `Foo.hh` define this class. We need a mechanism to distinguish the different definitions of `Foo`. +This will cause a compile error as there are multiple definitions of `Foo`. +While renaming is possible in C++ with the `using` keyword (`typedef` works as +well), the thing being renamed needs to be already visible in the scope. So +there are multiple definitions of `Foo` as all the files `Bar.hh`, `Baz.hh` and +`Foo.hh` define this class. We need a mechanism to distinguish the different +definitions of `Foo`. -There is even another issue that stems from the fact that the semantics of imports in LF are different from the include semantics of C++. Consider the code in Example 2, which is valid LF code. 
Although `Bar.lf` imports `Foo` and
+`Foo.lf` imports from `Bar.lf`, the definition of `Foo` in `Baz.lf` is not
+visible in `Foo.lf`. This 'hiding', however, does not easily propagate to the
+generated code. In C, there will be an error because both definitions of `Foo`
+are placed in the same file. In C++, the different definitions of `Foo` are
+placed in different files, but there will still be an error. The generated C++
+code would look something like this:

```
\\ Baz.hh
@@ -393,19 +597,37 @@ class Foo : public reactor::Reactor {
};
```

-This will produce an error due to multiple definitions of `Foo` being visible in `Foo.hh`. The problem is that any include in `Bar.hh` becomes also visible in `Foo.hh`. So there is a name clash due to the way the C++ compiler processes included and that is hard to work around.
+This will produce an error due to multiple definitions of `Foo` being visible in
+`Foo.hh`. The problem is that any include in `Bar.hh` also becomes visible in
+`Foo.hh`. So there is a name clash due to the way the C++ compiler processes
+includes, and that is hard to work around.

## Possible Solutions

-In conclusion from the above section, I can say that translating the file based scoping of reactor names that we have in LF to generated target code is not trivial. 
Any sensible solution will need to establish a mechanism to ensure that any two distinct reactors in LF are also distinct in target code.
+In conclusion from the above section, I can say that translating the file based
+scoping of reactor names that we have in LF to generated target code is not
+trivial. Any sensible solution will need to establish a mechanism to ensure that
+any two distinct reactors in LF are also distinct in target code.

### Namespaces

-We could introduce some form of a namespace mechanism that allows us to derive fully-qualified names of reactors. This is the preferred solution for me (Christian). Note that by 'namespace' I mean any logical organization of reactors in named groups and not the precise concept of C++ namespaces. In other languages those logical groups are also referred to as modules or packages. Also note that it is only important to be able to assign a fully-qualified name to a reactor, it does not necessarily require that we refer to reactors by their fully-qualified name in LF code.
+We could introduce some form of a namespace mechanism that allows us to derive
+fully-qualified names of reactors. This is the preferred solution for me
+(Christian). Note that by 'namespace' I mean any logical organization of
+reactors in named groups and not the precise concept of C++ namespaces. In other
+languages those logical groups are also referred to as modules or packages. Also
+note that it is only important to be able to assign a fully-qualified name to a
+reactor; it does not necessarily require that we refer to reactors by their
+fully-qualified name in LF code.

#### File based namespaces

-In my view, the easiest way to introduce namespaces in LF would be to leverage file system structure. Everything contained in `Foo.lf` would automatically be in the namespace `Foo`. So the FQN of a reactor `Foo` defined in `Foo.lf` would be `Foo.Foo` (or `Foo::Foo`, or some other delimiter). 
This would solve the name clashes in both of our examples. For Example 1, the generated code could look like this:
+In my view, the easiest way to introduce namespaces in LF would be to leverage
+file system structure. Everything contained in `Foo.lf` would automatically be
+in the namespace `Foo`. So the FQN of a reactor `Foo` defined in `Foo.lf` would
+be `Foo.Foo` (or `Foo::Foo`, or some other delimiter). This would solve the name
+clashes in both of our examples. For Example 1, the generated code could look
+like this:

```
// Foo.hh
@@ -469,8 +691,11 @@ class Foo : public reactor::Reactor {
}
```

-While this appears to be a promising solution, it is not sufficient to only consider the name of an `*.lf` file to derive the namespace
-There could be two files `Foo.lf` in different directories that both define the reactor `Foo`. Thus, we also need to consider the directory structure and introduce hierarchical namespaces. Consider this directory tree:
+While this appears to be a promising solution, it is not sufficient to only
+consider the name of an `*.lf` file to derive the namespace. There could be two
+files `Foo.lf` in different directories that both define the reactor `Foo`.
+Thus, we also need to consider the directory structure and introduce
+hierarchical namespaces. Consider this directory tree:

```
foo/
@@ -480,45 +705,119 @@ foo/
  foo.lf # defines reactor Foo
```

-In this example, the two `Foo` reactors would have the fully qualified names `foo.bar.foo.Foo` and `foo.baz.foo.Foo`.
-In order for this concept to work, we need the notion of a top-level namespace or directory. Naturally, this would be the package. Therefore, this namespace approach would also require a simple mechanism to define a package. For now this could be rudimentary. Simply placing an empty `lf.yaml` in the `foo/` directory in the above example would be sufficient. In addition to the notion of packages, we would also need a simple mechanism to find packages. 
However, since packages are something we want to have anyway, it would not hurt to start and implement a rudimentary system now. - -This proposal is a bit at odds with the file based import mechanism described above. While it is clear what the namespace of an `*.lf` file relative to a package directory is, it is unclear what the namespace of an arbitrary file outside a package is. Marten suggested to resolve this by using a default namespace or the global namespace whenever a \*lf that is not part of a package is imported and to make the user responsible for avoiding any conflicts. - -We would also need to restrict the naming of files and directories and ban the usage of the namespace delimiter (`.` or `::` or some other) in file and directory names. In my opinion this is not much of a problem and common practice for many languages. If we decide to use this namespace mechanism, it would probably be better to drop the file based imports and switch to imports by FQN (e.g. `import Foo from foo.bar.foo`) +In this example, the two `Foo` reactors would have the fully qualified names +`foo.bar.foo.Foo` and `foo.baz.foo.Foo`. In order for this concept to work, we +need the notion of a top-level namespace or directory. Naturally, this would be +the package. Therefore, this namespace approach would also require a simple +mechanism to define a package. For now this could be rudimentary. Simply placing +an empty `lf.yaml` in the `foo/` directory in the above example would be +sufficient. In addition to the notion of packages, we would also need a simple +mechanism to find packages. However, since packages are something we want to +have anyway, it would not hurt to start and implement a rudimentary system now. + +This proposal is a bit at odds with the file based import mechanism described +above. While it is clear what the namespace of an `*.lf` file relative to a +package directory is, it is unclear what the namespace of an arbitrary file +outside a package is. 
Marten suggested resolving this by using a default
+namespace or the global namespace whenever a `*.lf` file that is not part of a package
+is imported and to make the user responsible for avoiding any conflicts.
+
+We would also need to restrict the naming of files and directories and ban the
+usage of the namespace delimiter (`.` or `::` or some other) in file and
+directory names. In my opinion, this is not much of a problem, and it is common
+practice in many languages. If we decide to use this namespace mechanism, it
+would probably be better to drop the file based imports and switch to imports by
+FQN (e.g. `import Foo from foo.bar.foo`).

#### A Namespace Directive

-As an alternative to the file based namespace mechanism described above, we
+could also introduce a namespace directive in the LF syntax or as part of the
+target properties. 
This would effectively allow the user to specify the
+namespace that any reactor defined in a file should be part of. This solution
+would allow us to augment the file based import system that we have with a
+namespace mechanism. It is important to note, however, that this entirely shifts
+the responsibility for ensuring uniqueness within a namespace to the user. When
+we derive namespaces from the file path as described above, we can be sure that
+the resulting namespace only contains unique reactors because we ensure that any
+LF file only contains unique reactors. If we allow the user to specify the
+namespace, however, there could easily be two files with the same namespace
+directive that both define the reactor `Foo`. This approach might also cause
+problems for target languages where the namespaces relate to concrete file paths
+such as in Rust, Python or Java.

### Name Mangling

-There are other mechanisms to derive unique names apart from namespaces. One that is widely used by compilers is name mangling which replaces or decorates the original name. For instance, we could simply add a number to the name of generated reactors (`Foo1`, `Foo2`, ...) to distinguish multiple LF reactor definitions named `Foo`. What separates our approach from traditional compiler though, is that we are not in control of the full build process and only generate source code to be processed by another compiler. Therefore, any renaming we do when compiling LF code to target code needs to be done with care as it could easily introduce new problems because we are not aware of all the identifiers defined in a target language. For instance if our LF program uses a library that defines the class `Foo3`, adding a third definition of the reactor Foo to the program would lead to an unexpected error that is also hard to debug.

-Soroush also proposed to use a hashing mechanism (for instance a hash of the file name) to decorate reactor names. 
This would be less likely -to clash with any names defined in some library. However, we would need to make sure that any mechanism we use for generating unique decorated names follows strict rules and generates reproducible names. This reproducibility is crucial for several reasons. - -1. Since even a complex name mangling mechanism has still the chance to produce name clashes with identifiers defined outside of the LF program, those clashes should not occur randomly. There should be either an error or no error on each compilation run. Nondeterministic builds are no fun to deal with. - -2. In case of any errors, it is crucial to be able to reproduce and compare builds across machines and platforms. A platform dependent name mangling algorithm (for instance one that hashes file paths) would make it unnecessary hard to reproduce and debug compile errors. - -3. Somewhere in the future, we might want to be able to compile packages as libraries. Recompilation of the library should never change its API. Moreover, the name mangling algorithm should be robust in the sense that small changes in LF code do not lead to changed identifiers - in the library API. - -All in all, I think it is hard to define an algorithm that generates reproducible and stable names, but maybe someone else has a good idea of how this could be achieved. - -Another obvious disadvantage of the name mangling approach would be that the generated code is less readable. Also any external target code that might want to reference reactors in a library compiled from LF code, would need to know and use the mangled name. +There are other mechanisms to derive unique names apart from namespaces. One +that is widely used by compilers is name mangling which replaces or decorates +the original name. For instance, we could simply add a number to the name of +generated reactors (`Foo1`, `Foo2`, ...) to distinguish multiple LF reactor +definitions named `Foo`. 
What separates our approach from a traditional compiler,
+though, is that we are not in control of the full build process and only
+generate source code to be processed by another compiler. Therefore, any
+renaming we do when compiling LF code to target code needs to be done with care
+as it could easily introduce new problems because we are not aware of all the
+identifiers defined in a target language. For instance, if our LF program uses a
+library that defines the class `Foo3`, adding a third definition of the reactor
+`Foo` to the program would lead to an unexpected error that is also hard to debug.
+
+Soroush also proposed to use a hashing mechanism (for instance a hash of the
+file name) to decorate reactor names. This would be less likely to clash with
+any names defined in some library. However, we would need to make sure that any
+mechanism we use for generating unique decorated names follows strict rules and
+generates reproducible names. This reproducibility is crucial for several
+reasons.
+
+1. Since even a complex name mangling mechanism still has a chance to produce
+   name clashes with identifiers defined outside of the LF program, those
+   clashes should not occur randomly. There should be either an error or no
+   error on each compilation run. Nondeterministic builds are no fun to deal
+   with.
+
+2. In case of any errors, it is crucial to be able to reproduce and compare
+   builds across machines and platforms. A platform-dependent name mangling
+   algorithm (for instance one that hashes file paths) would make it
+   unnecessarily hard to reproduce and debug compile errors.
+
+3. Somewhere in the future, we might want to be able to compile packages as
+   libraries. Recompilation of the library should never change its API.
+   Moreover, the name mangling algorithm should be robust in the sense that
+   small changes in LF code do not lead to changed identifiers in the library
+   API. 
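To make these three requirements concrete, here is a minimal sketch along the lines of Soroush's hashing proposal. Everything in it is illustrative only: the function name and the `Name_hash` format are made up for this page and are not part of any LF code generator. Hashing the package-relative file path together with the reactor name, using a fixed cryptographic hash, yields names that are reproducible across machines and platforms (2) and that only change when the file actually moves or is renamed (3); clashes with identifiers defined outside the LF program remain possible, but they occur deterministically on every compilation run (1).

```python
import hashlib


def mangle(package_relative_path: str, reactor_name: str) -> str:
    """Derive a deterministic, platform-independent mangled reactor name."""
    # Normalize Windows path separators so every platform hashes the same key,
    # and hash the package-relative path, never an absolute OS-specific one.
    key = package_relative_path.replace("\\", "/") + "::" + reactor_name
    digest = hashlib.sha256(key.encode("utf-8")).hexdigest()[:8]
    return f"{reactor_name}_{digest}"


# Two reactors named Foo in different files get distinct but stable names.
a = mangle("foo/bar/foo.lf", "Foo")
b = mangle("foo/baz/foo.lf", "Foo")
assert a != b
assert a == mangle("foo/bar/foo.lf", "Foo")  # same on every run and machine
```

Truncating the digest keeps the generated identifiers somewhat readable at the cost of a small collision risk; using the full digest would trade readability for more safety.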
+
+All in all, I think it is hard to define an algorithm that generates
+reproducible and stable names, but maybe someone else has a good idea of how
+this could be achieved.
+
+Another obvious disadvantage of the name mangling approach would be that the
+generated code is less readable. Also, any external target code that might want
+to reference reactors in a library compiled from LF code would need to know and
+use the mangled name.

## Unique Reactor Names in our Tools

-In our last meeting (Tue 2020-08-04), I said that there are other places where we care about unique names: our tools such as the diagram view or the trace generator that I implemented for C++ and that we cannot ensure that names are unique at the moment. However, while thinking about it a bit more I realized that this is not much of an issue. Ambiguous names of reactor types are not a big problem for the diagram view. Since clicking on the nodes jumps directly to the reactor definition, the ambiguity in the names can easily be resolved.
+In our last meeting (Tue 2020-08-04), I said that there are other places where
+we care about unique names: our tools, such as the diagram view or the trace
+generator that I implemented for C++, and that we cannot ensure that names are
+unique at the moment. However, while thinking about it a bit more, I realized
+that this is not much of an issue. Ambiguous names of reactor types are not a
+big problem for the diagram view. Since clicking on the nodes jumps directly to
+the reactor definition, the ambiguity in the names can easily be resolved.

-For the tracing, I realized that it is not the name of the reactor type that matters, but the name of the instance. These are unique fully-qualified names already. For instance `main.foo.bar.r0`, denotes the reaction with priority 0, of a reactor instance called `bar` that is contained by the reactor instance `foo`, which is in turn contained by the main reactor. 
+For the tracing, I realized that it is not the name of the reactor type that
+matters, but the name of the instance. These are unique fully-qualified names
+already. For instance, `main.foo.bar.r0` denotes the reaction with priority 0
+of a reactor instance called `bar` that is contained by the reactor instance
+`foo`, which is in turn contained by the main reactor.

## Summary

-All in all, I think leveraging the file structure for determining the fully qualified names of reactors is the most promising solution.
+All in all, I think leveraging the file structure for determining the fully
+qualified names of reactors is the most promising solution.

-1. It works without any changes in our syntax. Only the code generators need to be updated to support the namespacing.
-2. In contrast to name mangling, it allows generation of readable code and also gives the programmer full control of how generated reactors are named.
-3. It fits naturally to languages that also support leveraging the file structure to create namespaces (e.g. python or rust).
+1. It works without any changes in our syntax. Only the code generators need to
+   be updated to support the namespacing.
+2. In contrast to name mangling, it allows generation of readable code and also
+   gives the programmer full control of how generated reactors are named.
+3. It fits naturally with languages that also support leveraging the file
+   structure to create namespaces (e.g. python or rust).
diff --git a/docs/.less-developed/related-work.mdx b/docs/.less-developed/related-work.mdx
index 885c98500..518e9231a 100644
--- a/docs/.less-developed/related-work.mdx
+++ b/docs/.less-developed/related-work.mdx
@@ -2,50 +2,93 @@ title: Related Work
description: Related Work
---
+
Lingua Franca is focused more on using the best ideas than on being innovative.
-Here, we list most closely related work first, then other work with which it may 
+Here, we list most closely related work first, then other work with which it may +be useful to contrast. ## Software Frameworks -* [Rubus](https://link.springer.com/article/10.1007%2Fs10270-020-00795-5). +- [Rubus](https://link.springer.com/article/10.1007%2Fs10270-020-00795-5). -* [Akka framework](https://www.sciencedirect.com/science/article/abs/pii/S0167739X20330739) for distributed Fog computing. +- [Akka framework](https://www.sciencedirect.com/science/article/abs/pii/S0167739X20330739) + for distributed Fog computing. -* [Accessors](http://accessors.org), from Berkeley, a JavaScript-based framework for IoT: This framework is the most direct inspiration for Lingua Franca. The idea behind accessors is to componentize IoT resources by encapsulating them in actors. As such, their interactions can be coordinated under a discrete event semantics [paper](http://www.icyphy.org/pubs/75.html). +- [Accessors](http://accessors.org), from Berkeley, a JavaScript-based framework + for IoT: This framework is the most direct inspiration for Lingua Franca. The + idea behind accessors is to componentize IoT resources by encapsulating them + in actors. As such, their interactions can be coordinated under a discrete + event semantics [paper](http://www.icyphy.org/pubs/75.html). -* [Rebecca](https://rebeca-lang.org). +- [Rebecca](https://rebeca-lang.org). -* The [Kiel Integrated Environment for Layout Eclipse Rich Client](https://www.rtsys.informatik.uni-kiel.de/en/research/kieler/welcome-to-the-kieler-project) (**KIELER**) is a graphical environment for programming using [SCCharts](https://rtsys.informatik.uni-kiel.de/confluence/display/KIELER/SCCharts) (see the [2014 PLDI paper](https://doi.org/10.1145/2594291.2594310)). 
+- The
+  [Kiel Integrated Environment for Layout Eclipse Rich Client](https://www.rtsys.informatik.uni-kiel.de/en/research/kieler/welcome-to-the-kieler-project)
+  (**KIELER**) is a graphical environment for programming using
+  [SCCharts](https://rtsys.informatik.uni-kiel.de/confluence/display/KIELER/SCCharts)
+  (see the [2014 PLDI paper](https://doi.org/10.1145/2594291.2594310)).

-* **[RTMAPS](https://intempora.com/products/rtmaps#about-rtmaps)**: From Intempora. It has a graphical syntax in a UI and advertises "data is acquired asynchronously and each data sample is captured along with its time stamp at its own pace." You can build your own blocks in C++ or Python. It does, however, look like its not deterministic.
+- **[RTMAPS](https://intempora.com/products/rtmaps#about-rtmaps)**: From
+  Intempora. It has a graphical syntax in a UI and advertises "data is acquired
+  asynchronously and each data sample is captured along with its time stamp at
+  its own pace." You can build your own blocks in C++ or Python. It does,
+  however, look like it's not deterministic.

### Usage of the Term Reactor

-* **[reactors.io](http://reactors.io/)**: This project, from EPFL, originally used the term "reactive isolates" in a [2015 paper](https://dl.acm.org/citation.cfm?doid=2814228.2814245). In 2016, Prokopec changed the name "reactive isolates" to "reactors" (see [2018 paper](http://doi.org/10.1007/978-3-030-00302-9_5)). They claim their reactors are "actors done right," but they remain nondeterministic. See a [video overview](https://www.youtube.com/watch?v=7lulYWWD4Qo).
+- **[reactors.io](http://reactors.io/)**: This project, from EPFL, originally
+  used the term "reactive isolates" in a
+  [2015 paper](https://dl.acm.org/citation.cfm?doid=2814228.2814245). In 2016,
+  Prokopec changed the name "reactive isolates" to "reactors" (see
+  [2018 paper](http://doi.org/10.1007/978-3-030-00302-9_5)). They claim their
+  reactors are "actors done right," but they remain nondeterministic. 
See a + [video overview](https://www.youtube.com/watch?v=7lulYWWD4Qo). -* [Racket implementation of ReactiveML](https://docs.racket-lang.org/reactor/index.html) +- [Racket implementation of ReactiveML](https://docs.racket-lang.org/reactor/index.html) -* **FIXME** https://projectreactor.io/ based on http://www.reactive-streams.org/ +- **FIXME** https://projectreactor.io/ based on http://www.reactive-streams.org/ ### Other Pointers -* **[Reactive Manifesto](https://www.reactivemanifesto.org/)**: Version 2.0, Published in 2014, this position paper defines Reactive Systems as those that are Responsive, Resilient, Elastic and Message Driven. - +- **[Reactive Manifesto](https://www.reactivemanifesto.org/)**: Version 2.0, + Published in 2014, this position paper defines Reactive Systems as those that + are Responsive, Resilient, Elastic and Message Driven. ## Academic Projects -* [I/O Automata](https://en.wikipedia.org/wiki/Input%2Foutput_automaton), from MIT, is a formalism that could be used to model the semantics of Lingua Franca. Timed I/O Automata FIXME: link extend I/O Automata with temporal semantics. They share with LF the notion of reactions to input messages and internal events that change the state of an actor and produce outputs. The behavior of a component is given as a state machine. +- [I/O Automata](https://en.wikipedia.org/wiki/Input%2Foutput_automaton), from + MIT, is a formalism that could be used to model the semantics of Lingua + Franca. Timed I/O Automata FIXME: link extend I/O Automata with temporal + semantics. They share with LF the notion of reactions to input messages and + internal events that change the state of an actor and produce outputs. The + behavior of a component is given as a state machine. -* **FIXME** Hewitt actors. +- **FIXME** Hewitt actors. -* **SyncCharts**: By Charles André. See the [1996 technical report](http://www-sop.inria.fr/members/Charles.Andre/CA%20Publis/SYNCCHARTS/overview.html). +- **SyncCharts**: By Charles André. 
See the + [1996 technical report](http://www-sop.inria.fr/members/Charles.Andre/CA%20Publis/SYNCCHARTS/overview.html). -* **ReactiveML**: [Website](http://reactiveml.org) +- **ReactiveML**: [Website](http://reactiveml.org) ## Contrasting Work -* [CAPH](http://caph.univ-bpclermont.fr/CAPH/CAPH.html) (a recursive acronym for CAPH Ain't plain HDL), a hardware description language from CNRS, is a fine-grained dataflow language for compiling into FPGAs. The language has no temporal semantics, and although it has a notion of firing rules, it is not clear which of the many variants of dataflow is realized nor whether the MoC is deterministic. The [paper](https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6972018) does not cite any of the prior work on dataflow MoCs. - -* [Robot Operating System (ROS)](https://en.wikipedia.org/wiki/Robot_Operating_System), an open-source project originally from [Willow Garage](https://en.wikipedia.org/wiki/Willow_Garage): ROS provides a publish-and-subscribe server for interaction between components. Version 1 has no timing properties at all. Version 2 has some timing properties such as priorities, but it makes no effort to be deterministic. - -* [RADLER framework](https://sri-csl.github.io/radler/) from SRI, which is based on a publish-and-subscribe architecture similar to ROS. It introduces some timing constructs such as periodic execution and scheduling constraints, but it makes no effort to be deterministic. \ No newline at end of file +- [CAPH](http://caph.univ-bpclermont.fr/CAPH/CAPH.html) (a recursive acronym for + CAPH Ain't plain HDL), a hardware description language from CNRS, is a + fine-grained dataflow language for compiling into FPGAs. The language has no + temporal semantics, and although it has a notion of firing rules, it is not + clear which of the many variants of dataflow is realized nor whether the MoC + is deterministic. 
The + [paper](https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6972018) does + not cite any of the prior work on dataflow MoCs. + +- [Robot Operating System (ROS)](https://en.wikipedia.org/wiki/Robot_Operating_System), + an open-source project originally from + [Willow Garage](https://en.wikipedia.org/wiki/Willow_Garage): ROS provides a + publish-and-subscribe server for interaction between components. Version 1 has + no timing properties at all. Version 2 has some timing properties such as + priorities, but it makes no effort to be deterministic. + +- [RADLER framework](https://sri-csl.github.io/radler/) from SRI, which is based + on a publish-and-subscribe architecture similar to ROS. It introduces some + timing constructs such as periodic execution and scheduling constraints, but + it makes no effort to be deterministic. diff --git a/docs/.less-developed/timing-analysis.mdx b/docs/.less-developed/timing-analysis.mdx index 911102d2f..fb33f1f4c 100644 --- a/docs/.less-developed/timing-analysis.mdx +++ b/docs/.less-developed/timing-analysis.mdx @@ -2,16 +2,20 @@ title: Timing Analysis description: Timing Analysis. --- + # Examples ## Precision-Timed Actuation (discussion Dec 2018) + Given a time unit c, - - H3 reacts sporadically >= 100c (e.g., 10, 120, 230, ...) - - H4 reacts periodically with period 50c (e.g., 0, 50, 100, ...) - - Delay adds 100c to the timestamp of each incoming event - - Actuate shall start executing H5 _before_ r.t. clock exceeds time stamp of incoming events - -``` + +- H3 reacts sporadically >= 100c (e.g., 10, 120, 230, ...) +- H4 reacts periodically with period 50c (e.g., 0, 50, 100, ...) +- Delay adds 100c to the timestamp of each incoming event +- Actuate shall start executing H5 _before_ r.t. 
clock exceeds time stamp of + incoming events + +``` +--------+ | | +--------+ +-------+ +---------+ | H3 +----------> H1 | | | | | @@ -28,15 +32,16 @@ Given a time unit c, We can construct a dependency graph: -``` +``` H3 ---> H1 ---> H2 ---> H5 | H4 ---------+ -``` +``` A feasible schedule requires that: - - WCET(H3) + WCET(H1) + WCET(H2) \<= 100c - - WCET(H4) + WCET(H1) + WCET(H2) \<= 100c + +- WCET(H3) + WCET(H1) + WCET(H2) \<= 100c +- WCET(H4) + WCET(H1) + WCET(H2) \<= 100c ## Preemption Example @@ -58,14 +63,18 @@ A feasible schedule requires that: This example needs the following: - * r3 needs to preempt r1. - * The event from GPS needs a delay of 300ms between Corr and Ctrl, so Ctrl never sees an older event. +- r3 needs to preempt r1. +- The event from GPS needs a delay of 300ms between Corr and Ctrl, so Ctrl never + sees an older event. If we want to avoid preemption, as this hurts WCET analysis: - * Split reactor Corr. into three (or more) reactors and add a delay of 100 ms after each one. +- Split reactor Corr. into three (or more) reactors and add a delay of 100 ms + after each one. -For both solutions, the scheduler needs a "safe to process" analysis for reaction r3 to execute while r1 is -still executing for an older time-stamped event. +For both solutions, the scheduler needs a "safe to process" analysis for +reaction r3 to execute while r1 is still executing for an older time-stamped +event. -Preemption can be avoided when there are enough cores (or hardware threads in PRET) available to execute r1 and r3 concurrently. \ No newline at end of file +Preemption can be avoided when there are enough cores (or hardware threads in +PRET) available to execute r1 and r3 concurrently. diff --git a/docs/.less-developed/tools.mdx b/docs/.less-developed/tools.mdx index 4006764a3..6425d2ebb 100644 --- a/docs/.less-developed/tools.mdx +++ b/docs/.less-developed/tools.mdx @@ -2,8 +2,12 @@ title: Tools description: LF Tools. 
--- + # IDE integration -The idea is to build a language server to facilitate the integration with a variety of editors/IDEs. See [Language Server Protocol (LSP)](https://langserver.org/) for more information. + +The idea is to build a language server to facilitate the integration with a +variety of editors/IDEs. See +[Language Server Protocol (LSP)](https://langserver.org/) for more information. ``` +-------------------------------------+ @@ -16,4 +20,9 @@ The idea is to build a language server to facilitate the integration with a vari +-------------------------------------+ ``` -If the LF compiler encounters any syntax errors, it will report them to the editor (the language client). If the LF code compiles, the output will be sent to the target compiler. If the target compiler reports any errors, these, too, will be reported to the editor via the language server. The tricky part is to match target language errors to LF source locations; the language server will have to do some bookkeeping. \ No newline at end of file +If the LF compiler encounters any syntax errors, it will report them to the +editor (the language client). If the LF code compiles, the output will be sent +to the target compiler. If the target compiler reports any errors, these, too, +will be reported to the editor via the language server. The tricky part is to +match target language errors to LF source locations; the language server will +have to do some bookkeeping. diff --git a/docs/.obsolete/language-specification.md b/docs/.obsolete/language-specification.md index 4a72c55b5..400c34a10 100644 --- a/docs/.obsolete/language-specification.md +++ b/docs/.obsolete/language-specification.md @@ -498,7 +498,7 @@ An instance is created with the syntax: > _instance_name_ = **new** _class_name_(_parameters_); -A bank with several instances can be created in one such statement, as explained in the [banks of reactors documentation](<../writing-reactors/multiports-and-banks.mdx#banks-of-reactors>). 
+A bank with several instances can be created in one such statement, as explained in the [banks of reactors documentation](../writing-reactors/multiports-and-banks.mdx#banks-of-reactors).
 
 The _parameters_ argument has the form:
 
diff --git a/docs/.preliminary/generic-types-interfaces-and-inheritance.mdx b/docs/.preliminary/generic-types-interfaces-and-inheritance.mdx
index 72ff12617..25ca368c7 100644
--- a/docs/.preliminary/generic-types-interfaces-and-inheritance.mdx
+++ b/docs/.preliminary/generic-types-interfaces-and-inheritance.mdx
@@ -3,7 +3,8 @@ title: Generic Types, Interfaces, and Inheritance
 description: Generic Types, Interfaces, and Inheritance (preliminary)
 ---
 
-_The following topics are meant as collections of design ideas, with the purpose of refining them into concrete design proposals._
+_The following topics are meant as collections of design ideas, with the purpose
+of refining them into concrete design proposals._
 
 # Generics
 
@@ -18,11 +19,21 @@ reactor Foo {
 
 ## Type Constraints
 
-We could like to combine generics with type constraints of the form `S extends Bar`, where `Bar` refers to a reactor class or interface. The meaning of extending or implementing a reactor class will mean something slightly different from what this means in the target language -- even if it features object orientation (OO).
+We would like to combine generics with type constraints of the form
+`S extends Bar`, where `Bar` refers to a reactor class or interface. Extending
+or implementing a reactor class will mean something slightly different from
+what this means in the target language -- even if it features object
+orientation (OO).
 
 # Interfaces
 
-While initially being tempted to distinguish interfaces from implementations, in an effort to promote simplicity, we (at least for the moment) propose not to. 
Only in case reactions and their signatures would be part of an interface and thus should be declared (without supplying an implementation) would there be a material difference between an interface and its implementation. Making reactions and their causality interfaces part of the reactor could prove useful, but it introduces a number of complications:
+While initially being tempted to distinguish interfaces from implementations, in
+an effort to promote simplicity, we (at least for the moment) propose not to.
+Only if reactions and their signatures were part of an interface, and thus had
+to be declared (without supplying an implementation), would there be a material
+difference between an interface and its implementation. Making reactions and
+their causality interfaces part of the reactor could prove useful, but it
+introduces a number of complications:
 
 - ...
 
@@ -30,6 +41,7 @@ While initially being tempted to distinguish interfaces from implementations, in
 
 - A reactor can extend multiple base classes;
 - Reactions are inherited in the order of declaration; and
-- Equally-named ports and actions between subclass and superclass must also be equally typed.
+- Equally-named ports and actions between subclass and superclass must also be
+  equally typed.
 
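The last rule above, that equally-named ports and actions must be equally typed, amounts to a conflict check when merging the interfaces of several base classes. A minimal sketch of such a check, written in Python as a hypothetical illustration (not the actual LF implementation; the port tables are invented):

```python
# Hypothetical sketch of the inheritance rule above: when a reactor extends
# multiple base classes, equally-named ports must be equally typed.
def merge_ports(*bases):
    """Merge the port tables of several base classes, rejecting type conflicts."""
    merged = {}
    for base in bases:
        for name, typ in base.items():
            if name in merged and merged[name] != typ:
                raise TypeError(
                    f"port '{name}' declared with conflicting types "
                    f"{merged[name]!r} and {typ!r}"
                )
            merged[name] = typ
    return merged

# Two bases that agree on port 'x' merge fine:
print(merge_ports({"x": "int"}, {"x": "int", "y": "double"}))
# A base that redeclared 'x' with a different type would raise TypeError.
```

The same check would apply between a subclass and its superclass: the subclass's own port table is simply one more table in the merge.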
## Example diff --git a/docs/.preliminary/import-system.mdx b/docs/.preliminary/import-system.mdx index 03463b0ef..f26371c4a 100644 --- a/docs/.preliminary/import-system.mdx +++ b/docs/.preliminary/import-system.mdx @@ -2,7 +2,9 @@ title: Import System description: Import System (preliminary) --- -_The following topics are meant as collections of design ideas, with the purpose of refining them into concrete design proposals._ + +_The following topics are meant as collections of design ideas, with the purpose +of refining them into concrete design proposals._ # Current Implementation of Imports @@ -10,11 +12,14 @@ The import functionality in Lingua Franca is limited to: import HelloWorld.lf -This can be useful if the `.lf` file is located in the same directory as the file containing the main reactor. +This can be useful if the `.lf` file is located in the same directory as the +file containing the main reactor. -However, several shortcomings exist in this current system which we shall discuss next. +However, several shortcomings exist in this current system which we shall +discuss next. ## Duplicate Reactor Names + Reactors with the same name can cause issues. For example: ``` @@ -22,46 +27,66 @@ import CatsAndPuppies.lf // Contains a Puppy reactor import MeanPuppies.lf // Contains another Puppy reactor ``` -There is no way for the LF program to distinguish between the two `Puppy` reactors. +There is no way for the LF program to distinguish between the two `Puppy` +reactors. -**Note.** With a relatively trivial extension to the current LF import mechanism, it is possible to detect duplicates, but there is no way to circumvent them in the current LF program (i.e., the original names might have to be changed). +**Note.** With a relatively trivial extension to the current LF import +mechanism, it is possible to detect duplicates, but there is no way to +circumvent them in the current LF program (i.e., the original names might have +to be changed). 
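The duplicate detection mentioned in the note above is indeed a trivial extension: collect the reactors defined by each imported file and flag any name defined in more than one. A minimal sketch in Python (hypothetical; not part of the LF toolchain, and the reactor sets are invented for illustration):

```python
# Hypothetical sketch of the duplicate check described above: map each
# imported file to the reactor names it defines, then report clashes.
def find_duplicates(imports):
    """Return a map from each clashing reactor name to its defining files."""
    seen = {}  # reactor name -> list of files that define it
    for filename, reactors in imports.items():
        for reactor in reactors:
            seen.setdefault(reactor, []).append(filename)
    return {name: files for name, files in seen.items() if len(files) > 1}

imports = {
    "CatsAndPuppies.lf": {"Cat", "Puppy"},
    "MeanPuppies.lf": {"Puppy"},
}
print(find_duplicates(imports))  # {'Puppy': ['CatsAndPuppies.lf', 'MeanPuppies.lf']}
```

As the note says, detection alone does not resolve the clash; without qualified names or renaming, one of the definitions would still have to be renamed at the source.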
## Selective Importing
+
 Selective importing is not possible. For example, using
-
+
 ```
 import CatsAndPuppies.lf
 ```
-
+
-will import all the reactors contained in the `.lf` file. It would be desirable to selectively import a subset of reactors in another `.lf` file.
+
+will import all the reactors contained in the `.lf` file. It would be desirable
+to selectively import a subset of reactors in another `.lf` file.
 
 ## Qualified Paths
+
-Currently, there is no elegant way of importing modules that are not in the same directory.
+
+Currently, there is no elegant way of importing modules that are not in the same
+directory.
 
 ## Renaming
+
-All the reactors imported will have the name originally given to them by the original programmer. It might make sense to rename them for the current LF program.
+
+All the reactors imported will have the name originally given to them by the
+original programmer. It might make sense to rename them for the current LF
+program.
 
 ## Packages
 
-With the current import solution that only uses files, implementing packages in Lingua Franca is not feasible.
+With the current import solution that only uses files, implementing packages in
+Lingua Franca is not feasible.
 
 # Proposed Solution
+
 With inspirations from Python, we propose the following import mechanism:
 
 ```
-"import" LF_Trunc_File/module ("," LF_Trunc_File/module)* 
+"import" LF_Trunc_File/module ("," LF_Trunc_File/module)*
   | "from" LF_Trunc_File/module "import" reactor ["as" name] ("," reactor ["as" name] )*
-  | "from" LF_Trunc_File/module "import" "*" 
+  | "from" LF_Trunc_File/module "import" "*"
 ```
 
-Before discussing some examples, let's discuss `LF_Trunc_File/module`. First and foremost, `LF_Truc_File` stands for Lingua Franca Truncated File, which is a `name.lf` file with the `.lf` removed. Therefore, the legacy support for import can be carried over as:
+Before discussing some examples, let's discuss `LF_Trunc_File/module`. 
First and
+foremost, `LF_Trunc_File` stands for Lingua Franca Truncated File, which is a
+`name.lf` file with the `.lf` removed. Therefore, the legacy support for import
+can be carried over as:
 
 ```
 import HelloWorld
 ```
 
-Second, the `module` would introduce the notion of packages to Lingua Franca. The content of a module can be located in any path. To enable this facility, modules provide a Lingua Franca Meta file (LFM) that introduces the package name, and the absolute or relative paths of all the LF files that are included in that package. For example:
+Second, the `module` would introduce the notion of packages to Lingua Franca.
+The content of a module can be located in any path. To enable this facility,
+modules provide a Lingua Franca Meta file (LFM) that introduces the package
+name, and the absolute or relative paths of all the LF files that are included
+in that package. For example:
 
 ```
 // CatsAndPuppies.LFM
@@ -70,13 +95,16 @@ import /home/user/linguafranca/pets/Cats.lf // Absolute paths
 import pets/Puppies.lf // Relative paths
 ```
 
-For a package to be accessible, the `LFM` file needs to be discoverable. For example, it can be automatically added to the current directory or "installed" in a known Lingua Franca path (e.g., `/usr/local/LF/packages` or `/home/user/linguafranca/packages`).
+For a package to be accessible, the `LFM` file needs to be discoverable. For
+example, it can be automatically added to the current directory or "installed"
+in a known Lingua Franca path (e.g., `/usr/local/LF/packages` or
+`/home/user/linguafranca/packages`).
 
-With that in mind, let's discuss some examples on how this might work next.
-The content of the `HelloWorld.lf` example is as follows:
+With that in mind, let's discuss some examples on how this might work next. 
The
+content of the `HelloWorld.lf` example is as follows:
 
 ```
-target C; 
+target C;
 reactor SayHello {
   timer t;
   reaction(t) {=
@@ -89,8 +117,9 @@ main reactor HelloWorldTest {
 }
 ```
 
 Let us create a `Greetings.lf` program based on HelloWorld.
+
 ```
-target C; 
+target C;
 import HelloWorld
 
 main reactor Greetings {
@@ -98,11 +127,15 @@
 }
 ```
 
-To generate code for `Greetings.lf`, Lingua Franca first searches for a `HelloWorld.lf` file in the same directory as `Greetings.lf`. If not found, it will look for a `HelloWorld.LFM` in the known paths. If none is found, an error is raised.
+To generate code for `Greetings.lf`, Lingua Franca first searches for a
+`HelloWorld.lf` file in the same directory as `Greetings.lf`. If not found, it
+will look for a `HelloWorld.LFM` in the known paths. If none is found, an error
+is raised.
 
 Now we can demonstrate selective import. For example:
+
 ```
-target C; 
+target C;
 from HelloWorld import SayHello
 
 main reactor Greetings {
@@ -111,11 +144,12 @@
 }
 ```
 
 Finally, renaming can be done by using the `as` predicate:
+
 ```
-target C; 
+target C;
 from HelloWorld import SayHello as SayGreetings
 
 main reactor Greetings {
   a = new SayGreetings();
 }
-``` \ No newline at end of file
+```
diff --git a/docs/.preliminary/reactors-on-patmos.mdx b/docs/.preliminary/reactors-on-patmos.mdx
index 4063132fd..9a03b6afa 100644
--- a/docs/.preliminary/reactors-on-patmos.mdx
+++ b/docs/.preliminary/reactors-on-patmos.mdx
@@ -5,25 +5,27 @@ description: Reactors on Patmos (preliminary)
 
 ## Reactors on Patmos
 
-Reactors can be executed on [Patmos](https://github.com/t-crest/patmos), a bare-metal execution platform
-that is optimized for time-predictable execution. Well written C programs can be analyzed for their
-worst-case execution time (WCET).
+Reactors can be executed on [Patmos](https://github.com/t-crest/patmos), a
+bare-metal execution platform that is optimized for time-predictable execution. 
+Well-written C programs can be analyzed for their worst-case execution time
+(WCET).
 
 ### Compiling and Running Reactors
 
-Patmos can run in an FPGA, but there are also two
-simulators available:
+Patmos can run in an FPGA, but there are also two simulators available:
 
 1. `pasim` a software ISA simulator that is written in C++.
-2. `patemu` a cycle-accurate hardware emulator generated from the hardware description.
+2. `patemu` a cycle-accurate hardware emulator generated from the hardware
+   description.
 
-To execute reactions on Patmos, the [Patmos toolchain](https://github.com/t-crest/patmos) needs
-to be installed. The web page contains a quick start, detailed information including how to
-perform WCET analysis is available in the
+To execute reactions on Patmos, the
+[Patmos toolchain](https://github.com/t-crest/patmos) needs to be installed. The
+web page contains a quick start; detailed information, including how to perform
+WCET analysis, is available in the
 [Patmos Reference Handbook](http://patmos.compute.dtu.dk/patmos_handbook.pdf).
 
-To execute the "hello world" reactor on Patmos use the LF compiler to generate the C code.
-Compile the reactor with the Patmos compiler (in `src-gen`):
+To execute the "hello world" reactor on Patmos, use the LF compiler to generate
+the C code. 
Compile the reactor with the Patmos compiler (in `src-gen`):
 
     patmos-clang Minimal.c -o Minimal.elf
 
@@ -31,8 +33,8 @@ The reactor can be executed on the SW simulator with:
 
     pasim Minimal.elf
 
-As Patmos is a bare metal runtime that has no notion of calendar time, its start time
-is considered the epoch and the following output will be observed:
+As Patmos is a bare-metal runtime that has no notion of calendar time, its start
+time is considered the epoch and the following output will be observed:
 
 ```
 Start execution at time Thu Jan 1 00:00:00 1970
@@ -46,8 +48,8 @@ The reactor can also be executed on the hardware emulator of Patmos:
 
     patemu Minimal.elf
 
-This execution is considerably slower than the SW simulator, as the concrete hardware
-of Patmos is simulated cycle-accurate.
+This execution is considerably slower than the SW simulator, as the concrete
+hardware of Patmos is simulated cycle-accurately.
 
 ### Worst-Case Execution Time Analysis
 
@@ -72,16 +74,17 @@ reactor Work {
 ```
 
 We want to perform WCET analysis of the single reaction of the Work reactor.
-This reaction, depending on the input data, will either perform a multiplication,
-which is more expensive in Patmos, or an addition. The WCET analysis shall consider
-the multiplication path as the worst-case path. To generate the information for
-WCET analysis by the compiler we have to compile the application as follows:
+This reaction, depending on the input data, will either perform a
+multiplication, which is more expensive in Patmos, or an addition. The WCET
+analysis shall consider the multiplication path as the worst-case path. To
+generate the information for WCET analysis by the compiler, we have to compile
+the application as follows:
 
     patmos-clang -O2 -mserialize=wcet.pml Wcet.c
 
-We investigate the C source code `Wcet.c` and find that the reaction we
-are interested is named `reaction_function1`. 
Therefore, we invoke WCET analysis
-as follows:
+We investigate the C source code `Wcet.c` and find that the reaction we are
+interested in is named `reaction_function1`. Therefore, we invoke WCET analysis
+as follows:
 
     platin wcet -i wcet.pml -b a.out -e reaction_function1 --report
 
@@ -98,12 +101,11 @@ This results in following report:
 
 ...
 ```
 
-The analysis gives the WCET of 242 clock cycles for the reaction,
-which includes clock cycles for data cache misses.
-Further details on the WCET analysis
-tool `platin` and e.g., how to annotate loop bounds can be found in the
+The analysis gives the WCET of 242 clock cycles for the reaction, which includes
+clock cycles for data cache misses. Further details on the WCET analysis tool
+`platin`, e.g., how to annotate loop bounds, can be found in the
 [Patmos Reference Handbook](http://patmos.compute.dtu.dk/patmos_handbook.pdf).
 
 Note that the WCET analysis of a reaction only includes the code of the
-reaction function, not the cache miss cost of calling the function from
-the scheduler or the cache miss cost when returning to the scheduler.
+reaction function, not the cache miss cost of calling the function from the
+scheduler or the cache miss cost when returning to the scheduler.
diff --git a/docs/.preliminary/target-supported-features.mdx b/docs/.preliminary/target-supported-features.mdx
index e03afa75f..72a6bb84d 100644
--- a/docs/.preliminary/target-supported-features.mdx
+++ b/docs/.preliminary/target-supported-features.mdx
@@ -2,9 +2,10 @@
 title: Target-Supported Features
 description: Which features are supported by which target? 
--- -| Target/Feature | Banks & Multiports | Clock Synchronization | Federation | -| :------------- | :----------: | :-----------: | :-----------: | -| **C** | Y | Y | Y | -| **C++** | Y | N | N | -| **TS** | Y | N | N | -| **Python** | Y | N | N | \ No newline at end of file + +| Target/Feature | Banks & Multiports | Clock Synchronization | Federation | +| :------------- | :----------------: | :-------------------: | :--------: | +| **C** | Y | Y | Y | +| **C++** | Y | N | N | +| **TS** | Y | N | N | +| **Python** | Y | N | N | diff --git a/docs/developer/_category_.yml b/docs/developer/_category_.yml deleted file mode 100644 index 6c97c272b..000000000 --- a/docs/developer/_category_.yml +++ /dev/null @@ -1,4 +0,0 @@ -position: 50 -label: Developer -collapsible: true -collapsed: true \ No newline at end of file diff --git a/docs/developer/contributing.mdx b/docs/developer/contributing.mdx index 278f03ccc..612cefc66 100644 --- a/docs/developer/contributing.mdx +++ b/docs/developer/contributing.mdx @@ -3,5 +3,7 @@ title: Contributing description: Contribute to Lingua Franca. --- -The preferred way to contribute to Lingua Franca is to issue pull requests through [GitHub](https://github.com/lf-lang/lingua-franca). -See the [Contributing](https://github.com/lf-lang/lingua-franca/blob/master/CONTRIBUTING.md) document for more details. +The preferred way to contribute to Lingua Franca is to issue pull requests +through [GitHub](https://github.com/lf-lang/lingua-franca). See the +[Contributing](https://github.com/lf-lang/lingua-franca/blob/master/CONTRIBUTING.md) +document for more details. diff --git a/docs/developer/developer-eclipse-setup-with-oomph.mdx b/docs/developer/developer-eclipse-setup-with-oomph.mdx index d88abcb2c..8a68c9f88 100644 --- a/docs/developer/developer-eclipse-setup-with-oomph.mdx +++ b/docs/developer/developer-eclipse-setup-with-oomph.mdx @@ -4,68 +4,148 @@ description: Developer Eclipse setup with Oomph. 
---
 
 :::warning
-Eclipse does not currently support Kotlin, the language used for some of the target code generators. If you plan to develop Kotlin code, we recommend using [IntelliJ](../developer/developer-intellij-setup.mdx) instead of Eclipse.
+
+Eclipse does not currently support Kotlin, the language used for some
+of the target code generators. If you plan to develop Kotlin code, we recommend
+using [IntelliJ](../developer/developer-intellij-setup.mdx) instead of Eclipse.
+
 :::
 
 ## Prerequisites
 
-- Java 17 ([download from Oracle](https://www.oracle.com/java/technologies/downloads/))
-- Each target language may have additional requirements. See the [Target Language Details](<../reference/target-language-details.mdx#requirements>) page and select your target language.
+- Java 17
+  ([download from Oracle](https://www.oracle.com/java/technologies/downloads/))
+- Each target language may have additional requirements. See the
+  [Target Language Details](../reference/target-language-details.mdx#requirements)
+  page and select your target language.
 
 ## Oomph Setup
 
-The Eclipse setup with Oomph allows to automatically create a fully configured Eclipse IDE for the development of Lingua Franca. Note that we recommend installing a new instance of Eclipse even if you already have one for other purposes. There is no problem having multiple Eclipse installations on the same machine, and separate installations help prevent cross-project problems.
+The Eclipse setup with Oomph allows you to automatically create a fully
+configured Eclipse IDE for the development of Lingua Franca. Note that we
+recommend installing a new instance of Eclipse even if you already have one for
+other purposes. There is no problem having multiple Eclipse installations on the
+same machine, and separate installations help prevent cross-project problems.
 
-1. If you have previously installed Eclipse and you want to start fresh, then remove or move a hidden directory called `.p2` in your home directory. 
I do this:
+1. If you have previously installed Eclipse and you want to start fresh, then
+   remove or move a hidden directory called `.p2` in your home directory. I do
+   this:
 
 ```sh
 mv ~/.p2 ~/.p2.bak
 ```
 
-2. Go to the [Eclipse download site](https://www.eclipse.org/downloads/index.php) (https://www.eclipse.org/downloads/index.php) and download the Eclipse Installer for your platform. The site does not advertise that it ships the Oomph Eclipse Installer but downloading Eclipse with the orange download button will give you the installer.\
+2. Go to the
+   [Eclipse download site](https://www.eclipse.org/downloads/index.php)
+   (https://www.eclipse.org/downloads/index.php) and download the Eclipse
+   Installer for your platform. The site does not advertise that it ships the
+   Oomph Eclipse Installer, but downloading Eclipse with the orange download
+   button will give you the installer.\
   **You can skip this step if you already have the installer available on your system.**
 
-3. Starting the installer for the first time will open a window that looks like the following (**if you have previously followed these steps, skip to step 4**):\
+3. Starting the installer for the first time will open a window that looks like
+   the following (**if you have previously followed these steps, skip to step
+   4**):\
   ![](./../assets/images/oomph/simple_view.png)
 
-4. Click the Hamburger button at the top right corner and switch to "Advanced Mode".
+4. Click the Hamburger button at the top right corner and switch to "Advanced
+   Mode".
 
-5. Oomph now wants you to select the base Eclipse distribution for your development. We recommend to use "Eclipse IDE for Java and DSL Developers". As product version we recommend to use "Latest Release (...)". \
-   **Important**: Lingua Franca tools require Java 17. Under Java VM, please select Java 17.\
+5. Oomph now wants you to select the base Eclipse distribution for your
+   development. We recommend using "Eclipse IDE for Java and DSL Developers". 
+   As product version, we recommend using "Latest Release (...)". \
+   **Important**: Lingua Franca tools require Java 17. Under Java VM, please select
+   Java 17.\
    Then press Next to continue with the project section.\
    ![](./../assets/images/oomph/product_selection.png)
 
-6. Next, we need to register the Lingua Franca specific setup in Oomph **(only the first time you use the installer)**. Click the green Plus button at the top right corner. Select "GitHub Projects" as catalog and paste the following URL into the "Resource URI" field:
+6. Next, we need to register the Lingua Franca specific setup in Oomph **(only
+   the first time you use the installer)**. Click the green Plus button at the
+   top right corner. Select "GitHub Projects" as catalog and paste the following
+   URL into the "Resource URI" field:
    `https://raw.githubusercontent.com/icyphy/lingua-franca/master/oomph/LinguaFranca.setup`.
-   Then press OK.
-   NOTE: to check out another branch instead, adjust the URL above accordingly. For instance, in order to install the setup from `foo-bar` branch, change the URL to `https://raw.githubusercontent.com/icyphy/lingua-franca/foo-bar/oomph/LinguaFranca.setup`. Also, in the subsequent screen in the wizard, select the particular branch of interest instead of default, which is `master`.
+   Then press OK. NOTE: to check out another branch instead, adjust the URL
+   above accordingly. For instance, in order to install the setup from `foo-bar`
+   branch, change the URL to
+   `https://raw.githubusercontent.com/icyphy/lingua-franca/foo-bar/oomph/LinguaFranca.setup`.
+   Also, in the subsequent screen in the wizard, select the particular branch of
+   interest instead of default, which is `master`.
 
-7. Now Oomph lists the Lingua Franca setup in the "\" directory of the "GitHub Projects" catalog. Check the Lingua Franca entry. A new entry for Lingua Franca will appear in the table at the bottom of the window. Select Lingua Franca and click Next.\
+
+7. 
Now Oomph lists the Lingua Franca setup in the "\" directory of the + "GitHub Projects" catalog. Check the Lingua Franca entry. A new entry for + Lingua Franca will appear in the table at the bottom of the window. Select + Lingua Franca and click Next.\ ![](./../assets/images/oomph/project_selection.png) -8. Now you can further configure where and how your development Eclipse should be created. Check "Show all variables" to enable all possible configuration options. You can hover over the field labels to get a more detailed explanation of their effects. - -- If you already have cloned the LF repository and you want Eclipse to use this location instead of cloning it into the new IDE environment, you should adjust the "Git clone location rule". -- Preferably, you have a GitHub account with an SSH key uploaded to GitHub. Otherwise, you should adjust the "Lingua Franca GitHub repository" entry to use the https option in the drop-down menu. See [adding an SSH key to your GitHub account](https://docs.github.com/en/authentication/connecting-to-github-with-ssh/adding-a-new-ssh-key-to-your-github-account). -- If the "JRE 17 location" is empty, you need to install and/or locate a JDK that has at least version 17. +8. Now you can further configure where and how your development Eclipse should + be created. Check "Show all variables" to enable all possible configuration + options. You can hover over the field labels to get a more detailed + explanation of their effects. + +- If you already have cloned the LF repository and you want Eclipse to use this + location instead of cloning it into the new IDE environment, you should adjust + the "Git clone location rule". +- Preferably, you have a GitHub account with an SSH key uploaded to GitHub. + Otherwise, you should adjust the "Lingua Franca GitHub repository" entry to + use the https option in the drop-down menu. 
See + [adding an SSH key to your GitHub account](https://docs.github.com/en/authentication/connecting-to-github-with-ssh/adding-a-new-ssh-key-to-your-github-account). +- If the "JRE 17 location" is empty, you need to install and/or locate a JDK + that has at least version 17. ![](./../assets/images/oomph/project_configuration.png) -9. Click Next to get a summary of what will happen during installation. Click Finish to start. - -10. Once the basic installation is complete, your new Eclipse will start. If it fails to clone the GitHub repository, then you should use the back button in the Oomph dialog and change the way you are accessing the repo (ssh or https). See above. \ - The setup may also fail to clone the repository via SHH if Eclipse cannot find the private ssh key that matches the public key you uploaded to GitHub. You can configure the location of your private key in Eclipse as follows. In the Eclipse IDE, click the menu entry Window -> Preferences (on Mac Apple-Menu -> Preferences) and navigate to General -> Network Connections -> SSH2 in the tree view on the left and configure the SSH home directory and key names according to your computer. After the repo has been cloned, you can safely close the initial Oomph dialog (if not dismissed automatically). You will see a Welcome page that you can close. - -11. In the new Eclipse, it may automatically start building the project, or it may pop up an "Eclipse Updater" dialog. If neither happens, you can click the button with the yellow and blue cycling arrows in the status bar at the bottom. Oomph will perform various operations to configure the Eclipse environment, including the initial code generation for the LF language. This may take some time. Wait until the setup is finished. - -12. If you get compile errors, make sure Eclipse is using Java 17. 
If you skipped the first step above (removing your `~/.p2` directory), then you may have legacy configuration information that causes Eclipse to mysteriously use an earlier version of Java. Lingua Franca requires Java 17, and will get compiler errors if it uses an earlier version. To fix this, go to the menu `Project->Properties` and select `Java Build Path`. Remove the entry for `JRE System Library [JRE for JavaSE-8]` (or similar). Choose `Add Library` on the right, and choose `JRE System Library`. You should now be able to choose `Workspace default JRE (JRE for JavaSE-17)`. A resulting rebuild should then compile correctly.
-
-13. When the setup dialog is closed, your LF development IDE is ready. Probably, Eclipse is still compiling some code but when this is finished as well, all error markers on the project should have disappeared. Now, you can start a runtime Eclipse to test the actual Lingua Franca end-user IDE. In the toolbar, click on the small arrow next to the green Start button. There may already be an entry named "Launch Runtime Eclipse", but probably not. To create it, click on "Run Configurations...". Expand the "Eclipse Application" entry, select "Launch Runtime Eclipse", as follows:
+9. Click Next to get a summary of what will happen during installation. Click
+   Finish to start.
+
+10. Once the basic installation is complete, your new Eclipse will start. If it
+    fails to clone the GitHub repository, then you should use the back button in
+    the Oomph dialog and change the way you are accessing the repo (ssh or
+    https). See above. \
+    The setup may also fail to clone the repository via SSH if Eclipse cannot find
+    the private ssh key that matches the public key you uploaded to GitHub. You can
+    configure the location of your private key in Eclipse as follows. 
In the Eclipse + IDE, click the menu entry Window -> Preferences (on Mac Apple-Menu -> Preferences) + and navigate to General -> Network Connections -> SSH2 in the tree view on the + left and configure the SSH home directory and key names according to your computer. + After the repo has been cloned, you can safely close the initial Oomph dialog + (if not dismissed automatically). You will see a Welcome page that you can close. + +11. In the new Eclipse, it may automatically start building the project, or it + may pop up an "Eclipse Updater" dialog. If neither happens, you can click + the button with the yellow and blue cycling arrows in the status bar at the + bottom. Oomph will perform various operations to configure the Eclipse + environment, including the initial code generation for the LF language. This + may take some time. Wait until the setup is finished. + +12. If you get compile errors, make sure Eclipse is using Java 17. If you + skipped the first step above (removing your `~/.p2` directory), then you may + have legacy configuration information that causes Eclipse to mysteriously + use an earlier version of Java. Lingua Franca requires Java 17, and will get + compiler errors if it uses an earlier version. To fix this, go to the menu + `Project->Properties` and select `Java Build Path`. Remove the entry for + `JRE System Library [JRE for JavaSE-8]` (or similar). Choose `Add Library` + on the right, and choose `JRE System Library`. You should now be able to + choose `Workspace default JRE (JRE for JavaSE-17)`. A resulting rebuild + should then compile correctly. + +13. When the setup dialog is closed, your LF development IDE is ready. Probably, + Eclipse is still compiling some code but when this is finished as well, all + error markers on the project should have disappeared. Now, you can start a + runtime Eclipse to test the actual Lingua Franca end-user IDE. In the + toolbar, click on the small arrow next to the green Start button. 
There may
+    already be an entry named "Launch Runtime Eclipse", but probably not. To
+    create it, click on "Run Configurations...". Expand the "Eclipse
+    Application" entry, select "Launch Runtime Eclipse", as follows:

![](./../assets/images/oomph/run_configurations.png)

-Make sure that the Execution Environment shows a version of Java that is at least Java 17. The click on "Run" at the bottom.
+Make sure that the Execution Environment shows a version of Java that is at
+least Java 17. Then click on "Run" at the bottom.

-14. A new Eclipse starts where you can write LF programs and also get a diagram representation (but you fist need to open the diagram view by clicking on Window -> Show View -> Other and selecting Diagram in the "KIELER Lightweight Diagrams" folder). You can close the welcome window in the new Eclipse and proceed to creating a new project, as below.
+14. A new Eclipse starts where you can write LF programs and also get a diagram
+    representation (but you first need to open the diagram view by clicking on
+    Window -> Show View -> Other and selecting Diagram in the "KIELER
+    Lightweight Diagrams" folder). You can close the welcome window in the new
+    Eclipse and proceed to creating a new project, as below.

### Using the Lingua Franca IDE

@@ -73,11 +153,13 @@ Start the Lingua Franca IDE, create a project, and create your first LF program:

- Select File->New->Project (a General Project is adequate).
- Give the project a name, like "test".
-- You may want to uncheck `Use default location` and specify a location that you can remember.
+- You may want to uncheck `Use default location` and specify a location that you
+  can remember.
- Close the Eclipse welcome window, if it is open. It obscures the project.
- Right click on the project name and select New->File.
- Give the new a name like "HelloWorld.lf" (with .lf extension).
-- **IMPORTANT:** A dialog appears: Do you want to convert 'test' to an Xtext Project? Say YES. 
+- **IMPORTANT:** A dialog appears: Do you want to convert 'test' to an Xtext + Project? Say YES. - Start typing in Lingua-Franca! Try this: ```lf @@ -90,14 +172,18 @@ Start the Lingua Franca IDE, create a project, and create your first LF program: } ``` -When you save, generated code goes into your project directory, e.g. `/Users/yourname/test`. That directory now has two directories inside it, `src-gen` and `bin`. The first contains the generated C code and the second contains the resulting executable program. Run the program: +When you save, generated code goes into your project directory, e.g. +`/Users/yourname/test`. That directory now has two directories inside it, +`src-gen` and `bin`. The first contains the generated C code and the second +contains the resulting executable program. Run the program: ```sh cd ~/lingua-franca-master/runtime-EclipseXtext/test bin/HelloWorld ``` -The above directory assumes you chose default locations for everything. This should produce output that looks something like this: +The above directory assumes you chose default locations for everything. This +should produce output that looks something like this: ``` ---- Start execution at time Sun Mar 28 10:19:24 2021 @@ -109,21 +195,30 @@ Hello World. This should print "Hello World". -We strongly recommend browsing the system tests, which provide a concise overview of the capabilities of Lingua Franca. You can set up a project in the IDE for this using [these instructions](<../developer/regression-tests.mdx#browsing-and-editing-examples-in-the-lf-ide>). +We strongly recommend browsing the system tests, which provide a concise +overview of the capabilities of Lingua Franca. You can set up a project in the +IDE for this using +[these instructions](../developer/regression-tests.mdx#browsing-and-editing-examples-in-the-lf-ide). ## Working on the Lingua-Franca Compiler The source code for the compiler is in the package `org.icyphy.linguafranca`. 
- The grammar is in `src/org.icyphy/LinguaFranca.xtext` -- The code generator for the C target is in `src/org.icyphy.generator/CGenerator.xtend` -- The code generator for the TypeScript target is in `src/org.icyphy.generator/TypeScriptGenerator.xtend` -- The code generator for the C++ target is in `src/org.icyphy.generator/CppGenerator.xtend` -- The code generator for the Python target is in `src/org.icyphy.generator/PythonGenerator.xtend` +- The code generator for the C target is in + `src/org.icyphy.generator/CGenerator.xtend` +- The code generator for the TypeScript target is in + `src/org.icyphy.generator/TypeScriptGenerator.xtend` +- The code generator for the C++ target is in + `src/org.icyphy.generator/CppGenerator.xtend` +- The code generator for the Python target is in + `src/org.icyphy.generator/PythonGenerator.xtend` ## Troubleshooting -- GitHub uses port `443` for its ssh connections. In some systems, the default expected port by `git` can be 22, causing a timeout when cloning the repo. This can be fixed by adding the following to `~/.ssh/config`: +- GitHub uses port `443` for its ssh connections. In some systems, the default + expected port by `git` can be 22, causing a timeout when cloning the repo. + This can be fixed by adding the following to `~/.ssh/config`: ``` Host github.com Hostname ssh.github.com diff --git a/docs/developer/developer-intellij-setup.mdx b/docs/developer/developer-intellij-setup.mdx index c2db6bf68..418cb658d 100644 --- a/docs/developer/developer-intellij-setup.mdx +++ b/docs/developer/developer-intellij-setup.mdx @@ -5,12 +5,15 @@ description: Developer IntelliJ Setup. 
## Prerequisites

-- Java 17 ([download from Oracle](https://www.oracle.com/java/technologies/downloads/))
-- IntelliJ IDEA Community Edition ([download from Jetbrains](https://www.jetbrains.com/idea/download/))
+- Java 17
+  ([download from Oracle](https://www.oracle.com/java/technologies/downloads/))
+- IntelliJ IDEA Community Edition
+  ([download from Jetbrains](https://www.jetbrains.com/idea/download/))

## Cloning lingua-franca repository

-If you have not done so already, clone the lingua-franca repository into your working directory.
+If you have not done so already, clone the lingua-franca repository into your
+working directory.

```sh
$ git clone git@github.com:lf-lang/lingua-franca.git lingua-franca
@@ -20,55 +23,88 @@ $ git submodule update --init --recursive
```

## Opening lingua-franca as IntelliJ Project

-To import the Lingua Franca repository as a project, simply run `./gradlew openIdea`.
-This will create some project files and then open the project in IntelliJ.
+To import the Lingua Franca repository as a project, simply run
+`./gradlew openIdea`. This will create some project files and then open the
+project in IntelliJ.

-When you open the project for the first time, you will see a small pop-up in the lower right corner.
+When you open the project for the first time, you will see a small pop-up in the
+lower right corner.

![](./../assets/images/intellij/gradle_import.png)

Click on Load Gradle Project to import the Gradle configurations.

-If you are prompted to a pop-up window asking if you trust the Gradle project, click Trust Project.
+If you are prompted with a pop-up window asking if you trust the Gradle project,
+click Trust Project.

![](./../assets/images/intellij/trust_gradle_project.png)

-Once the repository is imported as a Gradle project, you will see a Gradle tab on the right.
+Once the repository is imported as a Gradle project, you will see a

-Once the indexing finishes, you can expand the Gradle project and see the set of Tasks. 
+Gradle tab on the right.
+
+Once the indexing finishes, you can expand the Gradle project and see the set of
+Tasks.

![](./../assets/images/intellij/expand_gradle_tab.png)

-You can run any Gradle command from IntelliJ simply by clicking on the Execute Gradle Task icon in the Gradle tab. You are then prompted for the precise command to run.
+You can run any Gradle command from IntelliJ simply by clicking on the
+Execute Gradle Task icon in the Gradle tab. You are then prompted for
+the precise command to run.

## Setting up run configurations

-You can set up a run configuration for running and debugging various Gradle tasks from the Gradle tab, including the code generation through `lfc`.
-To set up a run configuration for the run task of `lfc`, expand the application task group under org.lflang \> Tasks, right-click on ⚙️ run, and select Modify Run Configuration....
-This will create a custom run/debug configuration for you.
+You can set up a run configuration for running and debugging various Gradle
+tasks from the Gradle tab, including the code generation through
+`lfc`. To set up a run configuration for the run task of `lfc`, expand the
+application task group under org.lflang \> Tasks,
+right-click on ⚙️ run, and select
+Modify Run Configuration.... This will create a custom run/debug
+configuration for you.

-In the Create Run Configuration dialog, click on the text box next to Run, select `cli:lfc:run` from the drop-down menu, and append arguments to be passed to `lfc` using the `--args` flag. For instance, to invoke `lfc` on `test/Cpp/src/HelloWorld.lf`, enter `cli:lfc:run --args 'test/Cpp/src/HelloWorld.lf'` Then click OK.
+In the Create Run Configuration dialog, click on the text box next to
+Run, select `cli:lfc:run` from the drop-down menu, and append
+arguments to be passed to `lfc` using the `--args` flag. For instance, to invoke
+`lfc` on `test/Cpp/src/HelloWorld.lf`, enter `cli:lfc:run --args
+'test/Cpp/src/HelloWorld.lf'`. Then click OK. 
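If you prefer the terminal, the same `cli:lfc:run` task described above can also be invoked directly through the Gradle wrapper; a sketch, assuming your working directory is the root of a lingua-franca checkout:

```shell
# Run lfc on the example program without creating an IntelliJ run configuration
./gradlew cli:lfc:run --args 'test/Cpp/src/HelloWorld.lf'
```

This is a command fragment tied to the repository layout, not a standalone script.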
![](./../assets/images/intellij/run_config_lf_program.png) -You will see a new run/debug config added to the top-level menu bar, as shown below. -You can always change the config, for example, changing the `--args`, by clicking Edit Configurations via a drop-down menu. +You will see a new run/debug config added to the top-level menu bar, as shown +below. You can always change the config, for example, changing the `--args`, by +clicking Edit Configurations via a drop-down menu. ![](./../assets/images/intellij/new_runlfc_config.png) ## Running and Debugging -Using the newly added config, you can run and debug the code generator by clicking the play button and the debug button. +Using the newly added config, you can run and debug the code generator by +clicking the play button and the debug button. ![](./../assets/images/intellij/run_debug_buttons.png) -Set up breakpoints before starting the debugger by clicking the space right next to the line numbers. -While debugging, you can run code step-by-step by using the debugger tools. +Set up breakpoints before starting the debugger by clicking the space right next +to the line numbers. While debugging, you can run code step-by-step by using the +debugger tools. ![](./../assets/images/intellij/debugger_screen.png) -By clicking the play button, `lfc` will be invoked, and if compilation is successful, its output can be found, relative to package root of the file under compilation, in `bin` if the target is a compiled language (e.g., C) or in `src-gen` if the target is an interpreted language (e.g., TypeScript). For the `HelloWorld.lf` example above, the binary can be found in `test/Cpp/bin/HelloWorld` and can be executed in the terminal. 
+When you click the play button, `lfc` is invoked, and if compilation is
+successful, its output can be found, relative to the package root of the file
+under compilation, in `bin` if the target is a compiled language (e.g., C) or in
+`src-gen` if the target is an interpreted language (e.g., TypeScript). For the
+`HelloWorld.lf` example above, the binary can be found in
+`test/Cpp/bin/HelloWorld` and can be executed in the terminal.

## Integration Tests

-You can also run the integration test from IntelliJ. You will find the targetTest and singleTest tasks in the Gradle tab under org.lflang \> Tasks \> other. Make sure to add a run configuration as shown above and append `-Ptarget=...'` to the `targetTest` command or `-DsingleTest=...` to your `singleTest` command to specify the target (e.g., `C`) or the specific test that you would like to run.
+You can also run the integration tests from IntelliJ. You will find the
+targetTest and singleTest tasks in the Gradle tab under
+org.lflang \> Tasks \> other. Make sure to add a run configuration as
+shown above and append `-Ptarget=...` to the `targetTest` command or
+`-DsingleTest=...` to your `singleTest` command to specify the target (e.g.,
+`C`) or the specific test that you would like to run.
diff --git a/docs/developer/downloading-and-building.mdx b/docs/developer/downloading-and-building.mdx
index b0e46d0a0..5d5478d0e 100644
--- a/docs/developer/downloading-and-building.mdx
+++ b/docs/developer/downloading-and-building.mdx
@@ -5,7 +5,8 @@ description: Setting up Lingua Franca for developers.

## Prerequisites

-- Java 17 ([download from Oracle](https://www.oracle.com/java/technologies/downloads/))
+- Java 17
+  ([download from Oracle](https://www.oracle.com/java/technologies/downloads/))

## Cloning the Repository

@@ -17,7 +18,9 @@ cd lingua-franca
git submodule update --init --recursive
```

-Submodules are checked out over HTTPS by default. 
In case you want to commit to a submodule and use SSH instead, you can simply change the remote. For example, to change the remote of the `reactor-c` submodule, you can do this: +Submodules are checked out over HTTPS by default. In case you want to commit to +a submodule and use SSH instead, you can simply change the remote. For example, +to change the remote of the `reactor-c` submodule, you can do this: ```sh cd core/src/main/resources/lib/c/reactor-c @@ -27,33 +30,41 @@ git remote add origin git@github.com:lf-lang/reactor-c.git ## Building the command line tools -We use [Gradle](https://docs.gradle.org/current/userguide/userguide.html) for building the code within our repository. +We use [Gradle](https://docs.gradle.org/current/userguide/userguide.html) for +building the code within our repository. -For an easy start, the `bin/` directory contains scripts for building and running our command line tools, including the compiler lfc. -Try to run `./bin/lfc-dev --version`. -This will first build `lfc` and then execute it through Gradle. +For an easy start, the `bin/` directory contains scripts for building and +running our command line tools, including the compiler lfc. Try to run +`./bin/lfc-dev --version`. This will first build `lfc` and then execute it +through Gradle. -To build the entire repository, you can simply run `./gradlew build`. -This will build all tools and also run all formatting checks and unit tests. -Note that this does not run our integration tests. -For more details on our testing infrastructure, please refer to the [Regression Test](../developer/regression-tests.mdx) section. +To build the entire repository, you can simply run `./gradlew build`. This will +build all tools and also run all formatting checks and unit tests. Note that +this does not run our integration tests. For more details on our testing +infrastructure, please refer to the +[Regression Test](../developer/regression-tests.mdx) section. 
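As a quick reference, the two entry points just described can be sketched as follows (both assume a checkout of the lingua-franca repository as the working directory):

```shell
# Build lfc from the latest sources, then run it (here: just print the version)
./bin/lfc-dev --version

# Build all tools and run the formatting checks and unit tests
# (integration tests are not part of this task)
./gradlew build
```

These are repository-bound command fragments rather than a standalone script.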
-If you only want to build without running any tests, you can use `./gradlew assemble` instead.
-Both the assemble and the build task will create a distribution package containing our command line tools in `build/distribution`.
+If you only want to build without running any tests, you can use
+`./gradlew assemble` instead. Both the assemble and the build task will create a
+distribution package containing our command line tools in `build/distribution`.
There is also an installed version of this package in `build/install/lf-cli/`.
-If you run `build/install/lf-cli/bin/lfc` this will run lfc as it was last build.
-Thus, you can choose if you want to use `bin/lfc-dev`, which first builds `lfc` using the latest code and then runs it, or if you prefer to run `./gradlew build` and then separately invoke `build/install/lf-cli/bin/lfc`.
+If you run `build/install/lf-cli/bin/lfc`, this will run `lfc` as it was last
+built. Thus, you can choose if you want to use `bin/lfc-dev`, which first builds
+`lfc` using the latest code and then runs it, or if you prefer to run
+`./gradlew build` and then separately invoke `build/install/lf-cli/bin/lfc`.

## IDE Integration

-You can use any editor or IDE that you like to work with the code base.
-However, we would suggest to choose an IDE that comes with good Java (and
-ideally Kotlin) support and that integrates well with Gradle.
-We recommend to use our [IntelliJ setup](../developer/developer-intellij-setup.mdx).
+You can use any editor or IDE that you like to work with the code base. However,
+we suggest choosing an IDE that comes with good Java (and ideally Kotlin)
+support and that integrates well with Gradle. We recommend using our
+[IntelliJ setup](../developer/developer-intellij-setup.mdx).

## Building IDEs

-Currently, we provide two IDEs that support Lingua Franca programs.
-Their source code is located in external repositories. 
-We have a [Lingua Franca extension](https://github.com/lf-lang/vscode-lingua-franca) for VS code and an Eclipse based IDE called [Epoch](https://github.com/lf-lang/epoch).
-Please refer to the READMEs for build instructions.
+Currently, we provide two IDEs that support Lingua Franca programs. Their source
+code is located in external repositories. We have a
+[Lingua Franca extension](https://github.com/lf-lang/vscode-lingua-franca) for
+VS Code and an Eclipse-based IDE called
+[Epoch](https://github.com/lf-lang/epoch). Please refer to the READMEs for build
+instructions.
diff --git a/docs/developer/regression-tests.mdx b/docs/developer/regression-tests.mdx
index 84e175146..e35fa8ac4 100644
--- a/docs/developer/regression-tests.mdx
+++ b/docs/developer/regression-tests.mdx
@@ -3,69 +3,117 @@ title: Regression Tests
description: Regression Tests for Lingua Franca.
---

-Lingua Franca comes with an extensive set of regression tests that are executed on various platforms automatically whenever an update is pushed to the LF repository. There are two categories of tests:
-
-- **Unit tests** are Java or Kotlin methods in our code base that are labeled with the `@Test` directive. These tests check individual functions of the code generation infrastructure. These are located in the `src/test` directory of each subroject within the repository.
-- **Integration tests** are complete Lingua Franca programs that are compiled and executed automatically. A test passes if it successfully compiles and runs to completion with normal termination (return code 0). These tests are located in the `test` directory at the root of the LF repo, with one subdirectory per target language.
-Their implementation can be found in the `core/src/integrationTest` directory.
-The integration tests are also executed through JUnit using methods with `@Test` directives, but they are executed separately. 
+Lingua Franca comes with an extensive set of regression tests that are executed
+on various platforms automatically whenever an update is pushed to the LF
+repository. There are two categories of tests:
+
+- **Unit tests** are Java or Kotlin methods in our code base that are labeled
+  with the `@Test` directive. These tests check individual functions of the code
+  generation infrastructure. These are located in the `src/test` directory of
+  each subproject within the repository.
+- **Integration tests** are complete Lingua Franca programs that are compiled
+  and executed automatically. A test passes if it successfully compiles and runs
+  to completion with normal termination (return code 0). These tests are located
+  in the `test` directory at the root of the LF repo, with one subdirectory per
+  target language. Their implementation can be found in the
+  `core/src/integrationTest` directory. The integration tests are also executed
+  through JUnit using methods with `@Test` directives, but they are executed
+  separately.

## Running the Tests From the Command Line

-To run all unit tests, simply run `./gradlew test`. Note that also the normal build tasks `./gradlew build` runs all the unit tests.
+To run all unit tests, simply run `./gradlew test`. Note that the normal build
+task `./gradlew build` also runs all the unit tests.

+The integration tests can be run using the `integrationTest` task. However,
+typically it is not desired to run all tests for all targets locally as it will
+need the right target tooling and will take a long time.

-The integration tests can be run using the `integrationTest` task. However, typically it is not desired to run all tests for all targets locally as it will need the right target tooling and will take a long time.

+To run only the integration tests for one target, we provide the `targetTest`
+gradle task. 
For instance, you can use the following command to run all Rust
+tests:

-To run only the integration tests for one target, we provide the `targetTest` gradle task. For instance, you can use the following command to run all Rust tests:
```
./gradlew targetTest -Ptarget=Rust
```

-You can specify any valid target. If you run the task without specifying the target property `./gradlew targetTest` it will produce an error message and list all available targets.
+You can specify any valid target. If you run the task without specifying the
+target property (`./gradlew targetTest`), it will produce an error message and
+list all available targets.

The `targetTest` task is essentially a convenient shortcut for the following:
+
```
./gradlew core:integrationTest --tests org.lflang.tests.runtime.Test.*
```

-If you prefer have more control over which tests are executed, you can also use this more verbose version.
-It is also possible to run a subset of the tests. For example, the C tests are organized into the following categories:
+If you prefer to have more control over which tests are executed, you can also
+use this more verbose version.
+
+It is also possible to run a subset of the tests. For example, the C tests are
+organized into the following categories:

-* **generic** tests are `.lf` files located in `$LF/test/C/src`.
-* **concurrent** tests are `.lf` files located in `$LF/test/C/src/concurrent`.
-* **federated** tests are `.lf` files located in `$LF/test/C/src/federated`.
-* **multiport** tests are `.lf` files located in `$LF/test/C/src/multiport`.
+- **generic** tests are `.lf` files located in `$LF/test/C/src`.
+- **concurrent** tests are `.lf` files located in `$LF/test/C/src/concurrent`.
+- **federated** tests are `.lf` files located in `$LF/test/C/src/federated`.
+- **multiport** tests are `.lf` files located in `$LF/test/C/src/multiport`. 
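In other words, the directory a test file lives in selects its category; a sketch of the layout (`Ping.lf` and `Relay.lf` are hypothetical file names):

```
test/C/src/Ping.lf                # generic
test/C/src/concurrent/Ping.lf     # concurrent
test/C/src/federated/Relay.lf     # federated
test/C/src/multiport/Relay.lf     # multiport
```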
To invoke only the C tests in the `concurrent` category, for example, run this:
+
```
./gradlew core:integrationTest --tests org.lflang.tests.runtime.CTest.runConcurrentTests
```

-Sometimes it is convenient to only run a single specific test case. This can be done with the `singleTest` task. For instance:
+Sometimes it is convenient to only run a single specific test case. This can be
+done with the `singleTest` task. For instance:
+
```
./gradlew singleTest -DsingleTest=test/C/src/Minimal.lf
```

## Reporting Bugs

-If you encounter a bug or add some enhancement to Lingua Franca, then you should create a regression test either as a system test or a unit test and issue a pull request. System tests are particularly easy to create since they are simply Lingua Franca programs that either compile and execute successfully (the test passes) or fail either to compile or execute.
+If you encounter a bug or add some enhancement to Lingua Franca, then you should
+create a regression test either as a system test or a unit test and issue a pull
+request. System tests are particularly easy to create since they are simply
+Lingua Franca programs that either compile and execute successfully (the test
+passes) or fail to compile or execute.

## Testing Architecture

-System tests can be put in any subdirectory of `$LF/test` or `$LF/example`.
-Any `.lf` file within these directories will be treated as a system test unless they are within a directory named `failing`, in which case they will be ignored.
-The tests are automatically indexed by our JUnit-based test infrastructure, which is located in the package `core/src/integrationTest`. Each target has its own class in the `runtime` package, with a number of test methods that correspond to particular test categories, such as `generic`, `concurrent`, `federated`, etc. A test can be associated with a particular category by placing it in a directory that matches its name. 
For instance, we can create a test (e.g., `Foo.lf`) in `test/C/src/concurrent`, which will then get indexed under the target `C` in the category `concurrent`. Files placed directly in `test/C/src` will be considered `generic` `C` tests, and a file in a directory `concurrent/federated` will be indexed as `federated` (corresponding to the nearest containing directory).

-**Caution**: adding a _new_ category requires updating an enum in `TestRegistry.java` and adding a `@Test`-labeled method to `TestBase`.
+System tests can be put in any subdirectory of `$LF/test` or `$LF/example`. Any
+`.lf` file within these directories will be treated as a system test unless it
+is within a directory named `failing`, in which case it will be ignored. The
+tests are automatically indexed by our JUnit-based test infrastructure, which is
+located in the package `core/src/integrationTest`. Each target has its own class
+in the `runtime` package, with a number of test methods that correspond to
+particular test categories, such as `generic`, `concurrent`, `federated`, etc. A
+test can be associated with a particular category by placing it in a directory
+that matches its name. For instance, we can create a test (e.g., `Foo.lf`) in
+`test/C/src/concurrent`, which will then get indexed under the target `C` in the
+category `concurrent`. Files placed directly in `test/C/src` will be considered
+`generic` `C` tests, and a file in a directory `concurrent/federated` will be
+indexed as `federated` (corresponding to the nearest containing directory).
+
+**Caution**: adding a _new_ category requires updating an enum in
+`TestRegistry.java` and adding a `@Test`-labeled method to `TestBase`.

### Known Failures

-Sometimes it is useful to retain tests that have a known failure that should be addressed at a later point. Such tests can simply be put in a directory called `failing`, which will tell our test indexing code to exclude it. 
+Sometimes it is useful to retain tests that have a known failure that should be
+addressed at a later point. Such tests can simply be put in a directory called
+`failing`, which will tell our test indexing code to exclude them.

### Test Output

-Tests are grouped by target and category. It is also reported when, for a given category, there are other targets that feature tests that are missing for the target under test. Tests that either do not have a main reactor or are marked as known failures are reported as "ignored." For all the tests that were successfully indexed, it is reported how many passed. For each failing test, diagnostics are reported that should help explain the failure. Here is some sample output for `Ctest.runConcurrentTests`, which runs tests categorized as `concurrent` for the `C` target:
+Tests are grouped by target and category. It is also reported when, for a given
+category, there are other targets that feature tests that are missing for the
+target under test. Tests that either do not have a main reactor or are marked as
+known failures are reported as "ignored." For all the tests that were
+successfully indexed, it is reported how many passed. For each failing test,
+diagnostics are reported that should help explain the failure. Here is some
+sample output for `CTest.runConcurrentTests`, which runs tests categorized as
+`concurrent` for the `C` target:

```
CTest > runConcurrentTests() STANDARD_OUT
@@ -97,11 +145,14 @@ CTest > runConcurrentTests() STANDARD_OUT
```

## Code Coverage

-Code coverage is automatically recorded when running tests.
-A combined report for each subproject can be created by running `./gradlew jacocoTestReport`.
-For the `core` subproject, the html report will be located in `build/reports/html/index.html`.
-Note that this report will only reflect the coverage of the test that have actually executed.
+Code coverage is automatically recorded when running tests. 
A combined report
+for each subproject can be created by running `./gradlew jacocoTestReport`. For
+the `core` subproject, the HTML report will be located in
+`build/reports/html/index.html`. Note that this report will only reflect the
+coverage of the tests that have actually executed.

## Continuous Integration

-Each push or pull request will trigger all tests to be run on GitHub Actions. It's configuration can be found [here](https://github.com/lf-lang/lingua-franca/tree/master/.github/workflows).
+Each push or pull request will trigger all tests to be run on GitHub Actions.
+Its configuration can be found
+[here](https://github.com/lf-lang/lingua-franca/tree/master/.github/workflows).
diff --git a/docs/developer/running-benchmarks.mdx b/docs/developer/running-benchmarks.mdx
index d43f2d97c..86c11ae83 100644
--- a/docs/developer/running-benchmarks.mdx
+++ b/docs/developer/running-benchmarks.mdx
@@ -5,16 +5,23 @@ description: Running Benchmarks.

# Running Benchmarks

-The LF repository contains a series of benchmarks in the `benchmark` directory. There is also a flexible benchmark runner that automates the process of running benchmarks for various settings and collecting results from those benchmarks. It is located in `benchmark/runner`.
-The runner is written in python and is based on [hydra](https://hydra.cc/docs/intro), a tool for dynamically creating hierarchical configurations by composition
+The LF repository contains a series of benchmarks in the `benchmark` directory.
+There is also a flexible benchmark runner that automates the process of running
+benchmarks for various settings and collecting results from those benchmarks. It
+is located in `benchmark/runner`. The runner is written in Python and is based
+on [hydra](https://hydra.cc/docs/intro), a tool for dynamically creating
+hierarchical configurations by composition.

## Prerequisites

### Install Python dependencies

-The benchmark runner is written in Python and requires a working Python3 installation. 
It also requires a few python packages to be installed. Namely, `hydra-core`, `cogapp` and `pandas`. +The benchmark runner is written in Python and requires a working Python3 +installation. It also requires a few python packages to be installed. Namely, +`hydra-core`, `cogapp` and `pandas`. -It is recommended to install the dependencies and execute the benchmark runner in a virtual environment. For instance, this can be done with `virtualenv`: +It is recommended to install the dependencies and execute the benchmark runner +in a virtual environment. For instance, this can be done with `virtualenv`: ```sh virtualenv ~/virtualenvs/lfrunner -p python3 @@ -29,7 +36,8 @@ pip install -r benchmark/runner/requirements.txt ### Compile lfc -For running LF benchmarks, the command-line compiler `lfc` needs to be built. Simply run +For running LF benchmarks, the command-line compiler `lfc` needs to be built. +Simply run ```sh bin/build-lfc @@ -37,7 +45,8 @@ bin/build-lfc in the root directory of the LF repository. -Also, the environment variable `LF_PATH` needs to be set and point to the location of the LF repository. This needs to be an absolute path. +Also, the environment variable `LF_PATH` needs to be set and point to the +location of the LF repository. This needs to be an absolute path. ```sh export LF_PATH=/path/to/lf @@ -45,7 +54,13 @@ export LF_PATH=/path/to/lf ### Setup Savina -Currently all of our benchmarks are ported from the [Savina actor benchmark suite](https://doi.org/10.1145/2687357.2687368). In order to compare our LF implementations with actor based implementation, the Savina benchmark suite needs to be downloaded and compiled. Note that we require a modified version of the Savina suite, that adds support for specifying the number of worker threads and that includes CAF implementations of most benchmarks. +Currently all of our benchmarks are ported from the +[Savina actor benchmark suite](https://doi.org/10.1145/2687357.2687368). 
In +order to compare our LF implementations with actor based implementation, the +Savina benchmark suite needs to be downloaded and compiled. Note that we require +a modified version of the Savina suite, that adds support for specifying the +number of worker threads and that includes CAF implementations of most +benchmarks. To download and build Savina, run the following commands: @@ -55,13 +70,17 @@ cd savina mvn install ``` -Building Savina requires a Java 8 JDK. Depending on the local setup, `JAVA_HOME` might need to be adjusted before running `mvn` in order to point to the correct JDK. +Building Savina requires a Java 8 JDK. Depending on the local setup, `JAVA_HOME` +might need to be adjusted before running `mvn` in order to point to the correct +JDK. ```sh export JAVA_HOME=/path/to/jdk8 ``` -Before invoking the benchmark runner, the environment variable `SAVINA_PATH` needs to be set and point to the location of the Savina repository using an absolute path. +Before invoking the benchmark runner, the environment variable `SAVINA_PATH` +needs to be set and point to the location of the Savina repository using an +absolute path. ```sh export SAVINA_PATH=/path/to/savina @@ -69,7 +88,8 @@ export SAVINA_PATH=/path/to/savina #### CAF -To further build the CAF benchmarks, CAF 0.16.5 needs to be downloaded, compiled and installed first: +To further build the CAF benchmarks, CAF 0.16.5 needs to be downloaded, compiled +and installed first: ```sh git clone --branch "0.16.5" git@github.com:actor-framework/actor-framework.git @@ -92,32 +112,50 @@ The CAF benchmarks are used in these two publications: ## Running a benchmark -A benchmark can simply be run by specifying a benchmark and a target. For instance +A benchmark can simply be run by specifying a benchmark and a target. For +instance ```sh cd benchmark/runner ./run_benchmark.py benchmark=savina_micro_pingpong target=lf-c ``` -runs the Ping Pong benchmark from the Savina suite using the C-target of LF. 
Currently, supported targets are `lf-c`, `lf-cpp`, `akka`, and `caf` where `akka` corresponds to the Akka implementation in the original Savina suite and `caf` corresponds to a implementation using the [C++ Actor Framework](https://www.actor-framework.org/) . +runs the Ping Pong benchmark from the Savina suite using the C-target of LF. +Currently, supported targets are `lf-c`, `lf-cpp`, `akka`, and `caf` where +`akka` corresponds to the Akka implementation in the original Savina suite and +`caf` corresponds to a implementation using the +[C++ Actor Framework](https://www.actor-framework.org/) . -The benchmarks can also be configured. The `threads` and `iterations` parameters apply to every benchmark and specify the number of worker threads as well as how many times the benchmark should be run. Most benchmarks allow additional parameters. For instance, the Ping Pong benchmark sends a configurable number of pings that be set via the `benchmark.params.messages` configuration key. Running the Akka version of the Ping Pong benchmark for 1000 messages, 1 thread and 12 iterations could be done like this: +The benchmarks can also be configured. The `threads` and `iterations` parameters +apply to every benchmark and specify the number of worker threads as well as how +many times the benchmark should be run. Most benchmarks allow additional +parameters. For instance, the Ping Pong benchmark sends a configurable number of +pings that be set via the `benchmark.params.messages` configuration key. Running +the Akka version of the Ping Pong benchmark for 1000 messages, 1 thread and 12 +iterations could be done like this: ```sh ./run_benchmark.py benchmark=savina_micro_pingpong target=akka threads=1 iterations=12 benchmark.params.messages=1000 ``` -Each benchmark run produces an output directory in the scheme `outputs//
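The override syntax used above (e.g. `benchmark.params.messages=1000`) follows hydra's dotted-key convention for addressing nested configuration values. As a rough sketch of what that syntax means, the following illustrates how dotted `key=value` overrides map onto a nested configuration. This is an illustration only, not hydra's actual implementation (hydra additionally handles config groups such as `benchmark=savina_micro_pingpong`, typing, and composition), and `apply_overrides` is a hypothetical helper:

```python
# Sketch: how hydra-style "a.b.c=value" overrides address a nested config.
# Hypothetical helper for illustration only; hydra's real override handling
# is far richer (config groups, type coercion, interpolation, ...).

def apply_overrides(config: dict, overrides: list[str]) -> dict:
    """Apply 'dotted.key=value' overrides to a nested config dict in place."""
    for override in overrides:
        key, _, value = override.partition("=")
        node = config
        # Walk (and create) intermediate dicts for all but the last key part.
        *parents, leaf = key.split(".")
        for part in parents:
            node = node.setdefault(part, {})
        node[leaf] = value  # values stay strings in this simplified sketch
    return config

# Mirrors the overrides from the example invocation above
# (minus the config-group selections, which work differently in hydra):
config = apply_overrides(
    {},
    ["threads=1", "iterations=12", "benchmark.params.messages=1000"],
)
print(config)
# -> {'threads': '1', 'iterations': '12',
#     'benchmark': {'params': {'messages': '1000'}}}
```

The takeaway is only that each dot descends one level in the configuration tree, which is why `benchmark.params.messages` can tune a benchmark-specific parameter while flat keys like `threads` apply globally.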