# RLC

**Rulebook** is a language for complex interactive subsystems (reinforcement learning environments, videogames, UIs with graph-like transitions, multistep procedures, ...).

Rulebook is compiled and statically checked. The key and innovative feature of the language is [Action functions with SPIN properties](https://rl-language.github.io/language_tour.html#action-functions), which help you to:

* [store, load, print, replay, and modify](https://rl-language.github.io/language_tour.html#spin-functions-implications) both execution traces and the program state
* **automatically test** your interactive code using off-the-shelf [fuzzers](https://rl-language.github.io/language_tour.html#automatic-testing), [proofs](https://rl-language.github.io/language_tour.html#finite-interactive-programs), and [reinforcement learning](https://rl-language.github.io/language_tour.html#reinforcement-learning)
* write [self-configuring UIs](./language_tour.html#self-configuring-uis), which can inspect the underlying program they present and configure themselves accordingly
* [automatically execute](./language_tour.html#remote-execution) interactive code remotely over the network
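
Rulebook generates the trace machinery above for you; purely as an illustration of the underlying idea (this is a hand-written sketch, not RLC-generated code, and every name in it is hypothetical), here is what replaying a stored action trace against a fresh program state looks like:

```python
# Hand-written sketch of trace replay (hypothetical names, not RLC output).
# A trace is just the ordered list of actions taken; replaying it against a
# fresh initial state deterministically reproduces the final state, which is
# what makes traces storable, printable, and modifiable after the fact.

def initial_state():
    # Toy interactive program: count guesses until 5 is guessed.
    return {"guesses": 0, "done": False}

def apply_action(state, action):
    # Each action mutates the state exactly as the live program would.
    name, value = action
    if name == "guess" and not state["done"]:
        state["guesses"] += 1
        if value == 5:
            state["done"] = True
    return state

def replay(trace):
    # Rebuild the final state from nothing but the stored trace.
    state = initial_state()
    for action in trace:
        apply_action(state, action)
    return state

stored_trace = [("guess", 3), ("guess", 7), ("guess", 5)]
final = replay(stored_trace)
print(final)  # {'guesses': 3, 'done': True}
```

Because the trace is plain data, the same mechanism supports fuzzing (feed random action lists) and modification (edit the list and replay).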

Rulebook:

* **aids**, rather than replaces, [C, C++, C#, Python, and Godot Script](./language_tour.html#compatibility) (just as SQL aids, rather than replaces, those languages)
* produces a single shared library (or WebAssembly when targeting the web) with a C-compatible ABI that you can embed in your software, wrapped in a generated file native to your language

Our key proof-of-concept example is [4Hammer](https://github.com/rl-language/4Hammer), a reinforcement learning environment of a kind never implemented before, with a huge number of user actions, in only ~5k lines of code (including graphics code). It runs in the browser and on desktop, and all the features described in this section are present.

### Installation
