[rationale, tutorials and documentation](https://rl-language.github.io)

**Rulebook** is a language for complex interactive subsystems: reinforcement learning environments, videogames, UIs with graph-like transitions, multistep procedures, and more. It lets you write such systems with the simplicity of synchronous code while keeping all the advantages, and none of the disadvantages, of asynchronous code.

Rulebook is compiled and statically checked. Its key innovative feature is [Action functions with SPIN properties](https://rl-language.github.io/language_tour.html#action-functions), which let you:

* [store, load, print, replay, and modify](https://rl-language.github.io/language_tour.html#spin-functions-implications) both execution traces and the program state
* **automatically test** your interactive code using off-the-shelf [fuzzers](https://rl-language.github.io/language_tour.html#automatic-testing), [proofs](https://rl-language.github.io/language_tour.html#finite-interactive-programs) and [reinforcement learning](https://rl-language.github.io/language_tour.html#reinforcement-learning)
* write [self-configuring UIs](https://rl-language.github.io//language_tour.html#self-configuring-uis), which can inspect the underlying program they present and configure themselves accordingly
* [automatically remote-execute](https://rl-language.github.io/language_tour.html#remote-execution) interactive code over the network

Rulebook:
* **aids**, rather than replaces, [C, C++, C#, Python, and Godot Script](https://rl-language.github.io/language_tour.html#compatibility) (just as SQL aids rather than replaces those languages)
* produces a single shared library (or WebAssembly when targeting the web) with the same ABI as C, which you can embed in your software, wrapped in a generated file native to your language (see the sketch below)
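
Because the output is an ordinary C-ABI shared library, any host language's dynamic-loading facilities can pick it up. The sketch below is only an illustration of that idea, not the documented integration path: the library name `lib.so` and the symbol name `rl_play` are hypothetical placeholders, and in practice the generated wrapper file for your language handles this plumbing for you.

```
/* Hypothetical sketch: loading a Rulebook-produced C-ABI shared library from C.
   "lib.so" and "rl_play" are placeholder names, not the actual artifacts rlc emits. */
#include <dlfcn.h>
#include <stdio.h>

int main(void) {
    void *lib = dlopen("./lib.so", RTLD_NOW);   /* open the compiled library */
    if (!lib) {
        fprintf(stderr, "dlopen failed: %s\n", dlerror());
        return 1;
    }
    /* Look up an exported entry point by name (placeholder symbol). */
    void (*play)(void) = (void (*)(void))dlsym(lib, "rl_play");
    if (play)
        play();                                 /* call into the Rulebook code */
    dlclose(lib);
    return 0;
}
```
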
Our key proof-of-concept example is [4Hammer](https://github.com/rl-language/4Hammer), a never-before-implemented reinforcement learning environment with a huge number of user actions, in only ~5k lines of code (including graphical code). It runs in the browser and on desktop, and it exercises all the features described in this section.

Rulebook performs zero mallocs unless the user explicitly requests them.

### Installation
Install rlc with:
```
pip install rl_language
```

If you don't want to use the off-the-shelf machine learning tools, you can instead install `rl_language_core`, which has no dependencies besides numpy.
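
The dependency-light install is the same one-liner, using the core package name mentioned above:

```
pip install rl_language_core
```
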

Create a file to check that everything is working, and fill it with the following content:
```
# file.rl

@classes
act play() -> Game:
    frm score = 0.0
    act win(Bool do_it)
    if do_it:
        score = 1.0
```
Then run with:
```
# On macOS only, run: export SDKROOT=$(xcrun --sdk macosx --show-sdk-path)
rlc-learn file.rl --steps-per-env 100 -o net # ctrl+c to interrupt after a while
rlc-probs file.rl net
```

It will learn to pass true to `win` in order to maximize `score`, as reported by the second command.

[Paper for Reinforcement Learning users](https://arxiv.org/abs/2504.19625)