- Project Description
- Framework structure
- Project Structure
- Clone the repo
- Requirements
- Compiling
- Running
- References
- Known issues
- Future works
- Contributors
The goal of this project is to create a general multi-agent planning framework that leverages LLM capabilities to populate a Prolog knowledge base.
Briefly, the main folders are:
- `llm_kb_gen`: includes the code to call the LLM to generate both the high-level and low-level KBs.
- `prolog_planner`: contains the code for planning with Prolog, both the total-order planning and the achievers and resource extraction.
- `python_interface`: includes the Python code to:
  - automatically call the Prolog planner,
  - set up and solve the MILP problem,
  - generate the behaviour tree to be executed.
- `behaviour_tree`: contains the C++ and ROS2 code for behaviour tree construction and communication. It exploits BehaviourTree.CPP.
- `prolog_project`: includes the ROS2 nodes (motion node and planner node):
  - `scripts`: contains the two nodes plus the utilities
  - `msg`: contains the `.msg` file for ROS communication
The repo uses external libraries for some modules, so you must either clone it with all its submodules:
$ git clone --recurse-submodules [email protected]:idra-lab/PLANTOR.git
or initialize the submodules at a later time:
$ git clone [email protected]:idra-lab/PLANTOR.git
$ git submodule update --init --recursive --remote
Install the SWI-Prolog interpreter.
# Refresh your local package index first:
$ sudo apt update
# Then install Node.js:
$ sudo apt install nodejs
# Verify the installation by checking the Node.js version:
$ node -v
# Navigate to the 'llm_ui' directory and install necessary node packages:
$ cd llm_ui/
$ npm install
There are a number of Python dependencies to be installed before being able to run the code. You can do so by running:
$ python3 -m pip install -r requirements.txt
SUGGESTION: before installing the requirements, install the `virtualenv` package and create a virtual environment:
$ python3 -m pip install -U virtualenv
$ virtualenv venv
$ source venv/bin/activate
The top-level requirements.txt file references the requirements.txt files of the individual modules via relative paths; alternatively, install the packages for each module manually.
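For reference, a top-level requirements file that pulls in per-module requirement files uses pip's `-r` include syntax, which is resolved relative to the including file. The exact file names below are illustrative, not necessarily the repo's actual layout:

```text
# Top-level requirements.txt (sketch): pip resolves -r includes
# relative to the directory containing this file.
-r llm_kb_gen/requirements.txt
-r python_interface/requirements.txt
```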
To compile and execute the behaviour tree part, a C++ development environment is required, together with a working ROS2 installation. The framework was tested with ROS2 Humble, which can be installed by following the official guide for Ubuntu 22.04 (native), or by using RobotStack on almost any other operating system (it also works on Macs with M-series CPUs).
The other needed dependencies are BehaviourTree.CPP and BehaviourTree.ROS2, which should have been downloaded as submodules in the Clone the repo section.
If you plan to use ROS2 and not to compile behaviour_tree manually, you can skip this paragraph. The framework uses BehaviourTree.CPP and BehaviourTree.ROS2 to execute and monitor the BTs. If you are compiling with cmake instead of colcon, please visit the BehaviourTree.CPP website to install the required dependencies (i.e., GTest, ZeroMQ and SQLite). The tree nodes must also be changed accordingly, since the default ones publish messages to ROS topics.
Running the framework requires working behaviour trees, and building them requires a working ROS2 environment, so please verify that yours is set up before proceeding. Once you are sure it is, enter the behaviour_tree directory and run colcon:
$ cd behaviour_tree
$ colcon build && source install/setup.bash
This should automatically compile the BehaviourTree.{CPP,ROS2} packages as well as the behaviour_tree package used by the framework.
It is possible to run the whole framework at once (next section) or its single components (the following sections).
To start the server, use the following command:
$ cd server/
$ python server.py
To run the web user interface:
$ cd llm_ui/
$ npm start
To run the whole framework you just need to run the Python script in the main directory.
$ python3 run_framework.py
YAML to JSONL Conversion for GPT Fine-Tuning
Conversation data between a user and an assistant is stored in YAML files. YAML is a human-readable, structured format, but GPT models expect fine-tuning data in the JSONL format, so the YAML files must be converted to JSONL before they can be used to fine-tune the model.
Execution: Run the converter script, specifying the input YAML file(s) and desired output directory:
python3 dataset_generator.py -y <path_to_yaml_file_1> <path_to_yaml_file_2> <path_to_yaml_file_3>
To shuffle the data during conversion:
python3 dataset_generator.py -y <path_to_yaml_file_1> <path_to_yaml_file_2> <path_to_yaml_file_3> -s true
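The conversion can be sketched as follows. This is an illustration of the idea, not the actual `dataset_generator.py` logic: each Q/A pair from the parsed YAML (load real files with PyYAML) becomes one JSON object per line, carrying the system message plus the exchange, with optional shuffling as per the `-s` flag.

```python
import json
import random

def convo_to_jsonl_lines(entries, shuffle=False, seed=0):
    """Turn parsed YAML conversation entries into JSONL lines.

    `entries` follows the repo's YAML schema: a `system_msg` plus a
    `convo` mapping of Q/A pairs, each with `role` and `content`.
    Sketch only; the real dataset_generator.py may differ.
    """
    lines = []
    system = entries["system_msg"]
    for _, pair in sorted(entries["convo"].items()):
        messages = [
            {"role": system["role"], "content": system["content"]},
            {"role": pair["Q"]["role"], "content": pair["Q"]["content"]},
            {"role": pair["A"]["role"], "content": pair["A"]["content"]},
        ]
        lines.append(json.dumps({"messages": messages}))
    if shuffle:  # corresponds to the -s flag
        random.Random(seed).shuffle(lines)
    return lines

# Example with an already-parsed (hypothetical) YAML dict:
entries = {
    "system_msg": {"role": "system", "content": "You generate Prolog KBs."},
    "convo": {
        0: {"Q": {"role": "user", "content": "block on table?"},
            "A": {"role": "assistant", "content": "on(b1, table)."}},
    },
}
print(convo_to_jsonl_lines(entries)[0])
```

Each output line is a complete `messages` array, which is the shape chat fine-tuning pipelines generally expect.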
You can run the knowledge creation by calling the Python script gpt_convo.py. It uses few-shot learning to teach the LLM how to respond. The examples are in the few-shots.yaml file, but other files can be added with the -y/--yaml-files argument:
python3 llm_kb_gen/gpt_convo.py -y <path_to_yaml_file_1> <path_to_yaml_file_2> <path_to_yaml_file_3>
If no YAML file is passed, the default one is used.
Notice that the structure of the YAML file should be:
entries:
system_msg:
role:
content:
convo:
0:
Q:
role:
content:
A:
role:
content:
1:
Q:
role:
content:
A:
role:
content:
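To illustrate how few-shot examples in this schema can be assembled into a chat prompt (a sketch of the idea, not the actual gpt_convo.py implementation): the system message comes first, then each stored Q/A pair in order, and finally the user's new query.

```python
def build_few_shot_prompt(entries, user_query):
    """Assemble a chat-completion message list from the YAML schema above.

    Sketch only: the real gpt_convo.py may build its prompt differently.
    """
    messages = [dict(entries["system_msg"])]
    for _, pair in sorted(entries["convo"].items()):
        messages.append(dict(pair["Q"]))   # example question
        messages.append(dict(pair["A"]))   # example answer
    messages.append({"role": "user", "content": user_query})
    return messages

# Hypothetical parsed YAML content following the schema:
entries = {
    "system_msg": {"role": "system",
                   "content": "You translate scenes into Prolog facts."},
    "convo": {
        0: {"Q": {"role": "user",
                  "content": "A red block b1 sits on the table."},
            "A": {"role": "assistant",
                  "content": "block(b1). on(b1, table)."}},
    },
}
msgs = build_few_shot_prompt(entries, "A blue block b2 sits on b1.")
print(len(msgs))  # system + one Q/A pair + new query = 4 messages
```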
You can run a series of tests.
@misc{saccon2023prolog,
title={When Prolog meets generative models: a new approach for managing knowledge and planning in robotic applications},
author={Enrico Saccon and Ahmet Tikna and Davide De Martini and Edoardo Lamon and Marco Roveri and Luigi Palopoli},
year={2023},
eprint={2309.15049},
archivePrefix={arXiv},
primaryClass={cs.RO}
}
- LLM fine-tuning is not currently working
- Get the blocks' info with machine-learning methods (e.g., neuro ProbLog)
- Optimize the makespan by selecting the blocks that are faster to build
- Enrico Saccon: [email protected]
- Ahmet Tikna: [email protected]
- Syed Ali Usama: [email protected]
- Davide De Martini: [email protected]
- Edoardo Lamon: [email protected]
- Marco Roveri: [email protected]
- Luigi Palopoli: [email protected]