⚠️ This repository is part of an academic project at Heriot-Watt University; no third-party contributions are accepted.

# liaisons-experiments

## Overview

This repository provides a benchmarking framework to evaluate LLMs' capacity to predict argument relations at the micro-scale level (guessing only the relation between pairs of arguments). The framework is intended to be used with a preprocessed sample of a dataset from the IBM Debater project.
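As a rough illustration of what micro-scale relation prediction involves, the sketch below formats a pair of arguments into a classification prompt and parses a model's free-text completion into a relation label. All names, labels, and prompt wording here are hypothetical and do not reflect the repository's actual API or the IBM Debater dataset's label set.

```python
from typing import Optional

# Illustrative relation labels; the actual dataset may use different ones.
RELATIONS = ("support", "attack")

def build_prompt(parent: str, child: str) -> str:
    """Format a pair of arguments into a relation-classification prompt."""
    return (
        "Given the following two arguments, does the second one "
        "support or attack the first?\n"
        f"Argument 1: {parent}\n"
        f"Argument 2: {child}\n"
        "Answer with a single word: support or attack."
    )

def parse_answer(completion: str) -> Optional[str]:
    """Extract the predicted relation from a raw model completion."""
    text = completion.strip().lower()
    for relation in RELATIONS:
        if relation in text:
            return relation
    return None  # unparseable answer

# Example usage with a mocked model completion:
prompt = build_prompt(
    "Remote work increases productivity.",
    "Studies show home distractions reduce output.",
)
print(parse_answer("Attack."))  # -> attack
```

Benchmarking then reduces to looping over labeled argument pairs, sending each prompt to the model under test, and comparing parsed predictions against gold labels.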

## Results

You can find my own results here.

## About Contributions

As mentioned above, this work is part of an academic project for the validation of my Master's degree at Heriot-Watt University, which prevents me from accepting any contributions until the final release of my project. Thank you for your understanding.

## Associated Works

This work is part of a collection of works whose ultimate goal is to deliver a framework that automatically analyzes social media content (e.g., X, Reddit), extracts its argumentative value, and predicts the relations between arguments, leveraging the abilities of Large Language Models (LLMs):

## About the Development Team

This project is solely conducted by me, Guilhem Santé. I am a postgraduate student pursuing the MSc in Artificial Intelligence at Heriot-Watt University in Edinburgh.

## Special Thanks

I would like to credit Andrew Ireland, my supervisor for this project.