A LangChain client to easily experiment with and benchmark Large Language Models on the argument relation prediction task


⚠️ This repository is part of an academic project at Heriot-Watt University; no third-party contributions are accepted.

liaisons-experiments

Overview

This repository provides a benchmarking framework to evaluate LLMs' capacity to predict argument relations at the micro-scale level (predicting only the relation between pairs of arguments). The framework is intended to be used with a preprocessed sample of a dataset from the IBM Debater project.
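To illustrate the task, here is a minimal sketch of how a pairwise relation-prediction prompt might be framed and its completion parsed. The prompt wording, function names, and label set are illustrative assumptions, not the repository's actual interface.

```python
# Hypothetical sketch of micro-scale argument relation prediction:
# given a pair of arguments, ask an LLM to label their relation.
# Prompt wording and label set are illustrative, not the repo's API.

PROMPT_TEMPLATE = (
    "Given the following pair of arguments, state whether the second "
    "argument supports, attacks, or is unrelated to the first.\n"
    "Argument 1: {parent}\n"
    "Argument 2: {child}\n"
    "Answer with one word: support, attack, or unrelated."
)

# Assumed label set for the benchmark (the actual dataset may differ).
LABELS = {"support", "attack", "unrelated"}

def build_prompt(parent: str, child: str) -> str:
    """Format the pairwise relation-prediction prompt for one argument pair."""
    return PROMPT_TEMPLATE.format(parent=parent, child=child)

def parse_label(raw_completion: str) -> str:
    """Normalize a raw model completion to one of the known labels.

    Falls back to "unrelated" when the completion is not a clean label.
    """
    token = raw_completion.strip().lower().rstrip(".")
    return token if token in LABELS else "unrelated"
```

A benchmarking loop would then send `build_prompt(...)` to a LangChain-wrapped model for each argument pair in the sample and compare `parse_label(...)` against the gold relation.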

Results

You can find my own results here.

About Contributions

As mentioned earlier, this work is part of an academic project for the validation of my Master's degree at Heriot-Watt University, which prevents me from accepting any contributions until the final release of the project. Thank you for your understanding.

Associated Works

This work is part of a collection of works whose ultimate goal is to deliver a framework that automatically analyzes social media content (e.g., X, Reddit) to extract its argumentative value and predict argument relations, leveraging the abilities of Large Language Models (LLMs).

About the Development Team

This project is solely conducted by me, Guilhem Santé. I am a postgraduate student pursuing the MSc in Artificial Intelligence at Heriot-Watt University in Edinburgh.

Special Thanks

I would like to credit Andrew Ireland, my supervisor for this project.
