
Commit db92906

Update README.md
1 parent f9a2e31 commit db92906

File tree

1 file changed: +3 -4 lines


README.md

```diff
@@ -1,11 +1,9 @@
-# Dynamic Probabilistic Inclusion of Literals for Concept Learning (DPCL)
+# Generalized Convergence Analysis of Tsetlin Machines: A Probabilistic Approach to Concept Learning
 
 [![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT)
 [![Python Version](https://img.shields.io/badge/python-3.11-blue.svg)](https://www.python.org/downloads/release/python-3110/)
 [![TensorFlow Version](https://img.shields.io/badge/tensorflow-2.13.0-brightgreen.svg)](https://www.tensorflow.org/)
 
-DPCL is an innovative Tsetlin Machine scheme that learns concepts through propositional formulas. It's efficient in various applications and demonstrates effectiveness compared with state-of-the-art classifiers.
-
 ## Table of Contents
 - [Introduction](#introduction)
 - [Installation](#installation)
@@ -14,7 +12,8 @@ DPCL is an innovative Tsetlin Machine scheme that learns concepts through propos
 
 ## Introduction
 
-Tsetlin Machine (TM) is a recent intriguing machine learning tool that learns concepts through propositional formulas. This repository contains the implementation of Dynamic Probabilistic inclusion of literals for Concept Learning (DPCL), a new Tsetlin Machine scheme with dedicated feedback tables and dynamic clause-dependent inclusion/exclusion probabilities.
+Tsetlin Machines (TMs) have garnered increasing interest for their ability to learn concepts via propositional formulas and their proven efficiency across various application domains. Despite this, the convergence proof for TMs, particularly for the AND operator (_conjunction_ of literals) in the generalized case (inputs of more than two bits), remains an open problem. This paper aims to fill this gap by presenting a comprehensive convergence analysis of Tsetlin automaton-based machine learning algorithms. We introduce a novel framework, referred to as Probabilistic Concept Learning (PCL), which simplifies the TM structure while incorporating dedicated feedback mechanisms and dedicated inclusion/exclusion probabilities for literals. Given $n$ features, PCL aims to learn a set of conjunction clauses $C_i$, each associated with a distinct inclusion probability $p_i$. Most importantly, we establish a theoretical proof confirming that, for any clause $C_k$, PCL converges to a conjunction of literals when $0.5 < p_k < 1$.
+This result serves as a stepping stone for future research on the convergence properties of Tsetlin automaton-based learning algorithms. Our findings not only contribute to the theoretical understanding of Tsetlin Machines but also have implications for their practical application, potentially leading to more robust and interpretable machine learning models.
 
 ## Installation
 
```
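The rewritten introduction claims that, given $n$ features, any PCL clause $C_k$ converges to a conjunction of literals whenever its inclusion probability satisfies $0.5 < p_k < 1$. The intuition behind that condition can be sketched with a toy biased-vote simulation in Python. Note that `clause_settles`, its tally mechanism, and all parameter names are hypothetical illustrations of the drift argument only, not the actual PCL feedback rule or the repository's implementation.

```python
import random

def clause_settles(n_features, p_k, steps=10_000, seed=0):
    """Toy sketch (NOT the real PCL update): each literal gathers a +1
    'include' vote with probability p_k and a -1 'exclude' vote otherwise.
    For p_k > 0.5 the expected drift per step is 2*p_k - 1 > 0, so every
    literal's tally ends positive with overwhelming probability, i.e. the
    clause settles on including all of its literals (a full conjunction)."""
    rng = random.Random(seed)
    tally = [0] * n_features
    for _ in range(steps):
        for i in range(n_features):
            tally[i] += 1 if rng.random() < p_k else -1
    return all(t > 0 for t in tally)

print(clause_settles(5, 0.8))  # drift toward inclusion -> True
print(clause_settles(5, 0.2))  # drift toward exclusion -> False
```

With `p_k = 0.8` the per-literal drift is about $6{,}000$ after $10{,}000$ steps against a standard deviation near $80$, which is why the positive-tally outcome is essentially certain; this mirrors, in miniature, why the paper's condition excludes $p_k \le 0.5$.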
