A Flask web application that analyzes X (formerly Twitter) user posts for toxicity using a machine learning model. No Twitter/X API is required: the app scrapes posts via Nitter instances and displays toxicity scores in a modern, X-like UI.
It also ships with a native window GUI version that does the same without opening a browser.
You can try the live demo of the app here: https://x-toxicity-detection.onrender.com
Demo video: `demo.mp4`
| Light | Dark |
|---|---|
| ![]() | ![]() |
| ![]() | ![]() |
A pie chart shows the percentage of toxic and non-toxic tweets. Open it with the "View Pie Chart" button located below the following and followers counts.
No API keys required! This app uses Nitter instances to fetch tweets.
```shell
git clone https://github.com/mantreshkhurana/x-toxicity-detection-flask.git
cd x-toxicity-detection-flask
python -m venv venv
source venv/bin/activate  # on Windows: venv\Scripts\activate
pip install -r requirements.txt
python app.py
```

Or, without a virtual environment:

```shell
git clone https://github.com/mantreshkhurana/x-toxicity-detection-flask.git
cd x-toxicity-detection-flask
pip install -r requirements.txt
python app.py
```

Navigate to http://127.0.0.1:5000/ in your web browser to use the app.
```shell
python app.py
```

Run the app in a window GUI:

```shell
python app.py --window
# or
python app.py -w
```

Use a custom port:

```shell
python app.py --port 8000
# or
python app.py -p 8000
```

- Search for an X user's recent tweets
- Dark/Light mode toggle
- View a pie chart for profile's toxicity ratio
- View user's profile picture, name, username, following and followers count
- View images in tweets
- View retweets and likes count for each tweet
- View the date and time of each tweet
- X-like feed layout
- Simple bot protection
- Native GUI window support
- No API keys required (uses Nitter scraping)
- Image/video toxicity detection
- Enter a Twitter/X username and the number of tweets to analyze
- The app scrapes tweets from available Nitter instances
- Each tweet is analyzed using a logistic regression model trained on hate speech data
- Tweets are displayed with color coding (green for non-toxic, red for toxic)
- An overall toxicity ratio is calculated and can be viewed as a pie chart
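The steps above can be sketched as a small pipeline. Note this is an illustrative sketch, not the project's actual code: `predict_toxic_proba` is a hypothetical stand-in for the trained model, and only the 65% threshold and the color-coding/ratio behavior come from the description above.

```python
def analyze_tweets(tweets, predict_toxic_proba, threshold=0.65):
    """Label each tweet and compute the overall toxicity ratio.

    `predict_toxic_proba` is any callable returning the probability
    that a piece of text is toxic (a stand-in for the real model).
    """
    results = []
    for text in tweets:
        p = predict_toxic_proba(text)
        results.append({
            "text": text,
            "toxic": p >= threshold,       # toxic tweets render red, others green
            "probability": round(p, 2),
        })
    toxic_count = sum(r["toxic"] for r in results)
    # Overall ratio feeds the pie chart (toxic vs. non-toxic share)
    ratio = toxic_count / len(results) if results else 0.0
    return results, ratio
```

Keeping the model behind a plain callable like this makes the scraping, scoring, and rendering steps easy to test independently.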
The toxicity detection model uses a CountVectorizer for text feature extraction and Logistic Regression for classification. A tweet is flagged as toxic if the model predicts a probability of 65% or higher.
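A minimal sketch of that setup, assuming scikit-learn: the toy texts and labels below stand in for the real training data (`models/hate_speech_model.csv`), and only the CountVectorizer + LogisticRegression pairing and the 65% threshold come from the description above.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Toy training data standing in for the real hate speech dataset
texts = [
    "you are awful and stupid",
    "have a great day friend",
    "I hate you so much",
    "lovely weather today",
]
labels = [1, 0, 1, 0]  # 1 = toxic, 0 = non-toxic

vectorizer = CountVectorizer()          # bag-of-words feature extraction
X = vectorizer.fit_transform(texts)
clf = LogisticRegression().fit(X, labels)

def is_toxic(tweet, threshold=0.65):
    """Flag a tweet as toxic if the predicted toxic probability meets the threshold."""
    proba = clf.predict_proba(vectorizer.transform([tweet]))[0][1]
    return bool(proba >= threshold)
```

Using `predict_proba` with an explicit 0.65 cutoff, rather than the default 0.5 decision boundary of `predict`, trades some recall for fewer false "toxic" flags.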
```
x-toxicity-detection-flask/
├── app.py                     # main flask application
├── models/
│   └── hate_speech_model.csv
├── static/
│   ├── css/
│   │   ├── style.css          # main stylesheet (imports modules)
│   │   ├── base.css           # reset and typography
│   │   ├── header.css         # header and navigation
│   │   ├── search.css         # search bar and bot protection
│   │   ├── profile.css        # profile card styles
│   │   ├── tweet.css          # tweet card styles (X-like UI)
│   │   └── components.css     # footer, modals, errors
│   ├── js/
│   │   └── script.js
│   └── images/
│       ├── favicon.ico
│       └── hate_speech.svg
├── templates/
│   ├── index.html
│   ├── results.html
│   └── error.html
├── images/
│   └── logo.png
├── assets/
│   └── screenshots/
├── .gitignore
├── README.md
└── requirements.txt
```

Contributions are welcome! You can contribute to this project by forking it and opening a pull request.
After forking:
```shell
git clone https://github.com/<your-username>/x-toxicity-detection-flask.git
cd x-toxicity-detection-flask
git checkout -b <your-branch-name>
# after adding your changes
git add .
git commit -m "your commit message"
git push origin <your-branch-name>
```





