The Toxicity Detector is an AI-powered tool for detecting and classifying toxic content, including fake news and biased statements. It relies on a fine-tuned RoBERTa transformer model to identify and categorize toxic sentences.
- FastAPI-based API: Provides endpoints for making toxicity classification predictions using the trained RoBERTa model.
- Transformer-based Model: Utilizes RoBERTa, a state-of-the-art transformer architecture, for accurate and reliable toxicity detection (a minimal inference sketch follows this list).
- Streamlit User Interface: Offers a user-friendly interface for interacting with the toxicity detector, allowing users to input sentences and receive toxicity predictions in real-time.
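As a rough illustration of the model component described above, the snippet below shows how a fine-tuned RoBERTa sequence-classification checkpoint could be loaded and queried with the Hugging Face Transformers library. The checkpoint path (`./model`) is an assumption for illustration and is not taken from this repository.

```python
# Minimal sketch: loading a fine-tuned RoBERTa classifier and scoring one sentence.
# The checkpoint path is an illustrative assumption, not a file from this repo.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

MODEL_DIR = "./model"  # hypothetical path to the fine-tuned RoBERTa weights

tokenizer = AutoTokenizer.from_pretrained(MODEL_DIR)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_DIR)
model.eval()

def classify(sentence: str) -> str:
    """Return the predicted toxicity label for a single sentence."""
    inputs = tokenizer(sentence, return_tensors="pt", truncation=True)
    with torch.no_grad():
        logits = model(**inputs).logits
    predicted_id = int(logits.argmax(dim=-1))
    # id2label is read from the model config; values depend on how it was fine-tuned.
    return model.config.id2label[predicted_id]

if __name__ == "__main__":
    print(classify("This is an example sentence."))
```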
- Clone the repository:

  ```bash
  git clone https://github.com/your-username/toxicity-detector.git
  ```
- Install the dependencies:

  ```bash
  pip install -r requirements.txt
  ```
- Run the FastAPI server (a minimal sketch of the app is shown after these steps):

  ```bash
  uvicorn fast:app --host 0.0.0.0 --port 8000
  ```
- Run the Streamlit user interface (see the UI sketch after these steps):

  ```bash
  streamlit run streamlit.py
  ```
- Open your browser and visit http://localhost:8501 to access the Toxicity Detector user interface.
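The `uvicorn fast:app` command above expects a FastAPI application object named `app` in `fast.py`. The repository's actual route names and request schema are not reproduced here, so the following is only a minimal sketch of what such a prediction service could look like; the `/predict` route, the `PredictionRequest` model, and the checkpoint path are illustrative assumptions.

```python
# fast.py -- minimal sketch of a FastAPI prediction service (illustrative; the
# repository's actual routes and schema may differ).
import torch
from fastapi import FastAPI
from pydantic import BaseModel
from transformers import AutoModelForSequenceClassification, AutoTokenizer

MODEL_DIR = "./model"  # hypothetical path to the fine-tuned RoBERTa checkpoint
tokenizer = AutoTokenizer.from_pretrained(MODEL_DIR)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_DIR)
model.eval()

app = FastAPI(title="Toxicity Detector")

class PredictionRequest(BaseModel):
    sentence: str

@app.post("/predict")
def predict(request: PredictionRequest) -> dict:
    """Classify a single sentence and return the predicted label with its score."""
    inputs = tokenizer(request.sentence, return_tensors="pt", truncation=True)
    with torch.no_grad():
        probs = model(**inputs).logits.softmax(dim=-1)[0]
    predicted_id = int(probs.argmax())
    return {
        "label": model.config.id2label[predicted_id],
        "score": float(probs[predicted_id]),
    }
```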
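Similarly, `streamlit run streamlit.py` expects a Streamlit script. The sketch below shows one way the dropdown-and-predict flow described in the usage steps could call the FastAPI service; the example sentences and the `/predict` endpoint are assumptions carried over from the sketch above, not the repository's actual code.

```python
# streamlit.py -- minimal sketch of the Streamlit front end (illustrative; the real
# script's sentences, widgets, and endpoint may differ).
import requests
import streamlit as st

API_URL = "http://localhost:8000/predict"  # assumed endpoint from the FastAPI sketch

# Hypothetical example sentences offered in the dropdown.
SENTENCES = [
    "The election results were completely fabricated.",
    "Everyone from that city is dishonest.",
    "The new library opens next week.",
]

st.title("Toxicity Detector")
sentence = st.selectbox("Select a sentence", SENTENCES)

if st.button("Predict"):
    response = requests.post(API_URL, json={"sentence": sentence}, timeout=30)
    response.raise_for_status()
    result = response.json()
    st.write(f"Predicted category: {result['label']} (score: {result['score']:.2f})")
```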
- Use the Streamlit user interface to select a sentence from the dropdown menu.
- Click the "Predict" button to obtain the toxicity classification prediction for the selected sentence.
- The user interface will display the predicted toxicity category, such as "No Bias," "Hate Speech," "Fake News," "Political Bias," "Racial Bias," or "Gender Bias" (the sketch below illustrates how such labels can be attached to a model).
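How the six category names map onto the model's output indices depends on how the checkpoint was fine-tuned; the snippet below is only an assumed example of wiring such labels into a RoBERTa classification head via the Transformers `id2label`/`label2id` config options, not the configuration used in this repository.

```python
# Illustrative only: one way the six categories could be attached to a RoBERTa
# classification head at fine-tuning time. The index order is an assumption.
from transformers import AutoModelForSequenceClassification

LABELS = ["No Bias", "Hate Speech", "Fake News", "Political Bias", "Racial Bias", "Gender Bias"]
id2label = {i: label for i, label in enumerate(LABELS)}
label2id = {label: i for i, label in id2label.items()}

model = AutoModelForSequenceClassification.from_pretrained(
    "roberta-base",
    num_labels=len(LABELS),
    id2label=id2label,
    label2id=label2id,
)
# After fine-tuning, model.config.id2label lets the API return readable category names.
```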
Contributions to the Toxicity Detector project are welcome! If you encounter any issues or have suggestions for improvements, please feel free to open an issue or submit a pull request. Remember to follow the project's code of conduct.
This project is licensed under the MIT License.
We would like to express our gratitude to the developers and contributors of the FastAPI, Hugging Face Transformers, and Streamlit libraries, whose work has made this project possible.
For questions or inquiries, please contact Ancastal.