Our application is a bot (called SAF in the screenshots below) that warns users about potential predators while they engage in conversations in online chatrooms. Its warnings are based on predictions from an LSTM model trained on both dangerous and relatively normal conversations.
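Before a message reaches the LSTM, the project cleans it with NLTK and symspellpy. The exact pipeline is not shown in this README, so the sketch below is a minimal plain-Python stand-in: the function name and the specific steps (lowercasing, punctuation stripping, tokenizing) are assumptions about what such a pipeline typically does, not the project's actual code.

```python
import re
import string

def preprocess(text: str) -> list:
    """Lowercase, strip punctuation, and tokenize one chat message.

    A minimal stand-in for the repo's NLTK/symspellpy pipeline; the
    real project presumably also spell-corrects tokens before they
    reach the LSTM's tokenizer.
    """
    text = text.lower()
    # drop punctuation so "hey!!" and "hey" map to the same token
    text = text.translate(str.maketrans("", "", string.punctuation))
    # collapse whitespace and split into tokens
    return re.findall(r"\S+", text)
```

For example, `preprocess("Hey!! How are you?")` yields `["hey", "how", "are", "you"]`, which can then be mapped to integer indices for the model.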
Screenshots (below): Welcome Page, Potentially dangerous chats, Normal chats
Features:
- Detects potential predators in online chatrooms.
- Provides a chat interface with a bot that generates responses and predicts whether each message is dangerous or safe.
- Displays warning messages when texts are perverted or suspicious.
- Users can also supply their own text inputs, which the bot scores for how perverted they are.
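The warning behaviour above amounts to thresholding the model's danger score. The sketch below shows that decision step in isolation: the threshold value, function names, and warning text are all assumptions for illustration, and the LSTM itself is replaced by a probability assumed to be already computed.

```python
from typing import Optional

WARNING_THRESHOLD = 0.5  # assumed cut-off; the app's actual value may differ

def verdict(danger_score: float) -> str:
    """Map the model's danger probability to a dangerous/safe label."""
    return "dangerous" if danger_score >= WARNING_THRESHOLD else "safe"

def warning_message(danger_score: float) -> Optional[str]:
    """Return a warning string for suspicious messages, else None."""
    if verdict(danger_score) == "dangerous":
        return "Warning: this conversation looks suspicious."
    return None
```

In the Streamlit app, a non-None result from a function like `warning_message` would be surfaced to the user (e.g. via `st.warning`).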
Dependencies:
- TensorFlow
- Keras
- NumPy
- pandas
- pickle
- NLTK
- symspellpy
- Streamlit
The bot (application) has been built using Streamlit.

Limitations and future scope:
- The model's responses are currently random; they could instead be tailored to the user's messages and questions.
- The bot's predictions of whether a conversation is dangerous are accurate most of the time, but the model still needs fine-tuning.
- Each conversation between the bot and the user lasts only one turn; this could be extended to full multi-turn conversations.
- The bot could be packaged as a browser extension rather than a stand-alone application and deployed in real online chatrooms.
Contributors: Naman Garg | Pooja Ravi | Breenda Das | Sadhavi Thapa
Made with ❤️ by DS Community SRM