Initial commit d785611 (0 parents): 5 changed files with 235 additions and 0 deletions.
.gitignore
@@ -0,0 +1 @@
venv/*
README.md
@@ -0,0 +1,52 @@
# LLM Security Chatbot

The LLM Security Chatbot is a tool designed to assist in understanding and researching cybersecurity vulnerabilities. Built on a local large language model served through `llama.cpp` (the included configuration loads a quantized Mistral 7B GGUF model) and wrapped in a user-friendly Streamlit interface, the chatbot uses natural language processing to provide in-depth analysis, actionable insights, and potential mitigation strategies for a wide range of security concerns.

## Features

- **Interactive Chat Interface**: Engage in conversational queries and receive detailed responses.
- **Code Snippet Support**: Get examples and explanations with formatted code snippets (formatting is still a work in progress) for technical understanding.
- **Conversation History**: Review past queries and responses directly within the application.
- **Exportable Conversations**: Easily export the conversation history for documentation or further analysis.

## Getting Started

To get started with the LLM Security Chatbot, follow these steps:

### Prerequisites

Ensure you have Python 3.9+ installed on your system. You will also need the `streamlit` and `llama-cpp-python` packages (the latter is imported as `llama_cpp`).
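
Both are pinned in `requirements.txt` (installed in the next section); if you only want the two core packages, a minimal sketch assuming a standard `pip` setup is:

```bash
# Install just the core dependencies; exact versions may differ from requirements.txt
pip install streamlit llama-cpp-python
```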

### Installation

1. Clone the repository:
   ```bash
   git clone https://github.com/jwalker/llm_security_chatbot.git
   ```
2. Navigate to the project directory:
   ```bash
   cd llm_security_chatbot/
   ```
3. Install the required Python packages (the command below uses `uv`; plain `pip install -r requirements.txt` works as well):
   ```bash
   uv pip install -r requirements.txt
   ```
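
The chatbot also needs a local GGUF model: `app.py` points `model_path` at `../llama.cpp/models/mistral-7B-v0.1/ggml-model-Q4_K_M.gguf`, so either place a quantized Mistral 7B model at that path or edit `model_path` to match your setup. A hedged sketch of fetching one with `huggingface-cli` (the Hugging Face repository and file names below are illustrative, not pinned by this project):

```bash
# Download a quantized Mistral 7B GGUF and rename it to the path app.py expects
huggingface-cli download TheBloke/Mistral-7B-v0.1-GGUF mistral-7b-v0.1.Q4_K_M.gguf \
  --local-dir ../llama.cpp/models/mistral-7B-v0.1/
mv ../llama.cpp/models/mistral-7B-v0.1/mistral-7b-v0.1.Q4_K_M.gguf \
   ../llama.cpp/models/mistral-7B-v0.1/ggml-model-Q4_K_M.gguf
```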

### Running the Application

Launch the chatbot by running the Streamlit application:

```bash
streamlit run app.py
```

Visit http://localhost:8501 in your web browser to start interacting with the chatbot.
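
If you need the app on a different port or reachable from other machines, Streamlit's standard server flags can be passed at launch; for example (the port and bind address here are just placeholders):

```bash
# Serve on an alternate port and listen on all interfaces
streamlit run app.py --server.port 8502 --server.address 0.0.0.0
```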

### Usage

Enter your cybersecurity-related queries in the text area and hit 'Submit' to receive a response. The chat interface allows for natural language questions and provides detailed explanations, including code examples when relevant.

### Contributing

Contributions are welcome! If you have suggestions for improvements or want to contribute to the development of the LLM Security Chatbot, please feel free to fork the repository and submit a pull request.

### Contact

If you have any questions or feedback, please reach out via email: [email protected]
app.py
@@ -0,0 +1,124 @@
import streamlit as st
from llama_cpp import Llama

# Initialize the Llama model
llm = Llama(
    model_path="../llama.cpp/models/mistral-7B-v0.1/ggml-model-Q4_K_M.gguf",
    n_ctx=4096,
    n_gpu_layers=-1,
    chat_format="chatml"
)

st.set_page_config(page_title="💬 Security ChatBot")
st.title('Security ChatBot - Ask me something')

if 'conversation_history' not in st.session_state:
    st.session_state['conversation_history'] = []
if 'debug_info' not in st.session_state:
    st.session_state['debug_info'] = {}

user_query = st.text_area("Enter your query related to cybersecurity research", '')

prompt_template = """
Q: {user_query}
Context: The user is seeking in-depth analysis and actionable insights on cybersecurity vulnerabilities.
Please provide detailed information, potential mitigation strategies, and reference relevant tools or resources.
A: """

def display_conversation():
    """
    Display the conversation history in the order the messages were exchanged.
    Format messages appropriately, including code blocks.
    """
    for exchange in st.session_state['conversation_history']:
        # Check the type of exchange to handle string-only history entries gracefully
        if isinstance(exchange, dict):
            message, sender = exchange['message'], exchange['sender']
        else:  # Fallback for string entries, assuming all older entries are from the user
            message, sender = exchange, 'User'

        # Format message as Markdown to properly display code blocks and maintain line breaks
        if sender == 'Agent' and '```' in message:
            # Code blocks are assumed to be enclosed in triple backticks; only the first block is kept
            st.markdown(f"**{sender}:**\n```{message.split('```')[1]}```", unsafe_allow_html=True)
        else:
            # unsafe_allow_html=True renders raw HTML from the message, which is not safe for untrusted model output
            st.markdown(f"**{sender}:**\n{message}", unsafe_allow_html=True)

def display_debug_info():
    if st.session_state['debug_info']:
        st.json(st.session_state['debug_info'])

def export_conversation_history():
    with open('conversation_history.txt', 'w') as file:
        # Iterate in reverse so the newest exchange is written first
        for exchange in reversed(st.session_state['conversation_history']):
            if isinstance(exchange, dict):
                line = f"{exchange['sender']}: {exchange['message']}\n"
            else:
                line = f"User: {exchange}\n"
            file.write(line)
    st.success('Conversation exported successfully!')

if st.button('Submit'):
    if user_query:
        st.session_state['conversation_history'].append({"sender": "Researcher", "message": user_query})
        with st.spinner('Analyzing your query...'):
            prompt = prompt_template.format(user_query=user_query)
            # echo=True returns the prompt together with the completion, which is why the answer is split out on 'A: ' below
            output = llm(prompt, max_tokens=2048, stop=["Q:"], echo=True)

            if 'choices' in output and len(output['choices']) > 0:
                raw_response = output['choices'][0]['text']
                user_friendly_response = raw_response.split('A: ')[-1].strip()

                st.session_state['conversation_history'].append({"sender": "Agent", "message": user_friendly_response})
                st.session_state['debug_info'] = output
            else:
                st.error("The model did not return a valid response. Please try again.")
    else:
        st.warning("Please enter a query.")

display_conversation()

# Sidebar for additional controls
with st.sidebar:
    # Note: this signed URL carries an expiry timestamp and will eventually stop resolving
    st.image('https://files.oaiusercontent.com/file-TAKOjgzaq5efA7OIusxh2Vkw?se=2024-03-20T06%3A55%3A23Z&sp=r&sv=2021-08-06&sr=b&rscc=max-age%3D31536000%2C%20immutable&rscd=attachment%3B%20filename%3Dbf88bd85-a472-465b-8707-3d315307bc9b.webp&sig=CuqICRvq4pHQ45NGIxADC5AcIjrzdutbcrrYLKA73AY%3D', width=100)  # Consider adding a logo or related visual
    st.markdown('## 🛠 Controls & Tools')
    st.markdown("""---""")

    # Reset Conversation Button
    col1, col2 = st.columns([1, 4])
    with col1:
        if st.button('🔄', help='Reset Conversation'):
            st.session_state.conversation_history = []
            st.session_state.debug_info = {}
    with col2:
        st.markdown("**Reset Conversation**")

    # Export Conversation History Button
    col1, col2 = st.columns([1, 4])
    with col1:
        if st.button('💾', help='Export Conversation History'):
            export_conversation_history()
    with col2:
        st.markdown("**Export Conversation History**")

    # Show Debug Information Button
    col1, col2 = st.columns([1, 4])
    with col1:
        if st.button('🐛', help='Show Debug Information'):
            display_debug_info()
    with col2:
        st.markdown("**Show Debug Information**")

    st.markdown("""---""")  # Horizontal line for visual separation

    # Link to Blog Post
    st.markdown('📖 [Blog post on setup and how the app was built](https://blog.stellersjay.pub)')

    st.markdown("---")  # Horizontal line for visual separation
    st.markdown("## 📬 Contact & Source Code")

    # GitHub Repo Link
    st.markdown("Check out the [GitHub repository](https://github.com/jwalker/llm_security_chatbot) for this project.")

    # Email Contact
    st.markdown("Feel free to reach out via email: [jwalker](mailto:[email protected])")
(The fifth file in this commit could not be rendered in the diff view; it is likely a binary asset.)
requirements.txt
@@ -0,0 +1,58 @@
altair==5.2.0
attrs==23.2.0
blinker==1.7.0
cachetools==5.3.3
certifi==2024.2.2
charset-normalizer==3.3.2
click==8.1.7
diskcache==5.6.3
einops==0.7.0
filelock==3.13.1
fsspec==2024.3.0
gguf==0.6.0
gitdb==4.0.11
gitpython==3.1.42
huggingface-hub==0.21.4
idna==3.6
jinja2==3.1.3
jsonschema==4.21.1
jsonschema-specifications==2023.12.1
llama-cpp-python==0.2.56
markdown-it-py==3.0.0
markupsafe==2.1.5
mdurl==0.1.2
mpmath==1.3.0
networkx==3.2.1
numpy==1.26.4
packaging==23.2
pandas==2.2.1
pillow==10.2.0
protobuf==4.25.3
pyarrow==15.0.2
pydeck==0.8.0
pygments==2.17.2
python-dateutil==2.9.0.post0
pytz==2024.1
pyyaml==6.0.1
referencing==0.34.0
regex==2023.12.25
requests==2.31.0
rich==13.7.1
rpds-py==0.18.0
safetensors==0.4.2
sentencepiece==0.1.99
six==1.16.0
smmap==5.0.1
streamlit==1.32.2
sympy==1.12
tenacity==8.2.3
tokenizers==0.15.2
toml==0.10.2
toolz==0.12.1
torch==2.1.2
tornado==6.4
tqdm==4.66.2
transformers==4.38.2
typing-extensions==4.10.0
tzdata==2024.1
urllib3==2.2.1