diff --git a/README.md b/README.md
index 03ec0a1..e1ec321 100644
--- a/README.md
+++ b/README.md
@@ -30,7 +30,7 @@ We provide [pre-built binaries/executables]() for various platforms, making it e
**Note for macOS and iOS users**: *Binaries are not provided due to platform restrictions. Please see the [Compiling on your own](docs/compiling.md) section.*
-**Note for Windows users**: *You may encounter a SmartScreen warning since the binaries aren't signed. Rest assured they are safely built via GitHub CI when downloaded directly from the [Releases section](https://github.com/1runeberg/confichat/releases). You can also view the [full build logs](https://github.com/1runeberg/confichat/actions/workflows/publish_release.yml). And of course you can [build from source](docs/compiling.md).*
+**Note for Windows users**: *You may encounter a SmartScreen warning since the binaries aren't signed. They are safely built via GitHub CI when downloaded directly from the [Releases section](https://github.com/1runeberg/confichat/releases). You can also view the [full build logs](https://github.com/1runeberg/confichat/actions/workflows/publish_release.yml). And of course you can [build from source](docs/compiling.md).*
❤️ If you find this app useful, consider sponsoring us in [GitHub Sponsors](https://github.com/sponsors/1runeberg) to help us secure necessary certificates and accounts for future binary distributions.
@@ -40,7 +40,10 @@ We provide [pre-built binaries/executables]() for various platforms, making it e
### 📖 2. Quick Start Guides
-Get started quickly with **ConfiChat** by following one of our [quick start guides](docs/quickstart.md) depending on whether you want to use local models, online models, or both.
+If you're completely new to offline LLMs, check out this easy [Three-Step guide to get started (including ConfiChat)](https://runeberg.medium.com/getting-started-with-local-ai-llms-in-three-easy-steps-bddebcf26570) - a no-coding, no-dependencies approach.
+
+You can also get started quickly with **ConfiChat** by following one of our [quick start guides](docs/quickstart.md), depending on whether you want to use local models, online models, or both.
+
diff --git a/confichat/.idea/workspace.xml b/confichat/.idea/workspace.xml
index 6b82833..413cb62 100644
--- a/confichat/.idea/workspace.xml
+++ b/confichat/.idea/workspace.xml
@@ -9,13 +9,7 @@
-
-
-
-
-
-
-
+
diff --git a/docs/quickstart.md b/docs/quickstart.md
index b1cfb15..69c8582 100644
--- a/docs/quickstart.md
+++ b/docs/quickstart.md
@@ -62,31 +62,15 @@ ollama pull llama3.1
This command will download the Llama 3.1 model to your local machine.
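+
+To confirm the model downloaded successfully, you can list your locally available models — an optional check using the standard Ollama CLI:
+
+```bash
+ollama list
+```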
-### 3. Set Up ConfiChat
+### 3. Run ConfiChat
-Next, download and set up the ConfiChat interface:
-
-- Clone the ConfiChat repository:
- ```bash
- git clone https://github.com/your-repository/ConfiChat.git
- cd ConfiChat
- ```
-
-- Install dependencies:
- ```bash
- flutter pub get
- ```
-
-- Run the application:
- ```bash
- flutter run
- ```
+Next, [download](https://github.com/1runeberg/confichat/releases) and run ConfiChat.
Now, you're ready to start using ConfiChat with your local Llama 3.1 model!
### Additional Resources
-For more detailed instructions and troubleshooting, please visit the [Ollama documentation](https://ollama.com/docs) and the [ConfiChat repository](https://github.com/your-repository/ConfiChat).
+For more detailed instructions and troubleshooting, please visit the [Ollama documentation](https://ollama.com/docs).
---
@@ -104,25 +88,11 @@ To use OpenAI with ConfiChat, you first need to obtain an API key:
Keep your API key secure and do not share it publicly.
-### 2. Set Up ConfiChat
+### 2. Run ConfiChat
-Next, download and set up the ConfiChat interface:
+Next, [download](https://github.com/1runeberg/confichat/releases) and run ConfiChat.
-- Clone the ConfiChat repository:
- ```bash
- git clone https://github.com/your-repository/ConfiChat.git
- cd ConfiChat
- ```
-
-- Install dependencies:
- ```bash
- flutter pub get
- ```
-
-- Run the application:
- ```bash
- flutter run
- ```
+Note: There may be a warning on first run, as the binaries are unsigned.
### 3. Configure ConfiChat with Your API Key
@@ -136,7 +106,7 @@ ConfiChat is now configured to use OpenAI for its language model capabilities!
### Additional Resources
-For more detailed instructions and troubleshooting, please visit the [OpenAI documentation](https://platform.openai.com/docs) and the [ConfiChat repository](https://github.com/your-repository/ConfiChat).
+For more detailed instructions and troubleshooting, please visit the [OpenAI documentation](https://platform.openai.com/docs).
---
@@ -152,9 +122,11 @@ Follow the instructions in the [Install Ollama](#1-install-ollama) section above
Follow the instructions in the [Download a Model](#2-download-a-model) section above to download the Llama 3.1 model.
-### 3. Set Up ConfiChat
+### 3. Run ConfiChat
-Follow the instructions in the [Set Up ConfiChat](#3-set-up-confichat) section above.
+[Download](https://github.com/1runeberg/confichat/releases) and run ConfiChat.
+
+Note: There may be a warning on first run, as the binaries are unsigned.
### 4. Get Your OpenAI API Key
@@ -198,9 +170,11 @@ llama-server -m /path/to/your/model --port 8080
This command will start the LlamaCpp server, which ConfiChat can connect to for processing language model queries.
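+
+Optionally, you can verify the server is reachable before launching ConfiChat. Recent llama.cpp builds expose a `/health` endpoint (check your version; adjust the port if you changed it above):
+
+```bash
+curl http://localhost:8080/health
+```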
-### 3. Set Up ConfiChat
+### 3. Run ConfiChat
+
+[Download](https://github.com/1runeberg/confichat/releases) and run ConfiChat.
-Follow the instructions in the [Set Up ConfiChat](#3-set-up-confichat) section above.
+Note: There may be a warning on first run, as the binaries are unsigned.
### Additional Resources