Discover the power of free and open-source coding assistants that 10x your productivity, refine your ideas, and help you better understand complex code.
Imagine this: writing lines of code feels like a conversation with a seasoned developer, one who anticipates your next move, suggests efficient solutions, and catches errors before they happen. This is the power of coding assistants, a new breed of AI tools transforming the software development landscape.
For years, developers have relied on traditional tools like syntax highlighting and autocompletion. But these were mere assistants, holding the flashlight. AI coding assistants take the wheel and propel your development forward.
Gone are the days of spending hours searching for the right functions or battling syntax errors. With AI by your side, you can write code faster, code smarter, learn quicker, debug more easily, and, my favorite, refine your ideas like never before.
However, one issue hinders most of us from fully using these tools: the best ones are behind paywalls, and others require a good internet connection. We will explore options that let you use these tools for free, whether you are offline or online.
Let's get to it!
The once solitary world of coding is undergoing a transformative shift. AI-powered assistants are emerging as invaluable partners, poised to revolutionize the way developers work, not replace them. But of course, if you don't know how to harness their power, you will certainly be left behind while others 10x their productivity. These tools offer a tantalizing array of benefits: faster coding, easier debugging, quicker learning, and sharper ideas.
However, with great power comes great responsibility (as they say). Ethical considerations and limitations are also crucial factors to consider:
⚠️ Ethical Concerns: The use of AI-generated code raises questions about ownership and responsible development. Are developers relying too heavily on assistants, hindering their own learning? Moreover, potential biases within the underlying data could lead to discriminatory or unfair outputs. As this technology evolves, navigating these ethical concerns will be crucial.
⚠️ Limitations and Transparency: Despite their impressive capabilities, AI assistants are not perfect. They can sometimes generate incorrect or suboptimal code, and their suggestions may not always align with your specific needs. It's important to remember that these tools are not replacements for your own expertise and judgment; they are best used as powerful companions, not autonomous developers.
By understanding the potential benefits and limitations of coding assistants, you can harness their power to supercharge your productivity, improve your code quality, and accelerate your learning journey. Remember, AI is here to enhance your skills, not replace them. So, embrace the future of coding and let your creativity take flight, guided by your trusty AI co-pilot.
Now let's discover these amazing coding assistants.
GitHub Copilot is a cloud-based AI coding assistant or pair programmer developed by GitHub and OpenAI to assist users of Visual Studio Code, Visual Studio, Neovim, and JetBrains integrated development environments (IDEs) through features such as autocompletion, code generation, documentation lookup, chat and much more. However, there is no free tier!
https://github.com/features/copilot
⚠️ It's not fun to discover paid tools, so I will leave it at that with GitHub Copilot. Please feel free to check out their service; it is an amazing tool, too bad it's not free.
Codeium is an AI-powered coding assistant that offers most of GitHub Copilot's features, such as code completion, test generation, refactoring suggestions, chat, and much more. The best part... Codeium is FREE forever for individuals. You just sign up, install the extension in your favorite editor, and start coding with your new AI pair programmer.
To get started, open your favorite code editor (it must be supported by Codeium); for this demo I will use VSCode. Go to the Extensions tab, search for "Codeium", and once you find it, click Install. When it finishes, a "Codeium Login" button will appear on the bottom toolbar; click it to log in. A dialog could also open with a login button. Then click Allow and Open when asked by VSCode.
Now, when the login page opens, create an account or use your Google account to sign up. When it finishes, it will ask for permission to open VSCode; click Allow.
⚠️ Want an in-depth setup process? Check out
That's it, you can now start using Codeium totally for free! There are several ways you can use Codeium, but the main ones are:
Autocompletion - automatically generated as you type
In-line code refactoring - Command + I on Mac or Alt + I on Windows/Linux
Separate chat - found on the left sidebar ({...} icon)
Explain code
Generate documentation
Unit test generation
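To give you a feel for the autocompletion, here is a purely illustrative Python snippet of my own (the fibonacci function is just an example, not something from Codeium's docs): typing a short comment or a function signature is often enough for the assistant to suggest the rest, which you accept with Tab.
# Illustrative example: after typing the comment and the signature below,
# Codeium will typically suggest a body along these lines; press Tab to accept.

# return the n-th Fibonacci number
def fibonacci(n: int) -> int:
    if n < 2:
        return n
    return fibonacci(n - 1) + fibonacci(n - 2)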
⚠️ Before we move on to the next coding assistants, we need to install a tool that all of them use to run an LLM locally. Let me introduce you to Ollama.
Ollama is a phenomenal tool for running LLMs locally. It's lightweight, fast, and works on all platforms (though the ease of installation differs).
Installation
Installation is very straightforward. Windows and Mac users can directly download and install Ollama, and Linux users are provided with a "one line" command (https://ollama.com/download).
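⚠️ At the time of writing, the Linux one-liner on the download page looks like the command below; check the page for the current version before running it.
curl -fsSL https://ollama.com/install.sh | sh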
⚠️ After installing Ollama, its icon will appear in the Windows taskbar and in the Mac menu bar.
To check that Ollama is correctly installed, open the terminal and type:
ollama
This will list the available commands.
⚠️ You can also visit http://127.0.0.1:11434/ to check if Ollama is running.
To run Ollama you can use the command:
ollama serve
If it is already running, it will tell you which port it is using; if not, this command will start Ollama.
You can visit the Ollama library to discover supported models that you can run using Ollama. Once you find one, the instructions to install it are provided in the model's detail section.
We will be using two models with the tools that we are going to discuss below. Both models are from DeepSeek.
ollama run deepseek-coder:1.3b-base-q4_0
ollama run deepseek-coder:6.7b
⚠️ Note: if Ollama is not running, execute ollama serve
⚠️ No matter which operating system you are using (Mac, Linux or Windows), the command used to run LLMs is the same. You can run the commands above and the models will be downloaded and ready for use.
⚠️ Note: you can use any supported model with these tools. However, larger models will be slow but more accurate, while small models will be fast but might hallucinate more, so it is a trade-off. Here is my suggestion: for autocompletion use small models (like 1B), and for chat use larger models (like 7B, 13B, or more if you have the hardware).
Continue
This is an amazing project that can generate, refactor, and explain entire sections of code. It uses natural language to refactor and edit code, and it can generate files from scratch. It works with a huge number of LLMs that run locally as well as via service providers. Here, we will connect it to the Ollama instance that we installed earlier.
Currently, continue.dev supports VSCode and JetBrains IDEs. Here we will use VSCode to demo this coding assistant. Open VSCode and search for Continue in the Extensions tab.
After you install it, there will be an icon similar to the one below; when you click it, the chat interface will open. This is one of the two ways you can interact with Continue.
However, before we start using it we need to set up our LLM. Click the plus icon at the bottom left and then click on the Ollama option.
Then scroll down to the bottom, click "Open config.json", and add the following configuration to the config.json file. This is one of the models we downloaded earlier.
{
  "title": "DeepSeek-7b",
  "model": "deepseek-coder:6.7b",
  "completionOptions": {},
  "apiBase": "http://localhost:11434",
  "provider": "ollama"
}
⚠️ Note: the model name must be exactly the same as the one you downloaded; the apiBase and provider are always the same for Ollama no matter what model you are using.
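⚠️ If your version of Continue supports a separate tab-autocomplete model, you can also point it at the smaller model we pulled earlier. The exact key names can change between releases, so double-check Continue's documentation; the entry looks roughly like this:
"tabAutocompleteModel": {
  "title": "DeepSeek-1.3b",
  "provider": "ollama",
  "model": "deepseek-coder:1.3b-base-q4_0"
}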
Then save and close it. After that, select the "DeepSeek-7b" model from the drop-down list at the bottom left. And that's it, you can now start to use Continue.
Let's try out the chat first. Here, I asked it to write a simple FastAPI server with all CRUD operations. It will even give you instructions on how to run it, very nice!
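The exact output depends on the model and the prompt, but the kind of CRUD server it produces looks roughly like the sketch below. This is my own illustration, not the model's verbatim output; the Item fields and the in-memory store are assumptions of mine.
# Illustrative sketch of a FastAPI CRUD server, similar to what the chat can generate.
# Run with: uvicorn main:app --reload   (pip install fastapi uvicorn)
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

app = FastAPI()

class Item(BaseModel):
    name: str
    price: float

# Simple in-memory "database"
items: dict[int, Item] = {}

@app.post("/items/{item_id}")
def create_item(item_id: int, item: Item):
    if item_id in items:
        raise HTTPException(status_code=400, detail="Item already exists")
    items[item_id] = item
    return item

@app.get("/items/{item_id}")
def read_item(item_id: int):
    if item_id not in items:
        raise HTTPException(status_code=404, detail="Item not found")
    return items[item_id]

@app.put("/items/{item_id}")
def update_item(item_id: int, item: Item):
    if item_id not in items:
        raise HTTPException(status_code=404, detail="Item not found")
    items[item_id] = item
    return item

@app.delete("/items/{item_id}")
def delete_item(item_id: int):
    if item_id not in items:
        raise HTTPException(status_code=404, detail="Item not found")
    del items[item_id]
    return {"deleted": item_id}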
Let's try another way we can interact with Continue. Press Command + Shift + L on Mac or Ctrl + Shift + L on Windows/Linux; a command palette will open and you can type your request there.
As you can see below it will generate the code and provide you with options to Accept or Reject.
Next, let's try to modify or refactor our code and write some tests. Select a portion of the code and press Command + Shift + M on Mac or Ctrl + Shift + M on Windows/Linux, and that portion of the code will be copied to the chat window. Then type /test and hit Enter.
It not only provided the tests but also included what to install and how to run them, amazing!!
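Again, the generated tests differ from run to run; a hedged sketch of what they tend to look like for the CRUD server above (assuming it was saved as main.py) is:
# Illustrative sketch of tests like the ones /test generates for the CRUD server.
# Requires: pip install pytest httpx   (TestClient uses httpx under the hood)
from fastapi.testclient import TestClient
from main import app, items  # assumes the server above was saved as main.py

client = TestClient(app)

def setup_function():
    # start each test with an empty in-memory store
    items.clear()

def test_create_and_read_item():
    response = client.post("/items/1", json={"name": "Coffee", "price": 2.5})
    assert response.status_code == 200
    response = client.get("/items/1")
    assert response.status_code == 200
    assert response.json() == {"name": "Coffee", "price": 2.5}

def test_delete_item():
    client.post("/items/1", json={"name": "Coffee", "price": 2.5})
    response = client.delete("/items/1")
    assert response.status_code == 200
    assert client.get("/items/1").status_code == 404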
⚠️ Continue has a lot of other features, so I will leave them to you to explore.
⚠️ Note: you can also use LM Studio with Continue.dev! Check out their documentation.