🚀 Build Your Own GitHub Copilot with Ollama: Continue, LLM Magic, and API Power! ✨
Introduction
- AI & LLMs are everywhere: Let’s be real, everyone and their dog 🐕 is trying to run an LLM these days. It’s the hottest thing since sliced bread — except it’s writing code instead of making sandwiches.
- Developers + AI = Best Friends 💻: AI-powered tools are like having a coding sidekick who works while you sip your coffee ☕. Say goodbye to the days of hunting for a missing semicolon!
- What’s a Copilot?: Think of it as your personal coding assistant 🧠. Copilot writes code for you while you take all the credit (and more coffee breaks).
- The Magic of Copilot: It’s shaking up the tech world 🌍, slashing coding time, and making us all look like geniuses. Seriously, why didn’t we think of this sooner?
Let’s not waste time and jump right into the process!
How do AI assistants like GitHub Copilot work?
- Installation & Authentication : Start by installing the extension and logging in, ensuring you’ve completed the necessary authentication.
- Keystroke Recording : Once the extension is enabled, your keystrokes start getting recorded.
- Input Parsing : As you type, the extension parses your inputs and preps them with context for the LLM, sending an API call to the servers.
- Context Snapshots : A periodic snapshot of your code is sent to the extension’s server, providing the LLM with context about your codebase.
- LLM Processing : The LLM processes the inputs and sends suggestions back to the extension, facilitated by API calls.
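The flow above can be sketched in a few lines of Python. This is a hypothetical illustration of the general idea, not GitHub’s actual implementation: the extension packs the code before and after your cursor into a fill-in-the-middle (FIM) prompt, which is the same shape of prompt you’ll see sent to the model later in this guide.

```python
# Hypothetical sketch of how a completion extension might assemble a
# fill-in-the-middle (FIM) prompt from the editor state. The marker
# tokens match those used by StarCoder-style code models.
def build_fim_prompt(prefix: str, suffix: str) -> str:
    """Combine the code before and after the cursor into one FIM prompt."""
    return f"<fim_prefix>{prefix}<fim_suffix>{suffix}<fim_middle>"

# The text before the cursor, and the text after it:
prompt = build_fim_prompt("def add(a, b):\n    return ", "\n\nprint(add(1, 2))")
```

The model is asked to generate the “middle” piece, which the extension then shows as an inline suggestion.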
Data Privacy Concerns
This process raises a significant issue: your data gets logged into GitHub’s system. Your codebase may contain sensitive information, such as secret keys, which these LLMs could access. More about this is covered here.
Project Requirements
To build your own Copilot, you’ll need the following:
- Node.js: A powerful JavaScript runtime that allows you to run JavaScript on the server side.
  Installation Guide: How to Install Node.js
- Python: A versatile programming language that’s great for AI and scripting tasks.
  Installation Guide: Download and Install Python 3
- Ollama: It provides a simple API for creating, running, and managing models, as well as a library of pre-built models that can be easily used in a variety of applications.
Make sure you have everything installed and ready to go before diving into the project!
Steps
1. Installing Ollama
After setting up Node.js and Python, it’s time to install Ollama! Choose the installation method that fits your operating system:
- Mac Users 🍏: Download and install Ollama from the official site: Download Ollama for Mac
- Windows Users 🪟: Grab the installer from here: Download Ollama for Windows
- Linux Users 🐧: Use the following command to install Ollama directly:
curl -fsSL https://ollama.com/install.sh | sh
After installing Ollama, you’ll need to download some LLM models, which will power the extension.
I personally prefer these; the commands to pull and run them are below.
ollama run codellama
ollama run deepseek-coder:6.7b
ollama run starcoder2:3b
2. Installing Continue 🔄
Continue is a powerful extension that can be installed on both VSCode and JetBrains IDEs. It helps connect Ollama seamlessly to your development environment.
- For VSCode: You can install Continue directly from the marketplace: Install Continue for VSCode
- For JetBrains: Visit the JetBrains Marketplace to find the extension suitable for your IDE.
This extension will enable you to leverage Ollama’s capabilities directly within your coding environment, making your workflow even smoother!
3. Set Up Authentication on Ollama
a. Ollama Configuration: Ollama listens on port 11434 by default and does not authenticate requests, so anyone who can reach that port can hit the API — note that the Authorization header in the sample below is ignored entirely. Here’s a sample curl command to access a model and generate answers:
curl --location 'http://localhost:11434/api/generate' \
--header 'Content-Type: application/json' \
--header 'Authorization: Bearer <lol>' \
--data '{
  "model": "starcoder2:3b",
  "raw": true,
  "keep_alive": 1800,
  "options": {
    "temperature": 0.01,
    "num_predict": 1024,
    "stop": [
      "<fim_prefix>",
      "<fim_suffix>",
      "<fim_middle>",
      "<file_sep>",
      "<|endoftext|>",
      "</fim_middle>",
      "</code>",
      "\n\n",
      "\r\n\r\n",
      "/src/",
      "#- coding: utf-8",
      "```",
      "t.",
      "\nt",
      "\nfunction",
      "\nclass",
      "\nmodule",
      "\nexport",
      "\nimport"
    ],
    "num_ctx": 4096
  },
  "prompt": "<fim_prefix>\nnumbers<fim_suffix>\n<fim_middle>"
}'
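If you’d rather work from Python than curl, the same request body can be built with the standard library. This is just a sketch: the function name is mine, the stop list is trimmed for brevity, and actually sending the request requires a running Ollama instance, so the snippet only constructs and serializes the payload.

```python
import json

def build_generate_payload(model: str, prompt: str) -> dict:
    """Build a request body for Ollama's /api/generate endpoint,
    mirroring the curl example above (stop list abbreviated)."""
    return {
        "model": model,
        "raw": True,
        "keep_alive": 1800,
        "options": {
            "temperature": 0.01,
            "num_predict": 1024,
            "num_ctx": 4096,
            "stop": ["<fim_prefix>", "<fim_suffix>", "<fim_middle>", "<|endoftext|>"],
        },
        "prompt": prompt,
    }

# Serialize exactly what curl would send as --data:
body = json.dumps(build_generate_payload(
    "starcoder2:3b", "<fim_prefix>\nnumbers<fim_suffix>\n<fim_middle>"))
```

From here you could POST `body` to http://localhost:11434/api/generate with any HTTP client.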
b. Install Dependencies
Here’s the package.json for your Express application, which implements the authentication logic:
{
  "name": "your-app-name",
  "version": "1.0.0",
  "description": "A simple Express application",
  "main": "index.js",
  "scripts": {
    "start": "node index.js"
  },
  "author": "rahul.tah",
  "license": "ISC",
  "dependencies": {
    "axios": "^0.24.0",
    "body-parser": "^1.19.0",
    "express": "^4.17.1",
    "sqlite3": "^5.0.2"
  }
}
To install the necessary dependencies, run the following commands:
npm install axios body-parser express sqlite3
c. Authentication Script
Here’s the script implementing authentication:
const express = require("express");
const axios = require("axios");
const bodyParser = require("body-parser");
const sqlite3 = require("sqlite3").verbose();

const app = express();
const port = 3000;

// Initialize SQLite database
const db = new sqlite3.Database("tokens.db");

// Middleware to log requests
const loggingMiddleware = (req, res, next) => {
  console.log("Request received:", req.method, req.url);
  next();
};

// Authentication middleware
const authMiddleware = (req, res, next) => {
  const authHeader = req.headers["authorization"];
  if (!authHeader || !authHeader.startsWith("Bearer ")) {
    return res.status(401).json({ error: "Authorization header missing or malformed" });
  }
  const token = authHeader.split(" ")[1]; // Extract the Bearer token

  // Check the token against the SQLite database
  db.get("SELECT token FROM tokens WHERE token = ?", [token], (err, row) => {
    if (err) {
      console.error("Database error:", err);
      return res.status(500).json({ error: "Internal server error" });
    }
    if (!row) {
      return res.status(403).json({ error: "Invalid API token" });
    }
    // Token is valid, proceed to the next middleware or route handler
    next();
  });
};

// Use body-parsing and auth middleware
app.use(bodyParser.json());
app.use(loggingMiddleware);
app.use(authMiddleware);

// Forward authenticated requests to the Ollama service on port 11434
app.use((req, res) => {
  const url = `http://localhost:11434${req.url}`;
  console.log(`Forwarding: ${req.method} ${req.url} ${JSON.stringify(req.body)}`);
  // Use the config form of axios so GET requests (which carry no body)
  // are handled the same way as POSTs
  axios({ method: req.method, url, data: req.body })
    .then((response) => {
      res.status(response.status).send(response.data);
    })
    .catch((error) => {
      // Pass through the upstream status code when Ollama returned one
      const status = error.response ? error.response.status : 500;
      console.error("Error forwarding request:", error.message);
      res.status(status).json({ error: "Error forwarding request to Ollama" });
    });
});

app.listen(port, () => {
  console.log(`Server listening on port ${port}`);
});
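The token check at the heart of the middleware is easy to reason about in isolation. Here’s the same logic sketched in Python against an in-memory SQLite database; the name is_authorized is mine and not part of the Express script, but the behavior mirrors it: require a “Bearer” header and look the token up in the tokens table.

```python
import sqlite3
from typing import Optional

def is_authorized(db: sqlite3.Connection, auth_header: Optional[str]) -> bool:
    """Mirror of the Express authMiddleware: require a 'Bearer <token>'
    header and check the token against the tokens table."""
    if not auth_header or not auth_header.startswith("Bearer "):
        return False  # maps to the 401 branch
    token = auth_header.split(" ", 1)[1]  # extract the Bearer token
    row = db.execute(
        "SELECT token FROM tokens WHERE token = ?", (token,)
    ).fetchone()
    return row is not None  # a missing row maps to the 403 branch

# Tiny demo setup with an in-memory database:
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE tokens (token TEXT NOT NULL UNIQUE)")
db.execute("INSERT INTO tokens (token) VALUES (?)", ("secret123",))
```

Parameterized queries (the ? placeholder) matter here just as in the Node version: never interpolate the token into the SQL string.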
This script sets up an Express server that authenticates requests before forwarding them to the Ollama API.
4. Generate Authentication Token
To allow the authentication script to function, you’ll need to populate the SQLite database with an authentication token. Here’s a simple Python script to generate an API token:
a. Python Script to Generate Token
import sqlite3
import secrets

def generate_token():
    token = secrets.token_hex(32)
    conn = sqlite3.connect('tokens.db')
    cursor = conn.cursor()
    cursor.execute('''
        CREATE TABLE IF NOT EXISTS tokens (
            id INTEGER PRIMARY KEY AUTOINCREMENT,
            token TEXT NOT NULL UNIQUE,
            description TEXT
        )
    ''')
    description = input("Enter a description for this token (e.g., user or service name): ")
    cursor.execute('INSERT INTO tokens (token, description) VALUES (?, ?)', (token, description))
    conn.commit()
    conn.close()
    print(f"Generated API Token: {token}")

if __name__ == "__main__":
    generate_token()
b. Required Libraries
No installation is needed for this script: both the sqlite3 and secrets modules ship with the Python standard library.
c. Run the script
- Save the script as generate_token.py.
- Execute the script in your terminal:
python generate_token.py
- Follow the prompt to enter a description for the token.
Once you run the script, it will generate a unique API token, store it in the tokens.db
SQLite database, and display the token for your use.
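If you ever lose track of the token before pasting it into the Continue config, you can pull it back out of the database. The helper below (latest_token is my own name, not part of the generator script) reads the most recently inserted row from tokens.db:

```python
import sqlite3

def latest_token(db_path: str = "tokens.db") -> str:
    """Return the most recently generated token from the tokens table."""
    conn = sqlite3.connect(db_path)
    try:
        row = conn.execute(
            "SELECT token FROM tokens ORDER BY id DESC LIMIT 1"
        ).fetchone()
        if row is None:
            raise LookupError("no tokens generated yet")
        return row[0]
    finally:
        conn.close()
```

Run it from the same directory as tokens.db and copy the result into the apiKey fields in the next step.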
5. Configuring Continue.dev
Now it’s time to configure the Continue.dev extension, which will connect to our wrapper service and send the authentication token. This configuration will ensure everything works seamlessly together.
Basic Configuration
You can find more configuration settings in the official documentation: Continue.dev Configuration.
Here’s the configuration I’ve set up for Continue:
{
  "models": [
    {
      "title": "codellama",
      "provider": "ollama",
      "model": "codellama",
      "contextLength": 2048,
      "apiBase": "http://localhost:3000/",
      "apiKey": "<Your generated API key>"
    }
  ],
  "customCommands": [
    {
      "name": "test",
      "prompt": "{{{ input }}}\n\nWrite a comprehensive set of unit tests for the selected code. It should setup, run tests that check for correctness including important edge cases, and teardown. Ensure that the tests are complete and sophisticated. Give the tests just as chat output, don't edit any file.",
      "description": "Write unit tests for highlighted code"
    }
  ],
  "tabAutocompleteModel": {
    "title": "Starcoder2 3b",
    "provider": "ollama",
    "model": "starcoder2:3b",
    "apiBase": "http://localhost:3000/",
    "apiKey": "<Your generated API key>"
  },
  "contextProviders": [
    { "name": "code", "params": {} },
    { "name": "docs", "params": {} },
    { "name": "diff", "params": {} },
    { "name": "terminal", "params": {} },
    { "name": "problems", "params": {} },
    { "name": "folder", "params": {} },
    {
      "name": "codebase",
      "params": {
        "nRetrieve": 25,
        "nFinal": 5,
        "useReranking": true
      }
    }
  ],
  "slashCommands": [
    { "name": "edit", "description": "Edit selected code" },
    { "name": "comment", "description": "Write comments for the selected code" },
    { "name": "share", "description": "Export the current chat session to markdown" },
    { "name": "cmd", "description": "Generate a shell command" },
    { "name": "commit", "description": "Generate a git commit message" }
  ]
}
Once you’ve configured Continue with the settings above, you’re all set to go! You’ve successfully set up your very own GitHub Copilot! 😎
Conclusion
And there you have it! 🎉 You’ve transformed into a coding wizard with your very own AI copilot. 🪄✨ With Ollama, Continue.dev, and a sprinkle of magic (and a few tokens), you’re now equipped to code faster than your coffee can brew! ☕️🚀
So go ahead, unleash your creativity and let those lines of code fly! Just remember: with great power comes great responsibility — don’t let your AI get too carried away! 😄💻
Happy coding! 🥳