2025-01-12
🔱 Domination
- I wrote a script to call a locally running Ollama server and generate header comments for each file in a side project. (rough sketch below)
- I can’t run big models on my Chromebook… No GPU. But I can run llama3.2:3b.
- I tried asking VS Code’s Copilot extension to generate header comments for every file without one. It writes great comments, but would only do maybe 5-10 at a time, so it’s a bit tedious.
- writing my own script gives me all the automation and scale I need
- but I’d either need an API token, which I believe requires a paid account for any of the cloud providers
- or I get to use the best model my Chromebook can run on 12 CPU cores and 14 GB of RAM
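
A minimal sketch of the kind of script this describes, assuming Ollama’s default local REST endpoint (`http://localhost:11434/api/generate`) and its standard JSON response shape. The prompt wording and the input file name are illustrative, not the actual script:

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"
MODEL = "llama3.2:3b"

def generate_header_comment(source: str) -> str:
    """Ask the local model for a one-line header comment for `source`."""
    payload = {
        "model": MODEL,
        "prompt": f"Write a 1-line description of this code:\n\n{source}",
        "stream": False,  # return one JSON object instead of a token stream
    }
    req = urllib.request.Request(
        OLLAMA_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"].strip()

if __name__ == "__main__":
    with open("example.py") as f:  # hypothetical input file
        print(generate_header_comment(f.read()))
```
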
- With the smaller model, I’ve found I need to be more careful with my prompts.
- ask small, simple, narrow questions
- and still watch it sometimes ignore explicit instructions…
- but it’s progress, and I’m having fun with it.
- and it’s pretty awesome to leave my laptop on to write for me while I sleep
- one of the prompts that was only sometimes obeyed: write a 1-line description of this code (then insert the code)
- most of the time, it would obey, but sometimes it would write a ton
- I noticed that this happened with larger files
- it made me think 🤔 maybe by the time it’s done reading all the code, it’s “forgotten” the original prompt.
- experiment: add the prompt after the code in addition to before it, to see if that helps it follow the instructions. (sketch at the end of this entry)
- early results: 🎉 it’s worked for at least one of the problem cases! I’ll let it write overnight and see how well it performed on the rest later.
- 💬🔧 I believe this is called prompt engineering.
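
For reference, the “prompt sandwich” from the experiment might look like this. The instruction wording is a guess at the idea, not the exact overnight prompt; the string it returns can be dropped straight into the `prompt` field of the sketch above:

```python
# Repeat the instruction after the code so it is the most recent thing the
# model reads. On long files, a small model may have "forgotten" an
# instruction that only appears at the top.
INSTRUCTION = "Write a 1-line description of this code. Reply with one line only."

def sandwich_prompt(source: str) -> str:
    # Instruction appears both before and after the (possibly long) code.
    return f"{INSTRUCTION}\n\n{source}\n\n{INSTRUCTION}"
```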