Customise, run and save LLMs using Ollama and the Modelfile
In this article, I’ll show you how to use the Modelfile in Ollama to change how an existing LLM (Llama 2) behaves when you interact with it. I’ll also show you how to save your newly customised model to your personal namespace on the Ollama server.
I know it can get a bit confusing with all the different ”llamas” flying around. Just remember: Ollama is the company that lets you download and locally run many different LLMs, while Llama 2 is a particular LLM created by Meta, the owner of Facebook. Apart from that relationship, the two are not connected in any other way.
If you’ve never heard of Ollama before, I recommend that you check out my article below, where I go into depth on what Ollama is and how to install it on your system.
What’s a Modelfile?
In Ollama, a Modelfile is a configuration file that defines the blueprint for creating and sharing models with Ollama.
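To make that concrete, here is a minimal sketch of what a Modelfile can look like. The base model, parameter value and system prompt below are only illustrative choices, not something prescribed by Ollama:

```
# Minimal example Modelfile (illustrative values)

# Start from an existing base model that Ollama can pull, here Llama 2
FROM llama2

# Sampling parameter: a higher temperature gives more creative answers
PARAMETER temperature 0.8

# System prompt that changes how the model behaves in every conversation
SYSTEM """
You are a friendly pirate. Answer every question in pirate speak.
"""
```

You would then build and run a model from this file with something like `ollama create pirate -f ./Modelfile` followed by `ollama run pirate` (the name `pirate` is just a placeholder).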