Use Ollama with the official Python library
Ollama is a great way to get started with AI by running open-source, publicly available large language models locally on your computer. I wrote previously about how to get started with the experimental OpenAI API, but Ollama has a dedicated Python library that is even simpler.
- Install the library:

  pip3 install ollama
- Create a new Python file:

  touch completion.py
- Add the following code to completion.py:

  import ollama

  def get_completion(prompt, model):
      response = ollama.chat(model, messages=[{
          'role': 'user',
          'content': prompt,
      }])
      return response['message']['content']

  prompt = "What is the chief end of man?"
  print(get_completion(prompt, "mistral"))
- Run the file:

  python3 completion.py
There is more information available in the library repo on GitHub, including examples for streaming responses and a custom client. For even more documentation on Ollama, check out the /docs directory in the main repo.