
Ollama

Ollama is a tool for running open-source large language models, such as Llama 2, locally.

Ollama bundles model weights, configuration, and data into a single package, defined by a Modelfile. It optimizes setup and configuration details, including GPU usage. For a complete list of supported models and model variants, see the Ollama model library.

See this guide for more details on how to use Ollama with LangChain.

Installation and Setup

Follow these instructions to set up and run a local Ollama instance. No API keys or environment variables are required; the integrations talk to the local Ollama server (by default at http://localhost:11434).
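As a sketch, a typical local setup (assuming you want the `llama2` model) looks like:

```shell
# After installing Ollama (installers at ollama.com),
# pull a model and start the local server:
ollama pull llama2   # download the model weights
ollama serve         # serve at http://localhost:11434 by default
```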

LLM

from langchain_community.llms import Ollama


See the notebook example here.

Chat Models

Chat Ollama

from langchain_community.chat_models import ChatOllama


See the notebook example here.

Ollama functions

from langchain_experimental.llms.ollama_functions import OllamaFunctions


See the notebook example here.

Embedding models

from langchain_community.embeddings import OllamaEmbeddings


See the notebook example here.

