
<aside> 💡
Instructions for installing LiteLLM using Docker Compose with a database to persist information, and for using its WebUI to set up providers, models, and a budget. Also discusses integrating the tool with OpenWebUI.
</aside>
Revision: 20250525-0 (init: 20250525)
LiteLLM is a lightweight Python library that streamlines Large Language Model (LLM) integration for user-created teams, offering a unified OpenAI-compatible API proxy to access popular LLM providers. A notable feature is its built-in cost tracking and budget management, which helps monitor and control expenses and allows teams to set spending limits.
https://docs.litellm.ai/docs/simple_proxy
The LiteLLM Proxy Server acts as an OpenAI API-compatible gateway, enabling seamless access to multiple LLMs. Teams can use the proxy to centralize model management, enforce usage policies, and generate virtual API keys with rate limits and budget controls for different users or projects. Its proxy UI further simplifies spend tracking and team management, making it easy to invite users, allocate resources, and monitor consumption in real time.
https://docs.litellm.ai/docs/proxy/ui
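As a sketch of what the gateway exposes, a chat completion goes through the standard OpenAI endpoint. The URL, key, and model name below are placeholders for your own deployment (the hostname reuses the example from the OpenWebUI section later in this post):

```shell
# Placeholder values: substitute your proxy URL and a virtual key from your deployment.
LITELLM_URL="https://ltl.example.com"
LITELLM_KEY="sk-your-virtual-key"

# OpenAI-compatible chat completion through the proxy.
# Uncomment and run against your own deployment; "gpt-4o-mini" stands in for
# whatever model name you configured in LiteLLM.
# curl -s "${LITELLM_URL}/v1/chat/completions" \
#   -H "Authorization: Bearer ${LITELLM_KEY}" \
#   -H "Content-Type: application/json" \
#   -d '{"model": "gpt-4o-mini", "messages": [{"role": "user", "content": "Hello"}]}'
```

Because the endpoint shape matches OpenAI's, any OpenAI-compatible client (including OpenWebUI, as shown below) can point at the proxy unchanged.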
In the following, we will not focus on creating a team or setting a budget, as their documentation covers it. Instead, we will discuss its integration with Dockge and OpenWebUI; the setup of those was covered previously.
Ollama with OpenWebUI (Rev: 20240730-0)
The compose.yaml and example .env files for this Docker Compose deployment are available in:
geekierblog-artifacts/20250524-litellm at main · mmartial/geekierblog-artifacts
Deploy this compose.yaml and its corresponding .env file, noting that:
- A .env file separates secrets from the common compose.yaml. Comments are added to each environment variable to explain its use.
- DB_NAME, DB_USER, and DB_PASS are the database environment variables in the .env file.
- LITELLM_MASTER_KEY is the password for the WebUI's admin account.
- LITELLM_SALT_KEY is the encryption salt for all secrets stored in the database.
- labels: entries pre-configure some side tools, such as Traefik, WatchTower (see the Dockge post), and HomePage. Extend or remove as preferred.

Traefik Proxy (Rev: 20251213-0)
HomePage: Services Dashboard (Rev: 20251129-0)
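As a hedged sketch of what such a .env file might contain (all values are placeholders; refer to the example .env in the repository above for the authoritative variable list):

```shell
# Database credentials used by the database container and LiteLLM's connection string
DB_NAME=litellm
DB_USER=litellm
DB_PASS=change-me-db-password

# Admin password for the WebUI, and the master API key (conventionally prefixed sk-)
LITELLM_MASTER_KEY=sk-change-me-master-key

# Salt used to encrypt provider secrets stored in the database;
# do not change it after secrets have been saved, or they cannot be decrypted
LITELLM_SALT_KEY=change-me-salt-key
```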

To access the WebUI, go to the base URL ending with /ui (or use the link on the page) to reach your configuration frontend. The admin password is the LITELLM_MASTER_KEY value.
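Both keys should be long random values. One quick way to generate them, assuming openssl is available:

```shell
# Generate a master key (the sk- prefix follows LiteLLM's convention) and a salt key
MASTER_KEY="sk-$(openssl rand -hex 24)"
SALT_KEY="$(openssl rand -hex 32)"

# Paste these into the .env file
echo "LITELLM_MASTER_KEY=${MASTER_KEY}"
echo "LITELLM_SALT_KEY=${SALT_KEY}"
```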

Among the following steps, configure access to models before setting up a team and a budget:

- In Models, add LLM Credentials for each Provider for which you have an API key.
- In Models, use Add Model: choose the Provider (using Existing Credentials) and select from the dropdown the LiteLLM Model Name(s) to use. Use Test Connect to confirm all the models are accessible with your API key before using Add Model.
- In Models, All Models shows the list of available models to share, including each model's Input Cost and Output Cost.
- In the Teams tab, we can Create New Team by giving it a name, selecting the Models the team can access, setting a Max Budget (USD), and choosing the period for Reset Budget.
- In the Virtual Keys tab, use Create New Key: select the created Team, give it a Key Name, and select from the Models list (already tailored to the selected team) which models that API key can access. Copy the generated key immediately; you will not be able to view it again.

Installation of OpenWebUI was covered in a previous post:
From the Admin Panel, in Settings -> Connections -> Manage OpenAI API Connections, add (+) a new entry:

- URL: the LiteLLM base URL (as configured in compose.yaml, this might be https://ltl.example.com).
- Key: the API Key generated during the LiteLLM setup.

Once added (with Ollama API disabled), going to the Settings -> Models list will show the entire list of models allowed for this API key.
After this, the added models will appear in the External Models list when selecting New Chat. After a few prompts, usage is reflected on the LiteLLM dashboard, under Usage and under the team's Cost and Model Activity in Team Usage.
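To sanity-check a virtual key outside OpenWebUI, the proxy's OpenAI-compatible model listing can be queried directly. The URL and key below are placeholders for your own deployment:

```shell
# Placeholder values from your own deployment
LITELLM_URL="https://ltl.example.com"
VIRTUAL_KEY="sk-your-virtual-key"

# Lists exactly the models this key is allowed to use --
# the same set that OpenWebUI displays in Settings -> Models.
# Uncomment and run against your own deployment:
# curl -s -H "Authorization: Bearer ${VIRTUAL_KEY}" "${LITELLM_URL}/v1/models"
```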