Open Chat Playground (OCP) is a web UI that can connect to virtually any LLM on any platform.
Docker Desktop extension that deploys Open WebUI with Docker Model Runner integration in one click
Kiwix ZIM-to-vector RAG system for local, offline LLM knowledge retrieval
A template for your kickstart into GenAI!
A curated collection of Spring Boot projects demonstrating AI and LLM integrations, including examples of AI-powered applications, multi-provider LLM setups, and best practices for Spring AI, modular design, and integration testing.
A flexible, extensible AI agent backend built with NestJS—designed for running local, open-source LLMs (Llama, Gemma, Qwen, DeepSeek, etc.) via Docker Model Runner. Real-time streaming, Redis messaging, web search, and Postgres memory out of the box. No cloud APIs required!
AI-Native Autoscaler for Docker Compose built with cagent + MCP + Model Runner.
Sample code using Microsoft.Extensions.AI to run LLMs locally through Docker Model Runner, Foundry Local, Hugging Face, and Ollama.
Saraf AI is a fully local assistant built with Next.js and Docker. It connects seamlessly to LLMs via Docker Model Runner using Docker Compose.
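Wiring a service to Docker Model Runner through Compose is typically done with the top-level `models` element available in recent Docker Compose releases. A minimal sketch, where the service name, variable names, and model are placeholders, not taken from the Saraf AI project:

```yaml
# docker-compose.yml (sketch; names and model are illustrative assumptions)
services:
  web:
    build: .
    models:
      llm:
        # Compose injects the runner's URL and model name into the
        # container through these environment variables.
        endpoint_var: LLM_URL
        model_var: LLM_MODEL_NAME

models:
  llm:
    model: ai/smollm2   # any model available to Docker Model Runner
```

The application then reads `LLM_URL` and `LLM_MODEL_NAME` at startup and needs no hard-coded endpoint.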
Multi-agent AI blog generator using Microsoft Agent Framework
AI-first manual checklist builder using PageIndex-style vectorless retrieval + local Gemma4 to generate grounded maintenance checklists with strict citations.
A local OpenClaw setup
A streamlined chat application that leverages Docker Model Runner to serve Large Language Models (LLMs) through a modern Streamlit interface. This project demonstrates containerized LLM deployment with a user-friendly web interface.
A privacy-first, automated code review tool powered by local LLMs. Analyse Git diffs using Docker or Ollama models — no cloud and no data sharing required, just intelligent code reviews on your machine.
Demo of Docker Model Runner in both development and production environments.
A terminal AI chat app powered by Docker Model Runner — no API keys, no cloud, no cost.
Six runnable examples showing how Docker Model Runner exposes the same local model through OpenAI, Anthropic, and Ollama SDKs on a single endpoint — with dev/prod parity against Microsoft Foundry.
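The "single endpoint" idea works because Docker Model Runner exposes an OpenAI-compatible API over HTTP. The host, port, and model name below are assumptions for a local setup (Docker Desktop's Model Runner with TCP access enabled, e.g. via `docker desktop enable model-runner --tcp 12434`); a minimal sketch of building such a request with only the standard library:

```python
import json
import urllib.request

# Assumed local Docker Model Runner endpoint; adjust host/port for your setup.
DMR_BASE = "http://localhost:12434"
MODEL = "ai/smollm2"  # placeholder: any model pulled with `docker model pull`


def openai_chat_request(prompt: str) -> urllib.request.Request:
    """Build an OpenAI-style chat completion request against the local runner."""
    payload = {
        "model": MODEL,
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        f"{DMR_BASE}/engines/v1/chat/completions",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )


req = openai_chat_request("Hello!")
print(req.full_url)
```

Sending the request with `urllib.request.urlopen(req)` (with the runner actually running) returns the usual OpenAI-style `choices[0].message.content` JSON; the OpenAI, Anthropic, and Ollama SDKs differ only in the path and payload shape they put on this same base URL.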
This project demonstrates how to configure Spring AI to interact with Ollama and Docker Model Runner
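Because Docker Model Runner speaks the OpenAI wire protocol, Spring AI can usually reach it through its standard `spring.ai.openai.*` properties. The port, path, and model below are assumptions for a local setup, not values from this project:

```properties
# application.properties (sketch; values are assumptions for a local setup)
# Point Spring AI's OpenAI client at Docker Model Runner's
# OpenAI-compatible endpoint instead of api.openai.com.
spring.ai.openai.base-url=http://localhost:12434/engines
spring.ai.openai.api-key=not-needed-locally
spring.ai.openai.chat.options.model=ai/gemma3
```

Swapping to Ollama is then a matter of changing the base URL (or using Spring AI's dedicated `spring.ai.ollama.*` properties) rather than changing application code.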
A grounded support triage agent with hybrid retrieval, evidence citations, escalation policies, and multi-provider LLM fallback.
Local AI chat agent using Microsoft Agent Framework integrated with Docker Model Runner (DMR).