<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:media="http://search.yahoo.com/mrss/">
  <channel>
    <title><![CDATA[Ollama]]></title>
    <description><![CDATA[Ollama]]></description>
    <link>https://ollama.com/blog</link>
    <lastBuildDate>Sun, 12 Apr 2026 15:51:20 +0000</lastBuildDate>
    <atom:link href="https://ollama.com/blog/rss.xml" rel="self" type="application/rss+xml"></atom:link>
    <item>
      <title><![CDATA[Ollama is now powered by MLX on Apple Silicon in preview]]></title>
      <description><![CDATA[Today, we're previewing the fastest way to run Ollama on Apple silicon, powered by MLX, Apple's machine learning framework.]]></description>
      <link>https://ollama.com/blog/mlx</link>
      <pubDate>Mon, 30 Mar 2026 00:00:00 +0000</pubDate>
      <guid isPermaLink="true">https://ollama.com/blog/mlx</guid>
    </item>
    <item>
      <title><![CDATA[The simplest and fastest way to set up OpenClaw]]></title>
      <description><![CDATA[Set up OpenClaw in under two minutes with a single Ollama command.]]></description>
      <link>https://ollama.com/blog/openclaw-tutorial</link>
      <pubDate>Mon, 23 Feb 2026 00:00:00 +0000</pubDate>
      <guid isPermaLink="true">https://ollama.com/blog/openclaw-tutorial</guid>
    </item>
    <item>
      <title><![CDATA[Subagents and web search in Claude Code]]></title>
      <description><![CDATA[Ollama now supports subagents and web search in Claude Code.]]></description>
      <link>https://ollama.com/blog/web-search-subagents-claude-code</link>
      <pubDate>Mon, 16 Feb 2026 00:00:00 +0000</pubDate>
      <guid isPermaLink="true">https://ollama.com/blog/web-search-subagents-claude-code</guid>
    </item>
    <item>
      <title><![CDATA[OpenClaw]]></title>
      <description><![CDATA[OpenClaw is a personal AI assistant that connects your messaging apps to local AI coding agents, all running on your own device.]]></description>
      <link>https://ollama.com/blog/openclaw</link>
      <pubDate>Sun, 01 Feb 2026 00:00:00 +0000</pubDate>
      <guid isPermaLink="true">https://ollama.com/blog/openclaw</guid>
    </item>
    <item>
      <title><![CDATA[ollama launch]]></title>
      <description><![CDATA[ollama launch is a new command that sets up and runs coding tools like Claude Code, OpenCode, and Codex with local or cloud models. No environment variables or config files needed.]]></description>
      <link>https://ollama.com/blog/launch</link>
      <pubDate>Fri, 23 Jan 2026 00:00:00 +0000</pubDate>
      <guid isPermaLink="true">https://ollama.com/blog/launch</guid>
    </item>
    <item>
      <title><![CDATA[Image generation (experimental)]]></title>
      <description><![CDATA[Generate images locally with Ollama on macOS. Windows and Linux support coming soon.]]></description>
      <link>https://ollama.com/blog/image-generation</link>
      <pubDate>Tue, 20 Jan 2026 00:00:00 +0000</pubDate>
      <guid isPermaLink="true">https://ollama.com/blog/image-generation</guid>
    </item>
    <item>
      <title><![CDATA[Claude Code with Anthropic API compatibility]]></title>
      <description><![CDATA[Ollama is now compatible with the Anthropic Messages API, making it possible to use tools like Claude Code with open models.]]></description>
      <link>https://ollama.com/blog/claude</link>
      <pubDate>Fri, 16 Jan 2026 00:00:00 +0000</pubDate>
      <guid isPermaLink="true">https://ollama.com/blog/claude</guid>
    </item>
    <item>
      <title><![CDATA[OpenAI Codex with Ollama]]></title>
      <description><![CDATA[Open models can be used with OpenAI's Codex CLI through Ollama. Codex can read, modify, and execute code in your working directory using models such as gpt-oss:20b, gpt-oss:120b, or other open-weight alternatives.]]></description>
      <link>https://ollama.com/blog/codex</link>
      <pubDate>Thu, 15 Jan 2026 00:00:00 +0000</pubDate>
      <guid isPermaLink="true">https://ollama.com/blog/codex</guid>
    </item>
    <item>
      <title><![CDATA[OpenAI gpt-oss-safeguard]]></title>
      <description><![CDATA[Ollama is partnering with OpenAI and ROOST (Robust Open Online Safety Tools) to bring the latest gpt-oss-safeguard reasoning models to users for safety classification tasks. gpt-oss-safeguard models are available in two sizes: 20B and 120B, and are permissively licensed under the Apache 2.0 license.]]></description>
      <link>https://ollama.com/blog/gpt-oss-safeguard</link>
      <pubDate>Wed, 29 Oct 2025 00:00:00 +0000</pubDate>
      <guid isPermaLink="true">https://ollama.com/blog/gpt-oss-safeguard</guid>
    </item>
    <item>
      <title><![CDATA[MiniMax M2]]></title>
      <description><![CDATA[MiniMax M2 is now available on Ollama's cloud. It's a model built for coding and agentic workflows.]]></description>
      <link>https://ollama.com/blog/minimax-m2</link>
      <pubDate>Tue, 28 Oct 2025 00:00:00 +0000</pubDate>
      <guid isPermaLink="true">https://ollama.com/blog/minimax-m2</guid>
    </item>
    <item>
      <title><![CDATA[NVIDIA DGX Spark performance]]></title>
      <description><![CDATA[We ran performance tests on release-day firmware and an updated Ollama version to see how Ollama performs.]]></description>
      <link>https://ollama.com/blog/nvidia-spark-performance</link>
      <pubDate>Thu, 23 Oct 2025 00:00:00 +0000</pubDate>
      <guid isPermaLink="true">https://ollama.com/blog/nvidia-spark-performance</guid>
    </item>
    <item>
      <title><![CDATA[New coding models & integrations]]></title>
      <description><![CDATA[GLM-4.6 and Qwen3-coder-480B are available on Ollama’s cloud service with easy integrations to the tools you are familiar with. Qwen3-Coder-30B has been updated for faster, more reliable tool calling in Ollama’s new engine.]]></description>
      <link>https://ollama.com/blog/coding-models</link>
      <pubDate>Thu, 16 Oct 2025 00:00:00 +0000</pubDate>
      <guid isPermaLink="true">https://ollama.com/blog/coding-models</guid>
    </item>
    <item>
      <title><![CDATA[Qwen3-VL]]></title>
      <description><![CDATA[Ollama now supports Alibaba's Qwen3-VL.]]></description>
      <link>https://ollama.com/blog/qwen3-vl</link>
      <pubDate>Tue, 14 Oct 2025 00:00:00 +0000</pubDate>
      <guid isPermaLink="true">https://ollama.com/blog/qwen3-vl</guid>
    </item>
    <item>
      <title><![CDATA[NVIDIA DGX Spark]]></title>
      <description><![CDATA[The latest NVIDIA DGX Spark is here! Ollama has partnered with NVIDIA to ensure it runs fast and efficiently out-of-the-box.]]></description>
      <link>https://ollama.com/blog/nvidia-spark</link>
      <pubDate>Mon, 13 Oct 2025 00:00:00 +0000</pubDate>
      <guid isPermaLink="true">https://ollama.com/blog/nvidia-spark</guid>
    </item>
    <item>
      <title><![CDATA[Web search]]></title>
      <description><![CDATA[A new web search API is now available in Ollama. Ollama provides a generous free tier of web searches for individuals to use, and higher rate limits are available via Ollama’s cloud.]]></description>
      <link>https://ollama.com/blog/web-search</link>
      <pubDate>Wed, 24 Sep 2025 00:00:00 +0000</pubDate>
      <guid isPermaLink="true">https://ollama.com/blog/web-search</guid>
    </item>
    <item>
      <title><![CDATA[New model scheduling]]></title>
      <description><![CDATA[Ollama now includes a significantly improved model scheduling system that reduces out-of-memory crashes and maximizes GPU utilization and performance, especially on multi-GPU systems.]]></description>
      <link>https://ollama.com/blog/new-model-scheduling</link>
      <pubDate>Tue, 23 Sep 2025 00:00:00 +0000</pubDate>
      <guid isPermaLink="true">https://ollama.com/blog/new-model-scheduling</guid>
    </item>
    <item>
      <title><![CDATA[Cloud models]]></title>
      <description><![CDATA[Cloud models are now in preview, letting you run larger models with fast, datacenter-grade hardware. You can keep using your local tools while running larger models that wouldn’t fit on a personal computer.]]></description>
      <link>https://ollama.com/blog/cloud-models</link>
      <pubDate>Fri, 19 Sep 2025 00:00:00 +0000</pubDate>
      <guid isPermaLink="true">https://ollama.com/blog/cloud-models</guid>
    </item>
    <item>
      <title><![CDATA[OpenAI gpt-oss]]></title>
      <description><![CDATA[Ollama partners with OpenAI to bring gpt-oss to Ollama and its community.]]></description>
      <link>https://ollama.com/blog/gpt-oss</link>
      <pubDate>Tue, 05 Aug 2025 00:00:00 +0000</pubDate>
      <guid isPermaLink="true">https://ollama.com/blog/gpt-oss</guid>
    </item>
    <item>
      <title><![CDATA[Ollama's new app]]></title>
      <description><![CDATA[Ollama's new app is now available for macOS and Windows.]]></description>
      <link>https://ollama.com/blog/new-app</link>
      <pubDate>Wed, 30 Jul 2025 00:00:00 +0000</pubDate>
      <guid isPermaLink="true">https://ollama.com/blog/new-app</guid>
    </item>
    <item>
      <title><![CDATA[Secure Minions: private collaboration between Ollama and frontier models]]></title>
      <description><![CDATA[Secure Minions is a secure protocol built by Stanford's Hazy Research lab to allow encrypted local-remote communication.]]></description>
      <link>https://ollama.com/blog/secureminions</link>
      <pubDate>Tue, 03 Jun 2025 00:00:00 +0000</pubDate>
      <guid isPermaLink="true">https://ollama.com/blog/secureminions</guid>
    </item>
    <item>
      <title><![CDATA[Thinking]]></title>
      <description><![CDATA[Ollama now has the ability to enable or disable thinking. This gives users the flexibility to choose the model’s thinking behavior for different applications and use cases.]]></description>
      <link>https://ollama.com/blog/thinking</link>
      <pubDate>Fri, 30 May 2025 00:00:00 +0000</pubDate>
      <guid isPermaLink="true">https://ollama.com/blog/thinking</guid>
    </item>
    <item>
      <title><![CDATA[Streaming responses with tool calling]]></title>
      <description><![CDATA[Ollama now supports streaming responses with tool calling. This enables all chat applications to stream content and also call tools in real time.]]></description>
      <link>https://ollama.com/blog/streaming-tool</link>
      <pubDate>Wed, 28 May 2025 00:00:00 +0000</pubDate>
      <guid isPermaLink="true">https://ollama.com/blog/streaming-tool</guid>
    </item>
    <item>
      <title><![CDATA[Ollama's new engine for multimodal models]]></title>
      <description><![CDATA[Ollama now supports new multimodal models with its new engine.]]></description>
      <link>https://ollama.com/blog/multimodal-models</link>
      <pubDate>Thu, 15 May 2025 00:00:00 +0000</pubDate>
      <guid isPermaLink="true">https://ollama.com/blog/multimodal-models</guid>
    </item>
    <item>
      <title><![CDATA[Minions: where local and cloud LLMs meet]]></title>
      <description><![CDATA[Avanika Narayan, Dan Biderman, and Sabri Eyuboglu from Christopher Ré's Stanford Hazy Research lab, along with Avner May, Scott Linderman, James Zou, have developed a way to shift a substantial portion of LLM workloads to consumer devices by having small on-device models (such as Llama 3.2 with Ollama) collaborate with larger models in the cloud (such as GPT-4o).]]></description>
      <link>https://ollama.com/blog/minions</link>
      <pubDate>Tue, 25 Feb 2025 00:00:00 +0000</pubDate>
      <guid isPermaLink="true">https://ollama.com/blog/minions</guid>
    </item>
    <item>
      <title><![CDATA[Structured outputs]]></title>
      <description><![CDATA[Ollama now supports structured outputs making it possible to constrain a model's output to a specific format defined by a JSON schema. The Ollama Python and JavaScript libraries have been updated to support structured outputs.]]></description>
      <link>https://ollama.com/blog/structured-outputs</link>
      <pubDate>Fri, 06 Dec 2024 00:00:00 +0000</pubDate>
      <guid isPermaLink="true">https://ollama.com/blog/structured-outputs</guid>
    </item>
    <item>
      <title><![CDATA[Ollama Python library 0.4 with function calling improvements]]></title>
      <description><![CDATA[With Ollama Python library version 0.4, functions can now be provided as tools. The library now also has full typing support and new examples have been added.]]></description>
      <link>https://ollama.com/blog/functions-as-tools</link>
      <pubDate>Mon, 25 Nov 2024 00:00:00 +0000</pubDate>
      <guid isPermaLink="true">https://ollama.com/blog/functions-as-tools</guid>
    </item>
    <item>
      <title><![CDATA[Llama 3.2 Vision]]></title>
      <description><![CDATA[Llama 3.2 Vision 11B and 90B models are now available in Ollama.]]></description>
      <link>https://ollama.com/blog/llama3.2-vision</link>
      <pubDate>Wed, 06 Nov 2024 00:00:00 +0000</pubDate>
      <guid isPermaLink="true">https://ollama.com/blog/llama3.2-vision</guid>
    </item>
    <item>
      <title><![CDATA[IBM Granite 3.0 models]]></title>
      <description><![CDATA[Ollama partners with IBM to bring Granite 3.0 models to Ollama.]]></description>
      <link>https://ollama.com/blog/ibm-granite</link>
      <pubDate>Mon, 21 Oct 2024 00:00:00 +0000</pubDate>
      <guid isPermaLink="true">https://ollama.com/blog/ibm-granite</guid>
    </item>
    <item>
      <title><![CDATA[Llama 3.2 goes small and multimodal]]></title>
      <description><![CDATA[Ollama partners with Meta to bring Llama 3.2 to Ollama.]]></description>
      <link>https://ollama.com/blog/llama3.2</link>
      <pubDate>Wed, 25 Sep 2024 00:00:00 +0000</pubDate>
      <guid isPermaLink="true">https://ollama.com/blog/llama3.2</guid>
    </item>
    <item>
      <title><![CDATA[Reduce hallucinations with Bespoke-Minicheck]]></title>
      <description><![CDATA[Bespoke-Minicheck is a new grounded factuality checking model developed by Bespoke Labs that is now available in Ollama. It can fact-check responses generated by other models to detect and reduce hallucinations.]]></description>
      <link>https://ollama.com/blog/reduce-hallucinations-with-bespoke-minicheck</link>
      <pubDate>Wed, 18 Sep 2024 00:00:00 +0000</pubDate>
      <guid isPermaLink="true">https://ollama.com/blog/reduce-hallucinations-with-bespoke-minicheck</guid>
    </item>
    <item>
      <title><![CDATA[Tool support]]></title>
      <description><![CDATA[Ollama now supports tool calling with popular models such as Llama 3.1. This enables a model to answer a given prompt using the tools it knows about, making it possible for models to perform more complex tasks or interact with the outside world.]]></description>
      <link>https://ollama.com/blog/tool-support</link>
      <pubDate>Thu, 25 Jul 2024 00:00:00 +0000</pubDate>
      <guid isPermaLink="true">https://ollama.com/blog/tool-support</guid>
    </item>
    <item>
      <title><![CDATA[Google Gemma 2]]></title>
      <description><![CDATA[Gemma 2 is now available on Ollama in three sizes: 2B, 9B, and 27B.]]></description>
      <link>https://ollama.com/blog/gemma2</link>
      <pubDate>Thu, 27 Jun 2024 00:00:00 +0000</pubDate>
      <guid isPermaLink="true">https://ollama.com/blog/gemma2</guid>
    </item>
    <item>
      <title><![CDATA[An entirely open-source AI code assistant inside your editor]]></title>
      <description><![CDATA[Continue enables you to easily create your own coding assistant directly inside Visual Studio Code and JetBrains with open-source LLMs.]]></description>
      <link>https://ollama.com/blog/continue-code-assistant</link>
      <pubDate>Fri, 31 May 2024 00:00:00 +0000</pubDate>
      <guid isPermaLink="true">https://ollama.com/blog/continue-code-assistant</guid>
    </item>
    <item>
      <title><![CDATA[Google announces Firebase Genkit with Ollama support]]></title>
      <description><![CDATA[At Google I/O 2024, Google announced Ollama support in Firebase Genkit, a new open-source framework for developers to build, deploy, and monitor production-ready AI-powered apps.]]></description>
      <link>https://ollama.com/blog/firebase-genkit</link>
      <pubDate>Mon, 20 May 2024 00:00:00 +0000</pubDate>
      <guid isPermaLink="true">https://ollama.com/blog/firebase-genkit</guid>
    </item>
    <item>
      <title><![CDATA[Llama 3 is not very censored]]></title>
      <description><![CDATA[Compared to Llama 2, Llama 3 feels much less censored. Meta has substantially lowered false refusal rates: Llama 3 refuses fewer than a third of the prompts that Llama 2 previously refused.]]></description>
      <link>https://ollama.com/blog/llama-3-is-not-very-censored</link>
      <pubDate>Fri, 19 Apr 2024 00:00:00 +0000</pubDate>
      <guid isPermaLink="true">https://ollama.com/blog/llama-3-is-not-very-censored</guid>
    </item>
    <item>
      <title><![CDATA[Llama 3]]></title>
      <description><![CDATA[Llama 3 is now available to run on Ollama. This model is the next generation of Meta's state-of-the-art large language model, and is the most capable openly available LLM to date.]]></description>
      <link>https://ollama.com/blog/llama3</link>
      <pubDate>Thu, 18 Apr 2024 00:00:00 +0000</pubDate>
      <guid isPermaLink="true">https://ollama.com/blog/llama3</guid>
    </item>
    <item>
      <title><![CDATA[Embedding models]]></title>
      <description><![CDATA[Embedding models are available in Ollama, making it easy to generate vector embeddings for use in search and retrieval augmented generation (RAG) applications.]]></description>
      <link>https://ollama.com/blog/embedding-models</link>
      <pubDate>Mon, 08 Apr 2024 00:00:00 +0000</pubDate>
      <guid isPermaLink="true">https://ollama.com/blog/embedding-models</guid>
    </item>
    <item>
      <title><![CDATA[Ollama now supports AMD graphics cards]]></title>
      <description><![CDATA[Ollama now supports AMD graphics cards in preview on Windows and Linux, allowing all of Ollama's features to be accelerated by AMD GPUs.]]></description>
      <link>https://ollama.com/blog/amd-preview</link>
      <pubDate>Thu, 14 Mar 2024 00:00:00 +0000</pubDate>
      <guid isPermaLink="true">https://ollama.com/blog/amd-preview</guid>
    </item>
    <item>
      <title><![CDATA[Windows preview]]></title>
      <description><![CDATA[Ollama is now available on Windows in preview, making it possible to pull, run and create large language models in a new native Windows experience. Ollama on Windows includes built-in GPU acceleration, access to the full model library, and serves the Ollama API including OpenAI compatibility.]]></description>
      <link>https://ollama.com/blog/windows-preview</link>
      <pubDate>Thu, 15 Feb 2024 00:00:00 +0000</pubDate>
      <guid isPermaLink="true">https://ollama.com/blog/windows-preview</guid>
    </item>
    <item>
      <title><![CDATA[OpenAI compatibility]]></title>
      <description><![CDATA[Ollama now has initial compatibility with the OpenAI Chat Completions API, making it possible to use existing tooling built for OpenAI with local models via Ollama.]]></description>
      <link>https://ollama.com/blog/openai-compatibility</link>
      <pubDate>Thu, 08 Feb 2024 00:00:00 +0000</pubDate>
      <guid isPermaLink="true">https://ollama.com/blog/openai-compatibility</guid>
    </item>
    <item>
      <title><![CDATA[Vision models]]></title>
      <description><![CDATA[New vision models are now available: LLaVA 1.6, in 7B, 13B and 34B parameter sizes. These models support higher resolution images, improved text recognition and logical reasoning.]]></description>
      <link>https://ollama.com/blog/vision-models</link>
      <pubDate>Fri, 02 Feb 2024 00:00:00 +0000</pubDate>
      <guid isPermaLink="true">https://ollama.com/blog/vision-models</guid>
    </item>
    <item>
      <title><![CDATA[Python & JavaScript Libraries]]></title>
      <description><![CDATA[The initial versions of the Ollama Python and JavaScript libraries are now available, making it easy to integrate your Python, JavaScript, or TypeScript app with Ollama in a few lines of code. Both libraries include all the features of the Ollama REST API, are familiar in design, and are compatible with new and previous versions of Ollama.]]></description>
      <link>https://ollama.com/blog/python-javascript-libraries</link>
      <pubDate>Tue, 23 Jan 2024 00:00:00 +0000</pubDate>
      <guid isPermaLink="true">https://ollama.com/blog/python-javascript-libraries</guid>
    </item>
    <item>
      <title><![CDATA[Building LLM-Powered Web Apps with Client-Side Technology]]></title>
      <description><![CDATA[Recreate one of the most popular LangChain use cases with open-source, locally running software: a chain that performs Retrieval-Augmented Generation (RAG) and allows you to “chat with your documents.”]]></description>
      <link>https://ollama.com/blog/building-llm-powered-web-apps</link>
      <pubDate>Fri, 13 Oct 2023 00:00:00 +0000</pubDate>
      <guid isPermaLink="true">https://ollama.com/blog/building-llm-powered-web-apps</guid>
    </item>
    <item>
      <title><![CDATA[Ollama is now available as an official Docker image]]></title>
      <description><![CDATA[Ollama can now run with Docker Desktop on the Mac, and run inside Docker containers with GPU acceleration on Linux.]]></description>
      <link>https://ollama.com/blog/ollama-is-now-available-as-an-official-docker-image</link>
      <pubDate>Thu, 05 Oct 2023 00:00:00 +0000</pubDate>
      <guid isPermaLink="true">https://ollama.com/blog/ollama-is-now-available-as-an-official-docker-image</guid>
    </item>
    <item>
      <title><![CDATA[Leveraging LLMs in your Obsidian Notes]]></title>
      <description><![CDATA[This post walks through how you can incorporate a local LLM using Ollama into Obsidian, or potentially any note-taking tool.]]></description>
      <link>https://ollama.com/blog/llms-in-obsidian</link>
      <pubDate>Thu, 21 Sep 2023 00:00:00 +0000</pubDate>
      <guid isPermaLink="true">https://ollama.com/blog/llms-in-obsidian</guid>
    </item>
    <item>
      <title><![CDATA[How to prompt Code Llama]]></title>
      <description><![CDATA[This guide walks through the different ways to structure prompts for Code Llama and its different variations and features including instructions, code completion and fill-in-the-middle (FIM).]]></description>
      <link>https://ollama.com/blog/how-to-prompt-code-llama</link>
      <pubDate>Sat, 09 Sep 2023 00:00:00 +0000</pubDate>
      <guid isPermaLink="true">https://ollama.com/blog/how-to-prompt-code-llama</guid>
    </item>
    <item>
      <title><![CDATA[Run Code Llama locally]]></title>
      <description><![CDATA[Meta's Code Llama is now available on Ollama to try.]]></description>
      <link>https://ollama.com/blog/run-code-llama-locally</link>
      <pubDate>Thu, 24 Aug 2023 00:00:00 +0000</pubDate>
      <guid isPermaLink="true">https://ollama.com/blog/run-code-llama-locally</guid>
    </item>
    <item>
      <title><![CDATA[Run Llama 2 uncensored locally]]></title>
      <description><![CDATA[This post gives some example comparisons of running the uncensored Llama 2 model versus the censored model.]]></description>
      <link>https://ollama.com/blog/run-llama2-uncensored-locally</link>
      <pubDate>Tue, 01 Aug 2023 00:00:00 +0000</pubDate>
      <guid isPermaLink="true">https://ollama.com/blog/run-llama2-uncensored-locally</guid>
    </item>
  </channel>
</rss>