The **ollama/ollama v0.17.0** release focuses on expanding the library of supported AI models and enhancing the user experience for running and integrating them.
Key points include:

* Support for a wider range of models such as **Kimi-K2.5, GLM-5, MiniMax, DeepSeek, gpt-oss, Qwen, and Gemma**.
* Tools for easy setup and integration with various applications, including **chat interfaces, code editors, libraries, and frameworks**.
* Highlights of Ollama's **REST API**, with client examples in **Python and JavaScript** for developers.
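As a minimal sketch of the REST API mentioned above: Ollama's server listens on `http://localhost:11434` by default, and the `/api/generate` endpoint accepts a JSON body with `model`, `prompt`, and `stream` fields. The helper names below (`build_payload`, `generate`) and the model name `gemma3` are illustrative choices, not part of the release notes.

```python
import json
import urllib.request

# Default endpoint for a locally running Ollama server.
OLLAMA_URL = "http://localhost:11434/api/generate"


def build_payload(model: str, prompt: str, stream: bool = False) -> bytes:
    """Serialize a request body for Ollama's /api/generate endpoint."""
    return json.dumps(
        {"model": model, "prompt": prompt, "stream": stream}
    ).encode("utf-8")


def generate(model: str, prompt: str) -> str:
    """Send a prompt to a local Ollama server and return the response text."""
    req = urllib.request.Request(
        OLLAMA_URL,
        data=build_payload(model, prompt),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        # With stream=False the server returns a single JSON object
        # whose "response" field holds the generated text.
        return json.loads(resp.read())["response"]


if __name__ == "__main__":
    # Calling generate() requires a running server with the model pulled,
    # e.g.: print(generate("gemma3", "Why is the sky blue?"))
    print(build_payload("gemma3", "hello").decode())
```

The same request shape works from JavaScript via `fetch`, or through the official Python and JavaScript client libraries the release notes point to.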
## What's Changed

* ggml: force flash attention off for grok by @rick-github in https://github.com/ollama/ollama/pull/15050
* mlx: fix KV cache snapshot memory leak by @jessegross in https://github.com/ollama/ollama/pull/15065
* mlxrunner: schedule periodic snapshots during prefill by @jessegross in https://github.com/ollama/ollama/pull/15058
* doc: update vscode doc by @hoyyeva in https://github.com/ollama/ollama/pull/15064
* launch: hide vs code by @ParthSareen in https://github.com/ollama/ollama/pull/15076

Full Changelog: