
ggerganov/llama.cpp: release b7961

Source: GitHub AI Releases

AI Summary

This article announces release b7961 of the ggerganov/llama.cpp project, a C/C++ implementation of inference for LLaMA-family language models. The key points are:

1. The release adds F16 support for the GGML_OP_CEIL operation in the SYCL backend (SYCL is a Khronos standard for single-source C++ heterogeneous programming).
2. Pre-built binaries are provided for a range of platforms, including macOS (both Apple Silicon and Intel), iOS, Linux (Ubuntu x64 CPU, Vulkan, and s390x), Windows (CPU, CUDA 12, CUDA 13, Vulkan, SYCL, and HIP), and openEuler (x86 and aarch64).
3. The release is signed with GitHub's verified signature, attesting to the integrity of the published artifacts.

Overall, this release focuses on improving the performance and cross-platform compatibility of llama.cpp, making it accessible to a wider range of users and developers.

Original Description

<details open>

sycl: add F16 support for GGML_OP_CEIL (#19306)

* Fix SYCL CEIL operator
* sycl: implement GGML_OP_CEIL

</details>

**macOS/iOS:**

- [macOS Apple Silicon (arm64)](https://github.com/ggml-org/llama.cpp/releases/download/b7961/llama-b7961-bin-macos-arm64.tar.gz)
- [macOS Intel (x64)](https://github.com/ggml-org/llama.cpp/releases/download/b7961/llama-b7961-bin-macos-x64.tar.gz)
- [iOS XCFramework](https://github.com/ggml-org/llama.cpp/releases/download/b7961/llama-b7961-xcframework.zip)

**Linux:**

- [Ubuntu x64 (CPU)](https://github.com/ggml-org/llama.cpp/releases/download/b7961/llama-b7961-bin-ubuntu-x64.tar.gz)
- [Ubuntu x64 (Vulkan)](https://github.com/ggml-org/llama.cpp/releases/download/b7961/llama-b7961-bin-ubuntu-vulkan-x64.tar.gz)
- [Ubuntu s390x (CPU)](https://github.com/ggml-org/llama.cpp/releases/download/b7961/llama-b7961-bin-ubuntu-s390x.tar.gz)

**Windows:**

- [Windows x64 (CPU)](https://github.com/ggml-org/llama.cpp/releases/download/b7961/llama-b7961-b
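The download links above all follow one URL pattern, so fetching an artifact can be scripted. The sketch below assumes the macOS Apple Silicon tarball; the final run line is commented out because the model path (and the binary's location inside the tarball) are hypothetical and depend on your setup.

```shell
#!/bin/sh
# Build the release-asset URL from the tag and artifact name,
# following the pattern used by the links above.
TAG="b7961"
ASSET="llama-${TAG}-bin-macos-arm64.tar.gz"
URL="https://github.com/ggml-org/llama.cpp/releases/download/${TAG}/${ASSET}"
echo "$URL"

# Uncomment to download and unpack, then run with your own GGUF model
# (the model path below is a placeholder, not part of the release):
# curl -LO "$URL"
# tar -xzf "$ASSET"
# ./build/bin/llama-cli -m ./models/your-model.gguf -p "Hello"
```

The same pattern works for any of the other platform artifacts by substituting the asset name from the corresponding link.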
