ggml-cpu: extend RVV quantization vec dot to higher VLENs #10
Open
taimur-10x wants to merge 76 commits into
Conversation
* ggml-cuda: add internal AllReduce provider for tensor parallelism
Introduces a NCCL-free AllReduce implementation for LLAMA_SPLIT_MODE_TENSOR
using a single-phase CUDA kernel that pipelines D2H copy, cross-GPU
handshake via pinned-memory volatile flags, and the reduction in one
kernel launch per GPU.
New files:
- ggml/src/ggml-cuda/comm.cuh — ggml_cuda_allreduce_provider enum
- ggml/src/ggml-cuda/allreduce.cuh — pipeline API declarations
- ggml/src/ggml-cuda/allreduce.cu — kernel + pipeline init/dispatch
ggml-cuda.cu changes:
- ggml_backend_cuda_comm_context gains ar_pipeline field
- Provider selection via GGML_CUDA_ALLREDUCE env var ("nccl" / "internal")
- INTERNAL provider initialises the pipeline at comm_init time
- Dispatch routes to ggml_cuda_ar_allreduce(); falls back to meta-backend
CPU reduce for unsupported sizes or GPU counts (> 2)
Current scope: 2 GPUs, FP32, tensors <= 256 KB. Notes in NOTES-allreduce.md.
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
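For orientation, the handshake flags described above live in pinned host memory that both GPUs (and the host) can access directly. A minimal sketch of how such a flag could be set up with the standard CUDA runtime calls; the struct and function names are illustrative, not the PR's actual identifiers:
```cpp
// Illustrative sketch only: allocate one arrival flag in pinned, mapped, portable host
// memory so that every GPU in the communicator can load/store it directly.
#include <cuda_runtime.h>

struct ar_flag {
    volatile int * host_ptr;   // host-side view (polled by the CPU)
    int          * dev_ptr;    // device-side view (written by the AllReduce kernel)
};

static bool ar_flag_alloc(ar_flag * flag) {
    void * host = nullptr;
    if (cudaHostAlloc(&host, sizeof(int), cudaHostAllocPortable | cudaHostAllocMapped) != cudaSuccess) {
        return false;
    }
    *(int *) host = 0;

    void * dev = nullptr;
    if (cudaHostGetDevicePointer(&dev, host, 0) != cudaSuccess) {
        cudaFreeHost(host);
        return false;
    }

    flag->host_ptr = (volatile int *) host;
    flag->dev_ptr  = (int *) dev;
    return true;
}
```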
* llama-bench: add --allreduce flag to select AllReduce provider
Adds --allreduce <auto|nccl|internal> to llama-bench (and via the shared
field pattern, consistent with other multi-value flags). Useful for
isolating hangs or regressions in tensor-parallel mode: pass --allreduce nccl
to force NCCL and bypass the internal provider.
Also fixes ggml_cuda_select_allreduce_provider() to treat an empty
GGML_CUDA_ALLREDUCE env var the same as unset (avoids spurious warning when
llama-bench sets it to "" for the "auto" case).
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
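A minimal sketch of the selection logic with the empty-equals-unset behaviour described above; the enum and function names are illustrative, not the exact helper in ggml-cuda:
```cpp
// Illustrative sketch: map GGML_CUDA_ALLREDUCE to a provider, treating "" like unset.
#include <cstdlib>
#include <cstring>

enum class ar_provider { AUTO, NCCL, INTERNAL };

static ar_provider select_allreduce_provider() {
    const char * env = std::getenv("GGML_CUDA_ALLREDUCE");
    if (env == nullptr || env[0] == '\0') {
        return ar_provider::AUTO;            // unset and "" behave the same: no warning
    }
    if (std::strcmp(env, "nccl")     == 0) { return ar_provider::NCCL;     }
    if (std::strcmp(env, "internal") == 0) { return ar_provider::INTERNAL; }
    return ar_provider::AUTO;                // unknown value: real code would warn here
}
```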
* llama-bench: rename --allreduce to --reduction-provider / -rp
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
* llama-bench: pass WARN/ERROR log messages through in non-verbose mode
The null log callback was silently dropping all messages. WARN and ERROR
should always be visible since they indicate legitimate issues (e.g. a
requested reduction provider not being available).
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
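The idea can be sketched as below; the log-level enum and callback signature here are stand-ins rather than the exact llama-bench logging API:
```cpp
// Illustrative callback: drop INFO/DEBUG when not verbose, always forward WARN/ERROR.
#include <cstdio>

enum bench_log_level { BENCH_LOG_DEBUG, BENCH_LOG_INFO, BENCH_LOG_WARN, BENCH_LOG_ERROR };

static void bench_log_callback(bench_log_level level, const char * text, void * user_data) {
    const bool verbose = *(const bool *) user_data;
    if (!verbose && level != BENCH_LOG_WARN && level != BENCH_LOG_ERROR) {
        return;              // previously every message was dropped here
    }
    fputs(text, stderr);     // WARN/ERROR stay visible even in non-verbose runs
}
```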
* cmake: improve NCCL detection for source-tree builds, add static/dynamic switch
FindNCCL.cmake now searches the cmake source-build layout used by the Windows
NCCL port (cmake/lib/Release for static, cmake/src/Release for dynamic import
lib) and also checks src/include for the generated nccl.h header.
New option GGML_CUDA_NCCL_STATIC (default OFF) selects static vs dynamic
linking and controls which paths and library names are searched.
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
* ggml-cuda: add AllReduce hang watchdog (GGML_CUDA_AR_WATCHDOG)
When compiled with -DGGML_CUDA_AR_WATCHDOG=ON, uses a debug kernel
variant that writes per-GPU spin diagnostics to pinned host memory.
A host-side blocking poll (cudaEventQuery + volatile reads) detects
hangs and logs WARN with the last observed arrival counters and spin
counts, controlled by GGML_CUDA_AR_WATCHDOG (ms timeout) and
GGML_CUDA_AR_MAX_SPIN (kernel bailout) env vars at runtime.
Zero overhead on the production path — all debug code is behind #ifdef.
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
* ggml-cuda: fix intermittent AllReduce hang on Blackwell PCIe
Add __threadfence_system() before the arrival signal write in
signal_set to ensure D2H data is globally visible before the peer
observes the arrival flag. Without this fence, the peer could enter
Phase 3 host reads before the data had fully landed, causing an
intermittent deadlock on RTX 5090 (Blackwell, PCIe-only).
Also redesign the watchdog from a blocking dispatch-thread poll to a
non-blocking background thread, eliminating the ~20ms per-slot
latency the old design added.
Verified: 30/30 soak test runs clean at ~50 t/s (previously ~1-in-15
hang rate).
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
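A minimal device-side sketch of the ordering requirement, assuming a producer that stages data in pinned host memory before signalling (illustrative, not the actual signal_set implementation):
```cpp
// Illustrative producer side: publish data into pinned host memory, then signal arrival.
// Without __threadfence_system() the peer could observe the flag before the data.
__device__ void publish_and_signal(float * host_data, const float * local, int n,
                                   volatile int * arrival_flag, int token) {
    for (int i = threadIdx.x; i < n; i += blockDim.x) {
        host_data[i] = local[i];            // D2H stores into pinned memory
    }
    __syncthreads();
    if (threadIdx.x == 0) {
        __threadfence_system();             // order the stores above before the flag write
        *arrival_flag = token;              // the peer GPU spins on this value
    }
}
```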
* ggml-cuda: fix watchdog shutdown ordering and pipeline_free drain
- Stop watchdog thread BEFORE destroying GPU resources (events, streams)
to prevent polling destroyed handles → spurious "busy" readings
- Add cudaStreamSynchronize in pipeline_free to drain in-flight kernels
before freeing pinned host buffers they may still be reading
- Sleep-first watchdog polling: no +0ms noise, only logs when a kernel
is genuinely stuck past the poll interval
- Check wdog_stop in both outer and inner loops so join() returns
promptly instead of draining the entire queue
- Add Phase 3 breadcrumbs to debug[3] for hang localization
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
* ggml-cuda: replace event-based watchdog with per-GPU ring buffer
Completely rework the GGML_CUDA_AR_WATCHDOG system:
- Replace the shared debug_buf + event-polling + queue design with
per-GPU ring buffers in pinned host memory
- Kernel writes a debug record only on spin-limit bailout: claims a
ring slot via atomicAdd (single-GPU host atomics work on RTX 5090),
writes fields, fences, sets completion flag, then all threads exit
- Watchdog thread simply polls ring head counters every 1ms and prints
any new complete records — no CUDA event queries, no mutex, no queue
- Zero overhead on the dispatch path (no queue posting, no memset)
- Watchdog shutdown returns within ~1ms (atomic bool, no drain)
- On bailout the kernel skips Phase 3 entirely and exits cleanly
Verified: 20/20 prefill soak test clean at ~1112 t/s, no hangs.
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
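A rough sketch of the kernel-side record claim described above; the struct layout and names are invented for illustration, the real ring lives in allreduce.cu:
```cpp
// Illustrative ring layout and kernel-side report. The ring sits in pinned host memory,
// one per GPU; the host watchdog polls the head counter every ~1ms.
#define AR_WDOG_RING_SIZE 64

struct ar_wdog_record {
    unsigned int token;               // which AllReduce round bailed out
    unsigned int spins;               // spin count at bailout
    volatile unsigned int complete;   // written last, after a fence
};

struct ar_wdog_ring {
    unsigned int   head;              // bumped with atomicAdd by the kernel
    ar_wdog_record rec[AR_WDOG_RING_SIZE];
};

__device__ void ar_wdog_report(ar_wdog_ring * ring, unsigned int token, unsigned int spins) {
    const unsigned int slot = atomicAdd(&ring->head, 1u) % AR_WDOG_RING_SIZE;  // claim a slot
    ring->rec[slot].token = token;
    ring->rec[slot].spins = spins;
    __threadfence_system();           // make the fields visible before flagging completion
    ring->rec[slot].complete = 1;     // host sees head advance, waits for this, then prints
}
```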
* fix: normalize line endings to LF (undo Windows CRLF conversion)
Five files were inadvertently converted to CRLF by the Windows
development environment, causing every line to show as changed in
diffs against master.
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
* .gitattributes: force LF line endings to prevent Windows CRLF conversion
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
* ggml-cuda: move GGML_CUDA_AR_WATCHDOG from CMake option to local define
The watchdog is development-only; a global CMake option is overkill.
Move the toggle to a #define at the top of allreduce.cu (set to 0 by
default) and remove the option from ggml/CMakeLists.txt and the CUDA
CMakeLists.txt add_compile_definitions block.
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
* unify kernel debug paths
* use __threadfence_system explicitly (not in ggml_cuda_ar_signal_set)
* preferentially use internal reduction for <=2 GPUs
* templatize the main kernel to support fp16/bf16
* restore llama-bench.cpp changes
* revert CMakeLists changes
* remove notes from repo
* remove dead warmup code
* fix comments
* improve reduction provider fallback code
* add messages for allreduce fallback
* rework reduction provider init to not call ncclCommInitAll if using the internal provider
* fix case where a given tensor has not been computed
* add chunked mode to the kernel for unlimited vector size
* rework a few checks/fallbacks
* various small cleanups
* allow disabling CUDA reductions completely (falling back to the non-CUDA butterfly mode)
* simplify reduction provider selection
* minor simplifications
* more cleanups/fixes
* prototype alternate path for large reductions
* chunked version of large reduction path
* use bf16 for large reductions
* experimental reduction using cudaMemcpyPeerAsync (slightly slower)
* revert experimental change
* add combined conversion/reduction kernel
* add bf16 wire format for single kernel mode
* experimental on-stream small reduction kernel
* double buffer arrival slots, use token (incrementing) method
* double buffer host_buf for small reductions
* put in waits for use of host_mem in large reduction case (prevents stomping on in-use memory)
* remove watchdog code
* various cleanups / dead code removal
* fix fp16 mode
* fix some comments/logging statements
* use increasing token scheme for arrival signals (see the sketch below)
* add top-level comment to allreduce.cu
* improve top-level comment in allreduce.cu
* fix comments in ggml_cuda_ar_kernel
* improve event handling for hostmem buffer usage tracking
* change ev_pool to fixed 2D array
* add chunked memcpy fallback for extra-large reductions (>32 MB)
* change thresholds for copy-engine path and bf16 demotion
* multi-block kernel test
* more fine-tuning for chunk-size, etc.
* various fixes for PR review
* more PR fixes
* fix semantics of all host mappings
* require ampere+
* small cleanups
* properly use host pointer for src/dst in cudaMemcpy calls
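For the "use increasing token scheme for arrival signals" item above, a minimal sketch of the idea (illustrative, not the PR's kernel code): each round writes a strictly increasing token instead of toggling a boolean flag, so no reset phase is needed and the arrival slots can be double buffered.
```cpp
// Illustrative incrementing-token arrival scheme: the peer waits for "flag >= token",
// so the flag never has to be cleared between AllReduce rounds.
__device__ void signal_arrival(volatile unsigned int * my_arrival, unsigned int token) {
    __threadfence_system();          // order prior data stores before the signal
    *my_arrival = token;             // token increments every round
}

__device__ void wait_for_peer(volatile unsigned int * peer_arrival, unsigned int token) {
    while (*peer_arrival < token) {
        // spin; the real kernel bounds this and reports via the watchdog on bailout
    }
}
```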
* allreduce: lazy-init the internal pipeline on first use
A config that lives entirely on NCCL never needs the chunked-kernel
pipeline (host_buf, host_large, dev_tmp, streams, events, arrival ring).
Defer pipeline creation to the first try_allreduce_internal call using the
same std::call_once pattern as ensure_nccl, so those resources stay
unallocated when only NCCL is in use.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
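A minimal sketch of the std::call_once lazy-init pattern the commit refers to; all names here are placeholders, not the PR's identifiers:
```cpp
// Illustrative lazy init: the pipeline is created at most once, on first use.
#include <mutex>

struct ar_pipeline {
    int n_devices;
    // real code: host_buf, host_large, dev_tmp, streams, events, arrival ring, ...
};

static ar_pipeline * ar_pipeline_init(int n_devices) {
    return new ar_pipeline{n_devices};    // nullptr on failure in the real implementation
}

struct ar_comm_context {
    std::once_flag pipeline_once;
    ar_pipeline *  pipeline  = nullptr;
    int            n_devices = 2;
};

static ar_pipeline * ensure_pipeline(ar_comm_context & ctx) {
    std::call_once(ctx.pipeline_once, [&ctx] {
        ctx.pipeline = ar_pipeline_init(ctx.n_devices);   // runs once, on first internal AR
    });
    return ctx.pipeline;                                  // nullptr means "fall back"
}
```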
* allreduce: assert n_backends == 2 instead of soft-fallback
ar_pipeline_init already requires n_devices == 2 and bails before any AR can
get here, so by the time we reach try_allreduce_internal we know we have
exactly two backends. Replace the runtime-debug-log fallback with a hard
assert.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* rework reduction provider selection. internal/nccl is OS dependent; most fallbacks are removed
* remove unneeded Turing arch check (llama.cpp doesn't even compile pre-Turing anyway)
* allreduce: ASCII-only comments and ggml_cuda_cast for value conversions
Replace non-ASCII characters in comments (em dashes, right arrows) with
ASCII equivalents (--, ->) so the source stays in the ggml/upstream norm.
In the kernel-side code, replace static_cast<Twire>/static_cast<Tdst>
with ggml_cuda_cast<...> so the BF16 conversions go through the fast
__float2bfloat16 / __bfloat162float intrinsics from convert.cuh. Pure
pointer and integer casts stay as static_cast.
Also drops two stray garbage tokens that snuck in from earlier merges
(a duplicated 'return ok; }' tail in allreduce.cu and a leftover '_reg)'
fragment in ggml-cuda.cu).
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* allreduce: use ggml_cuda_memcpy_1 for the chunked-kernel vector copies
The chunked kernel's two 16-byte register<->host transfers (Phase 1 store
and Phase 3 load) used reinterpret_cast<float4 *> on both sides. Replace
with ggml_cuda_memcpy_1<sizeof(wire)>, which is the canonical helper for
this pattern and emits the same int4 LD/ST under the hood.
Conformance passes; 5x reruns of 70b internal pp512 show 1832-1836 t/s,
matching the prior matrix value of 1831 t/s -- no perf change as expected.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
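For context, the 16-byte transfer in question boils down to one int4 load and one int4 store. A generic stand-in for a byte-count-templated copy helper might look like this (illustrative, not the ggml-cuda helper itself):
```cpp
// Generic stand-in for a byte-count-templated device copy: for 16 bytes it emits a single
// int4 load/store pair, which is what the explicit float4 reinterpret casts did before.
template <int nbytes>
__device__ __forceinline__ void copy_bytes(void * dst, const void * src) {
    if constexpr (nbytes == 16) {
        *reinterpret_cast<int4 *>(dst) = *reinterpret_cast<const int4 *>(src);
    } else {
        const char * s = (const char *) src;
        char       * d = (char *)       dst;
#pragma unroll
        for (int i = 0; i < nbytes; ++i) {
            d[i] = s[i];
        }
    }
}
```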
* allreduce: assert cuda_ctx->device matches the pipeline's device
Both ggml_cuda_ar_pipeline and ggml_backend_cuda_context carry the device
they were created for; if they ever disagree, every cuda call that follows
runs on the wrong device. Add GGML_ASSERT at each cuda_ctx retrieval site
in the AR path so the misuse fails fast rather than silently corrupting.
Also: rename __nv_bfloat16 -> nv_bfloat16 (typedef alias) for consistency
with the rest of the file, and tighten one cudaGetLastError check to fire
only after the to_bf16 call that can actually fail.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* allreduce: expand one-liner for loops to braced bodies
Code-style preference -- match the rest of the file by writing every for
loop with the body on its own braced line. Three sites in the copy-engine
typed dispatch.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* allreduce: rename template parameters Tdst/Twire/Tsrc -> T_dst/T_wire/T_src
Code-style preference per PR review -- T_dst/T_wire/T_src is more
consistent with surrounding code. Whole-word rename across all 58 sites
in allreduce.cu (kernel definitions, internal uses, and comment text).
Realigned the parameter columns in three function signatures whose
T_src/T_dst lines shifted by 1 char relative to their non-templated
neighbors.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* allreduce: drop hyphen in 'chunked-kernel' across comments
Per PR review feedback -- 'chunked kernel' (no hyphen) reads more naturally
in running prose, especially for ESL readers. Pure comment-only change;
all 10 occurrences in allreduce.cu updated.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* allreduce: use ggml_cuda_get_max_cpy_bytes() instead of hardcoded 16
The chunked kernel hardcoded a 16-byte vector unit; replace with the
ggml_cuda_get_max_cpy_bytes() helper that fattn-common.cuh uses for the
same purpose, so ELEMS_PER_VEC self-adjusts to the arch's widest
single-instruction copy.
Perf-neutral on supported targets (Volta+ returns 16).
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* ggml-cuda: PR review fixes -- annotate #endif, fix stale comment, assert nbytes alignment
Three separate but minor changes from PR ggml-org#22299 review feedback:
1. Annotate the five GGML_USE_NCCL #endif lines with the matching condition
so the pairing is visible without scrolling back.
2. The comment block on ggml_backend_cuda_comm_context claimed NCCL is
lazy-initialised; that was true at one point but the dispatch refactor
(727b141) made both NCCL and the internal pipeline eager. Rewrite
the comment to match current behaviour.
3. Assert in ggml_backend_cuda_comm_allreduce_internal that the tensor's
byte size is a 16-byte multiple. The chunked-kernel issues full-width
vector loads/stores, so this is a precondition; tensor-parallel splits
of hidden-dim-multiples satisfy it trivially, but a hard assert turns
any caller-side bug into a clear failure rather than UB.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
device's new AR
records its ev.ker -- otherwise the second device's wait sees the first
device's just-recorded event (the in-flight new AR) and creates a circular
dependency with the in-kernel peer signal. Two-pass dispatch (all waits,
then all launches) avoids this.
Bump POOL_SIZE 2 -> 8 (small memory cost, more breathing room for the
GPU's view of the event chain) and add a runtime env override for the
hybrid kernel chunk size (GGML_CUDA_AR_HYBRID_CHUNK_BYTES) for tuning.
One-shot stderr diagnostic at first AR prints the chosen path + sizing.
Result on 2x RTX 5090 Linux, 70b ub_sweep:
ub=64 (1 MB AR): 913 -> 1036 t/s (+13.5% vs old, +1.8% vs NCCL)
ub=128 (2 MB AR): 1056 -> 1181 (+11.9%, +3.7% vs NCCL)
ub=256 (4 MB AR): 1212 -> 1424 (+17.5%, +3.5% vs NCCL)
Internal now beats NCCL at every size (+1.8% to +15.6%), recovering all
ground in the 1-4 MB regime that was previously a 10-12% loss.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* simplify the init logic
* address some other PR requests
* ggml-cuda: stub internal AllReduce on HIP/MUSA, drop pre-Ampere mention, gate NCCL fallback warning on !HIP
The internal AllReduce relies on cudaHostAllocPortable/Mapped,
cudaHostGetDevicePointer, and __nanosleep -- none of which the HIP or
MUSA shims expose -- so wrap the implementation in
!defined(GGML_USE_HIP) && !defined(GGML_USE_MUSA) and provide
nullptr/no-op/false stubs in the #else branch. The dispatcher already
treats a null pipeline as init failure and silently falls back to the
meta backend's generic AllReduce, so HIP/MUSA builds compile clean and
behave correctly without further call-site changes.
PR review follow-ups:
- drop "or pre-Ampere?" from the internal-init failure warning -- the
kernel doesn't require Ampere or newer.
- guard the "NCCL not compiled in" fallback warning behind
!defined(GGML_USE_HIP); the suggestion to install NCCL only makes
sense on NVIDIA builds.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
hind, now +6-8% ahead at ub=1024-4096.
Perplexity (32 chunks) matches NCCL bit-for-bit (3.4044 vs 3.4043).
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* allreduce: guard __nanosleep on Volta+ and reject pre-Volta devices at init
__nanosleep is the only Volta-specific intrinsic in the kernel; wrap it
in #if __CUDA_ARCH__ >= GGML_CUDA_CC_VOLTA / NO_DEVICE_CODE so the file
still compiles cleanly when targeting older arches (the dispatcher's
init check below ensures the kernel is never actually launched on
pre-Volta).
Add a per-device compute-capability check in pipeline_init that returns
nullptr if any device is below sm70. The dispatcher already treats
nullptr as init failure and silently falls back to the meta backend's
generic AllReduce.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
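A minimal sketch of guarding a Volta-only intrinsic by compute capability (illustrative; the real code pairs this with the init-time device check described above):
```cpp
// Illustrative arch guard for the Volta-only intrinsic; the real code additionally
// refuses to create the pipeline on devices below sm70, so this path never runs there.
__device__ __forceinline__ void ar_backoff(unsigned int ns) {
#if defined(__CUDA_ARCH__) && __CUDA_ARCH__ >= 700
    __nanosleep(ns);     // available from compute capability 7.0 (Volta) onwards
#else
    (void) ns;           // pre-Volta: no __nanosleep; plain spin (or NO_DEVICE_CODE)
#endif
}
```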
* allreduce: fix CI -Werror warnings (sign-compare, format, restrict alias, maybe-uninitialized)
The CUDA CI builds with -Werror -Wsign-compare -Wformat -Wrestrict
-Wmaybe-uninitialized. Address each:
- n_devices is size_t; change `int i; i < n_devices` to size_t in the
three init loops, and the matching GGML_LOG_INFO format from %d to %zu.
- ggml_cuda_ar_kernel was launched with sendbuf == recvbuf (in-place
reduction), so the __restrict__ qualifiers on those parameters were
technically UB. Drop __restrict__ from sendbuf and recvbuf; an A/B
sweep showed <0.6% perf delta (within noise) on Linux.
- The buf/src/dst pointer arrays in ggml_cuda_ar_allreduce and the
per-iteration arrays in ggml_cuda_ar_allreduce_copy_outer were
declared with size GGML_CUDA_MAX_DEVICES but the loop only writes
indices [0, n_devices); zero-initialise so the compiler sees the
tail elements as defined.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* ggml-cuda: drop unused-function warning by guarding try_allreduce_nccl behind GGML_USE_NCCL
The only call site (in init_nccl) is already inside #ifdef GGML_USE_NCCL,
so the function is unreferenced in non-NCCL builds and trips
nvcc's -Werror=unused-function check. Move the guard from inside the
function body to around the entire definition.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
---------
Co-authored-by: Claude Sonnet 4.6 <noreply@anthropic.com>
Add missing `#include <mutex>` in ggml-backend-device.cpp. Fixes: ggml-org#22809 Signed-off-by: Oliver Walsh <owalsh@redhat.com>
* server: Never return 0.0 post-sampling probabilities * backend sampling: support returning post-sampling probs
* add im2col_3d * format code * update the ops.md
* convert : add image break token fallback
This commit adds an image_break_token_id fallback for mistral where the
config contains a image_break_token_id of -1:
```console
"vision_encoder": {
"image_token_id": 10,
"image_break_token_id": -1,
...
```
But the tokenizer.json has this token:
```console
115 "id": 12,
116 "content": "[IMG_BREAK]",
117 "single_word": false,
118 "lstrip": false,
119 "rstrip": false,
120 "normalized": false,
121 "special": true
122 },
```
If we look in convert_hf_to_gguf.py we have:
```python
elif self.is_mistral_format:
# hparams is already vision config here so norm_eps is only defined in global_config.
self.hparams["norm_eps"] = self.global_config.get("norm_eps", None)
assert self.hparams["norm_eps"] is not None, "norm_eps not found in params.json"
if self.use_break_tok:
self.img_break_tok_id = self.find_vparam(["image_break_token_id"])
```
The motivation for this is that currently converting this model
results in the following error:
```console
load_hparams: model size: 5131.60 MiB
load_hparams: metadata size: 0.15 MiB
clip_init: failed to load model 'models/mmproj-Mistral-Medium-3.5-128B.gguf': operator(): unable to find tensor v.token_embd.img_break
mtmd_init_from_file: error: Failed to load CLIP model from models/mmproj-Mistral-Medium-3.5-128B.gguf
Failed to load vision model from models/mmproj-Mistral-Medium-3.5-128B.gguf
```
With this fallback the model loads successfully.
Resolves: ggml-org#22901
* Revert "convert : add image break token fallback"
This reverts commit 292e40c.
* convert : add image break token fallback
This commit adds an image_break_token_id fallback for mistral where the
config contains a image_break_token_id of -1:
```console
"vision_encoder": {
"image_token_id": 10,
"image_break_token_id": -1,
...
```
But the tokenizer.json has this token:
```console
115 "id": 12,
116 "content": "[IMG_BREAK]",
117 "single_word": false,
118 "lstrip": false,
119 "rstrip": false,
120 "normalized": false,
121 "special": true
122 },
```
If we look in convert_hf_to_gguf.py we have:
```python
elif self.is_mistral_format:
# hparams is already vision config here so norm_eps is only defined in global_config.
self.hparams["norm_eps"] = self.global_config.get("norm_eps", None)
assert self.hparams["norm_eps"] is not None, "norm_eps not found in params.json"
if self.use_break_tok:
self.img_break_tok_id = self.find_vparam(["image_break_token_id"])
```
The motivation for this is that currently converting this model
results in the following error:
```console
load_hparams: model size: 5131.60 MiB
load_hparams: metadata size: 0.15 MiB
clip_init: failed to load model 'models/mmproj-Mistral-Medium-3.5-128B.gguf': operator(): unable to find tensor v.token_embd.img_break
mtmd_init_from_file: error: Failed to load CLIP model from models/mmproj-Mistral-Medium-3.5-128B.gguf
Failed to load vision model from models/mmproj-Mistral-Medium-3.5-128B.gguf
```
With this fallback the model loads successfully.
Co-authored-by: Pascal <admin@serveurperso.com>
Resolves: ggml-org#22901
* convert : allow zero value for img_break_tok_id
Before, we relied on a transient import from `cub/cub.cuh`, which is bad practice, as cub may not always expose cuda/iterator
…#22938) This commit updates the command line arguments to use the correct names and values which are now required. The motivation for this change is that currently running the example command as is will generate the following errors: ```console error while handling argument "--color": error: unknown value for --color: '--sampling-seq' usage: -co, --color [on|off|auto] Colorize output to distinguish prompt and user input from generations ('on', 'off', or 'auto', default: 'auto') 'auto' enables colors when output is to a terminal error while handling argument "-fa": error: unknown value for --flash-attn: '--temp' usage: -fa, --flash-attn [on|off|auto] set Flash Attention use ('on', 'off', or 'auto', default: 'auto') (env: LLAMA_ARG_FLASH_ATTN) error while handling argument "--draft-max": the argument has been removed. use --spec-draft-n-max or --spec-ngram-mod-n-max usage: --draft, --draft-n, --draft-max N the argument has been removed. use --spec-draft-n-max or --spec-ngram-mod-n-max (env: LLAMA_ARG_DRAFT_MAX) error while handling argument "--draft-min": the argument has been removed. use --spec-draft-n-min or --spec-ngram-mod-n-min usage: --draft-min, --draft-n-min N the argument has been removed. use --spec-draft-n-min or --spec-ngram-mod-n-min (env: LLAMA_ARG_DRAFT_MIN) ```
* [ggml-virtgpu] Add a GHA build check * Apply suggestions from code review Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com> --------- Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>
* spec : refactor * spec : drop support for incompatible vocabs * spec : update common_speculative_init() * cont : pass seq_id * cont : dedup ctx_seq_rm_type * server : sketch the ctx_dft decode loop * server : draft prompt cache and checkpoints * server : improve ctx names * server, spec : transition to unified spec context * cont : sync main and drft contexts * cont : async drft eval when possible * cont : handle non-ckpt models * cont : pass correct n_past for drafting * cont : process images throught the draft context * spec : handle draft running out of context * server : fix mtmd draft processing * server : fix URL for draft model * server : add comment * server : clean-up + dry * speculative-simple : update * spec : fix n_past type * server : fix slot ctx_drft ptr * tools : update readme * naming : improve consistency * spec : refactor for multi-sequence speculative context * cont : prepare params * cont : prepare params * spec : support parallel drafts * server : support parallel drafting * llama : reuse device buffers when possible * server, spec : clean-up * cont : clean-up * cont : minor * spec : reset `drafting` flag at the end * spec : introduce `common_speculative_process()` * spec : allow for multiple spec types (chain of speculators) * replace old type field of type common_speculative_type in the common_params_speculative struct with a vector to allow multiple types to be specified * introduce common_get_enabled_speculative_impls(const std::vector<enum common_speculative_type>) to figure out which implementations the user has enabled * introduce common_speculative_type_from_names(const std::vector<std::string> & names) to parse the already user provided spec types * all speculators run sequentially, best one wins (we verify its drafted tokens) * maximize expected accepted tokens for current round by calculating the product between the probability of accepting current token (n_acc_tokens / n_gen_drafts) and the draft's length --------- Co-authored-by: Petros Sideris <petros.sideris@nokia.com>
* docs: fix metrics endpoint description in server README Required model query parameter for router mode described. Removed metrics: - llamacpp:kv_cache_usage_ratio - llamacpp:kv_cache_tokens Added metrics: - llamacpp:prompt_seconds_total - llamacpp:tokens_predicted_seconds_total - llamacpp:n_decode_total - llamacpp:n_busy_slots_per_decode * server: fix metrics type for n_busy_slots_per_decode metric
* cuda: tighten snake fusion type checks for all operands (defensive, sync vulkan) * cuda: reject snake fusion when ne[2] or ne[3] > 1 (mirror vulkan PR review) * cuda: merge type_ok and types_ok into a single types_ok (address am17an review) * cuda: filter ADD/SUB/MUL/DIV in supports_op to F32/F16 bin_bcast only dispatches F32/F16 type triplets, mirror the vulkan filter so unsupported types fall back through cpy instead of aborting. * test-backend-ops: extend snake_fuse to rank-4 with ne[2]/ne[3] > 1 cases
`im2col_cuda` and `im2col_3d_cuda` both dispatch with `block_nums.y = OW`. CUDA caps grid Y at 65535. Conv1d encoders on raw 16 kHz audio with T > 65535 (~ 4 s) trip the limit -- e.g. SEANet at 11 s lands at OW = 176000 -- and the launch returns `invalid configuration argument`. Clamp `block_nums.y` to `MIN(OW, MAX_GRIDDIM_Y)` and loop inside the kernel with stride `MAX_GRIDDIM_Y`. Same in-kernel stride pattern already used for the z axis (`MAX_GRIDDIM_Z`). Both 2D `im2col_kernel` and 3D `im2col_3d_kernel` need the same fix. Bit-identical for OW <= 65535 (single iteration of the new outer loop). Tested on T4 / Jetson Orin with a SEANet encoder running on 11 s / 16 kHz audio (im2col reaching OW ~ 176000); pre-fix launch returns `invalid configuration argument`, post-fix runs to completion. Existing test-backend-ops im2col cases unchanged.
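The clamp-and-stride pattern described in that fix can be sketched generically as follows (a simplified kernel and launcher, not the actual im2col code):
```cpp
// Generic sketch of the fix: clamp gridDim.y to the hardware cap and grid-stride over y
// inside the kernel so every logical column index is still covered.
#include <cuda_runtime.h>

#define MAX_GRIDDIM_Y 65535

__global__ void columns_kernel(float * dst, const float * src, int n_cols, int col_len) {
    for (int col = blockIdx.y; col < n_cols; col += gridDim.y) {      // y grid-stride loop
        const int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < col_len) {
            dst[(size_t) col * col_len + i] = src[(size_t) col * col_len + i];
        }
    }
}

static void launch_columns(float * dst, const float * src, int n_cols, int col_len, cudaStream_t stream) {
    const dim3 block(256);
    const dim3 grid((col_len + block.x - 1) / block.x,
                    n_cols < MAX_GRIDDIM_Y ? n_cols : MAX_GRIDDIM_Y); // clamp; kernel strides
    columns_kernel<<<grid, block, 0, stream>>>(dst, src, n_cols, col_len);
}
```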
* Q4_1 MoE CLC pass sanity check * remove unnecessary code * opencl: remove unnecessary asserts and reformat * opencl: fix supports_op for q4_1 moe * q4_1 moe is supported by Adreno with certain shapes --------- Co-authored-by: Li He <lih@qti.qualcomm.com>
…gml-org#22711) * metal : promote mul_mv/mul_mm batch divisors to function constants * metal : take op directly in get_pipeline_mul_mv_ext
…22832) * convert : add split() method to LoraTorchTensor * Fix python type-check * Fix flake8 Lint * fix: handle positional dim arg in torch.split dispatch * Fix type-check again * Fix type-checks * Remove unit test per reviewers feedback * work around ty deficiency --------- Co-authored-by: Sigbjørn Skjæret <sigbjorn.skjaeret@scala.com>
* mimo-v2.5: vision support * mimo-v2.5: use fused qkv for vision * mimi-v2.5: fix f16 vision overflow * mimo-v2.5: comment cleanups * mimo-v2.5: Flash doesn't have mmproj more cleanup remember to use filter_tensors * mimo-v2.5: fix trailing whitespace
…s for Xe2 and newer (ggml-org#22461) * refactor * Use l_warptile only when coopamt is available for BF16
* working llama-eval mc and math suite * multi source llama-eval * Add readme * add checkpointing * examples: add llama-server simulator for testing eval scripts Add a standalone Python script that simulates a llama-server HTTP endpoint for testing the eval script. The simulator: - Implements /v1/chat/completions endpoint with OpenAI-compatible format - Loads AIME dataset from HuggingFace with local caching - Uses Levenshtein distance for intelligent question matching - Supports configurable success rate for correct/wrong answer generation - Provides debug logging for troubleshooting Also includes test scripts and documentation for testing and understanding the simulator functionality. * examples: refactor test-simulator.sh for better readability Extract repeating question string into TEST_QUESTION variable and create make_request() helper function to reduce code duplication. Add proper error handling for error responses. * docs: update llama-eval-discussion.md with session work summary Add summary of llama-server-simulator implementation work including features, testing results, technical decisions, and refactoring. * examples: add simplified llama-eval-new.py for AIME evaluation - Create new simplified evaluation script focused only on AIME - Implement EvalState and Processor dataclasses for structured state management - Add real-time feedback showing correct/incorrect status per case - Abstract grading interface for external grader support - Use structured JSON output for eval state - Apply HuggingFace dataset caching to avoid repeated downloads - Remove Levenshtein matching - eval script only sends requests and validates answers * docs: remove README.md from llama-eval * examples: implement flexible grader system for answer validation - Add Grader class supporting regex and CLI-based grading - Implement built-in regex patterns for AIME, GSM8K, MMLU, HellaSwag, ARC, WinoGrande - Add CLI grader interface: python script.py --answer <pred> --expected <gold> - Add HF telemetry disable to avoid warnings - Support exact match requirement for regex patterns - Add 30-second timeout for CLI grader - Handle both boxed and plain text formats for AIME answers * examples: use HF_HUB_OFFLINE to avoid HF Hub warnings * examples: remove HF_HUB_OFFLINE to allow dataset download * examples: use cached dataset path to avoid HF Hub requests * examples: use cached dataset path in simulator to avoid HF Hub requests * docs: update llama-eval-discussion.md with session work summary * examples: add threading support and model parameter to llama-eval-new.py - Add ThreadPoolExecutor for parallel request processing controlled by --threads - Add --model argument to specify model name in request data - Refactor process() to use thread-safe _process_single_case() method - Update progress tracking to work with concurrent execution * docs: update llama-eval-discussion.md with threading and model parameter updates - Add threading support implementation details - Document ThreadPoolExecutor usage and thread safety - Add model parameter implementation details - Include testing results for both features * examples: add task summary table to llama-eval-new.py * eval : print progress * eval : add prompts * test : fix path * sim : fix answer matching * eval : support multiple dataset runs * minor * improve grader * docs * remove old files * datasets : add gsm8k * add gpqa + sampling + docs * rename * grader : improve example answers * cont * datasets : add aime2025 * grader : update prompt * grade : improve regex + logs * 
datasets : fix aime2025 * cleanup * add AGENTS.md * ignore errors * resume eval * cleanup * fix counts * simplify * fix prompts * add html * store full response * add tokens * resoning and error handling * refactor * track total time * remove junk * eval : unify "judge" terminology to "grader" Replace all occurrences of "judge" with "grader" for consistency across the codebase (CLI args, Grader class fields, help text). Assisted-by: llama.cpp:local pi * eval : add Wilson score confidence interval to results Compute 95% CI on-the-fly from completed cases. Displayed in terminal output, HTML report, and JSON state. * llama-eval : add per-task generation speed from server timings Extract predicted_per_second from the server timings response and store it as tps_gen per task. Display in console progress, print_all_tasks, and HTML report. Assisted-by: llama.cpp:local pi * llama-eval : add per-task generation time from server timings Extract predicted_ms from the server timings response and store it as t_gen_ms per task. Display in seconds with one decimal digit in console progress, print_all_tasks, and HTML report. Assisted-by: llama.cpp:local pi * llama-eval : rename display, escaped, and count variables to use prefix convention - _display suffix → display_ prefix (answer, tokens, tps, t_gen) - _escaped suffix → escaped_ prefix (response, prompt, reasoning) - _count suffix → n_ prefix (correct, incorrect, pending) Assisted-by: llama.cpp:local pi * llama-eval : support multiple evaluation endpoints with dynamic task distribution - Add ServerConfig dataclass (url, threads, name) - Accept comma-separated --server, --threads, --server-name CLI args - Dynamic shared-queue task distribution across servers (fast servers do more work) - One ThreadPoolExecutor per server, workers pull from shared Queue - Track which server processed each task (server_name in results) - Thread-safe EvalState with threading.Lock for concurrent mutations - Server column in HTML report and console output - Backward compatible: single server works as before Assisted-by: llama.cpp:local pi * llama-server-simulator : replace Flask with stdlib http.server - Use HTTPServer + BaseHTTPRequestHandler instead of Flask - RequestHandler handles POST /v1/chat/completions - Server runs in daemon thread with clean Ctrl+C shutdown - Remove flask and unused asdict imports Assisted-by: llama.cpp:local pi * llama-eval : update README with PR link and quick-start examples Assisted-by: llama.cpp:local pi * llama-eval : track model name in eval state and verify on resume - Store model_name in EvalState and JSON output - Display model in HTML summary table - Verify --model matches stored model when resuming Assisted-by: llama.cpp:local pi * llama-server-simulator : fix comment - Dice coefficient, not Levenshtein Assisted-by: llama.cpp:local pi * llama-eval : require --grader-model or --model when using --grader-type llm Assisted-by: llama.cpp:local pi * llama-eval : protect dump() with lock for thread safety Assisted-by: llama.cpp:local pi * llama-eval : compact HTML report output - Replace verbose summary table with single inline bar - Shorten status text: '✓'/'✗'/'–'/'!' 
instead of full words - Flatten CSS: remove box-shadows, border-radius, reduce padding - Use system-ui font, 13px table, 12px details - Conditional reasoning section (only shown when present) - Single toggle JS function instead of two - Shorter column headers Assisted-by: llama.cpp:local pi * llama-eval : check server connectivity on startup - Hit /v1/models for each server before evaluation - Exit with error if any server is unreachable - Print comma-separated model IDs per server in startup output - Sequential checks, no retries, no timeout override Assisted-by: llama.cpp:local pi * llama-eval : use server1/server2 instead of gpu1/gpu2 in README Assisted-by: llama.cpp:local pi --------- Co-authored-by: gatbontonpc <gatbontonpc@gmail.com>
…#22969) * model-conversion : add causal-convert-mmproj target [no ci] This commit adds a new Make target that only converts the mmproj model. The motivation for this that the causal-convert-mm-model target will convert both the test model and the mmproj model which is nice when the model model conversion is finalized. But during development it is nice to be able to just convert the mmproj model and not have to wait for the often more time consuming text model conversion. * add path model path validation check
…s, iq2_s, iq2_xs, iq2_xxs improve iq2_xs impl for rvv 256 Co-authored-by: Rehan Qasim <rehan.qasim@10xengineers.ai>
Summary
This PR adds RVV implementations for quantized vector dot kernels (for 512-bit and 1024-bit VLENs).
Key Changes
ggml_vec_dot_tq1_0_q8_K_vl512
Testing
Kernels were functionally tested through test-quantize-fns for 512-bit and 1024-bit VLENs under QEMU.