16 comments

  • tarruda 41 minutes ago
    Note that this is not the only way to run Qwen 3.5 397B on consumer devices, there are excellent ~2.5 BPW quants available that make it viable for 128G devices.

    I've had great success (~20 t/s) running it on a M1 Ultra with room for 256k context. Here are some lm-evaluation-harness results I ran against it:

        mmlu: 87.86%
    
        gpqa diamond: 82.32%
    
        gsm8k: 86.43%
    
        ifeval: 75.90%
    
    More details of my experience:

    - https://huggingface.co/ubergarm/Qwen3.5-397B-A17B-GGUF/discu...

    - https://huggingface.co/ubergarm/Qwen3.5-397B-A17B-GGUF/discu...

    - https://gist.github.com/simonw/67c754bbc0bc609a6caedee16fef8...

    Overall an excellent model to have for offline inference.

    • Aurornis 9 minutes ago
      The method in this link is already using a 2-bit quant. They also reduced the number of experts per token from 10 to 4 which is another layer of quality degradation.

      In my experience the 2-bit quants can produce sensible output for short prompts, but they aren’t useful for real work over longer sessions.

      This project couldn’t even get useful JSON out of the model because it can’t produce the right token for quotes:

      > 2-bit quantization produces \name\ instead of "name" in JSON output, making tool calling unreliable.
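A toy illustration of why that failure mode matters (the tool name is made up; the point is that any strict JSON parser rejects the backslash variant, so tool-call plumbing breaks):

```python
import json

good = '{"name": "get_weather"}'
bad = r'{\name\: \get_weather\}'  # the kind of output the 2-bit quant reportedly emits

# The well-formed version parses normally.
print(json.loads(good)["name"])   # get_weather

# The backslash version is rejected outright, so no tool call can be built.
try:
    json.loads(bad)
except json.JSONDecodeError:
    print("invalid JSON, tool call dropped")
```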

  • Aurornis 33 minutes ago
    Reading the details, he is using 2-bit quantization and reduced the number of experts per token from 10 down to 4 to get 5 tokens/sec. Cool proof of concept but it’s far from the quality and performance of the 397B model as normally used. Dropping the number of experts is particularly misleading.

    This is some interesting work, but applying such extreme measures to LLMs to get them to run severely degrades quality. I know he claims negligible quality loss, but in my experience 2-bit quantizations are completely useless for real work. You can get them to respond to prompts, but they lose their intelligence and will go around in circles.

    He also shows 5-6 tokens per second. Again that’s impressive for a large model on limited hardware but it’s very slow. Between the severely degraded model abilities and the extremely slow output the 397B result should be considered an attempt at proving something can technically run, not evidence that it can run well and produce output you’d expect from a 397B model.

    He even mentions the obvious problems with his changes:

    > 2-bit quantization produces \name\ instead of "name" in JSON output, making tool calling unreliable.

    So right out of the gate this isn’t useful if you want to do anything with it. He could have tried smaller models or less aggressive quantization, but then the headline he was going for with his AI-coded project and paper wouldn’t have looked as impressive.

  • homarp 2 hours ago
  • zozbot234 1 hour ago
    The github page mentions that a naïve mmap approach is bottlenecked by per-page overhead. Can this be mitigated by setting up explicit "huge" pages? (2M using the CONT PTE feature if the "native" page size is 16k; 32M using a PMD level block mapping; or 1G using the CONT PMD feature.) Does macOS support this out of the box? Alternatively, one might use a simple mmap and then something like posix_fadvise to set up prefetching of the data.
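A minimal sketch of the prefetch alternative, on Linux (hypothetical path and offsets; `posix_fadvise` does not exist on macOS, which would need `F_RDADVISE` via `fcntl` instead, and huge-page setup is a separate, platform-specific step):

```python
import mmap
import os

def map_with_prefetch(path, offset, length):
    """mmap a weight range, asking the kernel to start readahead first.

    posix_fadvise(WILLNEED) kicks off asynchronous readahead for the byte
    range, so subsequent page faults mostly hit the page cache instead of
    paying per-page SSD latency. offset must be page-aligned.
    """
    fd = os.open(path, os.O_RDONLY)
    try:
        os.posix_fadvise(fd, offset, length, os.POSIX_FADV_WILLNEED)
        return mmap.mmap(fd, length, prot=mmap.PROT_READ, offset=offset)
    finally:
        os.close(fd)  # the mapping stays valid after the fd is closed
```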
  • m-hodges 18 minutes ago
    As frontier models get closer and closer to consumer hardware, what’s the moat for the API-driven $trillion labs?
    • OJFord 3 minutes ago
      Assuming 'moat' – they'll push the frontier forward; they don't really have to worry until progress levels off.

      At that point, I suppose there's still paid harnesses (people have always paid for IDEs despite FOSS options) partly for mindshare, and they could use expertise & compute capacity to provide application-specific training for enterprises that need it.

    • stri8ted 6 minutes ago
      48 GB is not consumer hardware. But fundamentally, there are economies of scale due to batching, power distribution, better utilization, etc., that mean data center tokens will be cheaper. Also, as the cost of training (frontier) models increases, it's not clear the Chinese companies will continue open sourcing them. Notice, for example, that Qwen-Max is not open source.
  • maxloh 29 minutes ago
    Can you add a license to the report? Legally we can't run any of the code without a license attached to it.
  • JSR_FDED 2 hours ago
    This is a very impressive result. If I understand correctly the bottleneck is the SSD in this architecture - the author seems to get almost 15GB/s - but I seem to remember the max b/w was about 8GB/s. What am I missing?
    • Aurornis 12 minutes ago
      PCIe 5 doubles the maximum throughput. That’s why the numbers for newer SSDs are about double what you recall as the old maximum.
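The arithmetic behind those figures, using nominal PCIe line rates (a sketch; shipping drives land a bit under line rate):

```python
# Per-direction bandwidth of an x4 NVMe link at nominal PCIe rates.
GT_PER_LANE = {4: 16, 5: 32}   # gigatransfers/s per lane, by PCIe generation

def x4_bandwidth_gbs(gen):
    # 128b/130b line coding: 128 payload bits per 130 bits on the wire.
    return GT_PER_LANE[gen] * (128 / 130) / 8 * 4

print(round(x4_bandwidth_gbs(4), 1))  # 7.9  -- the "about 8 GB/s" ceiling
print(round(x4_bandwidth_gbs(5), 1))  # 15.8 -- matches the ~15 GB/s observed
```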
    • Roxxik 1 hour ago
      IO is very bursty in these setups. When the router results are in you can start loading experts from SSD. In this brief moment the SSD is saturated.

      Outside of that the SSD is idling.

      Table 3 shows for K=4 experts an IO of 943 MB/Tok at 3.15 Tok/s giving an average IO of 2970 MB/s far below what the SSD could do.
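Spelling out that average, against an assumed ~15 GB/s peak sequential read (the figure reported elsewhere in the thread):

```python
# Average SSD bandwidth implied by Table 3 for K=4 experts.
io_per_token_mb = 943      # MB read per token (Table 3)
tokens_per_s = 3.15        # decode rate (Table 3)

avg_io_mbs = io_per_token_mb * tokens_per_s
print(round(avg_io_mbs))             # ~2970 MB/s on average

# Against the assumed ~15 GB/s peak, the drive is busy only about a
# fifth of the time; the rest is idle, i.e. the traffic is bursty.
print(round(avg_io_mbs / 15000, 2))  # ~0.2 duty cycle
```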

      I'm not sure, but not all expert weights are used immediately. Maybe they could do async reads for the down-projection tensors, overlapping compute with IO.

      Not sure if this works on Mac. I only tested my larger-than-RAM setup on Linux with io_uring O_DIRECT reads, and I saw that about 20% of total reads finish while my fused up/gate matmul is already running.

      Edit: Typos

      • zozbot234 1 hour ago
        The github page mentions that you can't overlap SSD traffic and GPU compute on Apple Silicon, you get heavy contention for the shared hardware resources.
    • rado 2 hours ago
      The MacBook Pro M5 Pro and M5 Max have SSDs that fast.
      • selimthegrim 1 hour ago
        I have an MBP M4 Pro and a WD Black SN850x in an external TB5 enclosure and I easily get 6-7 GB/s
  • bertili 2 hours ago
    Very impressive! I wonder if there is a similar path for Linux using system memory instead of SSD? Hell, maybe even a case for the return of some kind of ROMs of weights?
    • daemonologist 32 minutes ago
      Most definitely - the popular engines have extensive support for doing this and controlling exactly which weights end up where (llama.cpp: https://github.com/ggml-org/llama.cpp/blob/master/tools/cli/... , vllm: https://docs.vllm.ai/en/stable/configuration/engine_args/#of... , sglang (haven't tried this): https://docs.sglang.io/advanced_features/server_arguments.ht...).

      Even with a MoE model, which has to move a relatively small portion of the weights around, you do end up quite bandwidth constrained though.

    • Aurornis 31 minutes ago
      Using system memory and CPU compute for some of the layers that don’t fit into GPU memory is already supported by common tools.

      It’s workable for mixture of experts models but the performance falls off a cliff as soon as the model overflows out of the GPU and into system RAM. There is another performance cliff when the model has to be fetched from disk on every pass.

      • zozbot234 10 minutes ago
        It's less of a "performance falls off a cliff" problem and more of a "once you offload to RAM/storage, your bottleneck is the RAM/storage and basically everything else no longer matters". This means if you know you're going to be relying on heavy offload, you stop optimizing for e.g. lots of VRAM and GPU compute since that doesn't matter. That saves resources that you can use for scaling out.
    • zozbot234 1 hour ago
      Loading experts to system memory is supported by most local-AI frameworks. But you do not gain much by running that part of the decode on GPU, since decode is not compute-limited and the CPU-GPU transfer involves overhead. It's best to use the GPU for speeding up the shared part of the model.
    • K0balt 1 hour ago
      My thoughts exactly. Something like this could make it so that modest GPU capacity, like a pair of 3090s, plus lots of RAM could make big inference more practical for personal labs.
  • spwa4 1 hour ago
    Does this mean it should be possible to load up a system with ~10 SSDs (at least the number of active experts, it seems) to get 40 tok/s even on truly gigantic models?
    • zozbot234 59 minutes ago
      SSD bandwidth will ultimately be limited by the amount of PCIe lanes you have available (for something other than the Apple Silicon internal storage). So the approach has inherent limitations. You can of course scale out to multiple systems to get more throughput.

      You can use this approach with Intel Optane, which is wearout-resistant unlike NAND and can thus substitute for RAM. Last I checked, it was available quite cheap on the secondary market, ~$1/GB as opposed to ~$15/GB or more for DRAM. (Of course that's nowhere near as cheap as NAND, which is around ~$0.1/GB but quite wearout-prone with heavy writes.)

      • spwa4 3 minutes ago
        Yeah, PCIe is the bottleneck. The point being that whether the data originates from RAM, NVMe, or Optane, you can't get it to the GPU any faster from RAM than from SSDs.

        Meanwhile PCIe switches exist. So why not build:

        1 CPU + memory + ...

        N PCIe switches, each with 1 low-memory GPU + 6 NVMe drives (in theory 5 can saturate the GPU)

        Each of those should only bother the CPU when it has produced some tokens, and each has plenty of PCIe lanes to get at its data.
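A rough drive count under assumed link speeds (the 13 GB/s sustained figure is a guess at a realistic Gen5 drive, not a measured number):

```python
import math

lane_gbs = 32 * (128 / 130) / 8   # PCIe 5.0 per-lane GB/s, 128b/130b encoding
gpu_x16_gbs = lane_gbs * 16       # ~63 GB/s into an x16 GPU slot
ssd_sustained_gbs = 13.0          # assumed real-world Gen5 x4 sustained read

# Drives needed to keep the GPU's x16 link full of weight traffic.
print(math.ceil(gpu_x16_gbs / ssd_sustained_gbs))  # 5
```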

  • lostmsu 1 hour ago
    How large is the KV cache?
    • xbar 44 minutes ago
      0.1 GB per full-attention layer and "The model has 60 transformer layers: 45 GatedDeltaNet (linear attention) + 15 standard full attention." So, 1.5 GB.
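As arithmetic (only the full-attention layers keep a growing KV cache; the linear-attention layers carry constant-size state):

```python
full_attention_layers = 15   # of 60 total (45 are GatedDeltaNet)
gb_per_layer = 0.1           # per full-attention layer, from the report

print(full_attention_layers * gb_per_layer)  # 1.5 GB
```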
  • pdyc 1 hour ago
    Impressive. I wish someone would take a stab at using this technique on mobile GPUs; even without the storage streaming it would still be a win. I am running llama.cpp on an Adreno 830 with OpenCL and I am getting a pathetic 2-3 t/s for output tokens.
  • harshhhhhhhhh 2 hours ago
    Seems promising, this is the way. Can someone benchmark this?
    • frwickst 2 hours ago
      I'm getting 6.55t/s using the Qwen3.5-397B-A17B-4bit model with the command: ./infer --prompt "Explain quantum computing" --tokens 100

      MacBook Pro M5 Pro (64GB RAM)

      • j45 1 hour ago
        Appreciate the data point. M5 Max would also be interesting to see once available in desktop form.
      • logicallee 1 hour ago
        can you post the final result (or as far as you got before you killed it) to show us how cohesive and good it is? I'd like to see an example of the output of this.
  • leontloveless 29 minutes ago
    [dead]
  • mugivarra69 1 hour ago
    [dead]
  • vilequeef 1 hour ago
    Why so much RAM?
    • vilequeef 1 hour ago
      Oh Mac, unified. Sometimes it takes a downvote
  • rvz 2 hours ago
    The technical write up is great, but Mac users should not get too excited just yet on running 300B+ parameter models locally as the TPS isn't that good.

    >...at 4.4+ tokens/second

    That is even with 4-bit quantization, and it is still only at that speed.

    > The entire 209GB model streams from SSD through a custom Metal compute pipeline.

    This is my main problem.

    If I were to run this on a Mac SSD, 24/7 for heavy usage such as Openclaw, that is going to significantly reduce the lifetime of the SSD.

    Can't imagine using this in the long term right now, but improvements will follow. Still a great write up anyways.

    • Roxxik 2 hours ago
      Does an SSD meaningfully degrade under read-only workloads?
      • JSR_FDED 2 hours ago
        Nope, reads don’t cause wear
        • zozbot234 48 minutes ago
          No appreciable wear of course, but read disturb (requiring occasional rewrites) becomes more of an issue as NAND fabrication advances.
    • etiam 1 hour ago
      > If I were to run this on a Mac SSD, 24/7 for heavy usage such as Openclaw, that is going to significantly reduce the lifetime of the SSD.

      How sure are you about that? I've never looked closely at how a large mixture-of-experts LLM switches between expert modules, but when it stays on roughly the same topic (as it often would when editing the same codebase), I wouldn't be surprised if the changes in expert composition are fairly rare and fairly small, and to the extent switching happens, it causes repeated reads from the flash disk rather than writes.

      • frotaur 1 hour ago
        AFAIK the experts are not usually very interpretable, and I'd generally be surprised if at least one didn't change every token. I don't know what happens in practice, but I know that at least during training, nothing is done to minimize the number of expert switches between tokens.
    • Wowfunhappy 1 hour ago
      Eh. I mean, 4 tokens a second works fine if you're patient. Go do something else while you wait.

      I feel like whenever I'm trying to find information on which local models will work on my hardware, I have to overestimate because people don't know how to wait for things.

      Also, reading data doesn't cause SSD wear.

    • hrmtst93837 1 hour ago
      If you want decent throughput and don't care about burning SSD write cycles on a box that was never meant to act like a tiny inference server, a used server with actual RAM is still the cheaper and less silly option. I wouldn't expect Apple's warranty team to be much help.
      • K0balt 1 hour ago
        Is it doing a bunch of ssd writes?