115 comments
  • dust422m

    To add some numbers, on MBP M1 64GB with ggml-org/gemma-3-4b-it-GGUF I get

      25t/s prompt processing 
      63t/s token generation
    
    Overall processing time per image is ~15 secs, no matter what size the image is. The small 4B already produces very decent output, describing different images pretty well.

    Steps to reproduce:

      git clone https://github.com/ggml-org/llama.cpp.git
      cmake -B build
      cmake --build build --config Release -j 12 --clean-first
      # download model and mmproj files...
      build/bin/llama-server \
        --model gemma-3-4b-it-Q4_K_M.gguf \
        --mmproj mmproj-model-f16.gguf
    
    Then open http://127.0.0.1:8080/ for the web interface

    Note: if you are not using -hf, you must include the --mmproj switch; otherwise the web interface gives an error message saying that the model does not support multimodal input.

    I used the official ggml-org/gemma-3-4b-it-GGUF quants; I expect the unsloth quants from danielhanchen to be a bit faster.

    • matja2m

      For every image I try, I get the same response:

      > This image shows a diverse group of people in various poses, including a man wearing a hat, a woman in a wheelchair, a child with a large head, a man in a suit, and a woman in a hat.

      No, none of these things are in the images.

      I don't even know how to begin debugging that.

      • clueless2m

        I get the same as well, except the message I get, no matter which image I upload, is: "This is a humorous meme that uses the phrase "one does not get it" in a mocking way. It's a joke about people getting frustrated when they don’t understand the context of a joke or meme."

        Not sure why it's not working

      • exe342m

        Means it can't see the actual image. It's not loading for some reason.

        • aendruk2m

          I’m having a hard time imagining how failure to see an image would result in such a misleadingly specific wrong output instead of e.g. “nothing” or “it’s nonsense with no significant visual interpretation”. That sounds awful to work with.

          • sigmaisaletter2m

            LLMs have a very hard time saying "I am useless in this situation", because they are explicitly trained to be a helpful assistant.

            So instead of saying "I can't help you with this picture", the thing hallucinates something.

            That is the expected behavior by now. Not hard to imagine at all.

            • aendruk2m

              No controls in the training data?

          • tough2m

            Fun fact: you can prompt LLMs with no input and random nonsense will come out of them

            • exe342m

              And if you set the temperature to zero, you'll get the same output every time!

    • brrrrrm2m

      hmm, I'm getting the same results - but I see on M1 with a 7b model we should expect ~10x faster prompt processing

      https://github.com/ggml-org/llama.cpp/discussions/4167

      I wonder if it's the encoder that isn't optimized?

    • zamadatix2m

      Are those numbers for the 4/8 bit quants or the full fp16?

      • dust422m

        It is a 4-bit quant, gemma-3-4b-it-Q4_K_M.gguf. I just use "describe" as the prompt, or "short description" if I want less verbose output.

        Since you are a photographer, I ran a picture from your website through Gemma 4B, which produced the following:

        "A stylish woman stands in the shade of a rustic wooden structure, overlooking a landscape of rolling hills and distant mountains. She is wearing a flowing, patterned maxi dress with a knotted waist and strappy sandals. The overall aesthetic is warm, summery, and evokes a sense of relaxed elegance."

        This description is pretty spot on.

        The picture I used is from the series L'Officiel.02 (L-officel_lanz_08_1369.jpg) from zamadatix' website.

        • zamadatix2m

          I can neither claim to be a photographer nor claim that https://www.dansmithphotography.com/ is my website, but I appreciate the example! For others' reference, the specific photo, based on the filename: https://payload.cargocollective.com/1/15/509333/14386490/L-o...

          That said, I'm not as impressed by the description. The structure has some wood but it's certainly not just wooden, and there are distant mountains but not much in the way of rolling hills to speak of. The dress is flowing but the waist is not knotted - the more striking note might have been the sleeves.

          For 4 GB of model I'm not going to ding it too badly though. The question about which quant was mainly about the tokens/second angle (Q4 requires a quarter of the memory bandwidth the full model would) rather than the quality angle. As a note: a larger multimodal model gets all of these points right (e.g. "wooden and stone rustic structure"); they aren't just things I noted myself.

      • refulgentis2m

        n.b. the image processing is done by a separate model; it basically has to load the image and generate ~1000 tokens

        (source: vision was available in llama.cpp before, but it was Very Hard to use; I've been maintaining an implementation)

        (n.b. it's great work, extremely welcome, and new in that the vision code badly needed a rebase and refactoring after a year or two of each model adding in more stuff)

    • astrodude2m

      do you have any example images it generated based on your prompts?

      want to have a look before I try

      • geoffpado2m

        To be clear, this model isn't generating images, it's describing images that are sent to it.

  • danielhanchen2m

    It works super well!

    You'll have to compile llama.cpp from source, and you should get a llama-mtmd-cli program.

    I made some quants with vision support - literally run:

    ./llama.cpp/llama-mtmd-cli -hf unsloth/gemma-3-4b-it-GGUF:Q4_K_XL -ngl -1

    ./llama.cpp/llama-mtmd-cli -hf unsloth/gemma-3-12b-it-GGUF:Q4_K_XL -ngl -1

    ./llama.cpp/llama-mtmd-cli -hf unsloth/gemma-3-27b-it-GGUF:Q4_K_XL -ngl -1

    ./llama.cpp/llama-mtmd-cli -hf unsloth/Mistral-Small-3.1-24B-Instruct-2503-GGUF:Q4_K_XL -ngl -1

    Then load the image with /image image.png inside the chat, and chat away!

    EDIT: -ngl -1 is not needed anymore on the Metal backend (CUDA still needs it); llama.cpp will auto-offload to the GPU by default. -1 means all layers are offloaded to the GPU.

    • danielhanchen2m

      If it helps, I updated https://docs.unsloth.ai/basics/gemma-3-how-to-run-and-fine-t... to show you can use llama-mtmd-cli directly - it should work for Mistral Small as well

      • distalx2m

        Is there a simple GUI available for running LLaMA on my desktop that I can access from my laptop?

        • xyc2m

          If you are on a Mac, give https://recurse.chat/ a try. It's as simple as downloading the model and starting to chat. It just added the new llama.cpp multimodal support.

        • Devorlon2m

          Give https://docs.openwebui.com/ a look; you'll be able to access it from your laptop by using your desktop's IP (provided you're on the same network).

        • tough2m

          Isn't that ollama + any client supporting it?

          Using Tailscale for the internal network works really well.

    • thenameless77412m

      If you install llama.cpp via Homebrew, llama-mtmd-cli is already included. So you can simply run `llama-mtmd-cli <args>`

    • danielhanchen2m

      Ok it's actually better to use -ngl 99 and not -ngl -1. -1 might or might not work!

    • raffraffraff2m

      I can't see the letters "ngl" anymore without wanting to punch something.

      • simlevesque2m

        That's your problem. Hope you do something about that pent-up aggression.

      • danielhanchen2m

        Oh it's shorthand for number of layers to offload to the GPU for faster inference :) but yes it's probs not the best abbreviation.

        • stavros2m

          It probably isn't, not gonna lie.

  • ngxson2m

    We also support the SmolVLM series, which delivers light-speed responses thanks to its mini size!

    This is perfect for a real-time home video surveillance system - that's one of the ideas for my next hobby project!

        llama-server -hf ggml-org/SmolVLM-Instruct-GGUF
        llama-server -hf ggml-org/SmolVLM-256M-Instruct-GGUF
        llama-server -hf ggml-org/SmolVLM-500M-Instruct-GGUF
        llama-server -hf ggml-org/SmolVLM2-2.2B-Instruct-GGUF
        llama-server -hf ggml-org/SmolVLM2-256M-Video-Instruct-GGUF
        llama-server -hf ggml-org/SmolVLM2-500M-Video-Instruct-GGUF
    • a_e_k2m

      I've been noticing your commits as I skim the latest git commit notes whenever I periodically pull and rebuild. Thank you for all your work on this (and llama.cpp in general)!

    • thatspartan2m

      Thanks for landing the mtmd functionality in the server. Like the other commenter I kept poring over commits in anticipation.

    • moffkalast2m

      Ok but what's the quality of the high speed response? Can the sub-2.2B ones output a coherent sentence?

  • simonw2m

    This is the most useful documentation I've found so far to help understand how this works: https://github.com/ggml-org/llama.cpp/tree/master/tools/mtmd...

    • scribu2m

      It’s interesting that they decided to move all of the architecture-specific image-to-embedding preprocessing into a separate library.

      Similar to how we ended up with the huggingface/tokenizers library for text-only Transformers.

  • banana_giraffe2m

    I used this to create keywords and descriptions for a bunch of photos from a recent trip, using Gemma 3 4B. It works impressively well, including doing basic OCR to give me summaries of photos of text, and picking up context clues to figure out where many of the pictures were taken.

    Very nice for something that's self hosted.

    • accrual2m

      That's pretty neat. Do you essentially loop over a list of images and run the prompt for each, then store the result somewhere (metadata, sqlite)?

      • banana_giraffe2m

        Yep, exactly: I just looped through each image with the same prompt and stored the results in a SQLite database to search through, and maybe present in more than a simple WebUI in the future. (A rough sketch of the loop is below.)

        If you want to see, here it is:

        https://gist.github.com/Q726kbXuN/f300149131c008798411aa3246...

        Here's an example of the kind of detail it built up for me for one image:

        https://imgur.com/a/6jpISbk

        It's wrapped up in a bunch of POC code around talking to LLMs, so it's very very messy, but it does work. Probably will even work for someone that's not me.
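
        The general shape of the loop, stripped way down (rough sketch, not the actual code from the gist; it assumes llama-server is running locally on port 8080 with a vision model loaded, and the file/table names are made up):

          import base64, glob, sqlite3
          import requests  # assumes requests is installed

          db = sqlite3.connect("photos.db")
          db.execute("CREATE TABLE IF NOT EXISTS photos (path TEXT PRIMARY KEY, description TEXT)")

          for path in glob.glob("photos/*.jpg"):
              # send the image as a base64 data URL to llama-server's OpenAI-compatible endpoint
              b64 = base64.b64encode(open(path, "rb").read()).decode()
              resp = requests.post("http://127.0.0.1:8080/v1/chat/completions", json={
                  "messages": [{"role": "user", "content": [
                      {"type": "text", "text": "Describe this photo and list a few keywords."},
                      {"type": "image_url", "image_url": {"url": f"data:image/jpeg;base64,{b64}"}},
                  ]}],
              })
              description = resp.json()["choices"][0]["message"]["content"]
              db.execute("INSERT OR REPLACE INTO photos VALUES (?, ?)", (path, description))
              db.commit()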

        • wisdomseaker2m

          Nice! How complicated do you think it would be to do summaries of all photos in a folder, e.g. for a collection of holiday photos, or after an event where images are grouped?

          • banana_giraffe2m

            Very simple. You could either do what I did, and ask for details on each image and then ask for some sort of summary of the group of summaries, or just throw in all the images in one go:

            https://imgur.com/a/1IrCR97

            I'm sure there's a context limit if you have enough images, where you need to start map-reducing things, but even that wouldn't be too hard.
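
            Rough sketch of the "summary of summaries" step, assuming the per-image descriptions are already sitting in the SQLite table from the sketch above:

              import sqlite3
              import requests  # assumes requests is installed

              db = sqlite3.connect("photos.db")
              descriptions = [row[0] for row in db.execute("SELECT description FROM photos")]

              # one follow-up call that condenses the per-image descriptions into a trip summary
              resp = requests.post("http://127.0.0.1:8080/v1/chat/completions", json={
                  "messages": [{"role": "user", "content":
                      "Summarize this set of photo descriptions as a short trip summary:\n\n"
                      + "\n\n".join(descriptions)}],
              })
              print(resp.json()["choices"][0]["message"]["content"])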

            • wisdomseaker2m

              Thanks for the reply, I'll see if I can work it out :)

              • sorenjan2m

                You might want to extract the location from the image EXIF data and include it in the prompt as well. There are reverse geocoding libraries and services that take coordinates and return a city, which would probably make for a better summary of a trip.
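
                Something like this, roughly (Pillow for the EXIF read and geopy/Nominatim for the reverse geocoding are just one choice of libraries, so treat it as a sketch):

                  from PIL import Image, ExifTags
                  from geopy.geocoders import Nominatim  # assumes geopy is installed

                  def gps_to_place(path):
                      # pull the GPS IFD out of the EXIF data
                      exif = Image.open(path)._getexif() or {}
                      tags = {ExifTags.TAGS.get(k): v for k, v in exif.items()}
                      gps = {ExifTags.GPSTAGS.get(k): v for k, v in tags.get("GPSInfo", {}).items()}
                      if not gps:
                          return None
                      def to_deg(dms, ref):
                          d, m, s = (float(x) for x in dms)
                          deg = d + m / 60 + s / 3600
                          return -deg if ref in ("S", "W") else deg
                      lat = to_deg(gps["GPSLatitude"], gps["GPSLatitudeRef"])
                      lon = to_deg(gps["GPSLongitude"], gps["GPSLongitudeRef"])
                      # reverse geocode to something human-readable to prepend to the prompt
                      return Nominatim(user_agent="photo-summary").reverse((lat, lon)).address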

    • buyucu2m

      Is Gemma 4B good enough for this? I was playing with larger versions of Gemma because I didn't think 4B would be any good.

      • banana_giraffe2m

        It certainly seemed good enough for my use. I fed it some random images I found online; you can see the sort of metadata it outputs in a static dump here:

        https://q726kbxun.github.io/llama_cpp_vision/index.html

        It's not perfect, by any means, but between the keywords and description text, it's good enough for me to be able to find images in a larger collection.

  • simonw2m

    llama.cpp offers compiled releases for multiple platforms. This release has the new vision features: https://github.com/ggml-org/llama.cpp/releases/tag/b5332

    On macOS I downloaded the llama-b5332-bin-macos-arm64.zip file and then had to run this to get it to work:

      unzip llama-b5332-bin-macos-arm64.zip
      cd build/bin
      sudo xattr -rd com.apple.quarantine llama-server llama-mtmd-cli *.dylib
    
    Then I could run the interactive terminal (with a 3.2GB model download) like this (borrowing from https://news.ycombinator.com/item?id=43943370)

      ./llama-mtmd-cli -hf unsloth/gemma-3-4b-it-GGUF:Q4_K_XL -ngl 99
    
    Or start the localhost 8080 web server (with a UI and API) like this:

      ./llama-server -hf unsloth/gemma-3-4b-it-GGUF:Q4_K_XL -ngl 99
    
    I wrote up some more detailed notes here: https://simonwillison.net/2025/May/10/llama-cpp-vision/

    • ngxson2m

      For brew users, you can specify --HEAD when installing the package. This way, brew will automatically build the latest master branch.

      Btw, the brew version will be updated in the next few hours, so after that you will be able to simply "brew upgrade llama.cpp" and you will be good to go!

    • danielhanchen2m

      I'm also extremely pleased with convert_hf_to_gguf.py --mmproj - it makes quant making much simpler for any vision model!

      llama-server allowing vision support is definitely super cool - I was waiting for it for a while!

    • ngxson2m

      And btw, -ngl is automatically set to the max value now, so you don't need -ngl 99 anymore!

      Edit: sorry, this is only true on Metal. For CUDA or other GPU backends, you still need to specify -ngl manually.

      • danielhanchen2m

        OH WHAT! So just -ngl? Oh also do you know if it's possible to auto do 1 GPU then the next (ie sequential) - I have to manually set --device CUDA0 for smallish models, and probs distributing it amongst say all GPUs causes communication overhead!

        • ngxson2m

          Ah no, I mean we can omit the whole "-ngl N" argument for now, as it is internally set to -1 by default in the C++ code (instead of 0 as it was traditionally), and -1 means offload everything to the GPU.

          I have no idea how to specify custom layer splits across multiple GPUs, but that is interesting!

          • danielhanchen2m

            WAIT so GPU offloading is on by DEFAULT? Oh my, fantastic! For now I have to "guess" via a Python script - i.e. I sum up all the .gguf split files by file size, then detect CUDA memory usage, and specify approximately how many GPUs to use, i.e. --device CUDA0,CUDA1 etc.

            • ngxson2m

              Ahhh no, sorry, I forgot that the actual code controlling this is inside llama-model.cpp; sorry for the misinfo - -ngl is only set to max by default if you're using the Metal backend.

              (See the code inside llama_model_default_params())

              • danielhanchen2m

                Oh no worries! I re-edited my comment to account for it :)

  • thenthenthen2m

    What has changed, in layman's terms? I tried llama.cpp a few months ago and it could already do image description etc.?

  • nico2m

    How does this compare to using a multimodal model like gemma3 via ollama?

    Any benefit on a Mac with apple silicon? Any experiences someone could share?

    • ngxson2m

      Two things:

      1. Because the support in llama.cpp is horizontally integrated within the ggml ecosystem, we can optimize it to run even faster than ollama.

      For example, the Pixtral/Mistral Small 3.1 models use a 2D-RoPE trick that takes less memory than ollama's implementation. The same goes for flash attention (which will be added very soon); it will allow the vision encoder to run faster while using less memory.

      2. llama.cpp simply supports more models than ollama. For example, ollama supports neither Pixtral nor SmolVLM.

      • nolist_policy2m

        On the other hand, ollama supports iSWA for Gemma 3 while llama.cpp doesn't. iSWA reduces the KV cache size to 1/6.

        • vlovich1232m

          What’s iSWA? Can’t find any reference online

          • imtringued2m

            Gemma 3 has some layers with a sliding attention window of 1024 tokens and others with full-length attention. You need to read the Gemma technical report.
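
            Back-of-the-envelope for where the ~1/6 comes from (assuming the report's 5 local layers per global layer and the 1024-token window):

              ctx, window, local_per_global = 32768, 1024, 5
              full = (local_per_global + 1) * ctx           # every layer caches the full context
              iswa = 1 * ctx + local_per_global * window    # global layer full, local layers capped at the window
              print(iswa / full)                            # ~0.19, approaching 1/6 as ctx grows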

          • nolist_policy2m

            interleaved sliding window attention

      • roger_2m

        Won’t the changes eventually be added to ollama? I thought it was based on llama.cpp

        • diggan2m

          As far as I understand (not affiliated, just a user who peeked at the code), Ollama started out using llama.cpp as the runner for everything. But eventually they wrote their own runner in Golang, which is where they add support for new models. So most models you run via Ollama use llama.cpp, but the new stuff goes through their own Golang runner.

      • danielhanchen2m

        By the way - fantastic work again on llama.cpp vision support - keep it up!!

        • ngxson2m

          Thanks Daniel! Kudos for your great work on quantization, I use the Mistral Small IQ2_M from unsloth during development and it works very well!!

          • danielhanchen2m

            :)) I did have to update the chat template for Mistral - I did see your PR in llama.cpp for it - confusingly the tokenizer_config.json file doesn't have a chat_template, and it's rather in chat_template.jinja - I had to move the chat template into tokenizer_config.json, but I guess now with your fix it's fine :)

            • ngxson2m

              Ohhh nice to know! I was pretty sure that someone had already tried to fix the chat template haha, but because we also allow users to freely create their own quants via the GGUF-my-repo space, I have to fix the quants produced from that source.

  • dr_kiszonka2m

    Are there any tools that leverage vision for UI development?

    Use case: I am working on a hobby project that uses TS/React as the frontend. I can use local or cloud LLMs in VSCode, but even those with vision require that I take a screenshot and paste it into the chat. Ideally, I would want it all automated until some stop criterion is met (even if only n iterations). But even an extension that would screenshot a preview and paste it into the chat (triggered by a keyboard shortcut) would be a big time-saver.
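
    In the meantime the crude version is easy enough to script - a rough sketch (pyautogui for the screenshot and a local llama-server with a vision model on port 8080 are assumptions):

      import base64, io
      import pyautogui, requests  # both assumed installed

      shot = pyautogui.screenshot()          # grab the current screen/preview window
      buf = io.BytesIO()
      shot.save(buf, format="PNG")
      b64 = base64.b64encode(buf.getvalue()).decode()

      resp = requests.post("http://127.0.0.1:8080/v1/chat/completions", json={
          "messages": [{"role": "user", "content": [
              {"type": "text", "text": "Critique this UI screenshot: spacing, alignment, contrast."},
              {"type": "image_url", "image_url": {"url": f"data:image/png;base64,{b64}"}},
          ]}],
      })
      print(resp.json()["choices"][0]["message"]["content"])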

  • a_e_k2m

    This is excellent. I've been pulling and rebuilding periodically, and watching the commit notes as they (mostly ngxson, I think) first added more vision models, each with their own CLI program, then unified those under a single CLI program and deprecated the standalone one, while bug fixing and improving the image processing. I'd been hoping that meant they'd eventually add support to the server again, and now it's here! Thanks!

  • gryfft2m

    Seems like another step change. The first time I ran a local LLM on my phone and carried on a fairly coherent conversation, I imagined edge inference would take off really quickly at least with e.g. personal assistant/"digital waifu" business cases. I wonder what the next wave of apps built on Llama.cpp and its downstream technologies will do to the global economy in the next three months.

    • LPisGood2m

      The “global economy in the next three months” claim is writing some checks that I don’t know all of the recent AI craze has been able to cash in three years.

      • ijustlovemath2m

        AI is fundamentally learning the entire conditional probability distribution of our collective knowledge; but sampling it over and over is not going to fundamentally enhance it, except to, perhaps, reinforce a mean, or surface places we have insufficiently sampled. For me, even the deep research agents aren't the best when it comes to surfacing truth, because the nuance of that is lost on the distribution.

        I think that if we're realistic with ourselves, AI will become exponentially more expensive to train, but without additional high quality data (not you, synthetic data), we're back to 1980s era AI (expert systems), just with enhanced fossil fuel usage to keep up with the TPUs. What's old is new again, I suppose!

        I sincerely hope to be proven wrong, of course, but I think recent AI innovation has stagnated in terms of new things it can do. It's a great tool, when you use it to leverage that distribution (eg, semantic search), but it might not fundamentally be the approach to AGI (unless your goal is to replicate what we can, but less spikey)

        • MoonGhost2m

          It's not as simple as a stochastic parrot. Starting with definitions and axioms, all theorems can be invented and proved. That's in theory, without having the theorems in the training set. That's what thinking models should be able to do without additional training and data.

          In other words, the way forward seems to be to put models in loops, which includes internal 'thinking' and external feedback. Make them use generated and acquired new data. Lossy-compress the data periodically. And we have another race of algorithms.

          • GTP2m

            > Starting with definitions and axioms all theorems can be invented and proved

            This was the premise of symbolic AI, but this approach seems to have been abandoned now.

        • gryfft2m

          It doesn't have to be AGI to have a major economic impact. It just has to beat enough extant CAPTCHA implementations.

          • LPisGood2m

            We can already do that today

  • yieldcrv2m

    Finally! Open-source multimodal is so far behind the closed-source options that people don't even try to benchmark it.

    They're still doing text and math tests on every new model because it's so bad.

  • behnamoh2m

    didn't llama.cpp use to have vision support last year or so?

    • danielhanchen2m

      Yes, they always did, but they moved it all under one umbrella called "llama-mtmd-cli"!

    • breput2m

      Yes, but this is generalized so it was able to be added to the llama-server GUI as well.

  • jacooper2m

    Is it possible to run multimodal LLMs using their Vulkan backend? I have a ton of 4 GB GPUs lying around that only support Vulkan.

    • buyucu2m

      Yes, llama.cpp has very good Vulkan support.

  • buyucu2m

    It was really sad when vision support was removed a while back. It's great to see it restored - many thanks to everyone involved!

  • mrs69692m

    So image processing is there, but image generation isn't?

    Just trying to understand - awesome work so far.

    • a21282m

      As far as I'm aware, there are no open-source LLMs that can generate images. There are image generation models like Stable Diffusion, but those are not transformer language models, so they'd be out of scope for the project.

    • zozbot2342m

      Do the underlying models support generation? If the support isn't there to begin with, the llama.cpp folks can't do anything about that.

    • Rastonbury2m

      Generating images using chat seems cumbersome when you can do it directly with something like stable diffusion

  • bsaul2m

    Great news! Side note: does vision include the ability to read a PDF?

    • diggan2m

      Vision = visual, while PDF is a container of sorts, usually containing images and text. So I guess the short answer is: 50% yes, the other part you can use any LLM for.

      • bsaul2m

        I'm asking because the OpenAI API has a special endpoint for dealing with PDFs, different from images.

        Which part of a PDF file can you use LLMs for? PDF is a binary format...

        • diggan2m

          Yeah, that'd make sense, PDFs aren't images.

          PDF isn't really a binary format: it starts with a text header, the structure is mostly text-based objects, and you can parse many PDFs as plain text. They tend to contain embedded binary data though, which is the specific part these vision models can help you with, assuming those blobs are images. The rest a "normal" LLM can parse just fine.
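
          If you want to go that route, here's a rough sketch of pulling the embedded images out so a vision model can look at them (pypdf's page.images accessor is the assumption here):

            from pypdf import PdfReader  # assumes pypdf >= 3.x

            reader = PdfReader("document.pdf")
            for page_num, page in enumerate(reader.pages):
                text = page.extract_text()        # the plain-text part, fine for any LLM
                for image in page.images:         # the embedded images, for the vision model
                    with open(f"page{page_num}_{image.name}", "wb") as f:
                        f.write(image.data)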

  • nurettin2m

    Didn't we already have vision via llava?

    • nikolayasdf1232m

      no, it did not work in llama.cpp

      • woodson2m

        Slight correction: It worked in llama.cpp via the CLI tools, but not in the llama-server (OpenAI API compatible interface).

      • nurettin2m

        I remember it distinctly working.

        • buyucu2m

            They deprecated it 1-1.5 years ago. Now it's back.

  • nikolayasdf1232m

    finally! very important use-case! glad they added it!

  • babuloseo2m

    Someone ELI5 please or tldr

  • gitroom2m

    Man, the ngl abbreviation gets me every time too. Kinda cool seeing all the tweaks folks do to make this stuff run faster on their Macs. You think models hitting these speed boosts will mean more people start playing with vision stuff at home?

    • thenthenthen2m

      For sure! Llama.cpp runs great on my 10-year-old PC and M1 Mac!
