mmproj upgrade
README.md
@@ -119,6 +119,22 @@ Example:
# capabilities: tools, vision
```
**Vision Model Support (MMProj):**

For vision-capable models, you can specify an mmproj (multimodal projection) file that contains the vision encoder. See [MMProj Support Documentation](docs/MMPROJ_SUPPORT.md) for detailed information.
```dockerfile
# hf_upstream: https://huggingface.co/mistralai/Ministral-3-14B-Reasoning-2512-GGUF
# quantization: Q5_K_M
# capabilities: vision, reasoning, tools
#
# mmproj_url: https://huggingface.co/unsloth/Ministral-3-14B-Reasoning-2512-GGUF
# mmproj_quant: BF16
# mmproj_sha256: abc123... (optional)
```
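The directives above are plain `# key: value` comment lines, so a script can collect them with a single pattern match per line. A minimal sketch of that idea — the function name and regex here are illustrative, not the script's actual implementation:

```python
import re

# Matches header directives of the form "# key: value".
DIRECTIVE_RE = re.compile(r"^#\s*(\w+):\s*(.+?)\s*$")

def parse_directives(text: str) -> dict:
    """Collect '# key: value' comment directives from a model file header."""
    directives = {}
    for line in text.splitlines():
        m = DIRECTIVE_RE.match(line)
        if m:
            directives[m.group(1)] = m.group(2)
    return directives

header = """\
# quantization: Q5_K_M
# capabilities: vision, reasoning, tools
# mmproj_quant: BF16
"""
print(parse_directives(header)["mmproj_quant"])  # → BF16
```

Bare `#` separator lines and ordinary comments without a `key:` prefix are simply skipped, so the directives can sit anywhere in the header.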
The script will automatically download both the main GGUF and mmproj files, and create an Ollama model with vision support.
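Because `mmproj_sha256` is optional, integrity checking during that download step only applies when a digest was declared. A hedged sketch of the check (a hypothetical helper, not the script's actual code, operating on already-downloaded bytes):

```python
import hashlib

def verify_sha256(data: bytes, expected=None) -> bool:
    """Return True if no digest was declared or the data matches it."""
    if expected is None:
        return True  # mmproj_sha256 is optional; skip verification
    return hashlib.sha256(data).hexdigest() == expected.lower()

blob = b"fake mmproj contents"
digest = hashlib.sha256(blob).hexdigest()
print(verify_sha256(blob, digest))    # → True
print(verify_sha256(blob, None))      # → True
print(verify_sha256(blob, "0" * 64))  # → False
```

Lower-casing the expected digest makes the comparison tolerant of how the hex string was written in the directive.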
**Note:** Capabilities are read from the GGUF file's metadata by Ollama. The `# capabilities:` comment serves as documentation to track expected model features. If a model doesn't show the expected capabilities after installation, it may be because the GGUF file lacks that metadata.
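Since the authoritative capability data lives inside the GGUF file itself, a script can at least sanity-check that a downloaded file really is GGUF before handing it to Ollama: every GGUF file starts with the 4-byte magic `GGUF` followed by a little-endian uint32 format version. A minimal, self-contained check (illustrative only; it validates the header on an in-memory byte string rather than a real file):

```python
import struct

GGUF_MAGIC = b"GGUF"

def read_gguf_header(blob: bytes) -> int:
    """Validate the GGUF magic and return the format version."""
    if blob[:4] != GGUF_MAGIC:
        raise ValueError("not a GGUF file")
    # Bytes 4..8 hold the little-endian uint32 format version.
    (version,) = struct.unpack_from("<I", blob, 4)
    return version

sample = GGUF_MAGIC + struct.pack("<I", 3)  # fabricated header, version 3
print(read_gguf_header(sample))  # → 3
```

Reading the full metadata key-value section (where capability-relevant fields live) follows the same layout but is better left to an existing GGUF reader library.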
The script will: