- How good is Ollama on Windows? : r/ollama - Reddit
I have a 4070 Ti 16GB card, a Ryzen 5 5600X, and 32GB of RAM. I want to run Stable Diffusion (already installed and working), Ollama with some 7B models, maybe a little heavier if possible, and Open WebUI.
- How does Ollama handle not having enough VRAM? : r/ollama - Reddit
I have been running phi3:3.8b on my GTX 1650 4GB and it's been great. I was just wondering, if I were to use a more complex model, let's say llama3:7b, how will Ollama handle having only 4GB of VRAM available? Will it revert back to CPU usage and use my system memory (RAM), or will it use both my system memory and GPU memory?
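For what it's worth, when a model doesn't fit entirely in VRAM, Ollama offloads as many layers as fit to the GPU and runs the rest on the CPU from system RAM, so both are used. A minimal sketch of inspecting that split, assuming a local server on the default port and the /api/ps endpoint with the size and size_vram fields present in recent Ollama releases:

```python
import requests

# Ask the local Ollama server which models are loaded and how much of each
# is resident in VRAM vs. system RAM. Endpoint and field names are assumptions
# based on recent Ollama API docs; adjust for your version.
resp = requests.get("http://localhost:11434/api/ps", timeout=5)
resp.raise_for_status()

for model in resp.json().get("models", []):
    total = model["size"]                 # total bytes the loaded model occupies
    in_vram = model.get("size_vram", 0)   # bytes resident on the GPU
    pct_gpu = 100 * in_vram / total if total else 0
    print(f"{model['name']}: {pct_gpu:.0f}% on GPU, {100 - pct_gpu:.0f}% in system RAM")
```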
- Training a model with my own data : r/LocalLLaMA - Reddit
I'm using Ollama to run my models. I want to use the Mistral model, but create a LoRA to act as an assistant that primarily references data I've supplied during training. This data will include things like test procedures, diagnostics help, and general process flows for what to do in different scenarios.
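Ollama itself doesn't train LoRAs, but once such an adapter exists (trained elsewhere and exported to GGUF), it can be referenced from a Modelfile via the ADAPTER directive. A rough sketch under those assumptions; the adapter filename and system prompt are hypothetical:

```
# Hypothetical Modelfile: Mistral base plus a locally trained LoRA adapter.
FROM mistral
# The adapter must be a GGUF-format LoRA; this path is an example.
ADAPTER ./assistant-lora.gguf
SYSTEM """You are an assistant for internal test procedures and diagnostics."""
```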
- Ollama is making entry into the LLM world so simple that even school . . .
Ollama doesn't hide the configuration; it provides a nice dockerfile-like config file that can be easily distributed to your users. This philosophy is much more powerful (it still needs maturing, though).
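As an illustration of that dockerfile-like config, here is a minimal Modelfile sketch; the model name, parameter value, and system prompt are just examples:

```
# Example Modelfile: pick a base model, set a parameter, define a system prompt.
FROM llama3
PARAMETER temperature 0.7
SYSTEM """You are a concise assistant for our internal documentation."""
```

A file like this can be built locally with `ollama create my-assistant -f Modelfile` and then run like any pulled model, which is what makes it easy to hand to other users.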
- Ollama GPU Support : r/ollama - Reddit
Additional info / system specifications: Operating System: Debian GNU/Linux 12 (bookworm); Product Name: HP Compaq dc5850 SFF PC.
- Allow larger outputs : r/ollama - Reddit
This is what I learned in an issue post a while ago: "Context window size is largely manual right now – it can be specified via {"options": {"num_ctx": 32768}} in the API or via PARAMETER num_ctx 32768 in the Modelfile."
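A sketch of the API route, assuming a local server on the default port; the model name and prompt are placeholders:

```python
import requests

# Set the context window per request through the "options" field of the
# Ollama generate API, as described in the quoted issue comment.
payload = {
    "model": "llama3",
    "prompt": "Summarize the following document ...",
    "stream": False,
    "options": {"num_ctx": 32768},  # context window size in tokens
}
resp = requests.post("http://localhost:11434/api/generate", json=payload, timeout=600)
resp.raise_for_status()
print(resp.json()["response"])
```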
- Request for Stop command for Ollama Server : r/ollama - Reddit
Ok, so Ollama doesn't have a stop or exit command. We have to manually kill the process, and this is not very useful, especially because the server respawns immediately. So there should be a stop command as well. Edit: yes, I know and use these commands, but those are all system commands which vary from OS to OS. I am talking about a single command.
- Options for running LLMs on laptop - better than ollama
I currently use Ollama with ollama-webui (which has a look and feel like ChatGPT). It works really well for the most part, though it can be glitchy at times. There are a lot of features in the webui that make the user experience more pleasant than using the CLI. Even using the CLI is simple and straightforward.