- What is the best current Local LLM to run? : r/LocalLLaMA - Reddit
Basically, you simply select which models to download and run on your local machine, and you can integrate them directly into your code base (i.e. Node.js or Python). I recently used their JS library to do exactly this (e.g. run models on my local machine through a Node.js script) and got it working pretty quickly.
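The snippet doesn't name the library, but the description matches Ollama's JavaScript client. A minimal sketch of that pattern, assuming `npm install ollama` and an Ollama server running locally; the model name is an illustrative choice, not from the post:

```ts
// Sketch only: download a model, then run a chat turn against it locally,
// using the Ollama JS client (an assumption — the post doesn't name the library).
import ollama from "ollama";

// Pull the model once so it is available locally.
await ollama.pull({ model: "llama3.1" });

const response = await ollama.chat({
  model: "llama3.1",
  messages: [{ role: "user", content: "Summarize what a GGUF file is." }],
});

console.log(response.message.content);
```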
- For what purpose do you use local LLMs? : r/LocalLLaMA - Reddit
A lot of discussions ask which model is the best, but I keep asking myself: why would the average person need an expensive setup to run an LLM locally when you can get ChatGPT 3.5 for free and ChatGPT 4 for 20 USD/month? My story: for day-to-day questions I use ChatGPT 4. It seems impractical to run an LLM constantly, or to spin it up whenever I need a quick answer.
- Comparison of some locally runnable LLMs : r/LocalLLaMA - Reddit
Simple knowledge questions are trivial. What I expect from a good LLM is to take complex input parameters into consideration. Example: "Give me a recipe for how to cook XY" -> trivial, and can easily be trained. Better: "I have only the following things in my fridge: onions, eggs, potatoes, tomatoes, and the store is closed."
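For concreteness, here is a hedged sketch of the kind of constraint-heavy prompt the comment is arguing a good model should handle, sent through the same assumed Ollama client as above (the model name is again illustrative):

```ts
// Illustrative only: packing hard constraints into one prompt, as the comment
// describes, and sending it to a local model.
import ollama from "ollama";

const prompt = [
  "I have only the following things in my fridge:",
  "onions, eggs, potatoes, tomatoes.",
  "The store is closed.",
  "Give me one recipe that uses only these ingredients.",
].join(" ");

const response = await ollama.chat({
  model: "llama3.1", // assumed model; any local chat model works here
  messages: [{ role: "user", content: prompt }],
});

console.log(response.message.content);
```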
- Local LLM + Image Gen = Like GPT-4 + DALL-E 3 - Reddit
Sure, to recreate the EXACT image it's deterministic, but that's the trivial case no one wants. However, it's a challenge to alter the image only slightly (e.g. now the character has red hair or whatever) even with the same seed and mostly the same prompt -- look up "prompt2prompt" (which attempts to solve this), and then "instruct pix2pix" on how even prompt2prompt is often unreliable for latent
- Local LLM with web access : r/LocalLLaMA - Reddit
Local models (mainly Mistral Instruct 7B) with access to web searches. It was not too hard to set up, and it gets the job done with a very nice UX. The stack uses Ollama + LiteLLM + ChatUI.
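The post names the stack but not the wiring. One plausible piece of it, as a hedged sketch: LiteLLM's proxy exposes an OpenAI-compatible route that a UI layer can call, here assumed to be on LiteLLM's default port (4000) and routing to an Ollama-served Mistral model — the port and model name are assumptions, not from the post.

```ts
// Hedged sketch: a client calling the LiteLLM proxy in front of Ollama.
const res = await fetch("http://localhost:4000/v1/chat/completions", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({
    model: "ollama/mistral", // assumed routing name configured in LiteLLM
    messages: [{ role: "user", content: "Find me recent news on GGUF quantization." }],
  }),
});

const data = await res.json();
console.log(data.choices[0].message.content);
```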
- r/LocalLLM - Reddit
I want to use a local LLM on my own system to read a PDF and answer questions for me. I wanted to use it to help me run D&D, so if I forget a rule or don't understand it, it can explain it quickly. I currently have Oobabooga's text-generation-webui running on my system, along with SillyTavern. Does anyone know how I can do the PDF thing I'm asking about, or know of a tutorial that can help?
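The usual answer to this question is retrieval-augmented generation (RAG): extract the PDF text, embed chunks, retrieve the closest chunk, and hand it to the model as context. A rough sketch under assumptions not in the post — the `pdf-parse` and `ollama` npm packages, a local `nomic-embed-text` embedding model, and a file called `rulebook.pdf`:

```ts
// Hedged RAG sketch: naive chunking + embedding retrieval over a local PDF.
import fs from "node:fs";
import pdf from "pdf-parse";
import ollama from "ollama";

const { text } = await pdf(fs.readFileSync("rulebook.pdf"));
const chunks = text.match(/[\s\S]{1,1000}/g) ?? []; // naive fixed-size chunks

// Embed every chunk once (cache these vectors in a real setup).
const embed = async (input: string) =>
  (await ollama.embeddings({ model: "nomic-embed-text", prompt: input })).embedding;
const chunkVecs = await Promise.all(chunks.map((c) => embed(c)));

const cosine = (a: number[], b: number[]) => {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) { dot += a[i] * b[i]; na += a[i] ** 2; nb += b[i] ** 2; }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
};

// Retrieve the most similar chunk and ask the model with it as context.
const question = "How does grappling work?";
const qVec = await embed(question);
const best = chunks[chunkVecs
  .map((v, i) => [cosine(qVec, v), i] as const)
  .sort((a, b) => b[0] - a[0])[0][1]];

const answer = await ollama.chat({
  model: "llama3.1", // assumed chat model
  messages: [{ role: "user", content: `Using this rulebook excerpt:\n${best}\n\nAnswer: ${question}` }],
});
console.log(answer.message.content);
```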
- The easier way to run a local LLM : r/LocalLLaMA - Reddit
Certainly! You can create your own REST endpoint using either node-llama-cpp (Node.js) or llama-cpp-python (Python). Both of these libraries provide code snippets to help you get started. You can use any GGUF file from Hugging Face to serve a local model. I've also built my own local RAG using a REST endpoint to a local LLM in both Node.js and Python.
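A hedged sketch of the Node.js route described here: a tiny REST endpoint wrapping node-llama-cpp with Node's built-in http module. The session calls follow node-llama-cpp v3's documented getting-started usage, and the GGUF path is a placeholder — treat it as a sketch, not the poster's code.

```ts
// Minimal REST endpoint over node-llama-cpp (v3-style API, assumed).
import http from "node:http";
import { getLlama, LlamaChatSession } from "node-llama-cpp";

const llama = await getLlama();
const model = await llama.loadModel({ modelPath: "./models/model.gguf" }); // placeholder path
const context = await model.createContext();
const session = new LlamaChatSession({ contextSequence: context.getSequence() });

http.createServer(async (req, res) => {
  let body = "";
  for await (const chunk of req) body += chunk;
  const { prompt } = JSON.parse(body || "{}");
  const answer = await session.prompt(prompt); // single shared session: serialize requests in real use
  res.setHeader("Content-Type", "application/json");
  res.end(JSON.stringify({ answer }));
}).listen(8080);
```

Exercise it with `curl -d '{"prompt":"Hello"}' http://localhost:8080`.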
- Sharing a simple local LLM setup : r/LocalLLaMA - Reddit
I know all the information is out there, but to save people some time, I'll share what worked for me to create a simple LLM setup. I've done this on a Mac, but it should work on other OSes. I only needed to install two things. Backend: llama.cpp. UI: Chatbox for me, but feel free to find one that works for you; there is a list of them here.
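The reason a two-piece setup like this works: llama.cpp ships an HTTP server (`llama-server`) that exposes an OpenAI-compatible API, so a UI like Chatbox is just a client of it. A hedged sketch of that client side, assuming the server is running on its default port (8080) — port and endpoint are standard llama.cpp server behavior, but not stated in the post:

```ts
// Sketch: any UI (or script) talking to a local llama.cpp server.
const res = await fetch("http://localhost:8080/v1/chat/completions", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({
    messages: [{ role: "user", content: "Hello from my local setup!" }],
  }),
});

const data = await res.json();
console.log(data.choices[0].message.content);
```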