• 1 Post
  • 40 Comments
Joined 6 months ago
Cake day: March 22nd, 2024

  • The problem is that splitting models up over a network, even over LAN, is not super efficient. The entire set of weights has to be streamed through for every token (roughly half a word), and every split point adds a network hop on top of that; see the rough numbers sketched after this comment.

    And the other problem is that Petals just can’t keep up with the crazy dev pace of the LLM community. Honestly, they should dump it and fork or contribute to llama.cpp or exllama, as TBH no one wants to split up Llama 2 (or even Llama 3) 70B and sit a generation or two behind on a base instruct model instead of a finetune.

    Even the Horde has very few hosts relative to users, even though hosting a small model on a 6GB GPU would get you lots of karma.

    The diffusion community is very different, as the output is one image and even the largest open models are much smaller. LoRA usage is also standardized there, while it is not in LLM land.
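
    For a feel of why the network hops hurt, here is a rough back-of-envelope sketch. Every number in it (host count, hidden size, bandwidth, round-trip times) is a made-up illustrative assumption, not a benchmark:

    ```python
    # Back-of-envelope: network overhead of pipeline-splitting an LLM.
    # Every number below is an illustrative assumption, not a measurement.

    hosts = 4                    # hypothetical machines the model is split across
    hops = hosts - 1             # activations cross the network at each split point
    hidden_dim = 8192            # rough width of a 70B-class model (assumption)
    act_bytes = hidden_dim * 2   # fp16 activation vector shipped per hop, per token

    def net_overhead_per_token(rtt_s, bw_bytes_s):
        """Seconds of pure network overhead added to every generated token."""
        return hops * (rtt_s + act_bytes / bw_bytes_s)

    lan = net_overhead_per_token(rtt_s=0.0005, bw_bytes_s=117e6)  # ~1 GbE LAN
    wan = net_overhead_per_token(rtt_s=0.05, bw_bytes_s=12.5e6)   # internet peers

    print(f"LAN: {lan * 1e3:.1f} ms/token -> <= {1 / lan:.0f} tok/s before compute")
    print(f"WAN: {wan * 1e3:.0f} ms/token -> <= {1 / wan:.1f} tok/s before compute")
    ```

    Even on a fast LAN the hops add a couple of milliseconds per token, and over the internet (the Petals case) they cap you at single-digit tokens per second before any actual compute happens.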

  • “but what am I realistically looking at being able to run locally that won’t go above like 60-75% usage so I can still eventually get a couple game servers, network storage, and Jellyfin working?”

    Honestly, not much. Llama 3 8B, but very slowly, or maybe DeepSeek V2 chat, with prompt processing on the 270 via Vulkan but most of the weights running on CPU. And I guess just limit it to 6 threads? I’d host it with kobold.cpp’s Vulkan backend, or maybe the llama.cpp server if there will be multiple users; see the client sketch after this comment.

    You can try them to see if they feel OK, but LLMs are just not something that likes old hardware. An RTX 3060 (or a Mac, or a 12GB+ AMD GPU) is considered the bare minimum in the community, with a 3090 or 7900 XTX as the standard.
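
    If the llama.cpp server route sounds right, here is a minimal sketch of what talking to it looks like. The model path, port, and 6-thread cap are placeholder assumptions, not recommendations:

    ```python
    # Minimal client sketch for a local llama.cpp server, started with something like:
    #   llama-server -m model.gguf --port 8080 -t 6
    # The model path, port, and 6-thread cap are placeholder assumptions.
    import requests

    resp = requests.post(
        "http://127.0.0.1:8080/v1/chat/completions",  # llama.cpp's OpenAI-compatible endpoint
        json={
            "messages": [{"role": "user", "content": "Say hello in five words."}],
            "max_tokens": 64,
        },
        timeout=120,
    )
    print(resp.json()["choices"][0]["message"]["content"])
    ```

    kobold.cpp exposes its own HTTP API as well, so the same pattern applies if you go that way instead.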

  • Cutting-edge ones? Unfortunately, rarely. Right now there’s a sliding scale between “open and transparent” and “smart and performant,” because they’re just so darn expensive to train.

    I think some of the closest ones to your requirements are Nvidia’s research models, excluding Mistral NeMo, which isn’t as well documented (as it’s really a Mistral model). And you can see that a lot of the open “alternative” efforts like RWKV, OpenLLaMA and such are severely underfunded and undertrained.

    The datasets are there, the highly optimized implementations are getting there, and the pieces are there: a lot of models have detailed papers and fully open codebases. But the funding to actually train them at scale is usually the missing piece.

    Another factor is that the “closed” datasets (whatever Mistral, Facebook, Cohere and such train on) do seem to have an edge.