So Telegram’s delusional propaganda did something good for once?
I doubt the disk will bottleneck at 40 MB/s when doing sequential writes. Torrent downloads are usually heavy random writes, which is about the worst thing you can do to an HDD.
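For the curious, a minimal sketch of the difference (file name, size, and block size are made up; the OS page cache will soften the gap unless you fsync, and the effect is far more dramatic on an HDD than an SSD):

```python
# Illustrative sketch: sequential vs. random writes to the same file.
import os, random, time

PATH = "bench.bin"           # hypothetical test file
SIZE = 256 * 1024 * 1024     # 256 MiB total
BLOCK = 16 * 1024            # 16 KiB blocks, torrent-piece-ish
blocks = SIZE // BLOCK
data = os.urandom(BLOCK)

# Sequential: write blocks in order.
t0 = time.time()
with open(PATH, "wb") as f:
    for _ in range(blocks):
        f.write(data)
    f.flush(); os.fsync(f.fileno())
seq = SIZE / (time.time() - t0) / 1e6

# Random: seek to a shuffled offset before every block.
offsets = list(range(blocks)); random.shuffle(offsets)
t0 = time.time()
with open(PATH, "r+b") as f:
    for i in offsets:
        f.seek(i * BLOCK)
        f.write(data)
    f.flush(); os.fsync(f.fileno())
rnd = SIZE / (time.time() - t0) / 1e6

print(f"sequential: {seq:.0f} MB/s, random: {rnd:.0f} MB/s")
os.remove(PATH)
```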
Sell them to zoomers as 3D save button coasters. $19.95 each
Well this one needs a new job, maybe he can work school security. He’s available M-F after all
You said the quiet part out loud…
Llama 3 8B can run in about 6 GB of VRAM, and it’s fairly competent. Gemma has a 9B model, I think, which would also be worth looking into.
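If you want to poke at one, here’s a minimal sketch assuming the ollama Python client with the model already pulled locally (model tag and prompt are just examples):

```python
# Minimal sketch: chat with a local small model via the ollama client.
import ollama

reply = ollama.chat(
    model="llama3:8b",  # swap for a Gemma tag to compare
    messages=[{"role": "user", "content": "Summarize RAID 5 in two sentences."}],
)
print(reply["message"]["content"])
```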
That’s super green!
That’s like saying car crash is just a fancy word for accident, or cat is just a fancy term for animal.
Hallucination is a technical term for this type of AI, and it’s inherent to how it works at its core.
And now I’ll let you get back to your hating.
If they only had a teacher there with a gun, this wouldn’t have been a problem at all
Isn’t there a Geneva convention against inflicting such horror on an enemy?
And just to top it off, make this Python script a dialect of Rust
Better background backups
Rework background backups to be more reliable
Hilarious for a system whose main selling point is photo backup
🫰🤙🫵👌✊🫳🫸🤲🤌
I mean, I totally agree with you. But that also kinda ignores all the useful things a dog can be trained to do.
She’s a witch, get h… *whisper whisper* Really? *whisper whisper whisper* Oh, sorry, wrong page. *Pulls out new page*
She’s an abortion patient, get her!
It’s less about the calculations and more about memory bandwidth. To generate a token you need to read through all the model data, and that’s usually many, many gigabytes. So the time it takes to stream the weights through memory usually dominates the compute time. GPUs have gigabytes of VRAM that’s many times faster than the CPU’s RAM, which is the main reason they’re faster for LLMs.
Most TPUs don’t have much RAM, especially the cheap ones.
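Back-of-envelope, with illustrative numbers (8 GB of weights, 50 GB/s dual-channel CPU RAM vs. 500 GB/s midrange GPU VRAM):

```python
# If generation is bandwidth-bound, every token streams the whole model
# through memory once, so tokens/s ~= memory bandwidth / model size.
# All numbers below are ballpark assumptions.
model_gb = 8.0     # e.g. an 8B model at 8-bit quantization
cpu_bw = 50.0      # GB/s, typical dual-channel desktop RAM
gpu_bw = 500.0     # GB/s, midrange GPU VRAM

print(f"CPU: ~{cpu_bw / model_gb:.0f} tokens/s")   # ~6
print(f"GPU: ~{gpu_bw / model_gb:.0f} tokens/s")   # ~62
```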
Reasonably smart… that would preferably be a 70B model, but maybe Phi-3 14B or Llama 3 8B could work. They’re rather impressive for their size.
For just the model, if one of the small ones works, you probably need 6+ GB of VRAM. For a 70B you need roughly 40 GB.
And then there’s the context. Most models are optimized for around 4k to 8k tokens. A token is roughly 3-4 characters, so a bit less than a word. The VRAM needed for the context varies a bit, but it’s not trivial. For 4k I’d say roughly half a gig to a gig of VRAM.
As you go up in context size, the VRAM required for the context starts to eclipse the model’s own VRAM cost, and you’ll need models specifically built to handle a context that big without going off the rails.
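A rough estimator sketch, with ballpark assumptions about quantization and model shapes (real layouts vary per model):

```python
# Rough VRAM estimate: quantized weights plus the KV cache for the context.
def weights_gb(params_b, bits=4):
    """Params in billions at a given quantization -> GB of weights."""
    return params_b * bits / 8

def kv_cache_gb(ctx_tokens, layers, kv_heads, head_dim, bytes_per=2):
    """KV cache size; 2x for keys and values, fp16 elements by default."""
    return 2 * ctx_tokens * layers * kv_heads * head_dim * bytes_per / 1e9

# Llama-3-8B-ish shape: 32 layers, 8 KV heads, head_dim 128
print(f"8B @4bit:  {weights_gb(8):.1f} GB weights, "
      f"{kv_cache_gb(4096, 32, 8, 128):.2f} GB KV cache @4k ctx")

# 70B-ish shape: 80 layers, 8 KV heads, head_dim 128
print(f"70B @4bit: {weights_gb(70):.1f} GB weights, "
      f"{kv_cache_gb(8192, 80, 8, 128):.2f} GB KV cache @8k ctx")
```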
So no, you’re not loading all the notes directly, and you won’t have a smart model.
For your hardware and use case… try Phi-3 mini with a RAG system as a start.
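As a starting point, a minimal retrieval sketch assuming sentence-transformers for embeddings and a local phi3:mini via ollama (the notes and prompt here are made up):

```python
# Minimal RAG sketch: embed the notes, retrieve the top-k most similar
# to the question, and hand only those to a small local model.
import numpy as np
import ollama
from sentence_transformers import SentenceTransformer

notes = [
    "Meeting moved to Friday.",
    "Server rack is in room B12.",
    "Backup job runs nightly at 02:00.",
]

embedder = SentenceTransformer("all-MiniLM-L6-v2")
note_vecs = embedder.encode(notes, normalize_embeddings=True)

def ask(question, k=2):
    q = embedder.encode([question], normalize_embeddings=True)[0]
    top = np.argsort(note_vecs @ q)[::-1][:k]   # cosine similarity, top-k
    context = "\n".join(notes[i] for i in top)
    reply = ollama.chat(model="phi3:mini", messages=[{
        "role": "user",
        "content": f"Answer using only these notes:\n{context}\n\nQ: {question}",
    }])
    return reply["message"]["content"]

print(ask("Where is the server rack?"))
```

This way the model only ever sees the handful of notes relevant to the question, instead of trying to cram everything into the context.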
They want to force people to be hetero, Christian, and either white, male, and upper class, or anything else and subservient.
So for them, other groups forcing/brainwashing people to be gay/trans, atheist, and so on makes perfect sense.