General Kenobi
(I can’t help)
Just a guy shilling for gun ownership, tech privacy, and trans rights.
I’m open to chats on Mastodon: https://hachyderm.io/
My blog: thinkstoomuch.net
My email: [email protected]
Always looking for penpals!
For simple productivity tools like Copilot or text gen like ChatGPT.
It absolutely is doable on a local GPU.
Source: I do it.
Sure, I can’t do auto-running simulations to find new drugs or protein sequencing or whatever. But it helps me code. It helps me digest software manuals. That’s honestly all I want.
Also, massive compute projects like the @home projects are good, right?
Local LLMs run fine on a five-year-old GPU, a 3060 with 12 GB. I’m getting performance on par with cloud-hosted models. I’m upgrading to a 5060 Ti just because I wanted to play with image gen.
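For the curious, it really is that simple. Here’s a minimal sketch of what a local chat setup can look like, assuming llama-cpp-python with CUDA support and a quantized GGUF file on disk (the model path below is made up, not necessarily my actual setup):

```python
# Minimal sketch: chat with a local quantized model via llama-cpp-python.
# Assumptions: llama-cpp-python built with CUDA, and a quantized GGUF file
# already downloaded; the file name below is hypothetical.
from llama_cpp import Llama

llm = Llama(
    model_path="models/gemma-2-9b-it-Q4_K_M.gguf",  # hypothetical path
    n_gpu_layers=-1,   # offload every layer to the GPU (fits in 12 GB at Q4)
    n_ctx=4096,        # context window; bigger costs more VRAM
    verbose=False,
)

reply = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a concise coding assistant."},
        {"role": "user", "content": "Summarize what rsync --archive does."},
    ],
    max_tokens=256,
)
print(reply["choices"][0]["message"]["content"])
```

A 7B to 9B model at 4-bit quantization takes roughly 4 to 5 GB of VRAM, so it sits on a 12 GB card with plenty of room left for context.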
Whack. I just set up a Forgejo too.
Which is funny since that does solve a lot of the problems.
If it’s completely open source at least.
Like, open-source data sets and models that can be run locally mean it’s not trained on stolen data and it’s not spying on people for more data.
And if it runs locally on a GPU, it’s no worse for the environment than gaming. Really, the big problem with data center compute is the infrastructure for moving all that data around.
I am a fan of LLMs and what they can do, and as such have a server specifically for running AI models. However, I’ve been reading “Atlas of AI” by Kate Crawford and you’re right. So much of the data that they’re trained on is inherently harmful or was taken without consent. Even in the more ethical data sets it’s probably not great considering the sheer quantity of data needed to make even a simple LLM.
I still like using it for simple code generation (this is just a hobby to me, so vibe coding isn’t a problem in my scenario) and corporate tone policing. And I tell people nonstop that it’s worthless outside of these use cases, and maybe as a search engine, but I recommend Wikipedia as a better starting point almost every time.
I was honestly impressed with the speed and accuracy I was getting with DeepSeek, Llama, and Gemma on my 1660 Ti.
$100 used, and responses came back in seconds.
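The back-of-the-envelope math on why that works on a 6 GB card, as a rough sketch (the overhead number is a guess, and real usage varies by model and context length):

```python
# Rough VRAM estimate for a quantized local model (ballpark figures only).
params = 7e9            # e.g. a 7B-parameter model
bits_per_weight = 4     # Q4 quantization
overhead_gb = 1.5       # guess: KV cache, CUDA context, activations

weights_gb = params * bits_per_weight / 8 / 1e9   # ~3.5 GB of weights
total_gb = weights_gb + overhead_gb               # ~5 GB total
print(f"~{total_gb:.1f} GB needed, vs. 6 GB on a 1660 Ti")
```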
Despite everything, I was considering giving it a try, just to see.
But you’re right: if there’s a CSAM community on there, I would inevitably host it too.
So I will not be trying plebbit.
Homie, you used 4chan’s logo in your shitty NFT drops.
https://x.com/getplebbit/status/1516387903177383948
I just assume anything that promises free stuff on Instagram reels is in fact a scam or a hack.
Also, I’m not getting on that shit site to confirm it’s true.
I’m inclined to believe not a single actually suicidal person received one of these messages.
You can’t automate concern for fellow humans.
Lots of places do a better job providing DRM-free or DRM-lite ebooks (Chicago Press only ties your name to the files, so you’d have to doxx yourself to share them, but you can share them), but the sheer library of self-published books on Amazon is hard to match.
There’s an author I’ve become good friends with whom I pay (in coffee) for his books, because I disagree with giving Amazon a cent. But he noted that’s just where the masses still are, and it’s hard to break that momentum.
An AI project that’s ultimately just trying to cash out? Say it ain’t so!
Checked the videos you posted
This is just a ChatGPT front end isn’t it?
Now I’m even more curious what you’re doing here. Are you selling access to your own API key for $30? Why, LMAO?
Okay so it’s something I can use to automate some sort of data processing and then be compensated for it.
How is it processing the data?
Where am I getting the data?
What is it doing to the data?
Who is paying me for processed data?
Linux everywhere, and then a Windows VM labeled “Shitty Spyware Do Not Open.”
That’s not a bad call.
Fortunately, there are pretty tech-literate people at both locations. I can walk them through most of it with very little along the lines of finger puppets and crayons.
That’s me! Gotta love Spectrum baby!
I actually got into this because I used to have sporadic, hour-plus internet outages when I was trying to watch all of Star Trek.
It’s a 15-minute drive to my MIL and 4 hours to my own Mom.
My dad used to do tech support and wants to learn some of this stuff while he’s recovering from surgery, and I’m at my MIL’s several times a month anyway, so it all works out. Also, it’s only fair: my FIL has helped me do so much with my car over the years that I wanted to pay them back, and he likes movies more than I do.
Hell yeah
I like a very small amount of RGB.
I didn’t always; I wanted no color at all, but the ONLY GPU I could find had just a smidge of RGB in the logo (an MSI something 5060 Ti), and I like it as a highlight.