Did pretty much the same with a new server recently - spent ages debugging why it didn't find the SAS disks. Turns out disks like to have power connected, and no amount of debugging at the software level will help you with that.
I was referring to work setups with the overengineering remark - if I had a cent for every time I had to argue with somebody at work not to make things more complex than we actually need, I'd have retired a long time ago.
Unless you are gunning for a job in infrastructure you don't need to go into kubernetes or terraform or anything like that.
Even then, knowing when not to use k8s or similar things is often more valuable than having deep knowledge of them - a lot of the places where I see k8s or the like used don't have the uptime requirements to warrant the complexity. If I have something that just needs to be up during working hours, and I have reliable monitoring plus the ability to re-deploy it via ansible within 10 minutes if it goes poof, putting a few additional layers that can blow up in between maybe isn't the best idea.
Everything is deployed via ansible - including name services. So I already have the description of my infra in ansible, and the rest is just a matter of writing scripts to pull it into a more readable form, and maybe adding a few comment labels that also get extracted for easily forgettable admin URLs.
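To illustrate the kind of extraction script I mean, a minimal sketch (not my actual script) - assuming a plain-YAML inventory, with an invented "admin_url" host variable standing in for whatever label convention the playbooks actually use:

    # Minimal sketch: pull hosts plus a hypothetical "admin_url" host
    # variable out of a plain-YAML ansible inventory and print a readable
    # overview. Assumes PyYAML is installed.
    import sys
    import yaml

    def hosts(group):
        """Recursively yield (hostname, hostvars) from an inventory group."""
        for name, hostvars in (group.get("hosts") or {}).items():
            yield name, hostvars or {}
        for child in (group.get("children") or {}).values():
            yield from hosts(child or {})

    with open(sys.argv[1]) as f:
        inventory = yaml.safe_load(f)

    for name, hostvars in hosts(inventory.get("all") or {}):
        # "admin_url" is an invented label - substitute whatever
        # comment/variable convention your playbooks actually use.
        print(f"{name:30} {hostvars.get('admin_url', '-')}")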
Shitty companies did it like that back then - and shitty companies still don't properly utilize the easy tools they have available for controlled deployment nowadays. So nothing really changed, just that the number of people (and with that, the number of morons) skyrocketed.
I had automated builds out of CVS, with deployment to staging and the option to deploy to production after tests, over 15 years ago.
Accessing powershell is not the issue - that Windows is broken, with a sprinkle of bad permission management by the corporations using it, is the issue. And the bad permission practices are a direct result of how broken Windows is - I tried a while ago to use it with a fully unprivileged user, just like I've been doing for decades on UNIX and now Linux. It is pretty much impossible without privilege elevation prompts every few minutes.
In a proper environment a user should be able to destroy data they’re working with - but not have the ability to alter the operating system.
I nowadays manage my private stuff with the ansible scripts I develop for work - so my own stuff is mostly a development environment for work, and therefore doesn't need to be done on private time.
There is nothing like this available currently. Framework probably comes closest, but they only sell in a few countries, and there is lots of stuff to dislike about their solutions - though building your own around a Framework board might be feasible.
I have two MNT Reforms - as you said, slow and expensive. They have their use for work prototyping for me, but generally I wouldn't recommend them. They also have the worst keyboard I've encountered in a notebook in the last decade.
Generally yes, but you still need hardware support (mostly kernel and mesa). They do upstream their work - but right now you generally want packages built from their git for that.
Also, the installer is very Mac-hardware-specific.
A lot of the Zen-based APUs don't support ECC. The next question is whether the platform takes registered or unregistered modules - everything up to Threadripper is unregistered (though I think some of the Pro parts take registered), while Epyc is registered.
That makes a huge difference in how much RAM you can add, and how much you pay for it.
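If you want to check whether ECC actually ended up active on a Linux box, a quick sketch - purely assuming the kernel's EDAC support is available (dmidecode --type memory is the fallback if it isn't):

    # Quick sketch: if the kernel's EDAC subsystem registered a memory
    # controller, ECC is actually in use; the counters show corrected
    # and uncorrectable errors. Assumes Linux with edac sysfs support.
    from pathlib import Path

    mc_root = Path("/sys/devices/system/edac/mc")
    mcs = sorted(mc_root.glob("mc[0-9]*")) if mc_root.is_dir() else []

    if not mcs:
        print("no EDAC memory controllers found - ECC probably not active")
    for mc in mcs:
        ce = (mc / "ce_count").read_text().strip()  # corrected errors
        ue = (mc / "ue_count").read_text().strip()  # uncorrectable errors
        print(f"{mc.name}: corrected={ce} uncorrectable={ue}")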
Is it a ‘death by quantity’ thing?
Pretty much that - those companies rely on open projects to sort it out for them: they're scraping open databases and selling the good data they pull from there. That's why they were complaining about the kernel stuff - the information required was already there, you just needed to put the effort in, so they were asking for CVEs. Now they got their CVEs - but to profit from them they'd still need to put in the same effort as they would have without CVEs in place.
Short version: a bunch of shitty companies have as their business model selling open databases to companies that want to track security vulnerabilities - at pretty much zero effort to themselves. So they've been bugging the kernel folks to start issuing CVEs and do impact analysis so they have more to sell - and the kernel folks just went "it is the kernel, everything is critical".
tl;dr: this is pretty much an elaborate “go fuck yourself” towards shady ‘security’ companies.
Funny timing - I'm currently going through a stack of Sun hardware in my garage to decide what to keep, and what I'll try to find a good home for (or eventually dispose of).
It starts with them only doing initial talks with you about buying their hardware for a project if there's a 7-figure payment involved, and it doesn't improve from there.
It has been a while since I touched ssmtp, so take what I’m saying with a grain of salt.
The problem with ssmtp and related tools when I was testing them was their behaviour under error conditions - due to the lack of any kind of spool they don't fail very gracefully, and if the sending software doesn't expect that and implement a spool itself (which it typically has no reason to, as pretty much the only situation where something like sendmail would fail is one where it also couldn't write to a spool) this can very easily lead to lost mail.
I already had a working SMTP client capable of fishing mails out of a Maildir at that point, so I ended up just writing a simple sendmail program that throws whatever it receives into a Maildir, plus a cronjob to send that onward. This might be the most minimalistic setup for reliably sending out mail (and I'm using it on all my computers behind Emacs to do so) - but it is badly documented, so if you care about reliability postfix might be a better choice, or if you don't, just go with ssmtp or similar. Or if you do want to dig into that, message me and I'll help make things more user-friendly.
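For the curious, the whole idea fits in a few lines. A rough sketch, not my actual implementation - the spool path and smarthost are made up:

    #!/usr/bin/env python3
    # Rough sketch of the idea: a sendmail stand-in that drops whatever it
    # receives into a Maildir, plus the cronjob half that pushes the spool
    # onward via SMTP.
    import sys
    import mailbox
    import smtplib

    SPOOL = "/var/spool/outgoing"   # assumed path - pick what suits you
    SMARTHOST = "mail.example.org"  # assumed relay

    def enqueue():
        """sendmail half: stdin -> Maildir; the tmp/new dance makes it atomic."""
        mailbox.Maildir(SPOOL, create=True).add(sys.stdin.buffer.read())

    def flush():
        """cron half: deliver the spool; on any failure the mail stays queued."""
        md = mailbox.Maildir(SPOOL, create=True)
        with smtplib.SMTP(SMARTHOST) as smtp:
            for key in list(md.keys()):
                smtp.send_message(md[key])  # envelope taken from the headers
                md.remove(key)              # drop only after the relay accepted

    if __name__ == "__main__":
        flush() if "--flush" in sys.argv else enqueue()

The point compared to ssmtp is the step in the middle: if the relay is down, the message just stays in the Maildir and the next cron run retries - nothing gets lost.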
A problem of this bubble is that it is making AI synonymous with LLM - and when it goes down it will burn other, more sensible forms of AI.
It surely is a bubble - though probably a bit different from many other bubbles.
I think OpenAI made the right call (for them) to commercialize when they did - as that was pretty much their only chance to do so. Things have moved fast over the last 1.5 years - what used to take a decade in tech has happened within months: OpenAI is the dinosaur company grandfathered in, while for about a year already it has been more sensible for anybody wanting to do something with LLMs to self-host one of the more open language models (or buy hosting capacity, but put up your own data), and possibly adjust or re-train it.
As a company owner I've been getting a ridiculous amount of spam for a year already from all kinds of companies building products on top of the OpenAI stack, or trying to sell training or conferences. All those companies will be left with nothing once the slower users realize technology has moved on. It's like somebody trying to build all their product offerings on the VMware stack nowadays.
If you as a company want to offer something around AI right now, the safest option is probably hosting, or, if you want to be more hands-on, adjustment of open models. Both of those are still very risky, and many will go bust in the years to come - but they're not as suicidal as building on top of a closed dinosaur.
I see you’re not working in any industry having to deal with Qualcomm.
He probably needs a comaintainer. We could select one of us and then try pressuring him into accepting that.
The problem with renewables is the fluctuation. So you need something you can quickly spin up or down to compensate. You can do that with nuclear reactors to some extent - but they barely break even at current energy prices, and they incur the same high costs while idle.
So a combination of grid storage and power plants with low idle costs (like hydro) is the way to go now.