  • The free tier is super limited and super easy to accidentally break out of. I had a single file in S3, but because my logging settings were wrong, I blew past the free tier with junk logs.

    The t2.micro EC2 instances are fine, but you need to be very careful about their storage and network egress.

    The best use I’ve had for AWS that has managed to stay within the free limits has been Lambda. I managed to convert a couple of self-hosted Discord bots to a few Lambda functions, and they work great (see the sketch at the end of this comment). Plugging it into CloudFormation and tying up CI/CD with CodePipeline and the like were overkill, but a good learning experience.

    I don’t think there’s any ECS free tier, but you can fit a private container repository in the free S3 limits as well.
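
    For what it’s worth, here’s a minimal sketch of the kind of Lambda handler that can back a Discord bot through the interactions webhook, assuming an API Gateway proxy (or Lambda function URL) in front of it; the reply text is made up for illustration, and a real deployment must also verify Discord’s request signature before responding:

    ```python
    import json

    def lambda_handler(event, context):
        # Discord verifies the endpoint with a PING (type 1); answer with PONG (type 1).
        body = json.loads(event.get("body") or "{}")
        if body.get("type") == 1:
            return {"statusCode": 200, "body": json.dumps({"type": 1})}

        # Type 2 is a slash command; response type 4 posts a channel message.
        if body.get("type") == 2:
            name = body.get("data", {}).get("name", "unknown")
            return {
                "statusCode": 200,
                "body": json.dumps({"type": 4, "data": {"content": f"ran /{name}"}}),
            }

        return {"statusCode": 400, "body": "unhandled interaction type"}
    ```

    The nice part of this shape is that the bot costs nothing while idle: no process holds a gateway connection open, so it stays comfortably inside the Lambda free tier.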



  • You’re going to want to look up things like symlinks, hard links, FUSE filesystems, and bind mounts, among other concepts. Your “whole directory” duplicates and other apparent copies are artifacts of how the filesystem and process management work, and simply running fsearch or find over them is going to be confusing if you don’t know what you’re looking at.

    One Unix concept that carries over to Linux is that everything is a file. Your shared memory space, process data, device driver interfaces, and so on are all accessible somewhere in the same virtual filesystem tree as the actual files.

    Because of this, there’s very little reason to have the whole filesystem indexed from root. If you’re worried about space usage, work with packages through the package manager. If you’re worried about system integrity, you’ll want package validators (e.g. debsums on Debian or rpm -V on RPM-based systems). There’s a short demo below of why “duplicates” often aren’t duplicates at all.
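
    A minimal sketch of the link concepts above, assuming a Linux box (the filenames and temp directory are made up for illustration): a hard link is a second directory entry for the same inode, a symlink is a tiny file holding a path, and /proc exposes process state as files:

    ```python
    import os
    import tempfile

    # Hard links share an inode; symlinks are tiny files holding a path.
    d = tempfile.mkdtemp()
    data = os.path.join(d, "data.txt")
    with open(data, "w") as f:
        f.write("hello\n")

    hard = os.path.join(d, "hard.txt")
    soft = os.path.join(d, "soft.txt")
    os.link(data, hard)     # second name for the same inode; naive size scans count it twice
    os.symlink(data, soft)  # just a pointer to the path

    print(os.stat(data).st_ino == os.stat(hard).st_ino)  # True: literally the same file
    print(os.stat(data).st_nlink)                        # 2: two names, one file
    print(os.path.islink(soft), os.readlink(soft))       # True, path back to data.txt

    # "Everything is a file": this process's own state, readable like any file.
    with open("/proc/self/status") as f:
        print(f.readline().strip())  # e.g. "Name:   python3"
    ```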






  • If you are in a position to ask this question, it means you have no actual uptime requirements, and the question is largely irrelevant. However, in the “real” world where seconds of downtime matter:

    > Things not changing means less maintenance, and nothing will break compatibility all of the sudden.

    This is a bit of a misconception. You have just as many maintenance cycles (e.g. “Patch Tuesdays”) because packages constantly need security updates. What a stable distro actually buys you is fewer, better-documented changes per maintenance cycle, which makes it easier and faster to determine what’s likely to break before you even enter your testing cycle.

    > Less chance to break.

    Sort of. Security changes frequently break running software, especially third-party software that just happened to need a certain security flaw or out-of-date library to function. The world has gotten much better about this, but it’s still a huge headache.

    > Services are up to date anyway, since they are usually containerized (e.g. Docker).

    Assuming that the containerized software doesn’t need maintenance is a great way to run broken, insecure containers. Containerization helps to limit attack surfaces and outage impacts, but it isn’t inherently more secure. The biggest benefit of containerization is the abstraction of software maintenance from OS maintenance. It’s a lot of what makes Dev(Sec)Ops really valuable.

    Edit since it’s on my mind: Containers are great, but amateurs always seem to forget that they all share the host kernel. One container that causes a kernel panic, or that hoses misconfigured shared-memory (SHM) settings, can take down the entire host (see the sketch at the end of this comment). Virtual machines are much, much safer in this regard, but have their own downsides.

    > And, for Debian especially, there’s one of the biggest availability of services and documentation, since it’s THE server OS.

    No it isn’t. THE server OS is the one that fits your specific use-case best. For us self-hosted types, sure, we use Debian a lot. Maybe. For critical software applications, organizations want a vendor to support them, if for no other reason than to offload liability when something goes wrong.

    > It is running only rarely. Most of the time, the device is powered off. I only power it on a few times per month when I want to print something.

    This isn’t a server. It’s a printing appliance. Either way you’ll be installing updates at every power-on, but with CoreOS there will be many more of them, and when something breaks, you’ll have a much longer list of suspects to track down.

    > And, last but not least, I’ve lost my password.

    JFC, uptime and stability aren’t your problem. You also very probably don’t need to wipe the OS to recover a password.

    > My Raspberry Pi on the other hand is only used as print server, running Octoprint for my 3D-printer. I have installed Octoprint there in the form of Octopi, which is a Raspian fork distro where Octoprint is pre-installed, which is the recommended way.

    That is the answer to your question. You’re running this RPi as a “server” for your 3D printing. If you want your printing to work reliably, then do what OctoPrint recommends.

    What it sounds like is that you’re curious about CoreOS and how to run other distributions. Since breakage is basically a minor inconvenience for you, have at it. Unstable distros are great learning experiences and will keep you up to date on modern software better than “safer” things like Debian Stable. Once you get it doing what you want, it’ll usually keep doing that. Until it doesn’t, and then learning how to fix it is another great way to get smarter about running computers.

    E: Reformatting
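
    And a minimal illustration of the shared-kernel point from the edit above, assuming a Linux host with Python available: run this once on the host and again inside any container on that host, and the values come back identical, because there is only one kernel.

    ```python
    import platform

    # Every container on a host reports the same kernel release, because
    # there is only one kernel: kernel-level breakage is host-level breakage.
    print("kernel:", platform.release())

    # Same story in file form, via the shared /proc.
    with open("/proc/version") as f:
        print(f.read().strip())
    ```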


  • I actually want to learn enough code to contribute, but there’s this gap between “how to code” and “how to participate in a modern software project”.

    Like, I’ve created plenty of little things: Discord bots, automation scripts, plenty of sysadmin stuff for work, etc. But then I clone a git repo because there’s a Home Assistant bug I’d like to fix, for example, and I’m immediately lost on where to start.




  • No apology needed; one thing about security is that paranoia is good. One problem with security is that paranoia leads to assumptions and misinformation, rather than understanding.

    Symmetric-key encryption is much faster than asymmetric, and can use much larger keys with less compute penalty. So we use a CPU-intensive asymmetric TLS handshake to safely exchange the keys, and then switch to the faster method for the data (see the sketch below).

    So when ZigBee uses AES-128, you can be reasonably sure the data packets are safe. The next question to ask is “do they exchange their keys safely?”

    Which in this case would be “no” if you just leave the ZigBee controller in pairing mode all the time. But if you only allow pairing when you want it, and only pair with devices you explicitly allow, unauthorized devices never get your network key.
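
    Here’s a minimal sketch of that asymmetric-handshake-then-symmetric-bulk pattern, using the third-party Python cryptography package; the RSA key size and sample payload are arbitrary, and real TLS uses ephemeral (EC)DHE for the key exchange these days, but the shape is the same:

    ```python
    import os

    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import padding, rsa
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    oaep = padding.OAEP(mgf=padding.MGF1(hashes.SHA256()),
                        algorithm=hashes.SHA256(), label=None)

    # Slow asymmetric step: generate a keypair and wrap a fresh AES-128 key,
    # the same job the TLS handshake does.
    private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    aes_key = AESGCM.generate_key(bit_length=128)
    wrapped_key = private_key.public_key().encrypt(aes_key, oaep)

    # Fast symmetric step: bulk data moves under AES-GCM.
    nonce = os.urandom(12)
    ciphertext = AESGCM(aes_key).encrypt(nonce, b"temperature: 21.5C", None)

    # Receiver unwraps the key once, then decrypts cheaply from there on.
    recovered_key = private_key.decrypt(wrapped_key, oaep)
    print(AESGCM(recovered_key).decrypt(nonce, ciphertext, None))
    ```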


  • That’s fair: it’s possible these chips have some backdoored bootloader or something; I’ve never personally analyzed them with an electron microscope. But the architecture and wire traces are published, so in principle you could start a chip fabrication plant and roll your own silicon.

    The actual running code on them is usually GitHub-hosted, though, or you can write it yourself and just import the libraries you need, again usually from GitHub or the platform-specific repositories.

    If you’re worried about Chinese chips in your open source though, I have some real bad news for you.

    If you’re using FOSS specifically as a control against Chinese spying, and not analyzing the commit logs of every package you download, I have more bad news for you.