no idea

  • 0 Posts
  • 12 Comments
Joined 1 year ago
Cake day: June 18th, 2023

  • kroy@lemmy.world to Programmer Humor@lemmy.ml · PHP is dead?
    3 points · 1 year ago

    As someone who used PHP professionally for literal decades, the PHP hate is so meme-y.

    Its biggest problem is that it allows you to do some truly cursed things. The same can be said about other languages, but PHP really doesn’t do much to set you up for success, especially as a new-to-intermediate coder.

    With opcache, it became fast enough for most web backends, and as a language overall it does seem to be evolving and shedding some of the crap that used to make it truly horrible in the hands of a new person. At least the type-juggling stupidity now errors.
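
    A minimal sketch of that comparison change, modeling PHP’s loose `==` semantics in Python purely for illustration (this is not PHP code, and the real engine handles many more cases):

    ```python
    # Toy model of PHP's loose == for an int vs. a string, before/after PHP 8.
    # Illustration only; the real engine handles many more cases.

    def is_numeric(s: str) -> bool:
        """Rough stand-in for PHP's is_numeric() check."""
        try:
            float(s)
            return True
        except ValueError:
            return False

    def php7_loose_eq(n: int, s: str) -> bool:
        # PHP <= 7: a non-numeric string is coerced to a number; "foo" becomes 0.
        return float(n) == (float(s) if is_numeric(s) else 0.0)

    def php8_loose_eq(n: int, s: str) -> bool:
        # PHP 8: if the string isn't numeric, the NUMBER is cast to a string
        # instead, so 0 == "foo" is finally false.
        if is_numeric(s):
            return float(n) == float(s)
        return str(n) == s

    print(php7_loose_eq(0, "foo"))  # True  -- the classic footgun
    print(php8_loose_eq(0, "foo"))  # False -- saner since PHP 8
    ```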

    Now I mainly use Go and Python (only because I have to on this one), and I would put Python and PHP on a similar level of “fuck this language” moments.

  • kroy@lemmy.world to Selfhosted@lemmy.world · OPNsense virtualization
    2 points · edited · 1 year ago

    I’m happy to discuss it, as I’ve written articles about it.

    I do high-level routing and firewalling in VMs (60 Gbps+), and there are a couple of realities you need to accept, especially when you involve a *BSD in the mix.

    1. *BSD’s networking drivers and, to a lesser degree, the whole stack SUUUCK. This becomes extra poignant when you involve pf, which is incredible for hand-editing but horrible for performance, because it’s a straight top-to-bottom rule list (see the toy sketch after this list).
    2. We could argue about the whole networking stack sucking all day, but in reality it’s the driver situation that really brings it down. That’s why “you must buy Intel” is such a mantra on *BSD: those are about the only drivers that don’t make for a completely horrible experience. You can meme about how terrible Realtek is, but really it’s only terrible on *BSD. It’s a first-class Linux citizen, and often supports better hardware features than the ancient X520, pre-ConnectX-4, etc. cards people circle-jerk about. And you’re often losing out on cool new features/offloads/abilities.
    3. The virtio drivers are usually more efficient and performant than most physical hardware drivers (on *BSD).
    4. You asked “why would anyone ever need to do that?” It’s simple: high availability. You can run two router/firewall VMs on two different hosts and have zero downtime. Or, if you only want one, you can migrate the VM either manually or automagically, and only suffer the downtime of a reboot as the VM moves to a different host. You can share the same physical NIC between multiple VMs with SR-IOV for maximum low-latency networking (aka storage). It’s a waste throwing 10Gb at just pfSense when it’ll be idle most of the time, and with older hardware pfSense isn’t even going to be able to hit half of that.
    5. Your VM just works if you ever have to move it to another host. With passthrough, your main routing and firewall VM is tied to a single specific host. In a disaster-recovery situation, this is going to make you hate yourself, as you basically end up needing to either physically pull a card and re-set up passthrough, or set up passthrough on a new card and make sure the VM is bound to those MACs. When it’s fully virtualized, it’s hardware-agnostic. Your VM may think it’s on a single 10Gb link, but underneath the links are highly available (aka vSphere vDS), on different VLANs, etc. My example here is from a few years ago, when my main router with 40Gb uplinks died and I swapped in a Z8350 WYSE 3040. Sure, I was limping for a few days, but as far as my router was concerned, there was no difference.
    6. NUMA becomes an issue. Even single processors have multiple NUMA nodes now, and it wouldn’t be difficult for someone who doesn’t know what a NUMA node is to create a NUMA problem, where you incur huge penalties going from CPU/chipset to RAM to NIC and back again, depending on where those components are physically arranged in the system. This is doubly poignant in the *BSD world.
    7. If a 1Gb interface is your bottleneck, your network design is broken. There is no reason for most people in a homelab to try to route >1Gbps at the edge. Don’t packet-inspect internal traffic, and internally you can run 10Gbps and beyond. Sure, a >1Gbps link might be a reason in 2023, but what’s your 95th percentile, like 25Mbps if you’re lucky? It’s only “hawt” for your speedtest numbers and the occasional download. And you can do 10Gbps pretty easily with virtio on basically any semi-modern system, especially with the large files most people would want 10Gb for, without dedicating a PCIe slot to it, and the VM stays portable.
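
    On point 1, here’s a toy Python sketch of why a flat, last-match rule list gets expensive: every packet walks the whole list, so cost grows linearly with ruleset size. This is a simplified illustration of the evaluation model, not pf’s actual implementation (real pf has `quick`, state tables, etc.):

    ```python
    # Toy model of last-match-wins rule evaluation, pf-style.
    # Every packet walks the entire list: O(rules) work per packet.
    # Simplified sketch -- real pf has 'quick', state tables, etc.

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class Rule:
        action: str            # "pass" or "block"
        proto: Optional[str]   # None matches any protocol
        dport: Optional[int]   # None matches any destination port

    def evaluate(rules: list[Rule], proto: str, dport: int) -> str:
        decision = "block"                  # default policy
        for rule in rules:                  # top to bottom, no early exit
            if rule.proto in (None, proto) and rule.dport in (None, dport):
                decision = rule.action      # last matching rule wins
        return decision

    rules = [
        Rule("block", None, None),   # block everything...
        Rule("pass", "tcp", 443),    # ...except HTTPS
        Rule("pass", "tcp", 22),     # ...and SSH
    ]

    print(evaluate(rules, "tcp", 443))  # pass
    print(evaluate(rules, "udp", 53))   # block
    ```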

    I mean, you do you. But I’d much rather just be able to change the uplink on a vSwitch or bridge to get my router going again, instead of having to reboot, pass through, insert GRUB CLI options, swap cards, etc.

  • The great thing about Proxmox is you can do snapshot backups, which take mere moments to complete. Then pass those off to a NAS, where they can survive an irreparable loss of your Proxmox server.

    Hopefully you put a giant asterisk by this point. You need the snapshot AND the original backup. Snapshots are only diffs and can’t survive without their base backup.
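
    To illustrate the dependency: a snapshot chain is a full base image plus diff layers, and a restore has to replay the chain from the base up. Here’s a toy Python sketch of that idea (a hypothetical block-map structure for illustration, not Proxmox’s actual on-disk format):

    ```python
    # Toy model of a snapshot chain: a full base image plus diff layers.
    # A restore replays the chain from the base up, so losing the base
    # makes every later snapshot unrestorable.
    # Hypothetical structure for illustration, not Proxmox's format.

    base = {"blk0": "os", "blk1": "config-v1", "blk2": "data-v1"}  # full image
    snap1 = {"blk1": "config-v2"}   # diff: only the blocks that changed
    snap2 = {"blk2": "data-v2"}     # diff on top of snap1

    def restore(chain: list[dict[str, str]]) -> dict[str, str]:
        """Replay layers bottom-up; later layers overwrite earlier blocks."""
        image: dict[str, str] = {}
        for layer in chain:
            image.update(layer)
        return image

    print(restore([base, snap1, snap2]))
    # {'blk0': 'os', 'blk1': 'config-v2', 'blk2': 'data-v2'}

    # Ship only the diffs to the NAS and the OS blocks are simply gone:
    print(restore([snap1, snap2]))
    # {'blk1': 'config-v2', 'blk2': 'data-v2'} -- no blk0, nothing bootable
    ```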