  • You can run a reverse proxy on the VPS and use SNI routing (the requested domain is sent in clear text in the TLS ClientHello, so the proxy can route on it without decrypting anything), then use Proxy Protocol to attach the real client IP to the forwarded TCP connection.
    This way, you don’t have to terminate HTTPS on the VPS, and you can load balance between a couple of wireguard peers so you have redundancy (or direct them to different reverse proxies, or whatever).
    On your home servers, you will need an additional frontend that accepts Proxy Protocol from the VPS (Proxy Protocol packets aren’t standard HTTP/S, so a standard HTTPS reverse proxy will drop them as unknown/broken).
    This way, your home reverse proxy knows the original client IP and can attach it to the decrypted HTTP requests as X-Forwarded-For, or apply ACLs based on it, or whatever. There’s a sketch of both ends below.

    I haven’t found a firewall that pays attention to Proxy Protocol headers, but that hasn’t really been an issue for me; I don’t really have a use case for it.
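    A minimal sketch of both ends, assuming HAProxy on the VPS and nginx at home (the hostnames and the 10.0.0.x wireguard addresses are made up):

    ```
    # VPS: haproxy.cfg - routes on SNI without terminating TLS
    frontend tls_in
        bind :443
        mode tcp
        tcp-request inspect-delay 5s
        tcp-request content accept if { req.ssl_hello_type 1 }
        use_backend home_a if { req.ssl_sni -i a.example.com }
        default_backend home_b

    backend home_a
        mode tcp
        # send-proxy-v2 prepends a Proxy Protocol header carrying the real client IP
        server peer1 10.0.0.2:443 send-proxy-v2

    backend home_b
        mode tcp
        server peer2 10.0.0.3:443 send-proxy-v2
    ```

    And the home frontend that accepts Proxy Protocol and terminates TLS:

    ```
    # Home: nginx trusts Proxy Protocol from the VPS’s wireguard address only
    server {
        listen 443 ssl proxy_protocol;
        server_name a.example.com;
        # ssl_certificate / ssl_certificate_key omitted for brevity

        set_real_ip_from 10.0.0.1;      # the VPS’s wireguard address
        real_ip_header  proxy_protocol; # take the client IP from the Proxy Protocol header

        location / {
            proxy_pass http://127.0.0.1:8080;
            proxy_set_header X-Forwarded-For $proxy_protocol_addr;
        }
    }
    ```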


  • It’s not a workaround.
    In the old days, if you had 2 services that were hard-coded to use the same network port, you needed virtualization or a separate server, and you had to make sure the networking for those was set up correctly.

    Network ports allow multiple services to use the same network adapter, as a port is like a “sub” address.
    Docker being able to remap host network ports to containers ports is a huge feature.
    If a container doesn’t need to be accessed outside of the docker network, you don’t need to expose the port.

    The only way to have multiple services on the same port is to use either a load balancer (for multiple instances of the same service) or an application-aware reverse proxy (like nginx, haproxy, caddy, etc. for web things; I’m sure there are application-aware reverse proxies for other protocols too).
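    For example (image names arbitrary), two containers that are both hard-coded to listen on port 80 internally can be remapped to different host ports:

    ```
    # each container thinks it owns port 80; docker maps them to 8080/8081 on the host
    docker run -d --name site-a -p 8080:80 nginx
    docker run -d --name site-b -p 8081:80 nginx
    ```

    And a sketch of the application-aware option - a Caddyfile where two hostnames share port 443 (hostnames made up):

    ```
    a.example.com {
        reverse_proxy 127.0.0.1:8080
    }
    b.example.com {
        reverse_proxy 127.0.0.1:8081
    }
    ```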


  • No idea. Probably cause it’s a bit gate-keepy in the way I say “any tuner worth their salt”, as if it’s the only way to achieve good results.
    I haven’t met a tuner that uses anything other than forks. Maybe that’s because all the pianos I’ve worked with have been in good condition, so haven’t needed drastic measures applied. Since I haven’t seen anything else in use, I can’t say if the alternatives are better/faster/whatever; I just assumed forks are the industry standard, like how orchestras tune by ear.


  • If they are on the same subnet, why are they going via the router? Surely the NIC/OS will know the destination is a local address within its own subnet and will send to it directly, as opposed to not knowing where to send the packet and letting the router deal with it.

    I’m assuming you are using a standard 24-bit subnet mask, because you haven’t provided anything that indicates otherwise, and the issue you present would be indicative of the local link being used. Is this possible? (One way to check is sketched below.)
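    One way to check, assuming Linux (the addresses are examples): ask the kernel for its routing decision. An on-link destination shouldn’t show a “via <gateway>” hop:

    ```
    $ ip route get 192.168.1.20
    192.168.1.20 dev eth0 src 192.168.1.10

    # vs. an off-subnet destination, which goes via the gateway
    $ ip route get 8.8.8.8
    8.8.8.8 via 192.168.1.1 dev eth0 src 192.168.1.10
    ```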


  • Well, if the Tories get back in, then deporting people to Rwanda and expanded oil drilling in the North Sea are guaranteed.
    Voting Labour, there is a chance that these will be cancelled.

    The Tories have had decades in charge, and shit is fucked.
    Labour are more progressive - not enough for my taste, but better than constant austerity.


  • So, is public accessibility actually required?
    Does it need to be exposed to the public internet?

    Why not use wireguard (or another VPN)? Even easier is tailscale.
    If you are hand-selecting users (i.e. it doesn’t actually need to be publicly accessible), then a VPN is the most secure option; just run a reverse proxy behind it for ease & certs.
    Or set up client certificate authentication, so only users who install a certificate issued by you can connect to the service (dunno how that works for 3rd party apps for immich; there’s a sketch below).

    Like I asked, what is your actual threat model?
    What are your requirements?
    Is public accessibility actually required?
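    For the client-certificate option, a sketch for nginx (the paths are assumptions; you’d create the CA and issue the client certs yourself):

    ```
    # only clients presenting a cert signed by your CA complete the handshake
    server {
        listen 443 ssl;
        ssl_certificate        /etc/nginx/tls/server.crt;
        ssl_certificate_key    /etc/nginx/tls/server.key;

        ssl_client_certificate /etc/nginx/tls/client-ca.crt;  # your own CA
        ssl_verify_client      on;

        location / {
            proxy_pass http://127.0.0.1:2283;  # immich’s default port
        }
    }
    ```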


  • That got a bit long.
    Reading more into bunkerweb.

    Things like the “limit” feature are going to doink people on CGNAT or large corporate networks. I’ve had security stuff tripped by a company using my software, and it’s a PITA because all the requests from legit users come from only a few IP addresses.

    Antibot isn’t going to be helpful for things like JS requests, because cookies aren’t included by default with cross-origin fetch requests - so the application needs to be specifically built for this (at which point, do it at the application level so it can scale easier?). See the sketch below.
    And captcha, for whatever that is worth these days.
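    For reference, a TypeScript sketch of the fetch behaviour (the URL is made up):

    ```ts
    // Default credentials mode is "same-origin": cookies are sent with
    // requests to your own origin, but not with cross-origin requests.
    const sameOrigin = await fetch("/api/data");

    // Cross-origin requests only carry cookies if the caller opts in, and the
    // server must also respond with Access-Control-Allow-Credentials: true.
    const crossOrigin = await fetch("https://api.example.com/data", {
      credentials: "include",
    });
    ```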

    Reverse Scan is going to slow down every request, as it scans the remote client for suspicious open ports - a 500ms delay by default.

    Country is just geo-ip.

    Bad Behaviour is just rate limiting (albeit with a 24h ban). Sucks if a few corporate/CGNAT users all hit a 404 and suddenly that entire company’s/ISP’s IP is blocked for a day.

    This seems like something to use when running a Tor service or something, where security is more important than user experience. Like, every feature seems to punish legit users.