• 0 Posts
  • 42 Comments
Joined 1 year ago
Cake day: July 21st, 2023





  • Implementation of VPN’d torrent client

    This is how I torrent over Mullvad. I have no hesitation in recommending Mullvad - but I am not a crypto or security expert.

    The main image fails closed - if the VPN goes down, transmission disconnects.

    This setup also includes a SOCKS server that proxies your traffic over the same VPN. I use a separate browser (librewolf) and point its SOCKS proxy at the docker host on port 2020, including sending DNS over SOCKS. That’s because my country blocks piracy-related sites at the DNS level. If you don’t need this, you can delete the socks section of the docker-compose file.
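
    A quick way to check that the proxy is up and exiting via the VPN (assuming you run this on the docker host; swap localhost for the host’s LAN address otherwise) is Mullvad’s own connection-check service:

    curl --socks5-hostname localhost:2020 https://am.i.mullvad.net/connected

    The --socks5-hostname flag makes curl resolve DNS through the proxy too, which is the point of the DNS-over-SOCKS setup.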

    On my ubuntu laptop, I install transmission-remote-gtk in order to click on a magnet link and have it added. Otherwise you have to browse to the container’s web interface, which gets tiresome.
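
    On Ubuntu it’s a normal package; once installed, point it at the RPC port the compose file publishes (9091 on the docker host) and set it as the handler for magnet links:

    sudo apt install transmission-remote-gtk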

    I have this installed as a systemd service so it runs on boot. I use the systemd state and credential features as a safeguard against my own mistakes with permissions, but my long-term goal is to encrypt these files on disk. Linux can be pwned - I have read that around 35% of botnet nodes are linux (although these are presumably mostly weak IoT devices). The secondary benefit of the LoadCredential/CREDENTIALS_DIRECTORY mechanism is that it doesn’t expose secrets as environment variables.
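
    If you want to see how that mechanism looks from inside a unit, a throwaway run with the same credential path should show the secret appearing as a file under $CREDENTIALS_DIRECTORY rather than in the environment:

    sudo systemd-run -P --wait \
      -p LoadCredential=mullvad:/root/.secrets/mullvad \
      sh -c 'ls -l "$CREDENTIALS_DIRECTORY"'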

    The p2p.service file needs to be at the path shown below (under /etc/systemd/system/); the other files can go wherever you want, as long as WorkingDirectory in the unit points at the directory containing docker-compose.yml and sockd.conf.

    Known issues / todo list

    • The socks proxy sometimes falls over; I haven’t looked into why
    • The downloaded files will be owned by root, since that’s what the container runs as (workaround sketched below)
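
    A blunt workaround for the ownership issue, assuming your own user should end up owning the finished downloads (path as in the compose file below):

    sudo chown -R "$USER:$USER" /my/directory/Downloads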

    File contents

    /root/.secrets/mullvad:

    123456789
    ""
    

    For Mullvad, there is no password, only an account number. I believe that the empty quotes are necessary. This file should be owned by root and chmod 600; the containing dir should be 700. Replace the account number with your own, obvs!
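
    Something like this creates it with the right ownership and permissions (the account number is still the placeholder from above):

    sudo mkdir -p /root/.secrets
    sudo chmod 700 /root/.secrets
    printf '%s\n%s\n' '123456789' '""' | sudo tee /root/.secrets/mullvad >/dev/null
    sudo chmod 600 /root/.secrets/mullvad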

    /etc/systemd/system/p2p.service:

    [Unit]
    Description=p2p
    Requires=docker.service multi-user.target
    After=docker.service network-online.target dhcpd.service
    
    [Service]
    Restart=always
    RemainAfterExit=yes
    WorkingDirectory=/usr/local/bin/p2p
    ExecStart=docker compose up --remove-orphans
    ExecStop=docker compose down
    LoadCredential=mullvad:/root/.secrets/mullvad
    DynamicUser=yes
    SupplementaryGroups=docker
    StateDirectory=p2p
    StateDirectoryMode=700
    
    [Install]
    WantedBy=multi-user.target
    

    /usr/local/bin/p2p/docker-compose.yml:

    ---
    version: "3.7"
    
    services:
      p2p:
        restart: always
        container_name: p2p
        image: haugene/transmission-openvpn   # see also: https://www.nickkjolsing.com/posts/dockermullvadvpn/
        cap_add:
          - NET_ADMIN
        sysctls:
          - "net.ipv6.conf.all.disable_ipv6=0"  # ipv6 must be enabled for Mullvad to work
        volumes:
          - ${STATE_DIRECTORY:-./config/}:/config   # dir managed by systemd - but defaults to ./config if running interactively
          - ${CREDENTIALS_DIRECTORY:-.}/mullvad:/config/openvpn-credentials.txt:ro  # var populated by LoadCredential - but defaults to ./mullvad if running interactively
          - transmission:/data
          - transmission_incomplete:/data/incomplete
          - /my/directory/Downloads:/data/completed
        environment:
          - OPENVPN_PROVIDER=MULLVAD
          - OPENVPN_CONFIG=se_all  # sweden
          - LOCAL_NETWORK=192.168.1.0/24    # put your own LAN network here - in most cases it should end in .0/24 (see the note after this file)
          - TRANSMISSION_WEB_UI=flood-for-transmission  # optional
        ports:
          - 9091:9091
          - 80:9091
          - 2020:2020
    
      socks:
        restart: always
        container_name: socks
        image: lthn/dante
        network_mode: "service:p2p"
        volumes:
          - ./sockd.conf:/etc/sockd.conf
        depends_on:
          - p2p
    
    volumes:
      transmission:
        external: false
      transmission_completed:
        external: false
      transmission_incomplete:
        external: false
    

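    To pick the right LOCAL_NETWORK value, look at the directly-connected routes on the docker host; the LAN subnet shown there (e.g. 192.168.1.0/24 on a hypothetical eth0) is what goes in the compose file. Ignore docker’s own 172.x bridge networks.

    ip -4 route show scope link
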
    /usr/local/bin/p2p/sockd.conf:

    logoutput: stderr
    # debug: 2
    internal: 0.0.0.0 port = 2020
    external: tun0
    external.rotation: route
    
    clientmethod: none
    socksmethod: username none
    
    user.privileged: root
    user.notprivileged: nobody
    user.unprivileged: sockd
    
    # Allow everyone to connect to this server.
    client pass {
        from: 0.0.0.0/0 to: 0.0.0.0/0
        log: connect error  # disconnect
    }
    
    # Allow all operations for connected clients on this server.
    socks pass {
        from: 0.0.0.0/0 to: 0.0.0.0/0
        command: bind connect udpassociate
        log: error  # connect disconnect iooperation
        #socksmethod: username
    }
    # Allow all inbound packets.
    socks pass {
        from: 0.0.0.0/0 to: 0.0.0.0/0
        command: bindreply udpreply
        log: error  # connect disconnect iooperation
    }
    

    Steps

    1. Install Docker and the compose plugin, e.g. with sudo apt-get install -y docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin
    2. Create the files with contents as above
    3. sudo systemctl enable p2p
    4. sudo systemctl start p2p
    5. Check what it’s doing: systemctl status p2p (log commands after this list)
    6. On first start, it will take a few minutes to pull the images
    7. To debug interactively while also passing the creds, use sudo systemd-run -P --wait -p LoadCredential=mullvad:/root/.secrets/mullvad docker compose up --remove-orphans
    8. Every so often, cd into /usr/local/bin/p2p and run docker compose pull to update the images.
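
    For watching progress or debugging (step 5), the service logs and the container logs are the places to look:

    journalctl -u p2p -f      # docker compose runs attached, so container output lands in the journal
    docker logs -f p2p        # or watch the transmission/openvpn container directly
    docker logs -f socks      # the dante proxy container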



  • glue_snorter@lemmy.sdfeu.org to Asklemmy@lemmy.ml · Bypassing “wifi pausing”
    10 months ago

    Racial origins of those terms? Citation needed.

    Black and white, in the sense of good and evil, have had their connotations for a fucking long time, whereas black and white to describe skin colour are relatively recent etymologies. I’m pretty sure that Romans didn’t call themselves “white” or others “black”, for example.

    I’m willing to be taught, but this sounds like bullshit to me.

    The etymology being wrong doesn’t mean that we shouldn’t drop freighted terminology; I just don’t want false justification. Bullshit is never a valid reason to do something.


  • Pure FUD. Worse, it’s wilfully stupid.

    Are you in the habit of picking pipeline commands at random? Do you not usually have a purpose in mind? OBVIOUSLY the receiving end has to understand what it’s receiving, or what the fuck are you even doing?

    Do you believe that your text processing commands don’t have to understand what they receive?

    Let’s get the ports of the node container.

    Bash:

    docker ps | grep node | cut -d ' ' -f 6
    

    Pwsh:

    docker-ps | where name -eq 'node' | select ports
    

    First the grep command shits the bed because at some point you started a new container running a nodejs image.

    Then the cut command fails because you had a container with a space in the name, so it outputs mounts instead of ports.

    That’s a non-issue with semantic tools. Semantic tools are also legible. Yeah, I can figure out what that awk command does, but it’s meaningless unless I also know the shape of the data it’s supposed to operate on.

    You don’t write “USE 2nd DATABASE; SELECT 3rd COLUMN FROM 10th ROW”, do you? Why would you want to do that in a shell?
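
    For what it’s worth, docker itself can answer the question semantically, without parsing column positions - this filters on the container name, like the pwsh example:

    docker ps --filter "name=node" --format "{{.Ports}}"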




  • “My understanding is that the Windows terminal sucks? I don’t know why, it just looks bad.”

    Your understanding is wrong. I’ve tried 8 different terminals on mac, arch and kubuntu, and I miss Windows Terminal every day. It looks good and the config is a pleasure. I don’t expect Linux to look pretty, but MacOS had fucking awful font rendering and it’s supposed to be this upmarket OS for moneyed pricks in black turtlenecks. Was everyone in unixland busy doing drugs while Microsoft was implementing anti-aliasing? Is clear legible type for losers?







  • glue_snorter@lemmy.sdfeu.org to Selfhosted@lemmy.world · Old PC as Server
    11 months ago

    It’s OK, but I’d suggest:

    Atom > arm64 > arm32

    I ran on a Pi 4, but switched to a PC for jellyfin. The pi can’t transcode for shit. It was slow to boot and slow over SSH.

    Look for a NUC - they’re designed for desktop use, so they have more poke than a Pi. The N6005 CPU is a good choice, the N5105 is ok. These are x64, so you’ll have the widest range of packages. 4GB will do, if it’s upgradeable later. NUCs usually take SODIMMs, which you can pick up on ebay for peanuts.

    Bear in mind that the network chipset will be your bottleneck in some use cases. If it has a “gigabit port” but only a cheap chipset, and you use it as a router, you might max out at ADSL speeds… in that case you’ll wish you’d gone for a box designed for soft routing, which is a fair bit pricier.



  • They don’t supply PoE, mind.

    I’m planning an ubiquiti deployment:

    • 5-6x AP 6 Pro (haven’t done survey yet)
    • 1x TL-SG1016PE PoE switch (yuck, but cheap)
    • 1x R86S running opnsense and docker VMs, with unifi controller and pihole in docker

    The R86S is the same price as the dream machine, but good luck running pihole on the DM.

    I considered Mikrotik, but my mum would have to call me every time there was an issue, and it would only be marginally cheaper. I expect any competent local tech to be able to support unifi and opnsense.