I have played around with yunohost and other similar tools. I know how to open ports on the router and configure port forwarding. I am also interested in hosting my own stuff for experiments, but I also keep a VPN enabled on my router at all times for privacy reasons. If you haven’t guessed already, I am very reluctant to reveal my home IP for selfhosting, as contradictory as that sounds.

I am aware that it’s better to rent a VPS, not to mention the dynamic IP issues, but here it goes: assuming my VPN provider permits port forwarding, is it possible to selfhost anything from behind a VPN, including the virtual machine running all the necessary software?

edit: title

edit2: I just realized my VPN provider is discontinuing port forwarding next month. Why?!

  • Mountaineer@lemmy.world · 1 year ago

    Absolutely possible.
    The key to simple self-hosting is to have a DNS record that points to your externally accessible IP, whether that be your real one or an external one hosted at a VPN provider.
    If that IP changes, you’ll need to update it dynamically.

    It’s becoming increasingly common for this to be a requirement as CGNAT becomes more widespread.

    One of the newer ways to do that is with a Cloudflare Tunnel which, whilst technically only for web traffic, also works for other things like SSH since they ignore low-throughput usage.
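
    For example, with docker compose the tunnel can just run as another service next to whatever you’re hosting. A minimal sketch, assuming a tunnel created in the Zero Trust dashboard (the token is a placeholder, and the public hostname → local service mapping is configured on Cloudflare’s side):

      cloudflared:
        image: cloudflare/cloudflared:latest
        # the token comes from the Zero Trust dashboard when the tunnel is created
        command: tunnel --no-autoupdate run --token <tunnel token>
        restart: always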

    • stonesimulator@lemmy.world (OP) · 1 year ago

      My knowledge is a little dated; I remember messing around with dyndns or noip to update my IP many years ago. I guess a simple script running on the router or the host should suffice?
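
      For example, something along these lines on a cron schedule would cover it. A minimal, untested sketch assuming a DuckDNS subdomain (the subdomain and token are placeholders; adjust for whatever provider you use):

        import urllib.request

        SUBDOMAIN = "example"          # placeholder: your DuckDNS subdomain
        TOKEN = "your-duckdns-token"   # placeholder: your DuckDNS account token

        # Leaving ip= empty lets DuckDNS use the public IP it sees the request from.
        url = f"https://www.duckdns.org/update?domains={SUBDOMAIN}&token={TOKEN}&ip="

        with urllib.request.urlopen(url, timeout=10) as resp:
            result = resp.read().decode().strip()

        # DuckDNS answers "OK" on success and "KO" on failure.
        print("updated" if result == "OK" else f"update failed: {result}")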

  • meteokr@community.adiquaints.moe · 1 year ago

    At the end of the day, packets need to get from whatever your DNS points to, to the server that’s running. Depending on your tolerance for jank, and as long as a route actually exists, you can run the server anywhere you want. Renting a VPS does offer a lot more freedom in how and where you are routing.
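
    One common shape for that route, for example: a cheap VPS as the public endpoint, forwarding traffic down a VPN/WireGuard tunnel to the box at home. A rough sketch of the VPS side using nginx’s stream module (the 10.0.0.2 tunnel address is a placeholder):

      # goes at the top level of nginx.conf on the VPS, alongside (not inside) http {}
      stream {
          server {
              listen 443;
              # pass raw TCP down the tunnel to the home server; TLS terminates at home
              proxy_pass 10.0.0.2:443;
          }
      }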

  • sven@l.mchome.net · 1 year ago

    Hopefully this will help someone. This seems to work for me: subscribed communities update and I am able to post, though I’m the only user on my server right now. NPM took a bit of messing around with the config, but I think I have everything working; some of this may be redundant or non-functional, but I don’t have the will to go line by line to see what more I can take out. Here is how I have it configured. Note that some things go to the Lemmy UI port and some to the Lemmy port; these should be defined in your docker-compose if you’re using that (mine is below).

    On the first tab in NPM, “Details”, I have the following:

     Scheme: http
     Hostname: <docker ip>
     Port: <lemmy-ui port>

    Block Common Exploits and Websockets Support are enabled.

    On the Custom Locations page, I added 4 locations; you have to do one for each directory even though the IP/ports are the same.

    Location: /api
    Scheme: http
    Hostname: <docker ip>
    Port: <lemmy port>
    

    Repeat the above for “/feeds”, “/pictrs”, and “/nodeinfo”. The example file they give also says to have “/.well-known” in there, but as far as I know that’s just for Let’s Encrypt, which NPM should be handling for us.
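
    If you’d rather not repeat yourself, those four entries boil down to a single block that could go on the Advanced tab instead. A sketch of roughly what NPM renders for each custom location (untested):

      # one regex location covering the four backend paths
      location ~ ^/(api|feeds|pictrs|nodeinfo) {
          proxy_set_header Host $host;
          proxy_set_header X-Real-IP $remote_addr;
          proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
          proxy_pass http://<docker ip>:<lemmy port>;
      }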

    On the SSL tab, I have a Let’s Encrypt certificate set up, with Force SSL, HTTP/2 Support, and HSTS enabled.
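
    For reference, those toggles roughly correspond to plain nginx along these lines. NPM writes all of this for you, so this is only to show what’s happening under the hood; a sketch with placeholder paths, not taken from my config:

      server {
          listen 80;
          server_name <your domain>;
          return 301 https://$host$request_uri;        # "Force SSL"
      }
      server {
          listen 443 ssl http2;                        # "HTTP/2 Support"
          server_name <your domain>;
          ssl_certificate     /path/to/fullchain.pem;  # Let's Encrypt cert
          ssl_certificate_key /path/to/privkey.pem;
          add_header Strict-Transport-Security "max-age=63072000" always;  # "HSTS Enabled"
          # ...the proxy config from the other tabs goes here...
      }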

    On the Advanced tab, I have the following:

     location / {
       # default to the lemmy-ui frontend...
       set $proxpass "http://<docker ip>:<lemmy-ui port>";

       # ...but send ActivityPub (federation) and POST traffic to the lemmy backend
       if ($http_accept = "application/activity+json") {
         set $proxpass "http://<docker ip>:<lemmy port>";
       }
       if ($http_accept = "application/ld+json; profile=\"https://www.w3.org/ns/activitystreams\"") {
         set $proxpass "http://<docker ip>:<lemmy port>";
       }
       if ($request_method = POST) {
         set $proxpass "http://<docker ip>:<lemmy port>";
       }
       proxy_pass $proxpass;

       # strip trailing slashes
       rewrite ^(.+)/+$ $1 permanent;

       # send the actual client IP upstream
       proxy_set_header X-Real-IP $remote_addr;
       proxy_set_header Host $host;
       proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
     }
    

    I should probably add my docker-compose file as well… I’m far from a Docker expert. This is reasonably close to their examples and others I found. I removed nginx from in here since we already have a proxy. I disabled the debug logging because it was using disk space, and I also removed all the networking lines because I’m not smart enough to figure them out right now. If you use this, look out for the < > sections; you need to set your own domain/hostname and postgres user/password.

    version: "3.3"
    
    services:
      lemmy:
        image: dessalines/lemmy:0.17.3
        hostname: lemmy
        restart: always
        ports:
          - 8536:8536
        environment:
          - RUST_LOG="warn"
          - RUST_BACKTRACE=full
        volumes:
          - ./lemmy.hjson:/config/config.hjson:Z
        depends_on:
          - postgres
          - pictrs
    
      lemmy-ui:
        image: dessalines/lemmy-ui:0.17.4
        # use this to build your local lemmy ui image for development
        # run docker compose up --build
        # assuming lemmy-ui is cloned besides lemmy directory
        # build:
        #   context: ../../lemmy-ui
        #   dockerfile: dev.dockerfile
        ports:
          - 1234:1234
        environment:
          # this needs to match the hostname defined in the lemmy service
          - LEMMY_UI_LEMMY_INTERNAL_HOST=lemmy:8536
          # set the outside hostname here
          - LEMMY_UI_LEMMY_EXTERNAL_HOST=<domain name>
          - LEMMY_HTTPS=false
          - LEMMY_UI_DEBUG=true
        depends_on:
          - lemmy
        restart: always
    
      pictrs:
        image: asonix/pictrs:0.4.0-beta.19
        # this needs to match the pictrs url in lemmy.hjson
        hostname: pictrs
        # we can set options to pictrs like this, here we set max. image size and forced format for conversion
        # entrypoint: /sbin/tini -- /usr/local/bin/pict-rs -p /mnt -m 4 --image-format webp
        environment:
          - PICTRS_OPENTELEMETRY_URL=http://otel:4137
          - PICTRS__API_KEY=API_KEY
          - RUST_LOG=debug
          - RUST_BACKTRACE=full
          - PICTRS__MEDIA__VIDEO_CODEC=vp9
          - PICTRS__MEDIA__GIF__MAX_WIDTH=256
          - PICTRS__MEDIA__GIF__MAX_HEIGHT=256
          - PICTRS__MEDIA__GIF__MAX_AREA=65536
          - PICTRS__MEDIA__GIF__MAX_FRAME_COUNT=400
        user: 991:991
        volumes:
          - ./volumes/pictrs:/mnt:Z
        restart: always
    
      postgres:
        image: postgres:15-alpine
        # this needs to match the database host in lemmy.hjson
        # Tune your settings via
        # https://pgtune.leopard.in.ua/#/
        # You can use this technique to add them here
        # https://stackoverflow.com/a/30850095/1655478
        hostname: postgres
        command:
          [
            "postgres",
            "-c",
            "session_preload_libraries=auto_explain",
            "-c",
            "auto_explain.log_min_duration=5ms",
            "-c",
            "auto_explain.log_analyze=true",
            "-c",
            "track_activity_query_size=1048576",
          ]
        ports:
          # use a different port so it doesn't conflict with a postgres db potentially running on the host
          - "5433:5432"
        environment:
          - POSTGRES_USER=<dbuser>
          - POSTGRES_PASSWORD=<dbpassword>
          - POSTGRES_DB=lemmy
        volumes:
          - ./volumes/postgres:/var/lib/postgresql/data:Z
        restart: always