• 0 Posts
  • 73 Comments
Joined 23 days ago
Cake day: February 5th, 2025

  • Signature verification protects you against malicious actors. Generally it’s not critical, but if you’re worried about the source you’re getting software from, then I highly recommend that you verify the signature. Ideally, you’re given an .asc file with the distribution, and assuming you have GPG installed (and have a key), it’s pretty easy.

    First you want to import the public key they say they use to sign all of their distributions:

    gpg --auto-key-locate nodefault,wkd --locate-keys torbrowser@torproject.org
    
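    It doesn’t hurt to double-check that you imported the right key: print its fingerprint and compare it against the one the project publishes on their website:

    gpg --fingerprint torbrowser@torproject.org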

    Once it’s in your keyring, you sign it with your own key:

    gpg --sign-key torbrowser@torproject.org
    
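    If you’d rather not create an exportable certification, gpg can also make a local signature, which marks the key as trusted only inside your own keyring:

    gpg --lsign-key torbrowser@torproject.org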

    This is you telling the keyring that you trust this exact signing key, so now when you verify anything signed with that key (no matter where you got it from), you’ll get a little message saying “hey, we know who this is, this is probably safe!”:

    $ gpg --verify mullvad-browser-linux-x86_64-13.0.4.tar.xz.asc
    gpg: assuming signed data in 'mullvad-browser-linux-x86_64-13.0.4.tar.xz'
    gpg: Signature made Thu Nov 23 11:24:40 2023 CET
    gpg:                using RSA key 613188FC5BE2176E3ED54901E53D989A9E2D47BF
    gpg: Good signature from "Tor Browser Developers (signing key) <torbrowser@torproject.org>" [full]
    
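    If gpg can’t find the archive sitting next to the .asc (or you just want to be explicit), you can pass it both files so it doesn’t have to guess what the signature covers:

    gpg --verify mullvad-browser-linux-x86_64-13.0.4.tar.xz.asc mullvad-browser-linux-x86_64-13.0.4.tar.xz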

    In all reality, signing archives like this isn’t really necessary anymore. In the early days of the internet, when resources were scarce and web servers didn’t have 100% uptime, people mainly got software from FTP servers that weren’t up all the time. So you had to search and hunt for software, and sometimes you got it from random places. Signing was a way to ensure that, even though you didn’t get it from an official source, the software you were about to put on your machine hadn’t been messed with.

    These days you’re gonna get it directly from Mullvad–but even so, using signing keys protects you from MITM attacks, so that’s always cool. lol.




  • Xanza@lemm.ee to Privacy@lemmy.ml · DuckDuckGo Gone Rogue · 8 hours ago

    Like, sure. That’s a valid argument. But it’s not the end of the goddamn world because they make you click a button to use a completely free service.

    If you’re that pissed about it, then set up SearX yourself. Not sure why every “technologist” feels like their opinion is the only one that matters and gets butthurt about shit like this.




  • Xanza@lemm.ee to Privacy@lemmy.ml · DuckDuckGo Gone Rogue · 12 hours ago

    I mean, this seems like a really stupid gripe. You can completely disable it for your searches: https://i.xno.dev/FUHSj.png

    In addition, it gives you a way to interface with ChatGPT without an account and without touching the API, while maintaining privacy… I see this as an absolute win. They stay competitive by including AI for the people who can’t live without it, while making it completely optional. Offering duck.ai is a smart move, too. I just don’t see the issue here.








  • I’ve read a lot about using a VPS with reverse proxy but I’m kind of a noob in that area. How exactly does that protect my machine?

    So you’re not letting people connect directly to your server on its ports. Instead, you’re sending the data through your reverse proxy. Let’s say you have a server and you want to serve something off port :9000. Normally you would connect to domain.com:9000. With a reverse proxy you would set it up to use a subdomain instead, like service.domain.com. If you choose caddy as your reverse proxy (which I highly recommend you do), everything is served from port :443 on your proxy, which as you might know is the default SSL port.
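    As a rough sketch, a Caddyfile for that setup could look something like this (service.domain.com is a placeholder, and 8096 is Jellyfin’s default port; swap in whatever you’re actually running):

    service.domain.com {
        # caddy listens on :443, grabs a TLS cert automatically, and forwards requests to Jellyfin
        reverse_proxy localhost:8096
    }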

    And do I understand correctly that since we’re using the reverse proxy the possible attack surface just from finding the domain would be limited to the web interface of e.g. Jellyfin?

    I wouldn’t say that it decreases your attack surface, but it does put an additional server between end users and your server, which is nice. It acts like a firewall. If you wanted to take security to the n^th degree, you could run a connection whitelist on your home server to only allow local connections and connections from your rproxy (assuming it has a dedicated IP). Doing that significantly increases your security and drastically shrinks your attack surface, because even if an attacker manages to figure out the port, and even your home IP, they still can’t connect: the connection isn’t originating from your rproxy.
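    As a rough example with ufw (the port 8096, the 192.168.1.0/24 LAN range, and the 203.0.113.10 rproxy address are all placeholders for your own values):

    # block everything inbound by default
    sudo ufw default deny incoming
    # allow your local network and your reverse proxy to reach Jellyfin
    sudo ufw allow from 192.168.1.0/24 to any port 8096 proto tcp
    sudo ufw allow from 203.0.113.10 to any port 8096 proto tcp
    sudo ufw enable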

    Sorry for the chaotic & potentially stupid questions, I’m just really a confused beginner in this area.

    You’re good. Most of this shit is honestly hard.