It’s Sunday somewhere already so why wait?
Let us know what you set up lately, what kind of problems you currently think about or are running into, what new device you added to your homelab or what interesting service or article you found.
I’ll post my ongoing things later/tomorrow but I didn’t want to forget the post again.
I know this isn’t sexy but I’ve been working on my documentation. Getting configs properly versioned in my Gitea instance, READMEs updated, etc. My memory is not what it once was and I need the hints when things break.
Same here. I got Gemini to write a shell script for me that I can run on my Proxmox host which will output all of my configs to a .txt file. I asked it to format the output in a way an LLM can understand so I can just copy/paste it next time I need to consult AI.
This sounds interesting. Although I’m not even sure of what sort of configuration I would need to keep between reinstalls lol.
Mostly the stuff in /etc/pve, plus the configs for whatever additional software you’ve installed.
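If you want a starting point, a dump script along those lines is only a few lines of shell. This is a minimal sketch assuming a stock Proxmox install; the file list and output name are just examples, so add whatever else you run:

```sh
#!/bin/bash
# Collect the usual Proxmox configs into one text file for pasting into an LLM.
OUT=proxmox-configs.txt
: > "$OUT"
for f in /etc/pve/storage.cfg /etc/pve/datacenter.cfg \
         /etc/pve/qemu-server/*.conf /etc/pve/lxc/*.conf \
         /etc/network/interfaces; do
    [ -f "$f" ] || continue
    echo "===== $f =====" >> "$OUT"   # header so it's clear which file is which
    cat "$f" >> "$OUT"
    echo >> "$OUT"
done
echo "Wrote $OUT"
```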
Half finished projects
I spun up a new Plex server with a decent GPU - and decided to try offloading Home Assistant’s Preview Voice Assistant TTS/STT to it. That’s all working as of yesterday, including an Ollama LLM for processing.
Last on my list is figuring out how to get Home Assistant to help me find my phone.
Got any links for howtos on this?
Sure! I mostly followed this random youtuber’s video for getting Wyoming protocols offloaded (Whisper/Piper), but he didn’t get Ollama to use his GPU: https://youtu.be/XvbVePuP7NY.
For getting the Nvidia/Docker passthrough, I used this guide: https://www.bittenbypython.com/en/posts/install_ollama_openwebui_ubuntu_nvidia/.
It’s working fairly great at this point!
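If anyone wants to try the same thing, the pieces can be stood up with plain docker run. A rough sketch assuming the rhasspy Wyoming images and the stock Ollama image (the stock Whisper/Piper images run on CPU; Ollama is the part that uses the GPU, via the NVIDIA container toolkit from the guide above):

```sh
# Whisper (STT) speaking the Wyoming protocol on its default port 10300
docker run -d --name whisper -p 10300:10300 \
  rhasspy/wyoming-whisper --model small --language en

# Piper (TTS) on the Wyoming default port 10200
docker run -d --name piper -p 10200:10200 \
  rhasspy/wyoming-piper --voice en_US-lessac-medium

# Ollama with GPU access for the conversation agent
docker run -d --name ollama --gpus=all -p 11434:11434 \
  -v ollama:/root/.ollama ollama/ollama
```

Then point Home Assistant’s Wyoming and Ollama integrations at those ports.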
I have a family member across the country who wants to break from Google but really isn’t the type to self-host themselves, and I connect to my self-hosted Nextcloud solely through Tailscale.
Nextcloud permissions seem easy enough, but I’m researching how to add them to my tailnet safely, so my network isn’t at risk if something happens to their system.
I presume this involves ACLs, which look intimidating, but I’m doing some research on that.
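From what I’ve read so far, the ACL policy itself can stay pretty small: one rule that lets that one user reach only the Nextcloud machine. A rough sketch of what I mean (the host name, IP, and user are made up; the real policy gets edited in the Tailscale admin console):

```json
{
  "hosts": {
    "nextcloud-host": "100.64.0.10"
  },
  "acls": [
    {
      "action": "accept",
      "src":    ["family-member@example.com"],
      "dst":    ["nextcloud-host:443"]
    }
  ]
}
```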
Is exposing it to the internet not an option? Onboarding more family members could be cool.
I expose mine for convenience, and I use multiple layers of security to reduce risk:
- Cloudflare protections at edge
- IP filtering at VPS
- connection from VPS to NAS is over Wireguard
- TLS handled in my network (so no snooping at VPS)
- all exposed services are in containers with minimal access
That cuts most of the issues.
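As an example of the IP-filtering layer: since Cloudflare publishes its edge ranges, the VPS firewall can simply refuse port 443 from anywhere else. A sketch assuming ufw (run once, and re-run when the published ranges change):

```sh
# Allow HTTPS only from Cloudflare's published edge ranges
for ip in $(curl -s https://www.cloudflare.com/ips-v4) \
          $(curl -s https://www.cloudflare.com/ips-v6); do
    ufw allow proto tcp from "$ip" to any port 443
done
ufw deny 443/tcp   # everything not matched above gets dropped
```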
Slowly building up my self hosted test env in a VM on my gaming PC.
Most recently playing with homepage so I don’t have to remember as many subdomains.
Eventually I will get the *arr stack going so my jellyseerr instance is more automated.
OpenWrt on a new router. The Wi-Fi works better, Ethernet runs at up to 980 Mbit/s, and I don’t have all my traffic routed through a Huawei device.
And it allows you to configure everything.
Running opnsense here and just plain having my own firewall is the coolest thing.
I need to switch to OPNsense. I was on pfSense Plus until they cancelled the free licenses, so I finally “downgraded” to pfSense CE, and now I’m finding it hasn’t been updated in 2+ years and I’m really missing having DHCP hostnames added to local DNS automatically.
A couple of days ago, after testing it myself for a few months to make sure I understood how everything works, I made the switch to NextCloud Calendar, and will no longer use Google Calendar.
This is the best part though… I somehow convinced my wife to do the same. She let me install the Nextcloud app (optional for calendar stuff, but it makes the setup easier) and DAVx5 on her phone (both from F-Droid, so DAVx5 was free). I exported and imported her calendar, and made sure the notifications were set up with her preferred defaults.
It’s multiple days later, and she hasn’t complained!
I’ve also moved all of my contacts over to NextCloud, but have yet to coerce my spouse to do the same.
Which calendar client did you use?
I thought the switch to Nextcloud Calendar was going to be simple, but DAVx5 is… not a clean-cut app.
- Did you find a way to sync from device to NC?
- Were you able to merge Google’s dumb export of 3 calendars?
I’ve been using Fossify Calendar for a while now and it’s been pretty great. I moved to it when the whole Simple apps getting sold drama happened.
I’m trying to install Docker (only Docker) on the external HDD… I’ve followed some tutorials, but I can’t get it working.
What exactly are you trying to do, and which operating system are you on?
I am setting up the server on a Raspberry Pi 4 with RaspiOS. I want to download torrents, and I have connected an external USB 3 HDD for it… I was told you can change the Docker directory to the external HDD so the containers live there. That way the microSD does less work, and if it fails I would only have to reinstall RaspiOS and point the directories back… All the configuration, Docker containers, etc. would be on the HDD… So far I have not succeeded, although I have followed two or three tutorials.
You could also mount everything onto the external drive, leaving the microSD only for booting, but that is more complicated…
Excuse my DeepL english
I haven’t tried that but good luck!
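In case it helps, the usual trick isn’t moving Docker itself but pointing its data-root at the external drive, so images, volumes, and container state all land there. A rough sketch, assuming the HDD is mounted at /mnt/ext (that path is just an example) and there is no existing /etc/docker/daemon.json:

```sh
sudo systemctl stop docker

# Tell the Docker daemon to keep all of its data on the external drive
echo '{ "data-root": "/mnt/ext/docker" }' | sudo tee /etc/docker/daemon.json

# Copy over anything already in the default location, then restart
sudo rsync -a /var/lib/docker/ /mnt/ext/docker/
sudo systemctl start docker

docker info | grep "Docker Root Dir"   # should now point at /mnt/ext/docker
```

Your compose files and bind-mounted config folders can live on the HDD as well, so a dead microSD really does just mean reinstalling RaspiOS and re-mounting the drive.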
Considering moving my stuff into a VirtualBox VM or two rather than running directly on my PC. Then at some point in the future, when I have the hardware for it, I can fairly easily move it to Proxmox. It also means installing a clean OS on my main PC is quicker: just install VirtualBox, load up the VMs, and a lot of stuff would already be done.
Consider using containers. I used to think this way, though now my goal is to get down to almost all containers since it’s nice to be able to spin up and down just what the one ‘thing’ needs.
I have set up an Immich Docker container and am slowly moving users and images over from Google Photos.
Replacing Google Photos is still on my to-do list. How do you like Immich so far? Did you compare it to any alternatives?
Interested in this too - immich gets so much viral hype I’m a little suspicious of it
I set it up a couple weeks ago. It’s alright; facial recognition works pretty well, the files are easy to manage, and setup was pretty straightforward (using docker).
Searching for images works fairly well, as long as you’re searching for content and not text. Searching ‘horse’ for example does a pretty good job showing you your pictures of horses, but often misses images containing the word horse. Not always, but it’s noticeable to me.
The mobile apps work well too; syncing files in the background as they appear, optionally creating albums based on folders. Two things I find missing though are the ability to edit faces/people in an image (you’ve gotta do that from a browser), and the ability to see what albums an image is in and quickly navigate to one.
It’s a developing project that’s well on its way. A good choice imo.
My big problem is remote stuff. None of my users have aftermarket routers to easily manipulate their DNS. One has an Android modem thing which is hot garbage. I’m using a combination of approaches: making their Pi their DHCP server, and one user is running on Avahi.
Chrome, the people’s browser of choice, really, really hates HTTP, so I’m putting them on my garbage ######.xyz domain. I had plans to one day deal with HTTPS, just not this day. Locally I only use the domain for Vaultwarden, so the name didn’t matter, but if other people are going to be using it I’ll have to get a more memorable one.
System updates have been a faff. I’m SSHing over Tailscale. When Tailscale updates it kicks me out, naturally. Which interrupts the session, naturally. Which stops the update, naturally. Also, it fucks up dpkg beyond what `dpkg --configure -a` can repair. I’ll learn to update in the background one day, or include Tailscale in unattended-upgrades. Honestly, I should put everything into unattended-upgrades.
Locally works as intended though, so that’s nice. Everything also works for my fiancee and I remotely all as intended, which is also nice. My big project is coalescing what I’ve got into something rational. I’m on the make it good part of the “make it work > make it good” cycle.
System updates have been a faff. I’m SSHing over Tailscale. When Tailscale updates it kicks me out, naturally. Which interrupts the session, naturally. Which stops the update, naturally.
Have a look at screen. You can start your update in a persistent terminal, disconnect (manually or by connection loss), and resume the session when you reconnect; the update keeps running, and finishes, while you’re gone.
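Something like this, with an arbitrary session name:

```sh
screen -S upgrade                            # start a named, persistent session
sudo apt update && sudo apt full-upgrade     # run the update inside it
# press Ctrl-a then d to detach manually; a dropped SSH connection detaches too
screen -r upgrade                            # reattach after you reconnect
```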
Why is it so hard to send large files?
Obviously I can just dump it on my server and people can download it from a browser but how are they gonna send me anything? I’m not gonna put an upload on my site, that’s a security nightmare waiting to happen. HTTP uploads have always been wonky, for me, anyway.
Torrents are very finicky with 2-peer swarms.
instant.io (torrents…) has never worked right.
I can’t ask everyone to install a dedicated piece of software just to very occasionally send me large files
Sending is someone else’s problem. They have all sorts of different understandings and tools and I can’t deal with them all. So the only alternative is to set them up with an account in (e.g.) Nextcloud or just accept whatever Google service they use to send you a large file.
Sending other people files is easy in Nextcloud: just create a shared link and unshare it when done. Set a password on the share itself.
Sending is someone else’s problem.
It becomes my problem when I’m the one who wants the files and no free service is going to accept an 80 GB file.
It is exactly my point that I should not have to deal with third parties or something as massive and monolithic as Nextcloud just to do the internet equivalent of smoke signals. It is insane. It’s like someone tells you they don’t want to bike to the grocer 5 minutes away because it’s currently raining and you recommend them a monster truck.
OK 80 GB is for sure an edge case. Nextcloud won’t even work for that due to PHP memory limits, I think.
Interesting problem. FTP is an option, with careful instructions to an untutored user. Maybe rsync over a VPN connection if it is always the same sender.
Not even sure what else would reliably work, except Tanenbaum’s adage (never underestimate the bandwidth of a station wagon full of tapes).
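If it really is the same sender each time, the rsync route is about as painless as it gets once they’re on the VPN. A sketch with placeholder names:

```sh
# Resumable transfer of a huge file; re-running the same command after an
# interruption picks up where it left off instead of starting over.
rsync -av --partial --progress big-video.mkv user@your-server:/srv/incoming/
```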
Maybe something like Copyparty would be what you’re looking for?
Thanks for the mention :>
Yeah, copyparty was my attempt at solving this issue - a single python-file for receiving uploads of infinitely large files, usually much faster than other alternatives (ftp, sftp, nextcloud, etc.) especially when the physical distance to the uploader is large (hairy routing).
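For the specific "receive one huge file" case, an anonymous write-only share is roughly a one-liner; the path and share name below are just examples:

```sh
# serve /srv/incoming as a write-only folder named "inbox": people can
# upload into it, but nobody can browse or download what's already there
python3 copyparty-sfx.py -v /srv/incoming:inbox:w
```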
I’m not gonna put an upload on my site, that’s a security nightmare waiting to happen.
curious to hear your specific concerns on this; maybe it’s something that’s already handled?
On a related note, it would be nice if there was a shared storage option for self hosting. It wouldn’t be the same as self hosting, but more like distributed hosting where everyone pools storage they have available and we could have an encrypted sharing option.
You’re describing the world wide web, except giving others write access
Moved my fediverse apps (Friendica, Lemmy, 35c; only user is me) to one server, since it was overkill having two servers barely using 8%, if that, of their CPU/RAM. Surprisingly easy with YunoHost backups: remade the users and restored backups of just the apps. Also updated my Enhance panel and switched the sites I’m making for family (to use as a portfolio for local webdev) over to OLS, which was fairly easy. I was using WordPress templates wrong, so I fixed that and redid the home pages; now I feel less confident with WordPress and wonder if I’ve always made sites wrong. I think I just forgot, since it’s been years.
Great to hear the yunohost migration worked. What’s 35C?
This is what I found, a Discord bot. Hopefully GP comes back with an answer.
I’ve been trying to learn K8s and, more recently, the Gateway API. The struggle is that most Helm charts don’t know about Gateway (most barely support IngressRoute), and I’m trying to find a solution to one service affecting the other gateways: when a service cannot find a pod, the HTTPRoute fails, and when one route fails, the ingress fails. It’s a weird cascading problem.
Right now, I’m considering adding a secondary service to each gateway that resolves to a static error page. I haven’t looked into it yet; it came to me in the brief moment of clarity before I fell asleep last night.
Also, I may be doing everything wrong, but I am learning and learning is fun.
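Here’s roughly what I’m picturing for the static-error-page idea, completely untested: a catch-all HTTPRoute on the gateway whose backend is a tiny always-up service serving an error page, so there’s always something healthy attached (names are placeholders):

```yaml
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: fallback-error-page
spec:
  parentRefs:
    - name: my-gateway          # placeholder Gateway name
  rules:
    - matches:
        - path:
            type: PathPrefix
            value: /            # lowest-precedence catch-all
      backendRefs:
        - name: error-page      # placeholder Service serving a static page
          port: 80
```

No idea yet whether that actually stops the cascade, but it’s where I’ll start.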
Setting up Let’s Encrypt auto cert renewal with ACME. Also looking to set up some monitoring, basic stuff like CPU and memory usage, etc. If anyone has recommendations with an Android app available, that would be awesome.
acme.sh? I love that little tool.
Cert renewal via DNS-01, independent of any other services or ports. Set it up like 7 years ago and haven’t had to touch it since.
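For anyone setting this up fresh, the whole thing is basically two commands. A sketch using the Cloudflare DNS hook (dns_cf); there’s a hook per DNS provider, and the domain/email are placeholders:

```sh
curl https://get.acme.sh | sh -s email=you@example.com

# DNS-01 issuance through the provider's API; the provider's API token has
# to be exported as an environment variable first (see the acme.sh wiki)
acme.sh --issue --dns dns_cf -d example.com -d '*.example.com'
```

After that it installs a cron job and renewals just happen.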
I’m personally using Prometheus Stack and like it, but I just check Grafana in my Android browser. I think Zabbix has an Android app but I don’t know if it has as many possibilities as Prometheus.
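For just CPU/memory/disk basics, node_exporter plus a Prometheus scrape job is enough to get started; the containerized run from its README looks roughly like this:

```sh
# node_exporter with the host's filesystems visible inside the container
docker run -d --name node-exporter --net=host --pid=host \
  -v /:/host:ro,rslave \
  quay.io/prometheus/node-exporter:latest --path.rootfs=/host
```

Then add a scrape job pointing at port 9100 and put a Grafana dashboard on top.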