Hello,
My IoT/Home Automation needs are centered around custom-built ESPHome devices, and I currently have them all connected to an HA instance and things work fine.
Now, I like HA’s interface and all the eye candy; however, I don’t like the massive amount of resources it requires, the fact that the storage usage keeps growing, and that it is essentially a huge, albeit successful, Docker clusterfuck.
Is there any alternative dashboard that just does this:
- Specifically made for ESPHome devices - no support for other device types needed;
- Single daemon or something PHP/Python/Node that you can set up manually with a few systemd units;
- Connects to the ESPHome devices, logs the data and shows a dashboard with it;
- Runs offline and doesn’t reach out to 24234 GitHub repositories all the time and whatnot.
Obviously I’m expecting more manual configuration; I’m okay with having to edit a config file somewhere to add a device, change the dashboard layout, etc. I also don’t need the ESPHome part that builds and deploys configurations to devices, as I can do that locally on my computer.
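For what it’s worth, the “connects to the devices and logs the data” part can be rolled by hand with the `aioesphomeapi` library (the same native-API client that HA’s ESPHome integration uses). A minimal sketch, assuming a made-up hostname and API password - the dashboard part would still be up to you:

```python
# Hypothetical sketch of a roll-your-own ESPHome logger using aioesphomeapi.
# Hostname, port and password below are placeholders, not anything from this thread.
import asyncio
import csv
import time


def format_row(key, state, now=None):
    """Turn a state update into a CSV row; pure helper, trivially testable."""
    return [now if now is not None else time.time(), key, state]


async def log_device(host):
    # Imported here so the helper above works without the library installed.
    from aioesphomeapi import APIClient  # pip install aioesphomeapi

    cli = APIClient(host, 6053, "api-password")
    await cli.connect(login=True)

    out = csv.writer(open("sensors.csv", "a", newline=""))

    def on_state(state):
        # state.key identifies the entity, state.state is its current value
        out.writerow(format_row(state.key, state.state))

    # Depending on the library version, this call may need to be awaited.
    cli.subscribe_states(on_state)
    await asyncio.Event().wait()  # keep the connection open until interrupted


# Usage: asyncio.run(log_device("esp-livingroom.local"))
```

One of these per device under a systemd unit (or one process looping over a device list from a config file) would cover the “single daemon, edit a file to add a device” requirement.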
Thank you.
How has home assistant become a resource monster? What kind of integrations are you using aside from ESPHome?
Yeah, home assistant is tiny… I’m not sure what he expects? Does he need it to run on a pi zero or something? Lol
Tiny you say… answer to what you asked: https://lemmy.world/comment/7101252
I’m not using any other integration. Isn’t this a resource monster?
I just don’t want to keep running an entire VM with their image. Something more simple that could be used on a LXC / systemd-nspawn container or directly on a base system would be nicer.
It’s half a GB of RAM and virtually no CPU usage. You could run it on a Pi 3 with a 16GB SD card and have resources to spare.
This is just weird.
What is weird is having to waste almost 700MB of RAM + 10GB of storage for a simple web UI that charts sensor data and only keeps it for 10 days. For comparison, my NAS container runs Samba4, FileBrowser, Syncthing, Transmission, and a few others under 300MB of RAM, with occasional spikes during operations.
There’s a lot of difference between a container and a VM. You can install HA on a container, all you have to do is set it up according to the manual install instructions, and work around any hardware interfacing issues that come up. You’ll save 200MB of RAM and will have to do any upgrades manually. Doesn’t seem worth it to me, but to each their own.
What I’m going to do is set up HA Core in a container manually and run it without add-ons / Docker. That mostly comes down to installing Python, and it should waste way fewer resources.
??
If you don’t need the add-ons, you don’t need Docker. HA Core is a Python script with a few dependencies that can run under pyenv, started at every boot by a simple systemd service unit.
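For reference, a unit along these lines is roughly all it takes; the venv path, user and config directory below are assumptions, not anything from this thread:

```ini
# /etc/systemd/system/home-assistant.service  (paths and user are hypothetical)
[Unit]
Description=Home Assistant Core
After=network-online.target
Wants=network-online.target

[Service]
Type=simple
User=hass
# hass installed into a venv, e.g.:
#   python3 -m venv /srv/homeassistant
#   /srv/homeassistant/bin/pip install homeassistant
ExecStart=/srv/homeassistant/bin/hass -c /home/hass/.homeassistant
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

Then `systemctl enable --now home-assistant` and you’re done; upgrades are a `pip install --upgrade homeassistant` away.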
Yeah, but that’s not setting up a container, that’s just setting up a Python env.
You have found the smallest, tiniest, itty bittiest potato to get upset about here. You could run this on a toaster.
I’m not upset, just wondering / looking for way to keep the potato from growing further and/or alternatives.
In what world is this a resource monster??
If this is what you consider a resource monster you’re gonna have a really, really rough time
This isn’t reasonable at all: 700MB of RAM + 10GB of storage for a simple web UI that charts sensor data and only keeps it for 10 days.
You need to edit your configuration.yaml file to exclude certain sensors or values. I excluded some of the more chatty sensors that I didn’t need, and my disk use went from around 40GB to 150MB.
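The knob in question is the recorder integration. A sketch of what that exclusion looks like - the entity names here are made up, and `purge_keep_days` defaults to 10:

```yaml
# configuration.yaml - hypothetical example; entity names are illustrative
recorder:
  purge_keep_days: 7        # how long history is kept (default is 10 days)
  exclude:
    domains:
      - automation
    entity_globs:
      - sensor.esp_*_wifi_signal   # chatty diagnostic sensors
      - sensor.esp_*_uptime
```

Excluded entities still show their current state in the UI; they just stop being written to the history database.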
Interesting. I’ll have to check what might be logging so much info.
Do you have the entire hass-os image running in a VM?
Yes.