concurrent database access #2180
HA is not and has never been officially supported. I don't think your observed log lines have anything to do with the database file locking; they just indicate that FTL doesn't manage to store queries in the database and keeps them in its 24-hour rolling-window memory (which it has pre-allocated anyway). Once queries could not be added within a full 24-hour window, they are lost and will not make their way into the database.

I have never tried this myself, but writing to the database from two independent FTL processes is likely causing other defects, too, e.g., the uniqueness constraint of the query ID will be violated as both processes will think they are the single source of truth. What are you trying to achieve by sharing the long-term query database? I would understand if you wanted to share the …

Bottom line: do you still see increasing memory when you run your system exactly as before but just avoid sharing the database file? If so, there may be a memory leak in the database code handling the errors.

With all that said, I shall not hide from you that we will be releasing Pi-hole v6.0 shortly, and it very likely already has a fix for what you are seeing. If you are deploying containers anyway, you could try the …
My only goal is to not have downtime when re-deploying. With only one pod/container, there's a brief outage if I change config (to add a new custom DNS entry, for example) and re-deploy. Other than the brief outage during re-deployment, there's no issue if I run only one pod, either with shared (…

I tried the …

I think what you're implying is that another option would be to run two instances with the same config and separate query databases, is that right? And it would mean stats and logs would vary in the web interface. I'd be OK with that, but it would need to be set up in the helm chart, I think.
I can't answer for DL6ER, but to avoid downtime most users (including myself and probably other Pi-hole members) configure 2 independent Pi-hole servers (whether they are Docker containers, VMs, different computers, or a combination of the above), each Pi-hole with its own IP. Then we configure both IPs as DNS servers in the DHCP settings of the router. When one Pi-hole is offline, the devices will use the other. If both are online, each device will decide how to send/distribute its DNS queries between the two servers.
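To make that concrete, here is a minimal sketch of two fully independent Pi-hole containers, each with its own config and query database, using docker-compose with a macvlan network so each instance gets its own LAN IP. The image tag, parent interface, subnet and addresses are assumptions and have to be adapted to the actual network:

```yaml
# Hypothetical sketch: two fully independent Pi-hole instances, each with
# its own /etc/pihole volume (and therefore its own pihole-FTL.db), reachable
# on separate LAN IPs via a macvlan network.
networks:
  dns_net:
    driver: macvlan
    driver_opts:
      parent: eth0                  # assumption: host NIC facing the LAN
    ipam:
      config:
        - subnet: 192.168.1.0/24
          gateway: 192.168.1.1

services:
  pihole-a:
    image: pihole/pihole:latest
    networks:
      dns_net:
        ipv4_address: 192.168.1.2   # first DNS IP handed out by the router
    volumes:
      - ./pihole-a/etc-pihole:/etc/pihole        # separate config + query database
      - ./pihole-a/etc-dnsmasq.d:/etc/dnsmasq.d
    restart: unless-stopped

  pihole-b:
    image: pihole/pihole:latest
    networks:
      dns_net:
        ipv4_address: 192.168.1.3   # second DNS IP handed out by the router
    volumes:
      - ./pihole-b/etc-pihole:/etc/pihole        # its own config + query database
      - ./pihole-b/etc-dnsmasq.d:/etc/dnsmasq.d
    restart: unless-stopped
```

Both addresses (192.168.1.2 and 192.168.1.3 here) would then be entered as DNS servers in the router's DHCP settings, exactly as described above.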
Yes. If you want to show data from both Pi-holes in a single place, maybe you can try using a service or script to aggregate the 2 individual databases into a single data source.
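If you went the aggregation route, one possibility (a rough sketch, not something Pi-hole provides) would be a nightly Kubernetes CronJob that rebuilds a merged SQLite file from the two per-instance databases. Everything here is an assumption: the PVC names, the mount paths, and in particular the queries table and its column list, which must be checked against the actual pihole-FTL.db schema before use.

```yaml
# Hypothetical sketch: nightly job that rebuilds one merged SQLite file for
# reporting from two per-instance pihole-FTL.db files. Column names are
# assumed from the v5 long-term database schema and must be verified.
apiVersion: batch/v1
kind: CronJob
metadata:
  name: pihole-db-aggregate
spec:
  schedule: "15 3 * * *"            # once per night
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
            - name: aggregate
              image: alpine:3.19
              command:
                - sh
                - -c
                - |
                  apk add --no-cache sqlite
                  # Start the merged file from a copy of instance A, then
                  # append instance B's rows without the id column, so the
                  # primary key is re-assigned and cannot collide.
                  cp /data/a/pihole-FTL.db /data/merged/pihole-merged.db
                  sqlite3 /data/merged/pihole-merged.db "
                    ATTACH '/data/b/pihole-FTL.db' AS b;
                    INSERT INTO queries (timestamp, type, status, domain, client, forward)
                      SELECT timestamp, type, status, domain, client, forward FROM b.queries;
                  "
              volumeMounts:
                - { name: pihole-a, mountPath: /data/a, readOnly: true }
                - { name: pihole-b, mountPath: /data/b, readOnly: true }
                - { name: merged,   mountPath: /data/merged }
          volumes:
            - name: pihole-a
              persistentVolumeClaim: { claimName: pihole-a-data }   # assumed PVC names
            - name: pihole-b
              persistentVolumeClaim: { claimName: pihole-b-data }
            - name: merged
              persistentVolumeClaim: { claimName: pihole-merged }
```

Reading a live SQLite file while FTL is writing to it can return an inconsistent snapshot, so a real implementation should copy or back up the source databases first.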
Yes, I agree with what @rdwebdesign has said.
I have had no contact with Kubernetes myself, so I cannot comment on this at all.
Versions
Pi-hole version is v5.18.3 (Latest: v5.18.4)
web version is v5.21 (Latest: v5.21)
FTL version is v5.25.2 (Latest: v5.25.2)
Platform
Expected behavior
I'm using Pi-hole in Kubernetes with two replicas accessing the same shared database, in order to support HA. I expected this to work OK, but I see issues.
Actual behavior / bug
Memory usage increases forever until the pod runs out of memory and is killed. The logs show:
Steps to reproduce
Steps to reproduce the behavior:
Install via k8s and helm, ensuring the persistentVolumeClaim is using a storageClass which supports accessModes set to ReadWriteMany and setting it to that, and then increase replicaCount to 2 or more.
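For reference, this is roughly what the Helm values for that setup could look like. This is a sketch, not the chart's documented schema: the key nesting, the enabled flag and the nfs-client storage class are assumptions that need to be adapted to the chart actually used.

```yaml
# Illustrative values overrides matching the steps above; exact key nesting
# depends on the chart in use.
replicaCount: 2                 # run two Pi-hole pods

persistentVolumeClaim:
  enabled: true
  storageClass: nfs-client      # assumption: any class whose provisioner supports RWX
  accessModes:
    - ReadWriteMany             # lets both pods mount the same /etc/pihole volume
```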
Debug Token
Screenshots
If applicable, add screenshots to help explain your problem.
Additional context
Add any other context about the problem here.