Fredrick

Everything posted by Fredrick

  1. This has happened to me too. Did you ever solve it, @Convington?
  2. This didn't work for me. The entire /mnt/user/appdata/influxdb folder has been wiped, and I don't know how that could have happened during the update. Just to be clear, this was already an InfluxDB v2 docker; it just updated to a newer build. Any ideas?
  3. I believe the relevant portion of the log is in the spoiler. I don't see any errors indicated there, and the Pushover notification definitely says Backup Complete. No trace of what happened with Influx either. Running Unraid 6.9.2, plugin dated 2021.03.13, which I see isn't the newest.
     EDIT: I didn't find the correct line earlier. It does in fact indicate that the source couldn't be found, which is because I've moved my appdata from /mnt/cache/appdata to a new NVMe disk at /mnt/cache_nvme/appdata. Oops... Still no clue why InfluxDB messed up so badly.
     EDIT2: It's obviously much better to back up /mnt/user/appdata than to point the backup directly at the drives. I don't know what I was thinking when I set it up...
  4. So my monthly backups run on the 25th, and like clockwork I've received a Pushover notification that the backup has completed. I've suffered data loss, and I have no idea what happened. My InfluxDB appdata folder is suddenly completely empty (no settings, no database). I figured I'd just restore the latest backup, but apparently none have been stored for the past couple of months... Anyone got any idea what could have happened? Both with InfluxDB and with this plugin. My settings:
  5. I've removed the diagnostics from the OP. I think I found the problem myself by scouring the logs. There were indications it was caused by "mongod", which afaik is the database used by the unifi-controller docker. That makes sense, because that docker has been plagued with problems; lately it suddenly pumps out insane amounts of data because of some bug. I've tried reinstalling the entire container now and just restoring the config from a backup. Hopefully that helps, but I'll keep an eye out. Is there an easy way I can set a new "starting point" for Fix Common Problems? I.e. I'd like it to notify me about new OOM events, but not the ones I've already seen here.
  6. Hi, every time Fix Common Problems runs I'm getting errors that it has detected OOM situations on my server. However, I monitor the memory usage and I'm not seeing it anywhere close to full; it usually hovers around 40%. Is this just a bug in the reporting, or is there something I'm not seeing? How can I identify the culprit? Diagnostics attached.
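For anyone hitting the same thing: the kernel logs a "Killed process <pid> (<name>)" line for every OOM kill, so grepping the syslog usually names the culprit directly. A self-contained sketch (the log line below is made up; on the server you'd grep /var/log/syslog or the `dmesg` output instead):

```shell
# Hunt for the OOM culprit: the kernel's OOM killer logs the victim's
# name in parentheses. The sample line here is invented for the demo.
sample='Apr  2 03:14:07 Tower kernel: Out of memory: Killed process 1234 (mongod) total-vm:8123456kB'
echo "$sample" | grep -o 'Killed process [0-9]* ([^)]*)'
# prints: Killed process 1234 (mongod)
```

The same `grep -o 'Killed process [0-9]* ([^)]*)'` run over the real syslog lists every OOM victim in order, which is how mongod showed up in my case.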
  7. Seems that my Windows VM was logged out, and that prevented the plugin from doing a normal shutdown. I've made it stay logged in to the user account, and it performs well now. Thanks!
  8. Hi, I'm trying to set up this plugin to back up a VM that is running, i.e. it needs to be shut off before the backup. The plugin logs that it tries to shut it down, but it's not working. I've tested that the VM can be cleanly shut down with "Stop" through the WebUI, so there's nothing preventing the VM from being shut off. There's also nothing in the syslog indicating that the VM has received a shutdown command, so I'm pretty sure this is not working from the plugin. Any pointers? I obviously don't want to enable force shutdown/kill. Thanks!
  9. That's a shame. I've been going over my backup, and it's nowhere near as good/fresh/complete as it should have been. Is there any reason I shouldn't trust this drive, from your point of view? Thanks again!
  10. Thanks for getting back to me. Unfortunately it still doesn't find a file system there:
      root@Tower:~# btrfs-select-super -s 1 /dev/sdn1
      No valid Btrfs found on /dev/sdn1
      Open ctree failed
  11. I suspect the drive didn't get re-formatted correctly when I made it into cache_vms, and that's why the btrfs commands are not working. I hope it's okay to tag @JorgeB here, because I think he or she is the person for the job. Thanks a lot!
      root@Tower:~# fdisk /dev/sdn -l
      Disk /dev/sdn: 223.57 GiB, 240057409536 bytes, 468862128 sectors
      Disk model: KINGSTON SV300S3
      Units: sectors of 1 * 512 = 512 bytes
      Sector size (logical/physical): 512 bytes / 512 bytes
      I/O size (minimum/optimal): 512 bytes / 512 bytes
      Disklabel type: dos
      Disk identifier: 0x00000000
      Device     Boot Start       End   Sectors   Size Id Type
      /dev/sdn1        2048 468862127 468860080 223.6G 83 Linux
      root@Tower:~# file -sL /dev/sdn
      /dev/sdn: DOS/MBR boot sector; partition 1 : ID=0x83, start-CHS (0x0,0,0), end-CHS (0x0,0,0), startsector 2048, 468860080 sectors, extended partition table (last)
      The parted command is not available on Unraid, it seems.
  12. I've also tried restoring the backup superblock as outlined here, but got the same results as the OP in that thread.
  13. Hi, I'm upgrading my server and wanted to run my VMs from a newly created cache pool. I moved my vdisks to the array, formatted the drive as a cache pool (btrfs) with a single drive, and moved the vdisks back. I then started the VM and checked everything; it was working well. Then I had to reboot, since I'd filled up my log vdisk by running the mover (moving the Plex directory with loads of files). After rebooting, the new cache_vms is showing as unmountable. I've tried steps 1 and 2 from here, but it didn't work:
      root@Tower:/# mount -o usebackuproot,ro /dev/sdn1 /x
      mount: /x: wrong fs type, bad option, bad superblock on /dev/sdn1, missing codepage or helper program, or other error.
      root@Tower:/mnt/disk8# btrfs restore -v /dev/sdn1 /mnt/disk8/restore
      No valid Btrfs found on /dev/sdn1
      Could not open root, trying backup super
      No valid Btrfs found on /dev/sdn1
      Could not open root, trying backup super
      ERROR: superblock bytenr 274877906944 is larger than device size 240056360960
      Now, is there any chance of getting my images back? One of the VMs runs my home automations and is kinda critical. I've got a Windows backup of the important files, but I was hoping to avoid having to set up a new VM and restore the data. Diagnostics attached. Thanks a bunch! tower-diagnostics-20210718-1547.zip
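For context on that last ERROR line: btrfs keeps its superblock copies at fixed byte offsets (64KiB for the primary, with mirrors at 64MiB and 256GiB), and 256GiB is exactly 274877906944 bytes, the bytenr in the error. A small self-contained sketch checking those offsets against the device size from the message:

```shell
# btrfs superblock copies live at 64KiB (primary), 64MiB and 256GiB
# (mirrors). On this 240GB SSD the 256GiB mirror cannot exist, which is
# why restore reports bytenr 274877906944 > device size.
dev_size=240056360960   # device size taken from the error message
for off in $((64 * 1024)) $((64 * 1024 * 1024)) $((256 * 1024 * 1024 * 1024)); do
  if [ "$off" -lt "$dev_size" ]; then
    echo "offset $off: on the device"
  else
    echo "offset $off: beyond the end of the device"
  fi
done
```

So the "trying backup super" message only ever had the 64MiB mirror to fall back on here; the 256GiB one simply doesn't fit on a 240GB drive, and both on-device copies were apparently gone.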
  14. Hi, I wanted to ask just to be sure before doing something very stupid... I've upgraded my server, and with it comes a new NVMe drive that I want to use for Dockers. I've created the new cache pool called cache_nvme and set the "appdata" and "system" shares to prefer the new cache_nvme. The files are however still left on the old cache, and I've tried to find the "correct" way to move them. Should I:
      1. Set the shares to Yes: cache_nvme
      2. Invoke the mover
      3. Set the shares to Prefer: cache_nvme
      4. Invoke the mover
      Or is there a better way? I guess I could disable Docker and move the files in the terminal from /mnt/cache/appdata/ to /mnt/cache_nvme/appdata, but that kind of sounds more risky to me. Thanks!
  15. Hey guys, I'm about to upgrade my Unraid server and wanted to check in with you to see if there is anything I'm forgetting or should change before taking the leap. I'm currently on 6.8.2 running on an old ProLiant ML350 G6 - 2x Xeon X5650, 40GB ECC RAM. Drives are attached via a SAS expander through an LSI (9207-8e, I believe) controller. I'm running about 25 Dockers and 2-4 VMs. I've got a 1TB SSD as the main cache, a 250GB SSD through Unassigned Devices for my VMs, and a 1TB spinner through Unassigned Devices for temporary downloads (for the extraction).
      Now I want to upgrade to something a bit more lightweight (power/noise-wise) while keeping most of the oomph. I'm also getting an NVMe drive for database performance. Here is my hardware purchase list (I've already got a chassis and a PSU). It's a downgrade in RAM capacity, but I've got room to expand if need be. I'm choosing previous-generation Ryzen as it's more affordable and accessible than the newer 5600X counterpart, with roughly the same performance.
      • Ryzen 7 3700X
      • 2x 16GB Corsair Vengeance RGB PRO SL DDR4-3600 DC C18
      • GIGABYTE B450 AORUS PRO
      • WD Black SN850 PCIe 4.0 NVMe M.2 - 500GB
      I plan on upgrading to 6.9 and making use of the new cache pools. This is the plan:
      • NVMe drive for Dockers, including the various database applications (InfluxDB for metrics/Grafana, MariaDB for Nextcloud, MySQL for various applications/development). Shares using this drive will prefer cache.
      • Existing 1TB SSD as my "main" cache, which will receive writes such as downloads and file transfers. Shares that use this will be set to move to the array.
      • Existing 250GB SSD will continue to hold my VMs.
      As far as I can tell, I should be able to build the new server, move the SAS controller and USB drive over, and it should boot into Unraid. I'll then have to set up the new cache pools and be good to go, right? I'd love some input if you've got some. Thanks!
  16. I ended up shutting the system down remotely while on vacation. When I came back I re-seated all the cables to the SAS expanders and booted again. Parity was marked as invalid. I did a read check of the array, which completed without errors. Then I stopped the array, unassigned the parity drives, started the array, stopped the array, and assigned the parity drives again. It rebuilt both parity drives without errors and is seemingly running well now. I'll pay attention to the RAM usage going forward. Thanks!
  17. Diagnostics attached, thanks! EDIT: Diagnostics removed.
  18. Hi guys, I'm on vacation but got some concerning notifications from unRAID last night. Both my parity drives were disabled within the same minute, I've got errors all over, and there's a warning on my cache drive. I'm afraid to stop the array or reboot. Any advice? Screenshot attached.
  19. I don't know why, but this is not working for me. The server.log is spammed with various warnings that mongod was not shut down correctly and needs repairing. mongod.log, on the other hand, has other messages that are equally cryptic to me, unfortunately. It seems to still be a version mismatch:
      Fatal assertion 28579 UnsupportedFormat: Unable to find metadata for table:index-1--7693998512115095491 Index: {name: site_id_1, ns: ace.account} - version too new for this mongod. See http://dochub.mongodb.org/core/3.4-index-downgrade for detailed instructions on how to handle this error. at src/mongo/db/storage/wiredtiger/wiredtiger_index.cpp 268
  20. This is a bare-metal Debian VPS without a GUI, so I don't think that's an option, unfortunately. I'm trying to contact my InfluxDB server using this command:
      curl -G http://MY-IP:8086/query -u username:password --data-urlencode "q=SHOW DATABASES"
      The same command works when run from another computer on the same network, using the local IP. I'm obviously using my global IP when trying from the VPS. The same username and password work locally.
      EDIT: Got it! It was my port forwarding that was off. Damn Unifi changing the entire UI between each time I edit settings...
  21. Anyone that can help me get access to my InfluxDB remotely? I want to be able to add some metrics from my VPS to InfluxDB, but I'm struggling to connect. I've forwarded port 8086 in my firewall to my Unraid server, which is set up with the same port. I don't understand why, but I still can't get to it. Is there anything I need to change in the InfluxDB configuration files?
  22. I'm having an issue with this where the container keeps writing to the docker.img. Is there anything I can do to change the path, or the retention policy for logs? Currently sitting at close to 9GB for this image, which is just too damn high.
      EDIT: So here's what you do: change local.conf.yaml in appdata/loki/conf from
      table_manager:
        retention_deletes_enabled: false
        retention_period: 0s
      to this:
      table_manager:
        retention_deletes_enabled: true
        retention_period: 24h
      Now it will rotate logs out of retention after 24 hours. You can change the period to whatever you like, of course. It seems the container has to be deleted and re-created for the old data to be deleted.
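The edit above can also be scripted. This sketch applies the same two changes with sed to a throwaway copy of the config under /tmp (the real file is the one in appdata/loki/conf):

```shell
# Apply the retention change with sed to a demo copy of the Loki config
# (the real file lives in appdata/loki/conf; /tmp path is just for the demo).
conf=/tmp/loki-demo.yaml
cat > "$conf" <<'EOF'
table_manager:
  retention_deletes_enabled: false
  retention_period: 0s
EOF
sed -i -e 's/retention_deletes_enabled: false/retention_deletes_enabled: true/' \
       -e 's/retention_period: 0s/retention_period: 24h/' "$conf"
cat "$conf"
```

As noted above, editing the file alone isn't enough: the container has to be re-created before the old data actually gets deleted.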
  23. Hey @Natcoso9955 - Thanks for building this and the Promtail one
  24. Hey, I've got this docker set up and running with Organizr as the frontend, and it has been working great! Now I'm developing a .php page that I want to test while coding it. Is there an easy way to use this docker to serve the .php without messing with the rest of my setup? For now I'd just like to have it served locally.
  25. This is how I found out: a push notification from a "Fix Common Problems" error. Thanks!