Schulmeister


  1. Is there a best practice for cache pools? I would like to have one very fast cache pool, but I need a backup every day. Is the filesystem important? My idea is as follows:
     1) A 4TB M.2 NVMe as cache pool (no RAID), holding appdata, system, the faster OS disks for the VMs, and fast cache for Handbrake, Lancache, etc.
     2) A second 4TB M.2 NVMe, mounted as an unassigned device, for the nightly backup of that cache drive.
     3) A folder on the array for a weekly backup of the second (backup) NVMe.
     4) The important stuff on the array (those folders included) gets copied to a NAS every Sunday, after 3) is finished.
     I would create a cron job in the VMs so that they shut down at, let's say, 2 AM, and a user script that backs up all VM disks, system, and the fast-cache folders from nvme1 to nvme2 and then restarts the VMs (see the sketch below). For the appdata backup I am using the Appdata Backup plugin by Robin Kluth.
     Is that a good way to go? What would be the preferable filesystem for this? Is there a good backup tool for Unraid, so that maybe I don't have to go through all this hassle myself? And how do I set up Unraid so that the cache folders are truly fast (10Gb network speed)? I know this is a lot, and I have searched the forum multiple times, but I never found a walkthrough for a fast and reliable setup like this. Thanks a lot. PS: If you think it is a good idea to create a new forum topic for this question, please feel free to copy this...
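     A minimal sketch of the user script described above - hedged throughout: the mount points /mnt/cache and /mnt/backup_nvme and the VM names are placeholders, not from this post, and the wait logic is deliberately crude:

     #!/bin/bash
     # Nightly cache-to-NVMe backup - sketch only, adjust names and paths.
     VMS="vm1 vm2"              # placeholder VM names

     # Shut the VMs down cleanly so their vdisks are quiesced
     for vm in $VMS; do
         virsh shutdown "$vm"
     done
     sleep 120                  # crude; poll "virsh domstate" for a robust wait

     # Mirror the fast pool (appdata, system, vdisks, fast cache) to the backup NVMe
     rsync -a --delete /mnt/cache/ /mnt/backup_nvme/cache_backup/

     # Bring the VMs back up
     for vm in $VMS; do
         virsh start "$vm"
     done

     Run on a nightly schedule (e.g. via the User Scripts plugin), this would replace the cron jobs inside the guests, since the host shuts them down itself via virsh. On the filesystem question: as far as I know, a single-device pool can be xfs, btrfs, or zfs, but only btrfs and zfs give you checksums and snapshots.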
  2. That did it - I think. I did as you told me: took every HDD out of that cache pool (deleting the pool) and put the HDDs in Unassigned Devices - thankfully all files were readable, and I copied everything to the array. I had this cache pool of two HDDs (16TB each) only to speed up read/write operations - not the way it's intended, I know. Is there a way to have a disk/RAID/pool that can speed things up? Anyway, as always, I am very grateful for the help you provide here - it really makes Unraid stand out.
  3. root@RD6:~# sgdisk -o -a 8 -n 1:32K:0 /dev/sdi
     Creating new GPT entries in memory.
     The operation has completed successfully.
     root@RD6:~# btrfs fi show
     Label: none  uuid: 431bb516-355f-42bf-9966-f4413b80ad33
         Total devices 2 FS bytes used 1.21TiB
         devid 1 size 3.64TiB used 1.26TiB path /dev/nvme0n1p1
         devid 2 size 3.64TiB used 1.26TiB path /dev/nvme1n1p1
     Label: none  uuid: c9259cf9-d19a-47b4-987e-42b0f5f82617
         Total devices 2 FS bytes used 4.82TiB
         devid 1 size 14.55TiB used 6.22TiB path /dev/sdc1
         devid 2 size 14.55TiB used 6.22TiB path /dev/sdi1
     Label: none  uuid: e474e21e-688b-4eda-8e90-9177c495d366
         Total devices 1 FS bytes used 12.08GiB
         devid 1 size 894.25GiB used 16.02GiB path /dev/sdh1
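     For context, my reading of what happened here (not spelled out in the post): sgdisk -o wipes the partition table, and -n 1:32K:0 recreates the single partition at the 32K offset Unraid expects, after which /dev/sdi1 shows up again as devid 2 of the second pool. Once the pool mounts, a scrub would verify the data against checksums - the mount point below is an assumption:

     # Verify all data/metadata checksums on the re-attached pool (-B runs in foreground)
     btrfs scrub start -B /mnt/data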
  4. Hi Jorge, I tried to change the raid1 to single to remove the second drive - well, that seems to have caused a problem. It is not that bad, as there is no very important data on it. The second drive ZL22EVWC is still intact and connected. The P.S. you wrote is very disturbing: "P.S. btrfs is detecting data corruption in multiple pools, xfs also detecting metadata corruption, suggesting you may have an underlying issue." What should I do?
  5. Hi all, I have a problem - as usual 🙂 I have a cache pool with two HDDs running on btrfs, raid1, normally mounted at /mnt/data. After an error on the second disk I tried to replace it - now everything is unmountable. I tried:
     root@RD6:~# btrfs rescue super-recover -v /dev/sdc1
     All Devices:
         Device: id = 1, name = /dev/sdc1
     Before Recovering:
         [All good supers]:
             device name = /dev/sdc1  superblock bytenr = 65536
             device name = /dev/sdc1  superblock bytenr = 67108864
             device name = /dev/sdc1  superblock bytenr = 274877906944
         [All bad supers]:
     All supers are valid, no need to recover
     ----------------
     root@RD6:~# btrfs check --readonly --force /dev/sdc1
     Opening filesystem to check...
     bad tree block 8152159748096, bytenr mismatch, want=8152159748096, have=0
     Couldn't read tree root
     ERROR: cannot open file system
     --------------------
     btrfs restore --dry-run -d /dev/sdc1 /mnt/disk7/restore/data/
     bad tree block 8152159748096, bytenr mismatch, want=8152159748096, have=0
     Couldn't read tree root
     Could not open root, trying backup super
     bad tree block 8152159748096, bytenr mismatch, want=8152159748096, have=0
     Couldn't read tree root
     Could not open root, trying backup super
     bad tree block 8152159748096, bytenr mismatch, want=8152159748096, have=0
     Couldn't read tree root
     Could not open root, trying backup super
     Nothing seems to work. This being the third time a so-called RAID filesystem (ZFS and btrfs) has failed me, I am very disappointed with these things. What is the sense of spending double the money on redundancy that does not do anything? I should buy myself a NAS, back everything up every day, and stop bothering with this btrfs/zfs crap. Thank you all in advance for your time and support. rd6-diagnostics-20240117-1028.zip
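     One avenue the commands above do not cover - an assumption on my side, not advice from this thread: when the current tree root is unreadable, btrfs-find-root can list older root generations, and btrfs restore -t can then attempt a read-only extraction through one of them. Roughly:

     # List candidate tree roots from older generations (read-only, can take a while)
     btrfs-find-root /dev/sdc1

     # Retry the restore through a specific root reported above
     # (<bytenr> is a placeholder for one of the listed block numbers)
     btrfs restore -t <bytenr> --dry-run -d /dev/sdc1 /mnt/disk7/restore/data/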
  6. I have copied the files to my local disk - they work fine. I deleted them, ran a scrub again, and now everything is fine. I'll keep an eye on that - the lockups of the server could have another cause. What does this mean:
     Oct 27 07:00:52 RD6 nginx: 2023/10/27 07:00:52 [error] 7772#7772: *1110829 open() "/usr/local/emhttp/server-status" failed (2: No such file or directory), client: 127.0.0.1, server: , request: "GET /server-status?auto HTTP/1.1", host: "localhost"
     Oct 27 07:00:52 RD6 nginx: 2023/10/27 07:00:52 [error] 7772#7772: *1110830 open() "/usr/local/emhttp/server-status" failed (2: No such file or directory), client: 127.0.0.1, server: , request: "GET /server-status?auto HTTP/1.1", host: "127.0.0.1"
     Oct 27 07:00:54 RD6 nginx: 2023/10/27 07:00:54 [error] 7772#7772: *1110837 open() "/usr/local/emhttp/server-status" failed (2: No such file or directory), client: 127.0.0.1, server: , request: "GET /server-status?auto HTTP/1.1", host: "localhost"
     Oct 27 07:00:54 RD6 nginx: 2023/10/27 07:00:54 [error] 7772#7772: *1110838 open() "/usr/local/emhttp/server-status" failed (2: No such file or directory), client: 127.0.0.1, server: , request: "GET /server-status?auto HTTP/1.1", host: "127.0.0.1"
     Oct 27 07:00:54 RD6 nginx: 2023/10/27 07:00:54 [error] 7772#7772: *1110839 "/usr/local/emhttp/api/index.html" is not found (2: No such file or directory), client: 127.0.0.1, server: , request: "GET /api/ HTTP/1.1", host: "127.0.0.1"
     Oct 27 07:00:54 RD6 nginx: 2023/10/27 07:00:54 [error] 7772#7772: *1110842 open() "/usr/local/emhttp/status" failed (2: No such file or directory), client: 127.0.0.1, server: , request: "GET /status?full&json HTTP/1.1", host: "localhost"
     Oct 27 07:00:54 RD6 nginx: 2023/10/27 07:00:54 [error] 7772#7772: *1110843 open() "/usr/local/emhttp/status" failed (2: No such file or directory), client: 127.0.0.1, server: , request: "GET /status?full&json HTTP/1.1", host: "127.0.0.1"
     Oct 27 07:00:54 RD6 nginx: 2023/10/27 07:00:54 [error] 7772#7772: *1110844 open() "/usr/local/emhttp/status/format/json" failed (2: No such file or directory), client: 127.0.0.1, server: , request: "GET /status/format/json HTTP/1.1", host: "127.0.0.1"
     Oct 27 07:00:54 RD6 nginx: 2023/10/27 07:00:54 [error] 7772#7772: *1110845 open() "/usr/local/emhttp/basic_status" failed (2: No such file or directory), client: 127.0.0.1, server: , request: "GET /basic_status HTTP/1.1", host: "127.0.0.1"
     Oct 27 07:00:54 RD6 nginx: 2023/10/27 07:00:54 [error] 7772#7772: *1110846 open() "/usr/local/emhttp/stub_status" failed (2: No such file or directory), client: 127.0.0.1, server: , request: "GET /stub_status HTTP/1.1", host: "localhost"
     Oct 27 07:00:54 RD6 nginx: 2023/10/27 07:00:54 [error] 7772#7772: *1110847 open() "/usr/local/emhttp/stub_status" failed (2: No such file or directory), client: 127.0.0.1, server: , request: "GET /stub_status HTTP/1.1", host: "127.0.0.1"
     Oct 27 07:00:54 RD6 nginx: 2023/10/27 07:00:54 [error] 7772#7772: *1110848 open() "/usr/local/emhttp/nginx_status" failed (2: No such file or directory), client: 127.0.0.1, server: , request: "GET /nginx_status HTTP/1.1", host: "127.0.0.1"
     Oct 27 07:00:54 RD6 nginx: 2023/10/27 07:00:54 [error] 7772#7772: *1110849 open() "/usr/local/emhttp/status" failed (2: No such file or directory), client: 127.0.0.1, server: , request: "GET /status HTTP/1.1", host: "127.0.0.1"
     Oct 27 07:00:54 RD6 nginx: 2023/10/27 07:00:54 [error] 7772#7772: *1110850 FastCGI sent in stderr: "Primary script unknown" while reading response header from upstream, client: 127.0.0.1, server: , request: "GET /admin/api.php?auth=&version=true HTTP/1.1", upstream: "fastcgi://unix:/var/run/php5-fpm.sock:", host: "127.0.0.1"
     Oct 27 07:00:55 RD6 nginx: 2023/10/27 07:00:55 [error] 7772#7772: *1110852 open() "/usr/local/emhttp/us" failed (2: No such file or directory), client: 127.0.0.1, server: , request: "GET /us HTTP/1.1", host: "localhost"
     Oct 27 07:00:55 RD6 nginx: 2023/10/27 07:00:55 [error] 7772#7772: *1110853 open() "/usr/local/emhttp/us" failed (2: No such file or directory), client: 127.0.0.1, server: , request: "GET /us HTTP/1.1", host: "127.0.0.1"
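     A guess from my side (not confirmed anywhere in this thread): endpoints like /server-status?auto, /stub_status, /status?full&json and /nginx_status are the standard probe URLs used by web-server monitoring collectors (for example in netdata or telegraf) running against localhost, so these errors may just be a noisy monitoring plugin rather than a symptom of the lockups. One way to see which local process is talking to nginx while the errors appear (port 80 is an assumption; use whatever port the web UI listens on):

     # Show established local connections to the web server, with the owning process
     ss -tnp state established '( dport = :80 )'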
  7. I highly doubt that - both NVMe disks are new, and so is the PCIe NVMe adapter. It might be caused by the system lockups, or maybe vice versa - who knows. What should I do?
  8. Aaaaaand - down again, I've jinxed it. Around 14:20 today the Unraid system was unresponsive again. I have attached the diags and the syslog file. Maybe this time some insights will be possible. syslog-10.20.30.100.log rd6-diagnostics-20231025-1623.zip
  9. I have - hopefully - found the solution. I got rid of every ZFS-formatted volume. I had the cache formatted as ZFS, and also one disk in the array (Spaceinvaderone made two videos on why that is a good idea). I removed the ZFS cache, added a btrfs one, and reformatted the array disk back to XFS. Ten days now with no issues - fingers crossed that was the problem.
  10. That is most likely the error I made. I'll change that.
  11. It happened again. I am really glad I had a QNAP syslog server running, because there are no syslogs under appdata (the folder I selected for the local syslog server). Anyway - the system was totally unresponsive, even the local command line (monitor/keyboard). I am posting the diagnostics and the syslog messages. rd6-diagnostics-20230928-2206.zip
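      Since the appdata syslog folder stayed empty, here is one way to test whether the local syslog server is receiving and writing anything at all - a sketch on my part, with the default UDP port 514 assumed and the target folder being a placeholder:

      # Send a test message to the local syslog server (UDP is logger's default transport)
      logger -n 127.0.0.1 -P 514 "syslog write test"

      # Check whether it landed in the folder configured for the local syslog server
      grep -r "syslog write test" /mnt/user/appdata/syslog/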