Schulmeister

Everything posted by Schulmeister

  1. Is there a best practice for cache pools? I would like to have one very fast cache pool, but I need a backup of it every day. Is the filesystem important? My idea is as follows:
1) a 4TB M.2 NVMe as cache pool (no RAID) - it will hold appdata, system, the faster OS disks for the VMs, and a fast cache for Handbrake, Lancache, etc.
2) a second 4TB M.2 NVMe mounted as an unassigned device for the nightly backup of that cache drive
3) a folder on the array for a weekly backup of the second (backup) NVMe
4) the important stuff on the array (these folders included) gets copied to a NAS every Sunday after 3) is finished.
I would create a cron job in the VMs so that they shut down at, let's say, 2 AM. I would then create a userscript that backs up all VM disks, system and the fast-cache folders from nvme1 to nvme2 and restarts the VMs (a rough sketch of such a script is below). I am using the Appdata Backup plugin from Robin Kluth for the backup of the appdata.
Is that a good way to go? What would be the preferable filesystem for this? Is there a good backup tool for Unraid so that maybe I don't have to go through all this hassle myself? And how do I set up Unraid so that the cache folders are truly fast (10Gb network speed)? I know this is a lot, and I have searched the forum multiple times, but I never found a walkthrough for a fast and reliable setup like this. Thanks a lot. PS: If you think it is a good idea to create a new forum topic for this question, please feel free to copy this...
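For illustration, a minimal sketch of what such a nightly userscript could look like. The VM names, the share folders and the Unassigned Devices mount point are assumptions, not a confirmed setup:

    #!/bin/bash
    # Nightly cache-to-NVMe backup sketch (hypothetical names - adjust to your system).
    SRC=/mnt/cache                      # fast NVMe cache pool (nvme1)
    DST=/mnt/disks/nvme_backup          # second NVMe mounted via Unassigned Devices (nvme2)
    VMS="Win11 UbuntuServer"            # hypothetical VM names

    # Shut the VMs down cleanly so their vdisks are consistent while they are copied
    for vm in $VMS; do
        virsh shutdown "$vm"
    done
    sleep 180   # give the guests time to power off

    # Copy the VM disks, system and the fast-cache folders; rsync only transfers changes
    for share in domains system fastcache; do
        rsync -aH --delete "$SRC/$share/" "$DST/$share/"
    done

    # Bring the VMs back up
    for vm in $VMS; do
        virsh start "$vm"
    done

Whether the VMs shut themselves down (via cron inside the guest) or are shut down from the host (virsh, as above) is a matter of taste; the important part is that the vdisks are not in use while they are copied.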
  2. That did it - I think. I did as you told me: I took every HDD out of that cache pool (deleting the pool) and put the HDDs into Unassigned Devices - thankfully all files were readable and I copied everything to the array. I had this cache pool of two HDDs (16TB each) only to speed up read/write operations - I know that is not how it is meant to be used. Is there a way to have a disk/RAID/pool that can speed things up? Anyway, as always, I am very grateful for the help you provide here - it really makes Unraid stand out.
  3. root@RD6:~# sgdisk -o -a 8 -n 1:32K:0 /dev/sdi
Creating new GPT entries in memory.
The operation has completed successfully.
root@RD6:~# btrfs fi show
Label: none  uuid: 431bb516-355f-42bf-9966-f4413b80ad33
        Total devices 2 FS bytes used 1.21TiB
        devid 1 size 3.64TiB used 1.26TiB path /dev/nvme0n1p1
        devid 2 size 3.64TiB used 1.26TiB path /dev/nvme1n1p1
Label: none  uuid: c9259cf9-d19a-47b4-987e-42b0f5f82617
        Total devices 2 FS bytes used 4.82TiB
        devid 1 size 14.55TiB used 6.22TiB path /dev/sdc1
        devid 2 size 14.55TiB used 6.22TiB path /dev/sdi1
Label: none  uuid: e474e21e-688b-4eda-8e90-9177c495d366
        Total devices 1 FS bytes used 12.08GiB
        devid 1 size 894.25GiB used 16.02GiB path /dev/sdh1
  4. Hi Jorge, I tried to change the raid1 to single in order to remove the second drive - well, that seems to have caused a problem. It is not that bad, as there is no very important data on it. The second drive (ZL22EVWC) is still intact and connected. The P.S. you wrote is very disturbing: "P.S. btrfs is detecting data corruption in multiple pools, xfs also detecting metadata corruption, suggesting you may have an underlying issue." What should I do?
  5. Hi all, I have a problem - as usual 🙂 I have a cache pool with two HDDs running on btrfs, raid1, normally mounted at /mnt/data. After an error on the second disk I tried to replace the disk - now everything is unmountable. I tried:
root@RD6:~# btrfs rescue super-recover -v /dev/sdc1
All Devices:
        Device: id = 1, name = /dev/sdc1
Before Recovering:
        [All good supers]:
                device name = /dev/sdc1
                superblock bytenr = 65536
                device name = /dev/sdc1
                superblock bytenr = 67108864
                device name = /dev/sdc1
                superblock bytenr = 274877906944
        [All bad supers]:
All supers are valid, no need to recover
----------------
root@RD6:~# btrfs check --readonly --force /dev/sdc1
Opening filesystem to check...
bad tree block 8152159748096, bytenr mismatch, want=8152159748096, have=0
Couldn't read tree root
ERROR: cannot open file system
--------------------
btrfs restore --dry-run -d /dev/sdc1 /mnt/disk7/restore/data/
bad tree block 8152159748096, bytenr mismatch, want=8152159748096, have=0
Couldn't read tree root
Could not open root, trying backup super
bad tree block 8152159748096, bytenr mismatch, want=8152159748096, have=0
Couldn't read tree root
Could not open root, trying backup super
bad tree block 8152159748096, bytenr mismatch, want=8152159748096, have=0
Couldn't read tree root
Could not open root, trying backup super
Nothing seems to work. This being the third time a so-called RAID filesystem (ZFS and btrfs) has failed me, I am very disappointed with these things. What is the point of spending double the money on redundancy that does not do anything? I should just buy myself a NAS, back everything up every day, and stop bothering with btrfs or ZFS. Thank you all in advance for your time and support. rd6-diagnostics-20240117-1028.zip
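One further recovery step that can be worth trying, assuming /dev/sdc1 is still physically readable, is to scan for an older copy of the tree root and point btrfs restore at one of the reported locations - a sketch, not a guaranteed fix, and the bytenr value is a placeholder:

    # list candidate tree root locations on the damaged device
    btrfs-find-root /dev/sdc1
    # then retry the restore read-only against one of the reported bytenr values
    btrfs restore -t <bytenr> --dry-run -d /dev/sdc1 /mnt/disk7/restore/data/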
  6. I have copied the files to my local disk - they work fine. I deleted them, ran a scrub again, and now everything is fine. I'll keep an eye on that - the lockups of the server could have another cause. What does this mean:
Oct 27 07:00:52 RD6 nginx: 2023/10/27 07:00:52 [error] 7772#7772: *1110829 open() "/usr/local/emhttp/server-status" failed (2: No such file or directory), client: 127.0.0.1, server: , request: "GET /server-status?auto HTTP/1.1", host: "localhost"
Oct 27 07:00:52 RD6 nginx: 2023/10/27 07:00:52 [error] 7772#7772: *1110830 open() "/usr/local/emhttp/server-status" failed (2: No such file or directory), client: 127.0.0.1, server: , request: "GET /server-status?auto HTTP/1.1", host: "127.0.0.1"
Oct 27 07:00:54 RD6 nginx: 2023/10/27 07:00:54 [error] 7772#7772: *1110837 open() "/usr/local/emhttp/server-status" failed (2: No such file or directory), client: 127.0.0.1, server: , request: "GET /server-status?auto HTTP/1.1", host: "localhost"
Oct 27 07:00:54 RD6 nginx: 2023/10/27 07:00:54 [error] 7772#7772: *1110838 open() "/usr/local/emhttp/server-status" failed (2: No such file or directory), client: 127.0.0.1, server: , request: "GET /server-status?auto HTTP/1.1", host: "127.0.0.1"
Oct 27 07:00:54 RD6 nginx: 2023/10/27 07:00:54 [error] 7772#7772: *1110839 "/usr/local/emhttp/api/index.html" is not found (2: No such file or directory), client: 127.0.0.1, server: , request: "GET /api/ HTTP/1.1", host: "127.0.0.1"
Oct 27 07:00:54 RD6 nginx: 2023/10/27 07:00:54 [error] 7772#7772: *1110842 open() "/usr/local/emhttp/status" failed (2: No such file or directory), client: 127.0.0.1, server: , request: "GET /status?full&json HTTP/1.1", host: "localhost"
Oct 27 07:00:54 RD6 nginx: 2023/10/27 07:00:54 [error] 7772#7772: *1110843 open() "/usr/local/emhttp/status" failed (2: No such file or directory), client: 127.0.0.1, server: , request: "GET /status?full&json HTTP/1.1", host: "127.0.0.1"
Oct 27 07:00:54 RD6 nginx: 2023/10/27 07:00:54 [error] 7772#7772: *1110844 open() "/usr/local/emhttp/status/format/json" failed (2: No such file or directory), client: 127.0.0.1, server: , request: "GET /status/format/json HTTP/1.1", host: "127.0.0.1"
Oct 27 07:00:54 RD6 nginx: 2023/10/27 07:00:54 [error] 7772#7772: *1110845 open() "/usr/local/emhttp/basic_status" failed (2: No such file or directory), client: 127.0.0.1, server: , request: "GET /basic_status HTTP/1.1", host: "127.0.0.1"
Oct 27 07:00:54 RD6 nginx: 2023/10/27 07:00:54 [error] 7772#7772: *1110846 open() "/usr/local/emhttp/stub_status" failed (2: No such file or directory), client: 127.0.0.1, server: , request: "GET /stub_status HTTP/1.1", host: "localhost"
Oct 27 07:00:54 RD6 nginx: 2023/10/27 07:00:54 [error] 7772#7772: *1110847 open() "/usr/local/emhttp/stub_status" failed (2: No such file or directory), client: 127.0.0.1, server: , request: "GET /stub_status HTTP/1.1", host: "127.0.0.1"
Oct 27 07:00:54 RD6 nginx: 2023/10/27 07:00:54 [error] 7772#7772: *1110848 open() "/usr/local/emhttp/nginx_status" failed (2: No such file or directory), client: 127.0.0.1, server: , request: "GET /nginx_status HTTP/1.1", host: "127.0.0.1"
Oct 27 07:00:54 RD6 nginx: 2023/10/27 07:00:54 [error] 7772#7772: *1110849 open() "/usr/local/emhttp/status" failed (2: No such file or directory), client: 127.0.0.1, server: , request: "GET /status HTTP/1.1", host: "127.0.0.1"
Oct 27 07:00:54 RD6 nginx: 2023/10/27 07:00:54 [error] 7772#7772: *1110850 FastCGI sent in stderr: "Primary script unknown" while reading response header from upstream, client: 127.0.0.1, server: , request: "GET /admin/api.php?auth=&version=true HTTP/1.1", upstream: "fastcgi://unix:/var/run/php5-fpm.sock:", host: "127.0.0.1"
Oct 27 07:00:55 RD6 nginx: 2023/10/27 07:00:55 [error] 7772#7772: *1110852 open() "/usr/local/emhttp/us" failed (2: No such file or directory), client: 127.0.0.1, server: , request: "GET /us HTTP/1.1", host: "localhost"
Oct 27 07:00:55 RD6 nginx: 2023/10/27 07:00:55 [error] 7772#7772: *1110853 open() "/usr/local/emhttp/us" failed (2: No such file or directory), client: 127.0.0.1, server: , request: "GET /us HTTP/1.1", host: "127.0.0.1"
  7. I highly doubt that - both NVMe disks are new, and so is the PCIe NVMe adapter. It might be caused by the system lockups, or maybe vice versa - who knows. What should I do?
  8. Aaaaaand - down again, I've jinxed it. Around 14:20 today the Unraid system was unresponsive again. I have attached the diags and the syslog file. Maybe this time some insights will be possible. syslog-10.20.30.100.log rd6-diagnostics-20231025-1623.zip
  9. I have - hopefully - found the solution. I got rid of every ZFS-formatted volume. I had the cache formatted in ZFS as well as one disk of the array (Spaceinvaderone made two videos on why that is a good idea). I removed the ZFS cache and added a btrfs one, and I reformatted the disk back to xfs. It has been 10 days now with no issues - fingers crossed that was the problem.
  10. That is most likely the error I made. I'll change that
  11. It happened again. I am really glad that I had a QNAP syslog server running, because there are no syslogs under appdata (the folder I selected for the local syslog server). Anyway, the system was totally unresponsive - even the local command line (monitor/keyboard) would not respond. I am posting the diagnostics and the syslog messages. rd6-diagnostics-20230928-2206.zip
  12. I would not go that far. 6.12 is an excellent update - except for this one problem with the unresponsiveness since 6.12.4. I cannot tell what the problem was - I think I have a problem with the VM backup script colliding with the mover, but that is just a hunch and I have no evidence whatsoever. I now have both the internal and an external syslog server running, and fingers crossed we get to the bottom of things. Again, this version is far from a "disaster" and I have every confidence that we will solve the problem.
  13. The diagnostics stop there and that's it:
sed -ri 's/^(share(Comment|ReadList|WriteList)=")[^"]+/\1.../' '/rd6-diagnostics-20230924-2253/shares/appdata.cfg' 2>/dev/null
I will set up a syslog server and hope for a solution... Can I check any other logs in order to get to the problem?
  14. I have problems with the same config too. The system becomes unresponsive and SMB connections are no longer possible. The VMs keep running smoothly, though. Even the CLI on the machine itself cannot shut the system down, so a hard reset is the only option. After that the server runs a few days without any issues, and then it starts again.
  15. Hi all, I set up a new server from scratch; everything runs very smoothly and I am using Docker, VMs and SMB shares. I am attaching my diagnostics, which were taken after hard-resetting the server, because the first attempt stopped here:
mkdir -p /boot/logs
mkdir -p '/rd6-diagnostics-20230924-2253/system' '/rd6-diagnostics-20230924-2253/config' '/rd6-diagnostics-20230924-2253/logs' '/rd6-diagnostics-20230924-2253/shares' '/rd6-diagnostics-20230924-2253/smart' '/rd6-diagnostics-20230924-2253/qemu' '/rd6-diagnostics-20230924-2253/xml'
top -bn1 -o%CPU 2>/dev/null|todos >'/rd6-diagnostics-20230924-2253/system/top.txt'
tail /boot/bz*.sha256 >> '/rd6-diagnostics-20230924-2253/unraid-6.12.4.txt'
uptime
nproc
lscpu 2>/dev/null|todos >'/rd6-diagnostics-20230924-2253/system/lscpu.txt'
lsscsi -vgl 2>/dev/null|todos >'/rd6-diagnostics-20230924-2253/system/lsscsi.txt'
lspci -knn 2>/dev/null|todos >'/rd6-diagnostics-20230924-2253/system/lspci.txt'
lsusb 2>/dev/null|todos >'/rd6-diagnostics-20230924-2253/system/lsusb.txt'
free -mth 2>/dev/null|todos >'/rd6-diagnostics-20230924-2253/system/memory.txt'
ps -auxf --sort=-pcpu 2>/dev/null|todos >'/rd6-diagnostics-20230924-2253/system/ps.txt'
lsof -Pni 2>/dev/null|todos >'/rd6-diagnostics-20230924-2253/system/lsof.txt'
lsmod|sort 2>/dev/null|todos >'/rd6-diagnostics-20230924-2253/system/lsmod.txt'
df -h 2>/dev/null|todos >'/rd6-diagnostics-20230924-2253/system/df.txt'
ip -br a|awk '/^(eth|bond)[0-9]+ /{print $1}'|sort
dmidecode -qt2|awk -F: '/^ Manufacturer:/{m=$2};/^ Product Name:/{p=$2} END{print m" -"p}' 2>/dev/null|todos >'/rd6-diagnostics-20230924-2253/system/motherboard.txt'
dmidecode -qt0 2>/dev/null|todos >>'/rd6-diagnostics-20230924-2253/system/motherboard.txt'
cat /proc/meminfo 2>/dev/null|todos >'/rd6-diagnostics-20230924-2253/system/meminfo.txt'
dmidecode --type 17 2>/dev/null|todos >>'/rd6-diagnostics-20230924-2253/system/meminfo.txt'
ethtool 'eth0' 2>/dev/null|todos >>'/rd6-diagnostics-20230924-2253/system/ethtool.txt'
ethtool -i 'eth0' 2>/dev/null|todos >>'/rd6-diagnostics-20230924-2253/system/ethtool.txt'
ethtool 'eth1' 2>/dev/null|todos >>'/rd6-diagnostics-20230924-2253/system/ethtool.txt'
ethtool -i 'eth1' 2>/dev/null|todos >>'/rd6-diagnostics-20230924-2253/system/ethtool.txt'
ethtool 'eth2' 2>/dev/null|todos >>'/rd6-diagnostics-20230924-2253/system/ethtool.txt'
ethtool -i 'eth2' 2>/dev/null|todos >>'/rd6-diagnostics-20230924-2253/system/ethtool.txt'
ethtool 'eth3' 2>/dev/null|todos >>'/rd6-diagnostics-20230924-2253/system/ethtool.txt'
ethtool -i 'eth3' 2>/dev/null|todos >>'/rd6-diagnostics-20230924-2253/system/ethtool.txt'
ethtool 'eth4' 2>/dev/null|todos >>'/rd6-diagnostics-20230924-2253/system/ethtool.txt'
ethtool -i 'eth4' 2>/dev/null|todos >>'/rd6-diagnostics-20230924-2253/system/ethtool.txt'
ethtool 'eth5' 2>/dev/null|todos >>'/rd6-diagnostics-20230924-2253/system/ethtool.txt'
ethtool -i 'eth5' 2>/dev/null|todos >>'/rd6-diagnostics-20230924-2253/system/ethtool.txt'
ip -br a|todos >'/rd6-diagnostics-20230924-2253/system/ifconfig.txt'
sed -ri 's/(["\[ ])(127|10|172\.1[6-9]|172\.2[0-9]|172\.3[0-1]|192\.168)((\.[0-9]{1,3}){2,3}([/" .]|$))/\1@@@\2\3/g; s/(["\[ ][0-9]{1,3}\.)([0-9]{1,3}\.){2}([0-9]{1,3})([/" .]|$)/\1XXX.XXX.\3\4/g; s/@@@//g' '/rd6-diagnostics-20230924-2253/system/ifconfig.txt' 2>/dev/null
sed -ri 's/(["\[ ]([0-9a-f]{1,4}:){4})(([0-9a-f]{1,4}:){3}|:)([0-9a-f]{1,4})([/" .]|$)/\1XXXX:XXXX:XXXX:\5\6/g' '/rd6-diagnostics-20230924-2253/system/ifconfig.txt' 2>/dev/null
find /sys/kernel/iommu_groups/ -type l 2>/dev/null|sort -V|todos >'/rd6-diagnostics-20230924-2253/system/iommu_groups.txt'
todos '/rd6-diagnostics-20230924-2253/system/cmdline.txt'
echo -ne ' /boot ' >>'/rd6-diagnostics-20230924-2253/system/folders.txt';ls -l '/boot'|todos >>'/rd6-diagnostics-20230924-2253/system/folders.txt'
echo -ne ' /boot/config ' >>'/rd6-diagnostics-20230924-2253/system/folders.txt';ls -l '/boot/config'|todos >>'/rd6-diagnostics-20230924-2253/system/folders.txt'
echo -ne ' /boot/config/plugins ' >>'/rd6-diagnostics-20230924-2253/system/folders.txt';ls -l '/boot/config/plugins'|todos >>'/rd6-diagnostics-20230924-2253/system/folders.txt'
echo -ne ' /boot/syslinux ' >>'/rd6-diagnostics-20230924-2253/system/folders.txt';ls -l '/boot/syslinux'|todos >>'/rd6-diagnostics-20230924-2253/system/folders.txt'
echo -ne ' /var/log ' >>'/rd6-diagnostics-20230924-2253/system/folders.txt';ls -l '/var/log'|todos >>'/rd6-diagnostics-20230924-2253/system/folders.txt'
echo -ne ' /var/log/plugins ' >>'/rd6-diagnostics-20230924-2253/system/folders.txt';ls -l '/var/log/plugins'|todos >>'/rd6-diagnostics-20230924-2253/system/folders.txt'
echo -ne ' /boot/extra folder does not exist ' >>'/rd6-diagnostics-20230924-2253/system/folders.txt'
echo -ne ' /var/log/packages ' >>'/rd6-diagnostics-20230924-2253/system/folders.txt';ls -l '/var/log/packages'|todos >>'/rd6-diagnostics-20230924-2253/system/folders.txt'
echo -ne ' /var/lib/pkgtools/packages ' >>'/rd6-diagnostics-20230924-2253/system/folders.txt';ls -l '/var/lib/pkgtools/packages'|todos >>'/rd6-diagnostics-20230924-2253/system/folders.txt'
echo -ne ' /tmp ' >>'/rd6-diagnostics-20230924-2253/system/folders.txt';ls -l '/tmp'|todos >>'/rd6-diagnostics-20230924-2253/system/folders.txt'
cp /boot/config/*.{cfg,conf,dat} '/rd6-diagnostics-20230924-2253/config' 2>/dev/null
cp /boot/config/go '/rd6-diagnostics-20230924-2253/config/go.txt' 2>/dev/null
sed -i -e '/password/c ***line removed***' -e '/user/c ***line removed***' -e '/pass/c ***line removed***' '/rd6-diagnostics-20230924-2253/config/go.txt'
sed -ri 's/^((disk|flash)(Read|Write)List.*=")[^"]+/\1.../' '/rd6-diagnostics-20230924-2253/config/*.cfg' 2>/dev/null
sed -ri 's/(["\[ ])(127|10|172\.1[6-9]|172\.2[0-9]|172\.3[0-1]|192\.168)((\.[0-9]{1,3}){2,3}([/" .]|$))/\1@@@\2\3/g; s/(["\[ ][0-9]{1,3}\.)([0-9]{1,3}\.){2}([0-9]{1,3})([/" .]|$)/\1XXX.XXX.\3\4/g; s/@@@//g' '/rd6-diagnostics-20230924-2253/config/network.cfg' 2>/dev/null
sed -ri 's/(["\[ ]([0-9a-f]{1,4}:){4})(([0-9a-f]{1,4}:){3}|:)([0-9a-f]{1,4})([/" .]|$)/\1XXXX:XXXX:XXXX:\5\6/g' '/rd6-diagnostics-20230924-2253/config/network.cfg' 2>/dev/null
/usr/local/emhttp/webGui/scripts/show_interfaces ip|tr -d ' '|tr '#' ' '|tr ',' ' ' >'/rd6-diagnostics-20230924-2253/config/listen.txt'
/usr/local/emhttp/webGui/scripts/error_interfaces|sed 's///' >>'/rd6-diagnostics-20230924-2253/config/listen.txt'
sed -ri 's/(["\[ ])(127|10|172\.1[6-9]|172\.2[0-9]|172\.3[0-1]|192\.168)((\.[0-9]{1,3}){2,3}([/" .]|$))/\1@@@\2\3/g; s/(["\[ ][0-9]{1,3}\.)([0-9]{1,3}\.){2}([0-9]{1,3})([/" .]|$)/\1XXX.XXX.\3\4/g; s/@@@//g' '/rd6-diagnostics-20230924-2253/config/listen.txt' 2>/dev/null
sed -ri 's/(["\[ ]([0-9a-f]{1,4}:){4})(([0-9a-f]{1,4}:){3}|:)([0-9a-f]{1,4})([/" .]|$)/\1XXXX:XXXX:XXXX:\5\6/g' '/rd6-diagnostics-20230924-2253/config/listen.txt' 2>/dev/null
sed -ri 's/^(share(Comment|ReadList|WriteList)=")[^"]+/\1.../' '/rd6-diagnostics-20230924-2253/shares/appdata.cfg' 2>/dev/null
I cannot tell if it is the reason, but it happened for the first time after updating to 6.12.4. Can you give me any advice on the matter?
Thanks in advance. rd6-diagnostics-20230925-0024.zip
  16. First of all: thank you guys very much - you are the best. I managed to get very large amounts of data back; sadly the NVMe was damaged beyond repair and I lost that data. Shame on me <- no backup, no pity. I am still puzzled how that could have happened, but I did learn a lot from it, above all to keep at least one backup of my important data. I scrapped the server because my trust in that machine is gone, and I set up a new Unraid server from scratch. Copying roughly 70TB was a schlep, but eventually everything is up and running. I just wanted to wrap this up - again, thank you for your help.
  17. After a restart it showed that disk1 is again not mountable. Posting diagnostics after starting the disk7 rebuild. Fingers crossed everything goes according to plan and my new server will take over soon. Do you have any advice on how to transfer some 60TB to the new server the fastest way possible (see the sketch below)? I have no trust in the old server whatsoever - it let me down badly. hsunraid-diagnostics-20230815-0224.zip
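A sketch of the kind of transfer that could work here, assuming both servers are on the same network, SSH is enabled on the new box, and the share name and IP address are placeholders:

    # run from the old server, one rsync per share so an interrupted copy can be resumed
    rsync -avh --progress /mnt/user/Media/ root@10.20.30.101:/mnt/user/Media/

Running a couple of these in parallel (one per share) keeps the link busy while still allowing each transfer to be restarted where it left off.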
  18. I am really losing my nerve - I am at rebuilding disk6 (of 7) and all of a sudden (while checking the files in lost+found of disks 3, 4 and 5) the Windows share is gone :-(((( The /mnt/ folder shows: ?disk1 ?user ?user0 addons disk2 disk3 disk4 disk5 disk6 remotes rootshare znvme. I am going crazy. Diags to follow. Should I wait for disk6 to be rebuilt before I start anew? I am very concerned about this setup - it took me a week to rebuild 6 disks, with the 7th still waiting. Please help me. hsunraid-diagnostics-20230814-2305.zip
  19. Hi all, I was facing total destruction, but thanks to the magnificent support from you guys I am very slowly recovering. All 7 of my data disks on the array were destroyed; the parity drive was intact, so I am rebuilding everything one disk at a time (with the disks being 16-18TB, that takes its time). Now to my new problem: the files are all in the lost+found folder, and I managed to remember a few shared folders but not all of them. Is there a file, a list - something from which I can read or copy all of my shares? Thanks in advance. hsunraid-diagnostics-20230813-1640.zip
  20. It wanted the -L and then corrected the heck out of the imap (inode map) - hundreds, thousands of corrections. I hope it went well - should I now stop the array (maintenance mode) and start it in normal mode?
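For context, the repair described above roughly corresponds to this sequence, run with the array started in maintenance mode (disk1 is an assumption here, and the md device naming can differ between Unraid versions):

    xfs_repair -n /dev/md1    # dry run: only report what would be fixed
    xfs_repair /dev/md1       # refuses and asks for -L if the log cannot be replayed
    xfs_repair -L /dev/md1    # zero the log and repair; orphaned files end up in lost+found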