infinisean

Members • 7 posts

  1. I installed your plugin and tried a basic configuration. I have 9 btrfs-formatted SSDs mounted, and when I run the mergerfs command to create the "pool", it appears to run and exit without error (code 0). However, the new mountpoint does not show up in "df -h", nor can I "cd" to the directory or "ls" it. When I run umount, it also exits without error, as if it had correctly unmounted the new mountpoint, and if I re-run umount, it errors with "not mounted" (as it should). So it appears that either something is wrong with the options I am using to create the new mount, or there is a bug somewhere. I also tried using fstab as well as running the command directly on the command line, both with the same effect.

     # Disk mounts
     /dev/sdm1  1.5T  1.5T   30G  99%  /mnt/netapp/disk1
     /dev/sdo1  1.5T  1.5T   50G  97%  /mnt/netapp/disk2
     /dev/sdn1  1.5T  1.5T   22G  99%  /mnt/netapp/disk3
     /dev/sdv1  1.8T  3.8M  1.8T   1%  /mnt/netapp/disk4
     /dev/sdt1  1.8T  3.8M  1.8T   1%  /mnt/netapp/disk5
     /dev/sdp1  1.8T  135G  1.7T   8%  /mnt/netapp/disk6
     /dev/sdl1  1.8T  1.8T   44G  98%  /mnt/netapp/disk7
     /dev/sdr1  3.5T  3.8M  3.5T   1%  /mnt/netapp/disk8
     /dev/sds1  1.5T  3.8M  1.5T   1%  /mnt/netapp/disk9
     /dev/sdu1  3.5T  3.8M  3.5T   1%  /mnt/netapp/parity1  (planning to use this with snapraid, once mergerfs is working)

     # The mergerfs command I used:
     root@UNAS:/# mergerfs -o allow_other,use_ino,cache.files=partial,dropcacheonclose=true,category.create=mfs,minfreespace=10G,fsname=mergerfs /mnt/appdata/disk* /pool/
     root@UNAS:/# ls /pool
     /bin/ls: cannot access '/pool': No such file or directory
     root@UNAS:/# umount /pool
     root@UNAS:/#

     Any help you can provide would be greatly appreciated. Thanks!
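
     A quick way to confirm whether the FUSE mount actually registered with the kernel, and to rule out a missing mountpoint, is a sketch along these lines. It assumes the branches are the /mnt/netapp/disk* mounts shown in the df output above and that /pool is the intended mountpoint; adjust both to your layout:

     # The mountpoint must already exist; FUSE will not create it
     mkdir -p /pool

     # Same options as above, with the branches given as a colon-separated
     # list so nothing depends on shell globbing
     # (only three branches shown; extend the list for all nine disks)
     mergerfs -o allow_other,use_ino,cache.files=partial,dropcacheonclose=true,category.create=mfs,minfreespace=10G,fsname=mergerfs \
         /mnt/netapp/disk1:/mnt/netapp/disk2:/mnt/netapp/disk3 /pool

     # Verify the mount is actually there
     grep mergerfs /proc/mounts
     findmnt /pool
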
  2. Hello all, I'm running ver. 6.12.6 and I just made a stupid, distracted mistake, and I'm looking for advice on how to proceed from here. I was trying to move about 1 TB of media files off an NVMe cache drive to my array when I ran the command:

     mv /mnt/cache/data/media/movies/* /mnt/user/data/media/movies

     Yes, I am fully aware I am an idiot and where I went wrong. Please hold the lectures, as I am just looking for how to move forward from here. Yes, I know the first thing everyone will say is "restore from backups". However, while I DO have *most* of my data backed up, I do not have *ALL* of it backed up (long story, again, no lectures please...), and the array is about 16 TB, so restoring entirely from backups is not possible (and would take approximately six lifetimes to complete if it were). I'll restore what I need to (and what is possible to) from backups, but I am trying to minimize that process and retain as much of the original (uncorrupted) media files as possible.

     My questions are:

     1. What should I do from here? (I did immediately stop Docker/VMs and unmount the array, so as to avoid writing any more data and compounding the problem further.) I think the next step is to scan the filesystem(s) to see what is corrupted/missing... but

     2. Should I do it against the individual disks while unmounted, or mount the array in maintenance mode and run it that way? (Note: the individual disks are XFS-formatted, and the cache drive is BTRFS.) What is the best way to determine exactly which files have been corrupted, if any, and how best to minimize further damage?

     Lastly, I do not believe the files that were on the cache drive had files with the same names on the array... so is it possible nothing was corrupted when I ran the (very, very stupid) command that I ran? Could they, along with the rest of my files, still be intact somehow? (I realize this is a long shot, but one can hope, right?)

     Any assistance/advice will be greatly appreciated. Thank you!
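
     If it helps with triage before restoring anything, one low-risk check is to look for zero-length files in the destination and, where a backup copy exists, do a dry-run checksum comparison. This is only a sketch: the /mnt/disk*, /mnt/user/data and /mnt/backup paths are assumptions based on the share named above, and everything here is read-only apart from rsync's dry-run report:

     # List zero-length files under the movies folder on each array disk
     for d in /mnt/disk*/data/media/movies; do
         [ -d "$d" ] && find "$d" -type f -size 0 -print
     done

     # Dry-run checksum comparison against a backup copy, if one exists
     # (-n = no changes, -c = compare by checksum, -i = itemize differences)
     rsync -rcni /mnt/backup/media/movies/ /mnt/user/data/media/movies/
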
  3. I am having this issue as well on 6.12.6, and yes, I have run memtest: all 256 GB of ECC memory tests just fine. In my case, the lsof "fault" immediately precedes my 10G NIC losing link temporarily, which kills all my transfers, so this is causing me a tremendous amount of grief. Can we please get some kind of official attention on this (more than another canned "have you run memtest?" response, please)? It is affecting far too many people, across too many versions, and for far too long to be any one person's issue... there is clearly a problem here that needs to be addressed.
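
     One way to pin down the timing, assuming the interface is eth0 (substitute your actual NIC name), is to watch the kernel log for link events while the lsof fault happens:

     # Follow kernel messages with timestamps; link flaps show up as
     # "Link is Down" / "Link is Up" lines from the NIC driver
     dmesg -wT | grep -i link

     # Current link state and error counters for the interface
     ethtool eth0 | grep -i 'link detected'
     ip -s link show eth0
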
  4. I have tried everything I can think of to avoid adding more data to my Disk1, but nothing works. I tried with it as the lone "excluded" disk, I tried with all disks except it as "included", and I tried both together (which is redundant and should not be necessary), along with every combination of fill-up method. In every scenario, Disk1 still gets all new writes, to the point that it fills up, even when I have 8 other disks with plenty of free space. Is there anything I've not covered that would explain why Disk1 keeps getting ALL the damn writes when it shouldn't? The other drives work fine: I can manually copy to them, the "unbalanced" app works fine for redistributing data, etc. It's just that new data sent to the /mnt/user/data share does not obey the rules I configure to prefer the other disks, under any circumstances. Any suggestions would be greatly appreciated. Thanks!
  5. I did, and that didn't help... but I did figure out the cause, or at least part of it. One of my disks filled up at one point, and after that the byte counters stopped working. Redistributing some data so that the disk was no longer full didn't fix the counters, but after the next reboot (once the disk was no longer full), they started working again. So it seems like it might be a corner-case bug that is only triggered when an individual disk fills up.

     While that did solve the mystery of the non-functional byte counters, it exposed a different mystery: why that disk filled up to begin with. I have tried everything I can think of to avoid adding more data to my Disk1, but nothing works. I tried with it as the lone "excluded" disk, I tried with all disks except it as "included", and I tried both together (which is redundant and should not be necessary), along with every combination of fill-up method. In every scenario, Disk1 still gets all new writes, to the point that it fills up, even when I have 8 other disks with plenty of free space. Is there anything I've not covered that would explain why Disk1 keeps getting ALL the damn writes when it shouldn't? The other drives work fine: I can manually copy to them, the "unbalanced" app works fine for redistributing data, etc. It's just that new data sent to the /mnt/user/data share does not obey the rules I configure to prefer the other disks, under any circumstances. Any suggestions would be greatly appreciated. Thanks!
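
     In case it helps anyone hitting the same thing: in my understanding, a share's split level takes priority over both the allocation method and the include/exclude lists, and a too-small minimum free space can have a similar effect. A quick way to see exactly what the allocator is working from is to read the share's config off the flash drive; the file and key names below are what I'd expect on 6.12.x for a share named data, so adjust if yours differ:

     # The raw share settings the allocator uses (split level, allocation
     # method, include/exclude lists, minimum free space)
     cat /boot/config/shares/data.cfg

     # Free space per array disk, to compare against the minimum-free value
     df -h /mnt/disk*
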
  6. They are input/output operations per second (IOPS), and I'm talking about 2k IOPS / 300-400 MB/sec, so not negligible at all.
  7. Hi all, I've been using Unraid for a few years now and love it. I'm fairly familiar with all of its features, so this issue is a major head-scratcher. First, I'll say I have two Unraid servers, both running 6.12.6. The primary server does NOT have this issue; the secondary does, and I can't figure out why.

     On the "Main" tab, if I am in the "Reads/Writes (IO/s)" mode for the disk counters, they increment as expected while any disk operations are ongoing. When I click to change to "bytes" mode, all disks remain at 0 bytes/sec, forever, while the same operations are running. I can switch back and forth in real time while transfers are running: one mode works, and the other shows a constant zero. I can't think of a reason for this behavior, nor can I find a setting that affects the counters, and I've been trying for a while.

     - I've stopped/started the array, rebooted, etc. No change.
     - The array is started and has no errors; parity is valid (one disk).
     - Docker running or not makes no difference.
     - If I log into the server via SSH or the terminal through the WebUI and run "iostat -k 1", I see the disk reads/writes in bytes per second, just as I would expect, so the data is clearly accessible; it is just not making it to the WebUI for some reason.

     Has anyone else seen this? I'm hoping it's something simple I am overlooking, but I can't imagine what it could be at this point. Any assistance would be greatly appreciated. Thanks!
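
     For what it's worth, the kernel-side byte counters can be checked directly; if these advance while the WebUI shows 0 bytes/sec, the problem is in the GUI's sampling rather than the kernel. This is just a sketch, with sdb standing in for whichever array device you want to watch:

     # Fields 6 and 10 of /proc/diskstats are sectors read and written
     # (512-byte units); two samples a second apart give a rough rate
     awk '$3 == "sdb" {print "read_bytes:", $6*512, "written_bytes:", $10*512}' /proc/diskstats
     sleep 1
     awk '$3 == "sdb" {print "read_bytes:", $6*512, "written_bytes:", $10*512}' /proc/diskstats
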