shEiD's Achievements
  1. Does doing a New Config mean that all the settings, Docker containers and VMs will be gone? I mean, does New Config reset ONLY the disks (array/cache/pools), or does it reset everything? I need to replace 1 failing drive with a new one, and I do NOT have parity. Everything from the failing drive is already copied over to the new drive (rsync). And no parity means there's nothing to rebuild. Preferably, I would love to simply swap the drives and keep all my setup intact...
  2. It seems fio does not work... I only get "Illegal instruction" on any command. unRAID version: 6.9.2, NerdPack version: 2021.08.11.
  3. Yes, I have jq on my system, but I think I installed it with NerdPack, iirc. Or am I misremembering? I could swear jq was in NerdPack before 🤔, but maybe I'm wrong... Does this mean that jq now comes with unRAID as standard, so it was removed from NP?
  4. Same here - jq is missing - has it been removed? Please add it back 🙏
  5. Here's the diagnostics with a running array. Although, I have already rebooted normally one more time, without starting the array - just to avoid the parity check. I mean, I decided enough is enough - 2 parity checks in 2 days (0 errors), and I feel the 3rd would turn out the same. There should never actually have been any writes to the array when the server hung... Sorry if that messed up the diagnostics... did it? I actually thought that if you don't capture stuff before a restart, all the info is useless anyway, as unRAID is always loaded into memory and completely resets on reboot?

Yep, that's how I have set them, using the units: from the smallest 3GB to the largest of 500GB. I usually tend not to fill any drives past 90% on my main server. And yes, on this backup server 500GB is normally way too much for 3TB drives, but meh - this was just temporary. This is as much a backup server as it is a testing server. That's why I'm copying everything out to the main one. I want to create and properly test multiple btrfs cache pools using these drives. I would love sooooo much to use btrfs pools, if it wasn't so "anecdotally scary unreliable" and didn't have so many warnings all over the internets not to trust it 😟
  6. I had to force reboot to get the diagnostics.
  7. My secondary/backup/testing server just "hung" for the 3rd time in 2 days. This never happened before; it was working perfectly. I am in the process of copying all the files off that backup server (about 8-9TB) onto my main server. My plan was to copy in stages, one large folder (about 0.5TB - 1TB) at a time. All 3 times the backup server "hung" was during a large rsync transfer.

At first everything was perfectly fine - I did 2 or 3 rsync transfers, about 1.2TB in total. I fired up another rsync and went to sleep. The next morning I noticed that the backup server had "hung". My SSH session to the backup server was disconnected, the webUI was not working, and I tried to ping it:

```
Pinging with 32 bytes of data:
Reply from Destination host unreachable.
Reply from Destination host unreachable.
```

The monitor on the backup server showed a black screen and the keyboard did nothing. All I could think of at this time was to force a shutdown with the power button. I turned it off and on again. The server booted normally and did a parity check (~9 hours) - no errors. Although I had no idea what had happened, I was happy that the parity check returned with 0 errors.

The only Linux experience I have is unRAID, so after a couple of hours of googling, all I could think of was to set up remote syslog. So that's what I did: I set up both of my unRAID machines to act as a remote syslog server for each other, hoping this could help me find out what the problem was. I fired up another rsync and went on with my day. A couple of hours into the transfer the backup server hung again. Exactly the same - unresponsive and unreachable. I looked at the remote syslog and did not see anything that explained what happened, at least to me (as Linux illiterate as I am).
Well, something weird - there was a ton of those "share cache full" messages, and it was the last message in the syslog before the backup server hung:

```
Sep 2 19:19:39 unGiga shfs: share cache full
Sep 2 19:19:39 unGiga shfs: share cache full
Sep 2 19:19:39 unGiga shfs: share cache full
Sep 2 19:19:39 unGiga shfs: share cache full
Sep 2 19:19:39 unGiga shfs: share cache full
Sep 2 19:20:47 unGiga ool www[5686]: /usr/local/emhttp/plugins/recycle.bin/scripts/rc.recycle.bin 'empty'
Sep 2 19:20:48 unGiga Recycle Bin: User: Recycle Bin has been emptied
Sep 2 19:20:53 unGiga ool www[5810]: /usr/local/emhttp/plugins/recycle.bin/scripts/rc.recycle.bin 'clear'
Sep 2 19:27:14 unGiga shfs: share cache full
```

That did not help at all... The cache pool on the backup server was actually pretty empty. Again, I did some googling. And again, I force-powered-off my backup server with the power button and turned it back on. But this time I logged in on the server itself and launched a syslog tail, hoping the monitor would stay working and I could see the errors if the same crap happened again:

```
tail -f /var/log/syslog
```

Parity check - 0 errors. 👍

I had always been running rsync on the backup server itself, copying files into the locally mounted main server's share. Because those repeated log messages said something about a share, I decided to switch it up: I fired up another rsync transfer, but this time copying over SSH, not into the mounted share. And went to sleep. When I woke up, I found that the backup server had hung again, for the 3rd time. And... everything was the same. The remote syslog showed nothing informative (at least to me) again, ping failed again, and the monitor was black again. The keyboard did not work - I tried CTRL+C and any other keys - nothing. Then I came here and started writing this post, asking for help 🤪 The server is still "powered on".
I decided not to force a power-down this time, in case there's anything that can/needs to be done in the process of trying to find out what the hell is going on. I ran a ~24 hour memory test on the backup server about 6 months ago - perfectly fine, no errors. I've attached the whole remote syslog. Thanks in advance for any help.
  8. @Iker I tried to make sense of it, but failed miserably. 15TB total space says I have a normal RAID1. 10TB free space says I have RAID1C3. How can I actually see which type I have - a console command or the webUI? EDIT: is this it?

```
Data, RAID1: total=799.00GiB, used=787.33GiB
System, RAID1: total=32.00MiB, used=128.00KiB
Metadata, RAID1: total=1.00GiB, used=930.06MiB
GlobalReserve, single: total=512.00MiB, used=0.00B
```

@tjb_altf4 Omg, thank you. Although that does not particularly inspire confidence in btrfs whatsoever 😉 Fingers crossed I won't get punished for going with btrfs over zfs 🤞
  9. Like the title says. The weirdest thing - 33% of the free space is magically missing on a brand new and empty cache pool. 15TB total - 537MB used = 10TB free 🤪 Help please.
  10. I've just installed and started using the File Integrity plugin today. I manually started a `Build` process on 7 (out of 28) drives in my array. disk1 had the least amount of files, so it has already finished. But the UI shows some nonsense 🤔
- it shows disk1 as a circle, not a green checkmark, even though it has just finished the build and is up-to-date
- it shows disks 4, 5, 9 and 10 with a green checkmark, even though the builds are clearly still running and aren't finished
- it shows disks 7, 12, 17, 18, 19, 22, 23, 24, 26, 27 and 28 with a green checkmark, even though the build process has never been run on these disks... I mean, WAT? 😳
Can it be that I am really not understanding what the circle/checkmark/cross means? Or is this a bug?
  11. Linux newb here, so... sorry for a probably silly question: I would like to install fd, but it seems nerdpack has a really old version of it: ``` fd-6.2.0-x86_64-1_slonly.txz ``` 6.2.0 is from Jan 3, 2018 🤔 The current version is 8.2.1. Basically, how does this work in unRAID? Do I need to ask here, in the nerdpack thread, for someone to "update" the included fd package?
  12. @olehj Thank you so much for this plugin. Awesome job 👍 A little feature request, maybe... It would be nice if `Comment` could be displayed more prominently - larger font size and bold.
  13. @bidmead Awesome looking annotations 🤩 What program are you using to do this?
  14. I just finished running a Parity Check with `Write corrections to parity` and updated the parity. The log shows a list of corrections, like these:

```
Jan 21 13:58:50 unGiga kernel: md: recovery thread: P corrected, sector=5455701888
Jan 21 13:58:50 unGiga kernel: md: recovery thread: P corrected, sector=5455701896
Jan 21 13:58:50 unGiga kernel: md: recovery thread: P corrected, sector=5455701904
Jan 21 13:58:50 unGiga kernel: md: recovery thread: P corrected, sector=5455701912
Jan 21 13:58:50 unGiga kernel: md: recovery thread: P corrected, sector=5455701920
```

I guess that means the parity has been updated. What is kind of confusing is that the results in the UI and the log look exactly the same as if I had run a read-only check... Should I run a read-only check again, to make sure I get the desired 0-errors result?