About NLS

  • Birthday 03/30/1974



  1. I am confused. Someone should probably EDIT the first post of this thread to point to the latest news about this plugin, its fate, and how to manually achieve the same effect, instead of making people browse through so many pages. I for one would love an updated version of nerdpack, as it made some things much easier.
  2. Is it possible to add Gigabyte X570-UD support to the System Temperature plugin? I recently changed the server motherboard (erm... twice) and now I'm missing the motherboard temperature; the plugin doesn't detect the sensor device to load the driver. It actually tries to load the previous board's driver too (and I see an error during boot for exactly this reason).
  3. I thought of that, but I don't think it is comparable to a full, actual surface test.
  4. Seems it was indeed the cable. Which SUCKS because, as anyone who followed my three latest threads can see, I got all kinds of cable errors. And OK, the parity disk is on a normal SATA cable that goes to the mobo. The others are on 1-to-4 SAS/SATA cables that cannot be replaced very fast (they need to be ordered from eBay etc.). Anyway... parity is now rebuilding the emulated disk, with 0 errors so far (3.2%). Knock on wood. (Then I will replace the parity with a bigger new disk, rebuild, then replace the very old temporary 3TB that is rebuilding right now with a new bigger one and rebuild YET again, then use the old 4TB parity to replace one more older 3TB data disk and yes... rebuild AGAIN.) Note: I never got a reply on how someone could possibly (surface?) check the parity disk.
  5. I am going to wait for the parity rebuild to finish (it is more than 70% now); I am not close to the server anyway. Or should I just stop it so it doesn't come online? Assuming it is the cable, what is the proper procedure after replacing it? How can I force disk 9 to rebuild from the start? Delete whatever partition it created? Also, is there any disk check appropriate for parity (before starting the rebuild)?
  6. ...erm, actually I just noticed in the GUI... Current operation started on Wednesday, 28-09-2022, 07:34 (today). Elapsed time: 4 hours, 51 minutes. Estimated finish: 2 hours, 32 minutes. Finding 219564171 errors. That number is a bit unrealistic. So it could again be a cable issue, and is it on parity? Yes, it is on parity, as I also see this: Parity WDC_WD40EFRX-68N32N0_WD-WCC7K6SYT2RN - 4 TB (sdb) * 493 984 963 5944 224 437 515 So, about half the reads (!?) fail? Any ideas? Also, is there any disk test appropriate for the parity (which, as I understand it, doesn't have a proper filesystem)?
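On the recurring "how do I test a parity disk" question: since parity carries no filesystem, one generic, non-destructive option is a plain sequential read of the whole device with `dd`, which aborts with an I/O error at the first unreadable sector. A minimal sketch, demonstrated on a scratch file so it can be run safely anywhere; on the server you would point `if=` at the actual parity device (e.g. `/dev/sdb`, taken from the post above) with the array stopped:

```shell
# Scratch file standing in for the parity device; on the real server the
# input would be the parity device itself (e.g. /dev/sdb), array stopped.
dd if=/dev/zero of=/tmp/parity_standin.img bs=1M count=8 2>/dev/null

# Non-destructive surface pass: read every block and discard the data.
# dd stops with an I/O error message at the first unreadable sector.
dd if=/tmp/parity_standin.img of=/dev/null bs=1M 2>/dev/null \
  && echo "surface read completed with no I/O errors"
```

`badblocks -sv /dev/sdb` (in its default read-only mode, from e2fsprogs) does the same job and also reports which blocks failed.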
  7. So my syslog got full while rebuilding a disk from parity (as seen in my previous threads). The rebuild is still around 50% and progressing without any report of issues in the GUI, although it did pop up about a single error (probably a bad sector on parity?)... but from that point no further issues, and if I hadn't noticed the log getting full I would think things are OK. The lines that filled up syslog are as follows: Sep 28 10:17:59 <my server> kernel: md: disk0 read error, sector=2169270496 (with the sector changing in every line). The last entry (because the log filled up) was 70 minutes ago (10:17:59 or so, local time) and I am about 4 hours into the rebuild already. So I looked for the FIRST such entry in the log. What I found was very interesting: the first entry, more than 1.5 million lines above the last, was in THE SAME MINUTE (10:17:06). It actually "burst" 1.7 million lines within that minute, so I doubt it correctly identifies the error. (Is it realistic to find 1.7 million problem sectors within 50 seconds? Plus who knows how many more after the log was full?) Also I am not sure which disk "disk0" is, as I don't have that name anywhere. Is it the parity? What is happening? I post a truncated version of the log. syslog.txt
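As a side note, the burst described above can be measured straight from the log. A small sketch using a hypothetical three-line excerpt (the server names, timestamps, and file path here are made up for the demo; real entries would come from /var/log/syslog):

```shell
# Hypothetical excerpt of the repeated kernel lines; on the server this
# data would come from /var/log/syslog instead of a temp file.
cat > /tmp/syslog_excerpt.txt <<'EOF'
Sep 28 10:17:06 tower kernel: md: disk0 read error, sector=2169260000
Sep 28 10:17:30 tower kernel: md: disk0 read error, sector=2169265248
Sep 28 10:17:59 tower kernel: md: disk0 read error, sector=2169270496
EOF

# Count the md read-error lines and pull the first/last timestamps,
# which shows how tight the burst actually was.
count=$(grep -c 'md: disk0 read error' /tmp/syslog_excerpt.txt)
first=$(grep 'md: disk0 read error' /tmp/syslog_excerpt.txt | head -n 1 | awk '{print $1, $2, $3}')
last=$(grep 'md: disk0 read error' /tmp/syslog_excerpt.txt | tail -n 1 | awk '{print $1, $2, $3}')
echo "count=$count first=$first last=$last"
```

Run against the real (1.7-million-line) syslog, the first/last timestamps being seconds apart would confirm the errors were one burst rather than a steady stream.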
  8. Yes, I didn't realize at the time that it was practically the same issue. Thanks.
  9. See here what happened next: (different issue, thus different thread - plus I will contact support)
  10. This is a continuation of what happened here: ...but a different issue that I suspect will need the assistance of LimeTech themselves. So... a short version of what led to the CURRENT issue:
1- Two disks broke down (I broke them... this is where the previous thread stops).
2- I managed to fix one of the two (a hardware fix - the actual data was not touched), so I re-installed it and wanted the system to start in a non-redundant state and emulate the missing one. (I have done this before more than once in other cases; I know perfectly well how it works.)
3- BUT when I first started the system with the fixed disk, somehow (probably a cable issue) another disk (a third one) reported many disk errors. NOTE: This is the only time I actually started the array (the third disk didn't report issues before starting the array).
4- I brought the system down, checked cables, restarted the system. The third disk reported OK this time, BUT ANOTHER DISK (a fourth one!) reported missing.
5- That was again a cable re-seat issue. I restarted the system after the cable check... AND HERE IS THE ISSUE... All disks (except the one actually broken and missing) reported OK this time, BUT for some reason after that boot the array started automatically (it didn't stop at boot to let me handle any issues, like the missing disk).
Somewhere between steps 3, 4 and 5 above, the broken, missing disk (that was supposed to be emulated) SOMEHOW (I didn't do it by hand) showed as "unassigned"!!! Which means the system actually thinks the array originally has one LESS disk, instead of emulating the missing one! Again, I didn't do that by hand (remember, I manually started the array ONLY in step 2, not on the next reboots), but somehow the system got confused between the different missing disks. So IS THERE A WAY to somehow tell the system that the "unassigned" position needs to be emulated again???
The parity should be untouched, so emulation of the missing disk should work (and the parity would actually be INVALID with the current supposedly unassigned layout - and the system doesn't even know that yet). I immediately stopped the array so that parity remains untouched; I don't think any write took place in the less-than-a-minute it was automatically brought online. I have on record the full ID (I believe) and serial of the missing disk if I need to manually enter it somewhere. I should be able to somehow edit the configuration manually??? Heeeelp?
EDIT: I have ordered replacement disk(s) already. If I make a new configuration, assign all the old disks in the same order, put the new empty one in place of the missing (but now somehow "unassigned") one and tell it to trust parity as valid (which it probably is), WILL it actually rebuild the missing disk? Or does something somewhere, even though the parity is calculated for 11 data disks + parity, now "believe" the system should have 10 data disks + parity???
EDIT #2: I *MAY* have resolved this. I found an older 3TB disk (same size as the missing one) in my drawer. I followed the process described by @JorgeB in the other thread... and while the disk is not shown in the dashboard as emulated - IT IS emulated (it shows as not installed but does allow me to browse the emulated contents of the disk)... So, right now I am copying the most vital data from that emulated disk to a USB disk. After that, I will actually "assign" that old 3TB disk back to the array to be rebuilt from parity. Then, after parity syncs, I will replace the disks with the new ones I have on order (one by one, to allow a re-sync each time). Seems UNRAID, although scary at times, is more resilient than meets the eye.
  11. OK, clear. I can still only hope #1 gets "properly" implemented for disaster situations some time. In my case, I don't want to destroy my parity yet, as it is still undetermined whether the disk will recover, be cloned, or what.
  12. Experts in the room: is it possible to achieve #1 partially by going to maintenance mode and then using Unassigned Devices to mount the cache (and not mount the other disks at all)? Would I then be able to start some containers and VMs? Also, I think UD allows mounting as read-only. Can I use that to access my remaining data without affecting parity?
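For reference, the generic mechanism behind a read-only mount is the `ro` mount option, which guarantees the kernel issues no writes to the device (whether Unassigned Devices uses exactly this under the hood is an assumption). A minimal loop-device sketch with made-up paths; on a real server the target would be the actual partition, e.g. `/dev/sdX1` (placeholder), typically xfs on Unraid:

```shell
# Tiny image standing in for an array disk partition (hypothetical paths).
dd if=/dev/zero of=/tmp/disk.img bs=1M count=16 2>/dev/null

if [ "$(id -u)" -eq 0 ] && command -v mkfs.ext4 >/dev/null; then
    mkfs.ext4 -q /tmp/disk.img
    mkdir -p /tmp/ro_mount
    # 'ro' is the key: the kernel issues no writes to the device, so on a
    # real array member the parity is left untouched.
    if mount -o ro,loop /tmp/disk.img /tmp/ro_mount 2>/dev/null; then
        grep ' /tmp/ro_mount ' /proc/mounts    # options column shows 'ro'
        umount /tmp/ro_mount
    else
        echo "loop mount unavailable in this environment"
    fi
else
    # Without root (or mkfs), just show the shape of the real command.
    echo "mount -o ro /dev/sdX1 /mnt/ro_disk   # run as root on the server"
fi
```

Note this only covers reading the data; it says nothing about whether VMs and containers can start against a read-only array, which is the open question in the post.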
  13. I know - I even have a couple of test VMs on the array on purpose. This won't "hurt" though; those would just fail to start. It is a very useful function to have for people who need it. The ability to keep working in a production environment, even partially, is very important.
  14. Maybe you guys are right. That said, the first idea is super helpful and I really hope it gets added (and it is not destructive - even if the array is fully normal, the only difference the user would notice is that the shares are read-only). It is actually an in-between of normal operation and maintenance mode. Maintenance mode doesn't mount any disk or the cache. This one would mount every disk available in the array (and start it even with disks and data missing), but read-only (so that parity is not killed), so that people can access important (remaining) data, AND mount the cache (and normally domains, system and appdata - if they still reside on the cache) read/write so that VMs and containers can start.