xoC


  1. Thanks for your answer. Some time ago I made a mistake and started moving files from the GUI (not with rsync), and it just filled my disks with full copies. I've deleted the non-important backups, but looking at the overall size of the backups, and considering that my backup source is a folder where I only add files and never delete anything, it is much bigger than it should be if I hadn't messed things up with a bad move command. Since then I've followed the commands in that topic, so all my backups now end up on the share/disks I want via rsync. Is it possible to run a "check" and, where the same file (from the same source path) exists as full copies in several folders, "convert" all those copies into hardlinks and shrink my backup size? (A hedged sketch of such a pass is given after this list.) Edit: one more question, to better understand the system: if a backup share spans multiple disks, when we move data with rsync from one disk to another, can hardlinks on one drive point to data on another disk? Because "rsync --archive --hard-links --remove-source-files" only talks about disks, not shares. How would it know?
  2. Small question: if I want to manually delete the backup from day X, can I naively delete it from the Unraid GUI, or do I need an rsync delete command? (See the note on hardlinks after this list.)
  3. I have had the same thing happening since an update somewhere in September/October, IIRC. Usually at these times the dashboard shows this: it becomes unresponsive, and so does everything else. I can't even power off; it doesn't respond (not even to a single press of the hardware button). And the terminal doesn't load, so I can't see what is going on with htop... Edit: over the last few months I managed to grab diagnostics during one of these hangs, and they show nothing either. Edit 2: it was actually before September:
  4. Hi. Yes, the Dynamix one. I'm disabling it to test, thanks.
  5. Hello, my CPU activity is a bit weird. See the following screenshot, taken right after a fresh reboot of the server: is this normal?
  6. Do you think it's a big enough reason to get a warranty replacement?
  7. OK... I bought that disk in March 2023, so it seems like a pretty bad one.
  8. So, we're back to SMART errors. The disk was at "Reported Uncorrect = 1" when I re-plugged it (before rebuilding the array). I acknowledged the issue and let the server run; it rebuilt and there were no errors. Last night I received a mail after the parity check saying it had failed, with "Disk 5 - ST4000VN006-3CW104_ZW603BKR (sdk) - active 32 C (disk has read errors) [NOK]", and Reported Uncorrect has gone to 2. The system log shows read errors on disk 5, on sectors close to each other, but no more disk resets with the new controller. It is a fairly recent disk, BTW. (A smartctl sketch for tracking this follows after this list.)
     Nov 26 04:15:21 NAStorm kernel: ata22.00: exception Emask 0x0 SAct 0x7f SErr 0x0 action 0x0
     Nov 26 04:15:21 NAStorm kernel: ata22.00: error: { UNC }
     Nov 26 04:15:21 NAStorm kernel: I/O error, dev sdk, sector 146077392 op 0x0:(READ) flags 0x0 phys_seg 59 prio class 2
     Nov 26 04:15:21 NAStorm kernel: md: disk5 read error, sector=146077328
     Nov 26 04:15:21 NAStorm kernel: md: disk5 read error, sector=146077336
     Nov 26 04:15:21 NAStorm kernel: md: disk5 read error, sector=146077344
     Nov 26 04:15:21 NAStorm kernel: md: disk5 read error, sector=146077352
     Nov 26 04:15:21 NAStorm kernel: md: disk5 read error, sector=146077360
     Nov 26 04:15:21 NAStorm kernel: md: disk5 read error, sector=146077368
     Nov 26 04:15:21 NAStorm kernel: md: disk5 read error, sector=146077376
     Nov 26 04:15:21 NAStorm kernel: md: disk5 read error, sector=146077384
     Nov 26 04:15:21 NAStorm kernel: md: disk5 read error, sector=146077392
     Nov 26 04:15:21 NAStorm kernel: md: disk5 read error, sector=146077400
     Nov 26 04:15:21 NAStorm kernel: md: disk5 read error, sector=146077408
     Nov 26 04:15:21 NAStorm kernel: md: disk5 read error, sector=146077416
     Nov 26 04:15:21 NAStorm kernel: md: disk5 read error, sector=146077424
     Nov 26 04:15:21 NAStorm kernel: md: disk5 read error, sector=146077432
     Nov 26 04:15:21 NAStorm kernel: md: disk5 read error, sector=146077440
     Nov 26 04:15:21 NAStorm kernel: md: disk5 read error, sector=146077448
     Nov 26 04:15:21 NAStorm kernel: md: disk5 read error, sector=146077456
     Nov 26 04:15:21 NAStorm kernel: md: disk5 read error, sector=146077464
     Nov 26 04:15:21 NAStorm kernel: md: disk5 read error, sector=146077472
     Nov 26 04:15:21 NAStorm kernel: md: disk5 read error, sector=146077480
     Nov 26 04:15:21 NAStorm kernel: md: disk5 read error, sector=146077488
     Nov 26 04:15:21 NAStorm kernel: md: disk5 read error, sector=146077496
     Nov 26 04:15:21 NAStorm kernel: md: disk5 read error, sector=146077504
     Nov 26 04:15:21 NAStorm kernel: md: disk5 read error, sector=146077512
     Nov 26 04:15:21 NAStorm kernel: md: disk5 read error, sector=146077520
     Nov 26 04:15:21 NAStorm kernel: md: disk5 read error, sector=146077528
     Nov 26 04:15:21 NAStorm kernel: md: disk5 read error, sector=146077536
     Nov 26 04:15:21 NAStorm kernel: md: disk5 read error, sector=146077544
     Nov 26 04:15:21 NAStorm kernel: md: disk5 read error, sector=146077552
     Nov 26 04:15:21 NAStorm kernel: md: disk5 read error, sector=146077560
     Nov 26 04:15:21 NAStorm kernel: md: disk5 read error, sector=146077568
     Nov 26 04:15:21 NAStorm kernel: md: disk5 read error, sector=146077576
     Nov 26 04:15:21 NAStorm kernel: md: disk5 read error, sector=146077584
     Nov 26 04:15:21 NAStorm kernel: md: disk5 read error, sector=146077592
     Nov 26 04:15:21 NAStorm kernel: md: disk5 read error, sector=146077600
     Nov 26 04:15:21 NAStorm kernel: md: disk5 read error, sector=146077608
     Nov 26 04:15:21 NAStorm kernel: md: disk5 read error, sector=146077616
     Nov 26 04:15:21 NAStorm kernel: md: disk5 read error, sector=146077624
     Nov 26 04:15:21 NAStorm kernel: md: disk5 read error, sector=146077632
     Nov 26 04:15:21 NAStorm kernel: md: disk5 read error, sector=146077640
     Nov 26 04:15:21 NAStorm kernel: md: disk5 read error, sector=146077648
     Nov 26 04:15:21 NAStorm kernel: md: disk5 read error, sector=146077656
     Nov 26 04:15:21 NAStorm kernel: md: disk5 read error, sector=146077664
     Nov 26 04:15:21 NAStorm kernel: md: disk5 read error, sector=146077672
     Nov 26 04:15:21 NAStorm kernel: md: disk5 read error, sector=146077680
     Nov 26 04:15:21 NAStorm kernel: md: disk5 read error, sector=146077688
     Nov 26 04:15:21 NAStorm kernel: md: disk5 read error, sector=146077696
     Nov 26 04:15:21 NAStorm kernel: md: disk5 read error, sector=146077704
     Nov 26 04:15:21 NAStorm kernel: md: disk5 read error, sector=146077712
     Nov 26 04:15:21 NAStorm kernel: md: disk5 read error, sector=146077720
     Nov 26 04:15:21 NAStorm kernel: md: disk5 read error, sector=146077728
     Nov 26 04:15:21 NAStorm kernel: md: disk5 read error, sector=146077736
     Nov 26 04:15:21 NAStorm kernel: md: disk5 read error, sector=146077744
     Nov 26 04:15:21 NAStorm kernel: md: disk5 read error, sector=146077752
     Nov 26 04:15:21 NAStorm kernel: md: disk5 read error, sector=146077760
     Nov 26 04:15:21 NAStorm kernel: md: disk5 read error, sector=146077768
     Nov 26 04:15:21 NAStorm kernel: md: disk5 read error, sector=146077776
     Nov 26 04:15:21 NAStorm kernel: md: disk5 read error, sector=146077784
     Nov 26 04:15:21 NAStorm kernel: md: disk5 read error, sector=146077792
     Nov 26 04:30:15 NAStorm root: Fix Common Problems: Error: disk5 (ST4000VN006-3CW104_ZW603BKR) has read errors
     I'm attaching the current diagnostics. nastorm-diagnostics-20231127-1642.zip
  9. Thanks. Currently rebuilding the disabled disk; let's hope it goes well this time!
  10. So I've bought a 6-port card based on the ASM1166 and upgraded its firmware to the latest version. What is the procedure for migrating the disks? I plan to move all the ones connected to the Marvell & JMicron chipsets. Can I just move the 4 disks at the same time, restart, and have them recognized? (A small post-move check is sketched after this list.)
  11. It's one of the controllers on the motherboard (which has 3 controllers managing 10 ports). Maybe it is failing?
  12. Hello! So, it worked for ~20 days and then I got some errors. It was late September and I had too much work and no time to look into it, so the server has been powered down since then. I managed to grab the diagnostics before shutting down; they are attached. Thanks in advance. nastorm-diagnostics-20230927-1715.zip
  13. So it seems to work with everything plugged back in the way it has been for the past 2+ years. Could it be that the corrupted file system was simply preventing rebuilds, since it kept trying again and again to mount the disk during the rebuild? Anyway, I'll monitor it closely over the next few days. Thanks a lot for your answers. (A read-only filesystem check is sketched below.)
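
On post 1 (collapsing accidental full copies back into hardlinks): hard links cannot cross filesystems, so a pass like this has to run separately on each data disk, and the path below (/mnt/disk1/Backups) is an assumption to adjust to your own layout. This is a minimal sketch, not a polished tool; a dedicated deduplicator handles ownership, timestamps and safety checks better, so test it on a small subtree first.

    #!/bin/bash
    # Sketch: replace identical full copies on ONE disk with hard links.
    # Assumes all backups for this disk live under BACKUP_ROOT; hashing
    # every file is slow on large trees.
    BACKUP_ROOT=/mnt/disk1/Backups

    declare -A seen   # checksum -> first path seen with that content

    while IFS= read -r -d '' f; do
        sum=$(md5sum "$f" | cut -d' ' -f1)
        if [[ -n "${seen[$sum]}" ]]; then
            # Same content already stored once on this disk: turn this
            # duplicate into a hard link to the first copy.
            ln -f "${seen[$sum]}" "$f"
        else
            seen[$sum]=$f
        fi
    done < <(find "$BACKUP_ROOT" -type f -print0)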
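
On post 2 (manually deleting one day's backup): removing a dated backup folder only removes that folder's own hard links; data still referenced by other backup days stays on disk, so deleting just that folder is enough. A minimal sketch, where the path and date format are assumptions:

    # Remove a single dated backup folder; other days keep their own
    # hard links to the shared data and are unaffected.
    rm -r /mnt/user/Backups/2023-11-26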
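
On post 8 (the recurring read errors): a quick way to keep an eye on the reported-uncorrectable counter and to exercise the suspect drive is smartctl. The device name /dev/sdk is an assumption and can change between boots, so double-check it first; these commands only read and self-test, they do not modify data.

    # Full SMART report, including the reported-uncorrectable counter
    smartctl -a /dev/sdk

    # Start an extended (long) self-test; it runs inside the drive
    smartctl -t long /dev/sdk

    # After the test time has elapsed, review the self-test log
    smartctl -l selftest /dev/sdk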
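
On post 10 (moving drives to the ASM1166 card): Unraid identifies array members by their serial numbers rather than by controller port, so moving the four drives with the server powered off and rebooting should keep the assignments intact. As a sketch, a read-only check before starting the array to confirm every serial is visible on the new controller:

    # List whole drives by ID (filters out the -partN entries)
    ls -l /dev/disk/by-id/ | grep -v part

    # Cross-check that every expected serial and model shows up
    lsblk -o NAME,SIZE,SERIAL,MODEL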
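
On post 13 (the corrupted file system): a read-only consistency check can confirm whether the filesystem on disk 5 is clean without touching it. This is a sketch only: it assumes the disk is formatted XFS (Unraid's default), the device name is an assumption (newer Unraid releases expose the array member as /dev/md5p1, older ones as /dev/md5), and it should be run with the array started in maintenance mode.

    # No-modify XFS check for disk 5; -n reports problems without repairing
    xfs_repair -n /dev/md5p1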