Mover problem after FS conversion



I recently completed the file system conversion on all my data disks, following the guide from the wiki. Everything seemingly went well. However, the next scheduled mover task appeared to be hung (kicked off @ 0130, still running @ 0730). I panicked and stopped mover via the CLI. The server then refused to reboot, so I issued POWERDOWN via the CLI and it eventually shut down, although uncleanly.

After the resulting parity check (0 errors), the next scheduled mover task also appeared frozen. At this point I launched MC and checked each data disk for duplicates of anything on the cache disk. I found a couple of "***.partial" files that I don't think would have caused a problem, but I deleted them anyway because the downloads are not important.

After verifying there was nothing on the cache that already existed on an array disk, I invoked mover again. According to the log, it starts on the first file and then just sits there. On the Main page of the UI, there appears to be very slow write activity to an array disk, as well as reads on the cache. Below is the NetData for the array disk (which is shingled):

[Screenshot: NetData I/O graphs for the array disk]
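
(In case it helps anyone later: the duplicate check can also be scripted from the console. A rough sketch only, assuming Unraid's standard /mnt/cache and /mnt/diskN mount points:)

  # Report any cache file that also exists on an array disk
  cd /mnt/cache
  find . -type f -print | while read -r f; do
    for d in /mnt/disk[1-9]*; do
      [ -e "$d/${f#./}" ] && echo "duplicate: $d/${f#./}"
    done
  done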

Link to comment

I did experience super slow transfer speeds with rsync on this drive during the FS conversion process. IIRC it took almost 2 full days to move just ~5.5 TB to it. I chalked it up to it being an SMR, WORM drive.
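
(If anyone wants to confirm the drive itself is the bottleneck during a transfer like that, iostat from the sysstat package shows per-device utilization and latency. A sketch, with sdX as a placeholder for the disk's device name:)

  # Per-device I/O stats every 5 seconds; sustained ~100 %util together
  # with high write latency points at the drive, not the rest of the system
  iostat -x 5 sdX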

 

I stopped the mover task and added Disk 2 to the exclusion list for my media share (Data). I invoked mover again and now it seems to be progressing normally. I'm guessing this is an HDD issue and not an unRaid issue? I'll try for an RMA, but the HDD in question was shucked from a Seagate Expansion external I bought in 2018. Maybe I can fill up the drive manually (over several overnight sessions, I'm sure) and continue to use it as read-only?
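
(If I go the manual-fill route, I'm thinking of something like the following. A sketch only: the paths are examples and the throttle value is a guess, the idea being to give the SMR drive's persistent cache time to flush:)

  # Throttled overnight fill straight to the disk; --bwlimit is in KiB/s,
  # so 20000 is roughly 20 MB/s
  rsync -a --progress --bwlimit=20000 /mnt/cache/Data/ /mnt/disk2/Data/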

Link to comment

Your load average is basically sky-high:

 14:01:32 up 1 day, 22:37,  0 users,  load average: 18.98, 19.33, 19.05  Cores: 8

On an 8-core machine, 100% utilization of the CPU means a load of 8. At the time of the diagnostics you're running ~19, which means that on top of the 8 processes actually running, there are roughly 11 more queued up waiting for CPU time. Note that on Linux the load average also counts tasks stuck in uninterruptible disk I/O, so a drive that stalls on writes can push the load up even while the CPU itself is mostly idle.
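
(You can check whether stalled I/O is the culprit with plain ps; tasks in the "D" state are in uninterruptible disk sleep:)

  # List tasks in uninterruptible disk sleep; these inflate the load
  # average without actually consuming CPU
  ps -eo state,pid,comm | awk '$1 == "D"'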


Link to comment
7 hours ago, VelcroBP said:

I chalked it up to it being an SMR, WORM drive.

If the CPU load returns to normal after you stop writing to the SMR disk, then it is an SMR issue.

 

7 hours ago, VelcroBP said:

continue to use it as read-only?

Not a bad idea.

If you can re-health the disk then you may be able to use it as a normal disk again (though it will likely still behave as some kind of write-once disk). These disks don't seem to support TRIM, so you could try a secure erase to re-health it.
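
(For example, a sketch of the ATA secure-erase route with hdparm. This wipes the entire drive, the drive must be out of the array first, and /dev/sdX is a placeholder:)

  # ATA secure erase -- destroys all data; check the device name carefully
  # and confirm the drive is not security-frozen first (hdparm -I /dev/sdX)
  hdparm --user-master u --security-set-pass p /dev/sdX
  hdparm --user-master u --security-erase p /dev/sdX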

An RMA is also fine.

 

Also, is this the normal usage for that disk?

/dev/md2        7.3T  5.2T  2.2T  71% /mnt/disk2

Link to comment

CPU load has been coming back down since mover stopped trying to use Disk 2.

 

9 hours ago, Vr2Io said:

If you can re-health the disk then you may be able to use it as a normal disk again

is this the normal usage for that disk?

/dev/md2        7.3T  5.2T  2.2T  71% /mnt/disk2

What is "re-health"?

Also, I'm not sure what you're asking about the normal usage.

Link to comment
