'user' directory gone


Solved by JorgeB

I was transferring data from an external HDD to a share I created using Sonarr's preferred folder structure: /mnt/user/data/media/...

At some point, the data transfer ground to a halt around 7 mbps and was stuck there. I left it like that for a couple of days, but then read about ways to optimize the speed and implemented a couple of them... while the transfer was still taking place. Bad move. I changed the share's cache setting from Prefer to No, and changed the disk setting Tunable (md_write_method) from Auto to reconstruct write.

 

I believe it was when I changed the cache setting that the data move stopped with an error, and the entire user directory is now gone, including all the shares that were within it.

 

I'm brand new to Unraid, so apologies if I'm not explaining this well. Any help understanding what happened or how to remedy this problem would be extremely appreciated.

 

 

origin-diagnostics-20221001-0213.zip

Edited by wherzwaldo93
Added diagnostic

Thanks a ton, that worked.

 

Oh damn, I guess it's good I'm moving the data then. On a similar note, I thought a major benefit of having a cache pool was to facilitate faster data transfers? Yet all the forum posts I've read about large/fast data transfers instruct setting the cache setting to No. Why is this?

13 hours ago, wherzwaldo93 said:

I thought a major benefit of having a cache pool was to facilitate faster data transfers? Yet all the forum posts I've read about large/fast data transfers instruct setting the cache setting to No. Why is this?

It depends on the size of your cache pool and the amount of data you are transferring.

 

If your cache pool has enough free space to hold all the data in this particular batch, then leave the share at cache:yes; if the mover is scheduled for when you aren't using the server, it will all get moved to the array at the slower array speed while you sleep.

 

If you have more data than will fit in the pool, it will either give you an out-of-space error or start copying directly to the slower array when the pool runs out of space, depending on your settings. Then, when the mover runs on its schedule, it will move the data from the pool to the array, freeing up pool space. Either way, the transfer will take longer than if you had just sent it directly to the array in the first place.

 

Writing to the array is slower than writing to the cache pool in a typical system, but the data needs to end up on the array eventually anyway.

 

tl;dr: Using a cache pool to write to a parity-protected array share doesn't speed up the parity array; it's just a fast temporary spot to put the data and let the server deal with it later. The data still has to get written at array speed, but you don't have to wait on it.

 

Here's an extreme example: you have 10TB of data to transfer and a 128GB cache pool. The first 128GB writes fast, then the transfer slows down and the pool shows full. You manually run the mover; now the array is receiving writes from the mover, and your transfer slows down even further. The mover can't empty the pool as fast as you can fill it, so you are stuck at a snail's pace.
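A quick back-of-envelope calculation shows why the pool barely helps for a batch this size, even ignoring the mover contention described above. The 10TB and 128GB figures are from the example; the write speeds are assumptions picked purely for illustration:

```python
# Back-of-envelope math for the extreme example above.
# Sizes come from the post; speeds are assumed for illustration:
# ~500 MB/s writing to an SSD cache pool, ~60 MB/s writing to
# a parity-protected array.
DATA_GB = 10_000    # total batch size (10TB)
CACHE_GB = 128      # cache pool capacity
CACHE_MBPS = 500    # assumed cache write speed (MB/s)
ARRAY_MBPS = 60     # assumed parity-array write speed (MB/s)

# Direct to the array: everything moves at array speed.
direct_hours = DATA_GB * 1024 / ARRAY_MBPS / 3600

# Through the cache: the first 128GB is fast, but the remaining
# ~9.9TB is gated by how fast the mover can drain the pool,
# i.e. effectively array speed again.
via_cache_hours = (CACHE_GB * 1024 / CACHE_MBPS
                   + (DATA_GB - CACHE_GB) * 1024 / ARRAY_MBPS) / 3600

print(f"direct to array: ~{direct_hours:.0f} h")
print(f"through cache:   ~{via_cache_hours:.0f} h")
```

Both paths come out around 47 hours: the pool saves time only on the first 128GB, which is noise next to 10TB. That's the "fast temporary spot" point in numbers.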
