User share only filling one disk - split level issue?



Hi everyone,

 

I'm new to Unraid and have spent the last couple of weeks researching, then building my server (6.9.2) and migrating data across from a QNAP TS-421.

 

It's been a journey of discovery, and these forums have been extremely useful in helping me navigate the new territory. I have resisted posting anything until now, so I'll give a bit of a braindump followed by my current issue.

 

I originally had 3 x 3TB drives in the QNAP, which I upgraded about a year ago to 4 x 4TB. So I had 3 x 3TB drives (now about 9 years old) sitting around unused and 4 x 4TB drives (less than a year old) in a RAID5 in the QNAP. I wanted to eventually have all 7 drives in Unraid to make use of their full combined capacity. I also bought a new 6TB drive as parity, making 8 drives in total.

 

The first step was to install the 3 x 3TB drives in the new Unraid server, after which I had 9TB of free space. Because my QNAP was set up with a share called 'Multimedia' containing folders for 'Movies' and 'TV Shows', I created a user share on Unraid called 'Multimedia' with access to all 3 drives, the 'high water' allocation method, and split level 'automatically split any directory'. In this share I created two folders, 'Movies' and 'TV Shows'. Once this was set up I began to migrate my data from the 4 x 4TB drives on the QNAP using rsync.

 

The first issue I found was that although the QNAP calculated the combined size of 'Movies' and 'TV Shows' at just about 8.4TB, once I transferred 'Movies' to Unraid I found it was about 10% larger there. So I knew I wasn't going to fit everything from those two folders into the 9TB share. I resolved this by using a couple of 1TB portable drives to take the overflow from 'TV Shows' plus all the other smaller folders I had in 'Multimedia'. The 3 x 3TB drives appeared to be filled equally and progressively, which I assume was the high-water allocation method doing its job.
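One likely contributor to that ~10% discrepancy (an assumption on my part, since different NAS vendors report sizes differently) is that tools can count either apparent file bytes or allocated filesystem blocks, on top of base-10 vs base-2 units. A throwaway sketch on any Linux box shows the difference:

```shell
#!/usr/bin/env bash
# Sketch: apparent size (file bytes) vs allocated size (filesystem blocks).
# A 1-byte file still occupies a whole block (typically 4 KiB), so tools
# that count blocks report more than tools that count bytes.
tmp=$(mktemp -d)
printf 'x' > "$tmp/tiny"

apparent=$(du --apparent-size --block-size=1 "$tmp/tiny" | cut -f1)
allocated=$(du --block-size=1 "$tmp/tiny" | cut -f1)

echo "apparent:  $apparent bytes"
echo "allocated: $allocated bytes"
rm -rf "$tmp"
```

Across many thousands of small files (subtitles, artwork, metadata), that per-file block overhead adds up, which could account for part of the difference.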

 

Now that I had copied everything off the QNAP, either onto Unraid or the portable drives, I removed the 4 x 4TB drives from the QNAP and added them to the Unraid array. I was pleasantly surprised to find the clearing process ran on all 4 drives concurrently and only took 8 hours.

 

The second step was to migrate everything back onto the Unraid server. My end goal was to have my movies and TV shows spread across the 16TB of new 4TB drives, as these are the folders getting the most read/write action and I was wary of the age of the 3TB drives. So I set up a new share called 'Movies' and selected two of the empty 4TB drives as included. I set the allocation method to 'high water' and the split level to 'automatically split only the top level directory'.

 

Using rsync again, I transferred the movie folders from share Multimedia\Movies (on the 3 x 3TB drives) to share Movies\ (on two of the 4TB drives); however, the system filled the first 4TB drive and did not continue on to the second, despite reporting that the share still has 4TB of free space. rsync says that there is no space left on the Movies share.

 

1. Is this problem related to split level? My folder structure in the Multimedia share is Multimedia\Movies\Movie <ABC>\Movie <files>. In the new share I've essentially removed a level, so it's Movies\Movie <ABC>\Movie <files>.

2. Should I worry about HDD age by separating high-use shares from older drives? Or, as a newbie, am I getting too technical and should I just run one share across all drives and let the system figure it all out?

 

Thanks!


Ahhh yeah of course, so if I delete all the empty folders then use rsync again, will it know to go to the next drive to create the missing folders?

 

Or should I just delete the whole thing and redo rsync with the split level set to all?

 

Or should I be using something other than rsync to move files between shares (it looks like Krusader also creates all folders first)?

 

Thanks!


Another technique I have used: just move the empty folders to other disks so they will be filled there. When I did the initial load of my backup server I noticed rsync had created all the folders in advance on one disk and was filling them in alphabetical order, so I just started at the other end of the alphabet, moving empty folders to other disks.
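For anyone wanting to do the same, the empty pre-created folders can be found and moved in one go. A sketch with made-up temp paths standing in for the disk shares (on a real array the source and target would be disk paths like /mnt/disk1 and /mnt/disk2, and you should never mix /mnt/user and /mnt/diskX paths in the same command):

```shell
#!/usr/bin/env bash
# Sketch: locate empty directories on one disk and move them to another,
# so subsequent writes land on the second disk. Temp dirs stand in for
# /mnt/disk1 and /mnt/disk2.
disk1=$(mktemp -d); disk2=$(mktemp -d)
mkdir -p "$disk1/Movies/Full" "$disk1/Movies/Empty1" "$disk1/Movies/Empty2"
echo data > "$disk1/Movies/Full/file.mkv"
mkdir -p "$disk2/Movies"

# -type d -empty matches only directories with no contents at all,
# so folders that already hold files are left where they are
find "$disk1/Movies" -mindepth 1 -type d -empty \
    -exec mv -t "$disk2/Movies" {} +

ls "$disk2/Movies"
```

Only the genuinely empty folders move; anything rsync had already started filling stays put.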

  • 10 months later...

I feel like I'm in a similar situation, but I have my split level set to 'automatically split any directory as required' (allocation is high water). My 4TB drive is 3TB full but my two other 1TB drives haven't been touched yet (if it helps, the 4TB drive is Disk 1 and the other two are Disks 9 and 10 - I am currently moving data using rsync from unassigned drives to the array and then slowly adding these drives to the array in order from large to small).

 

Edit (a few minutes later): I think I may have misunderstood how high-water allocation works. I was thinking it would go until one drive is 50% full, then the next drive until it is 50% full, and then circle back to 75%, and so on. But from this graphic (Allocation Method) I *think* I understand now that my high-water level is, at first, 2TB. Because my other drives are 1TB, they don't have 2TB free. So it goes back to the 4TB drive, changes the high-water level to 1TB, fills up to that, and then moves on to the other drives. But because they are 1TB drives, they will have met that 1TB high-water level already. The array will then go back to Disk 1, set the high-water level to 500GB, fill up to that, and only then will it actually start writing to the other 1TB drives (until 500GB remains).

^-- Is that right? 

 

Edit 2: In the tooltip for Allocation Method, it reads:

Quote

The goal of High-water is to write as much data as possible to each disk (in order to minimize how often disks need to be spun up), while at the same time, try to keep the same amount of free space on each disk (in order to distribute data evenly across the array).

 

I think that last parenthetical is a bit misleading (or ripe for confusion) when an array is built with drives of vastly different sizes, as the data might not really be distributed evenly across the array.


Since you don't say anything about any disks between disk1 and disks 9 and 10, I am going to pretend they don't exist for the example.

 

It is going to stay with disk1 for a while since the others are so much smaller.

 

I would expect disk1 to be used until it gets to 500GB remaining, then disk9 until 500GB remaining, then disk10 until 500GB remaining, then back to disk1 until 250GB remaining, etc.
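That fill order can be sketched as a toy loop (a simplification of my own, ignoring split level and minimum-free settings, so the real behaviour may differ in detail): the water mark starts at half the largest disk and halves each round, and any disk with more free space than the mark gets filled down to it.

```shell
#!/usr/bin/env bash
# Toy model of high-water allocation for a 4TB disk1 and 1TB disks 9 and 10
# (sizes in GB). Simplified: ignores split level and minimum-free settings.
declare -A free=( [disk1]=4000 [disk9]=1000 [disk10]=1000 )
water=2000                      # half the largest disk
order=()
while (( water >= 250 )); do
  for d in disk1 disk9 disk10; do
    if (( free[$d] > water )); then
      order+=("$d down to ${water}GB free")
      free[$d]=$water
    fi
  done
  (( water /= 2 ))
done
printf '%s\n' "${order[@]}"
```

Running this prints disk1 three times (down to 2000, 1000, then 500GB free) before disks 9 and 10 get their first writes, matching the "stays with disk1 for a while" behaviour described above.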

 

If other disks do exist and have free space, then of course other things could happen.

8 minutes ago, trurl said:

I would expect disk1 to be used until it gets to 500GB remaining, then disk9 until 500GB remaining, then disk10 until 500GB remaining, then back to disk1 until 250GB remaining, etc.

Thanks, trurl! It took a couple of days for me but it's starting to make sense :)

 

And you are correct, disks 2 through 8 have yet to be added. I am rsync-ing my other 4TB drive right now and that will get slotted into the disk 2 spot.

5 hours ago, Shu said:

rsync-ing my other 4TB drive right now and that will get slotted into the disk 2 spot.

Not sure what you mean by rsync-ing with regard to a disk not yet added to the array, but it doesn't seem like the right way to go.

2 hours ago, trurl said:

Not sure what you mean by rsync-ing with regard to a disk not yet added to the array, but it doesn't seem like the right way to go.

I'm using the "rsync" CLI utility to copy (move, really) files from my old server to this array (I'm setting up Unraid for the first time this week). I used to use StableBit DrivePool which, similarly, wrote complete files to disk, and I had my largest folder under a 2x duplication rule. I believe "rsync" is the right utility to use when there is a high likelihood of many duplicate files.

 

I used the Unassigned Devices plugin to mount the unassigned disks to be able to read from them.

 

I hope this makes sense

9 hours ago, trurl said:

So you are moving files to a disk already in the array?

I'm moving files from an unassigned disk to a share (not any particular disk). The command I am using is: 

 

rsync -a -H -v --remove-source-files --progress "/mnt/disks/06_WD_Red_NAS_3tb/[main folder]/" "/mnt/user/[share name]/"

 

(The parts in the brackets are shortened for laziness.) I am experiencing some crashes/freezes from time to time; not sure why. My Unraid computer will still be on but the webGUI becomes inaccessible. (When I plug in a monitor directly, Firefox opens but "localhost" never loads.) This info isn't necessarily relevant to the rsync topic, though, I don't think.
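As an aside on that command: --remove-source-files deletes only the files it has successfully transferred, leaving the source directory tree behind as empty folders, so a cleanup pass is needed afterwards. A throwaway sketch (temp dirs standing in for the real paths):

```shell
#!/usr/bin/env bash
# Sketch: --remove-source-files deletes transferred FILES but keeps the
# now-empty source directories. Temp dirs stand in for the real paths.
src=$(mktemp -d); dst=$(mktemp -d)
mkdir -p "$src/show"
echo data > "$src/show/ep1.mkv"

rsync -a -H --remove-source-files "$src/" "$dst/"

ls "$src/show"    # empty: the file is gone, the folder remains
ls "$dst/show"    # ep1.mkv has arrived
```

The leftover empty source directories can then be removed with something like `find "$src" -mindepth 1 -type d -empty -delete`.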

Okay, it crashed again last night. I attached the syslogs here. I do notice an Out of Memory error starting at 01:52:48 with the last entry being at 02:20:51.

 

My server hardware is: 

Ryzen 5 3600

MSI x570-a pro

16GB DDR4 RAM

I do have a GPU but it's a very old one that I used just for video output when I was on Windows (not sure if that's needed with Unraid)

LSI HBA

syslog

4 hours ago, JorgeB said:

Lots of call traces, start by running memtest, then a lot of OOM errors, you need to limit resources.

When I boot my server, I select Memtest86+ but it doesn't seem to be working? The computer reboots and I get taken back to the automatic boot screen. Not sure why.

 

Edit: Seems like something with UEFI vs Legacy boot? Will need to research this more (not really sure I understand the difference between the two)

 

Edit 2: I made a bootable USB for Memtest86. It's running now. Are there any settings or specific tests to run, or just the default?

8 hours ago, JorgeB said:

Lots of call traces, start by running memtest, then a lot of OOM errors, you need to limit resources.

 

Memtest86 passed (4 runs, about 3 hours) with zero errors.

 

Can I ask what you mean by "limit resources"?

 

2 hours ago, trurl said:

Any VMs?

No, nor any Docker containers. I am still trying to transfer all my data, so I haven't had the opportunity to set anything up yet.

Edit: If it helps, it happened again earlier today (before I did the memtest). Here are those logs (it happens near 13:59:30)

syslog (2)


Update: I have since finished using rsync and I now run the server headless. Haven't seen an issue yet (but am only at 10 hours uptime so far). One odd thing I noticed: while I was using rsync, GPU memory usage on my main PC (from where I ran the CLI) would go up. Likely correlation and not causation, but I didn't have time to test.

