Arandomdood Posted October 16, 2019
I have one problem. Currently I have 3 drives in my unRAID array and they're around 97% full. They used to do 180 MB/s; now they barely do 75 MB/s. I'll soon add another drive. Can I then use unbalance to somehow balance data between all drives, so that they're each around 70% full? Thank you.
jbrodriguez (Author) Posted October 16, 2019
23 minutes ago, Arandomdood said: Can I then use unbalance to somehow balance data between all drives, so that they're each around 70% full?
Short answer: no. Long answer: no. Sorry, not possible at this time.
_Shorty Posted October 17, 2019
53 minutes ago, Arandomdood said: Can I then use unbalance to somehow balance data between all drives, so that they're each around 70% full?
I don't think there is any upside to doing this. A downside is that spreading your data out can mean more drives spinning up than would have been needed otherwise. Just add the new drive and let it do its thing.
NewDisplayName Posted October 17, 2019
That's what I suggested some time ago; he refused to implement it...
JonathanM Posted October 17, 2019
18 minutes ago, nuhll said: That's what I suggested some time ago; he refused to implement it...
This app is just a pretty front end to rsync commands; feel free to write your own implementation and support it, or pay someone to do it if you can't or don't want to do it yourself. The logic required to balance files across drives is not trivial, especially when you don't want to split the contents of folders arbitrarily.
_Shorty Posted October 17, 2019
I'm curious. What would be the logic behind the operation? What do you think you would gain by spreading data out?
NewDisplayName Posted October 17, 2019
Just now, _Shorty said: I'm curious. What would be the logic behind the operation? What do you think you would gain by spreading data out?
Speed; full drives are slower.
squirrelslikenuts Posted October 17, 2019
2 hours ago, nuhll said: Speed; full drives are slower.
Example: a WD Red WD40EFRX slows to below 100 MB/s after 3600 GB.
Dissones4U Posted October 17, 2019 (edited)
8 hours ago, jbrodriguez said: Short answer: no. Long answer: no. Sorry, not possible at this time.
What am I missing? Based on the first post in this thread, it sounds like unbalance can do exactly what Arandomdood is describing. In fact, I was about to install it myself for the same purpose, but I wanted to read the support page first. The first post says:
Quote: Description — unBALANCE helps you manage space in your unRAID array, via two different operating modes: Scatter (transfer data out of a disk, into one or more disks) and Gather (consolidate data from a user share into a single disk).
By moving a share in its entirety to the new disk he could achieve his goal (unless I'm missing something, which is certainly possible). Some of the use cases are:
• Empty a disk, in order to change filesystems (read kizer's example)
• Move all seasons of a TV show onto a single disk <=== Wouldn't this speed up performance by spinning up only one disk?
• Move a specific folder from a disk to another disk <=== By moving specific folders (if not the entire share), one could balance all disks as desired?
• Split your movies/TV shows/games folder from a disk to other disks
_Shorty Posted October 17, 2019
9 hours ago, nuhll said: Speed; full drives are slower.
Full drives are slower? Not really. The graph shared above is typical of any hard drive. They're fastest at the start of the disk because there are more sectors per cylinder in the outer cylinders, and as you move to the inner cylinders there are fewer sectors per cylinder, so they're a little slower there. But this has nothing to do with the drives being full; it only has to do with the location you're reading/writing at the time. You could be reading/writing to the inner cylinders even when the drive is completely empty, for example, and you'd still see that relatively slower performance even though the drive is empty. You'll probably waste more time and energy moving files around than you'll save by just leaving the system alone to do its job. Consolidating some similar/related files can stop you having to spin up multiple drives and cut your time waiting around for drives to spin up, but other than that I don't really see any upside to micromanaging the files.
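The geometry argument above is easy to sanity-check. At constant RPM with (near-)constant linear bit density, sustained speed scales with track radius, so the innermost zone runs at roughly inner-radius/outer-radius of the outermost speed. A back-of-the-envelope check, assuming a 3.5" platter with usable radii of roughly 22 to 46 mm (the radii are assumptions, not drive specs):

```shell
# Predict the inner-zone speed from the outer-zone speed and the
# assumed usable platter radii: speed scales linearly with radius
# at constant rotational speed and bit density.
awk 'BEGIN {
    r_inner = 22; r_outer = 46     # assumed usable radii, mm
    outer_mbps = 180               # outer-zone speed from the first post
    printf "predicted inner-zone speed: ~%.0f MB/s\n",
           outer_mbps * r_inner / r_outer
}'
```

That comes out around 86 MB/s, in the same ballpark as the ~75 MB/s the original poster reported, which supports the point: it's position on the platter, not fullness, that sets the speed.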
Dissones4U Posted October 17, 2019
So how full is too full? Two of my four disks are over 93% full. I was considering moving files to my new empty disk and bringing the disks down to around 85% full. Is that a waste of time?
jpotrz Posted October 17, 2019 (edited)
12 minutes ago, Dissones4U said: So how full is too full? Two of my four disks are over 93% full. Is that a waste of time?
Waste of time.* I have all my shares set to "fill up" with roughly a 10GB minimum free space. 10 out of 11 disks are at 99% with ~10GB remaining. When the 11th disk hits 90%, I will order a new one. No sense in having free space sitting on disks or, for that matter, empty disks sitting around doing nothing. I'm not overly concerned about getting every drop of speed out of my drives. Under normal usage (99.99% of the time) nothing comes close to saturating the speed of the drive, even at 99% full.
*There are differing thoughts on this, though.
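If you want to keep an eye on how close each disk is to that minimum-free floor, something like the following works from the unRAID console. This is a sketch, not part of the plugin: `report_disks` is a made-up helper name, and /mnt/disk* is unRAID's usual per-disk mount pattern.

```shell
# Report fill level and free space for each given mount point, flagging
# anything under a 10 GiB minimum-free threshold (matching the "fill up"
# allocation floor described above).
report_disks() {
    min_free_kb=$((10 * 1024 * 1024))   # 10 GiB expressed in KiB
    for d in "$@"; do
        [ -d "$d" ] || continue
        # df -Pk line 2: filesystem, total, used, available, capacity%, mount
        df -Pk "$d" | awk -v min="$min_free_kb" -v d="$d" 'NR == 2 {
            printf "%-12s %s used, %.1f GiB free%s\n",
                   d, $5, $4 / 1048576,
                   ($4 + 0 < min) ? "  <- below minimum free" : ""
        }'
    done
}

report_disks /mnt/disk*
```

On an array like jpotrz's this would print one line per disk, with the near-full ones flagged, so "when the 11th disk hits 90%" stops being something you have to check by hand.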
_Shorty Posted October 17, 2019
There is a difference between dedicated network storage and local storage that you are actively using. You need a certain amount of free workspace on your local machine to ensure it continues to operate properly in all conditions, as it might need some space for temporary files or extra pagefile space at any given time, but you do not have that limitation on a network machine dedicated to storing files. Filling such drives until they are full is not a concern, and makes things as efficient as possible. Empty space is wasted space in this instance; I can't think of any reason for maintaining a large amount of free space on such drives.
As for performance falling off, well, that's how hard drives have always been. More sectors in the outer cylinders than in the inner cylinders dictate this. The platter is (usually) rotating at a constant speed, so more sectors passing the read heads in the outer cylinders translates directly to more speed, and as you work your way in, the number of sectors drops and so does the transfer speed. Actively going out of your way to avoid that is a bit much. Why not simply buy a 10 TB drive and only use the first half, then? You'd be multiplying the cost of your storage space by two, since you'd be ignoring half of it, but hey, at least the performance drop-off is gone, right?
The unRAID system does a pretty nice job of using up the space you give it. I wouldn't be so concerned with micromanaging it, as your return on that time investment is rather small: both the time you spend thinking about how to manage it, and the time/energy used to constantly move files around. Consolidating a few things here and there in order to avoid spinning up more drives can certainly be helpful, and save you having to wait a bit longer each time you access that stuff, but beyond that I wouldn't worry about how things are stored. It just doesn't seem worth it.
NewDisplayName Posted October 17, 2019 (edited)
Normally it's not a big problem whether you write at 90 MB/s or 120 MB/s... but since we're all geeks here, it must be automated. My use case: when I have to replace a drive, I copy all its contents to the other drives, and then a balance mode would be really nice. Because afterwards you have a few really full drives, and when one of those fails... you get the idea. But I must say this issue isn't so pressing anymore, because the dev pointed out an option that sets what percentage of each disk scatter/gather should leave empty. Its default is very low; that's why I had the problems. I don't mean it's useful to shuffle the data around just for a few MB/s... that was not intended.
_Shorty Posted October 17, 2019
When a drive fails, I replace it and let it rebuild from parity.
NewDisplayName Posted October 17, 2019
Yes, I know, but if another drive fails during the rebuild, the data on the failed drives is lost.
_Shorty Posted October 17, 2019
Unless you have two parity drives, of course. I wouldn't think the chance of failure while moving files to other drives versus rebuilding would be much different, if at all.
Dissones4U Posted October 18, 2019
Okay, cool. I thought I might not need to maintain empty space, since it's not like I defrag it or anything, but I always err on the side of caution when I'm not sure. So fill 'er up it is. Thanks, guys!
_Shorty Posted October 18, 2019
On the subject of defragging: while it does happen, Linux machines tend to avoid fragmentation better than Windows boxes did pre-NTFS (and NTFS is better about it than the filesystems that came before it). Linux filesystems do a pretty good job of avoiding it to begin with, though it is still possible. But typically it doesn't happen often enough, or to a large enough extent, to really worry about defragmenting on Linux machines.
JonathanM Posted October 18, 2019
40 minutes ago, _Shorty said: On the subject of defragging: while it does happen, Linux machines tend to avoid fragmentation better than Windows boxes did pre-NTFS.
If you fill drives and leave them, fragmentation really is a total non-issue. However, if you insist on shuffling data from drive to drive, it can start to be an issue, especially if you upgrade the drives using the normal unRAID method, where a smaller drive gets cloned to a larger drive and the partition is expanded. After several years of deleting files and putting more data on, it can really speed a drive up to copy everything off, format, and put the data back. ReiserFS was especially bad about it, XFS seems to be a little better, and I have no first-hand experience with long-term use of BTRFS to comment on it.
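Before going through a copy-off-and-reformat cycle, it's worth measuring whether a disk is actually fragmented. For XFS array disks, xfs_db can report a fragmentation factor; /dev/md1 below is unRAID's usual device for disk1 and is just an example, so adjust it for your own array. The query is read-only.

```shell
# Report the XFS fragmentation factor for an array disk. -r opens the
# device read-only, so this is safe to run while the array is started.
frag=$(xfs_db -r -c frag /dev/md1 2>/dev/null) \
    || frag="xfs_db unavailable here; run this on the unRAID console"
echo "$frag"
# On a real disk this prints something like:
#   actual 181278, ideal 175057, fragmentation factor 3.43%
```

If the fragmentation factor is in the low single digits, the copy-off/reformat dance is unlikely to buy you anything noticeable.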
RGauld Posted October 19, 2019
Any word on a fix for this issue yet? I notice that with the new unRAID version 6.8.0-rc3 it still won't work...
jbrodriguez (Author) Posted October 19, 2019
Well, it's a release candidate, so there may still be changes ahead (it has happened in the past); that's why I generally support only stable releases. In any case, I'm looking to take a stab at this tomorrow.
RGauld Posted October 20, 2019 Share Posted October 20, 2019 thank you! I appreciate it! Quote Link to comment
jbrodriguez (Author) Posted October 20, 2019 (edited)
I know what the issue is: it's a permissions issue. The plugin creates a log file at /boot/logs/unbalance.log and also writes a config file at /boot/config/unbalance.{conf,cfg}. /boot and /boot/logs have root:root ownership. By default, the plugin settings page runs the plugin as the nobody user. You can change the user you want unbalance to run as, but you can't choose root (this was a design decision I made). So when run as nobody, unbalance fails to create the log file at /boot/logs and exits. That's why it works when you run it from the command line: you're running it as root. I figure that the new user system in 6.8.x enforces some permissions, so access to this folder is not allowed for a user with fewer privileges than root (nobody in this case). I have to think about how to solve this, but for the time being, the workaround is to run it from the command line as root. Some ideas to fix it:
- allow unbalance to run as root (from the plugin settings page) - wip
If anyone has ideas on how to fix this, I'd be glad to hear them 👍
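The failure mode described above can be sketched without touching the flash drive at all. This uses a scratch directory as a stand-in for /boot/logs (which is root-owned, and sits on FAT32, which has no Unix permission bits to relax); the directory name and log filename are illustrative only.

```shell
# Minimal reproduction of the permissions failure: a directory the
# current user cannot write to, like /boot/logs appears to "nobody".
dir=$(mktemp -d)
chmod 555 "$dir"   # read/execute only; write attempts by non-root fail

if touch "$dir/unbalance.log" 2>/dev/null; then
    # Root ignores permission bits, so running as root always succeeds,
    # which is exactly why the command-line workaround works.
    result="write succeeded (running as root)"
else
    result="write denied: the plugin exits before creating its log"
fi
echo "$result"

chmod 755 "$dir"; rm -rf "$dir"
```

So any fix either has to let the plugin run as root, or move the log and config writes somewhere a less-privileged user is allowed to write.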
Dazog Posted October 26, 2019
Any chance in an update you could make the icon white to match unRAID's new theme? :0